Bill Blogs in C# -- Software Engineering

Created: 11/6/2013 8:00:59 PM

I just finished reading “Visual Studio Team Foundation Server 2012: Adopting Agile Software Practices”, by Sam Guckenheimer & Neno Loje. (http://www.informit.com/store/visual-studio-team-foundation-server-2012-adopting-9780321864871)

Despite the title, the audience for this book should not be limited to teams that are using TFS as their main ALM tool. Most of the book discusses the practices for The Agile Consensus, not specific TFS features. The authors do spend some of the book discussing how TFS supports these practices. The core of the material is about the Agile Consensus, and how to help teams adopt it more effectively.

The first chapter defines the Agile Consensus and provides background and examples of Agile practices.

Chapters 2-8 define the Scrum process and the support for that process in TFS. These chapters provide theory and practice, and show how those practices are supported in Visual Studio 2012.

Chapter 9 is the most interesting. It discusses the process and the improvements made when Microsoft's Developer Division adopted Agile practices in 2005. It's an excellent case study for adopting agile in a large organization. In fact, this chapter alone makes the book worth buying.

The final chapter discusses processes for using agile across releases: how to manage continuous feedback and drive future work.

There's a lot to learn here, whether you are using TFS or other agile toolsets. In fact, I think it's very useful if you use other tools: you'll learn to compare your tools with TFS. Maybe you'll see some practices you like, and make some modifications in your process. And that would be an agile improvement.

It’s a relatively short book, and it’s packed with good information.  It’s worth the time investment.

Created: 10/23/2012 5:56:58 PM

I often find myself in conversations with customers and prospective customers regarding how we build software. How do we build software? What’s our process? How are we tracking tasks? What documentation do we create?

With some customers, we get a lot of pushback on the lean, fast process we use. According to these people, we don’t generate enough documentation. We don’t do enough manual testing. We start coding too soon. I’ve observed an interesting quality to these conversations: the person (or people) questioning our process gained most of their experience developing software at least 20 years ago. Equally importantly, they no longer develop software, either professionally or casually.

Everyone brings their own perspective to a conversation. A couple of decades ago, our tools were different. That meant a different process was better. These customers bring that natural bias to the conversation, and it comes out in their questions. Knowing where that bias starts makes it easier to explain the differences in a positive way.

Picking up that new codebase today

Let’s start with how today’s tools support and enable our current process. Suppose I was handed a large codebase we’ve created, and I needed to make some changes. The first thing I would do is get that project from source control, build it, and run the automated tests. That would give me the following information: I’d expect all the tests to pass. If they didn’t, we’d have a serious problem. I’d expect the tests to cover a strong cross-section of the codebase. (I wouldn’t expect 100% coverage, but I’d expect something in the high 70s, at a minimum.)

Already, I’ve got a level of confidence. If I start making changes to this codebase, without doing any deeper investigation, I would expect that the automated tests would alert me if I’d made any change that introduced a regression bug.

Read that sentence again, because it’s a huge confidence builder for a new developer on any project: As a developer, I will know if I’ve broken anything even before I check the code into source control.

Next, I’d start trying to find the area of the code I need to modify. I’d use the search function in my IDE to find classes, methods, or modules with names that make sense. I’d pay special attention to tests that exercise the feature I’m going to work on. I’d read the test code. I’d use the “go to definition” feature of my IDE to find the code being exercised. I’d learn about the sections of code I’d need to modify.

Once I felt a bit comfortable, I’d start writing new tests to express the changes I would make. I’d leverage IntelliSense to see the capabilities of the types I’m using. I’d expect some reasonable method names to help me understand what’s already implemented.
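
For example, a first test expressing an intended change might look something like this minimal sketch (xUnit here, but any test framework reads the same; OrderProcessor and its members are hypothetical names, not from any real codebase):

    using Xunit;

    public class OrderProcessorTests
    {
        [Fact]
        public void LargeOrdersGetTenPercentDiscount()
        {
            var processor = new OrderProcessor();

            decimal total = processor.Total(quantity: 100, unitPrice: 5.00m);

            // The change I intend to make: 10% off orders of 100 or more.
            Assert.Equal(450.00m, total);
        }
    }

    // Stub of the hypothetical class under test, so the sketch compiles on its own.
    public class OrderProcessor
    {
        public decimal Total(int quantity, decimal unitPrice)
        {
            decimal subtotal = quantity * unitPrice;
            return quantity >= 100 ? subtotal * 0.9m : subtotal;
        }
    }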

Overall, I’d feel reasonably comfortable making changes and adding features within a day. I’d probably be quicker if the codebase was smaller. 

Many open source developers learn a new project the same way. They look, they learn about the project using the source, they dive in.

Picking up a new codebase in years past

But it wasn’t always like this. Decades ago, we didn’t have powerful IDEs. In those dark ages, we started by reading voluminous documents. Those documents gave developers the roadmap and that important first handle into a large codebase. Next, a developer would read and digest detailed specs on the modules to be modified. Then, and only then, would someone make the first tentative steps into modifying the code.

Unit tests weren’t standard practice yet. Developers approaching a new codebase wouldn’t have the security and safety of knowing that mistakes would be caught early. Any mistakes would escape to a QA department. Even worse, a mistake might make it all the way into production or distribution.

The common languages we used were very different. C was by far the most common language. The other main players were Pascal and FORTRAN. Today’s common idioms for encapsulation were rare. You had to be careful about modifying global variables. Global variables weren’t a code smell; they were necessary. That introduced more risk into each change. That increased risk meant more process.

The extra process was necessary because the tools weren’t able to support greater speed. The costs of mistakes were high; we didn’t have a high probability of catching problems before customers did. Worse, we couldn’t give new developers the confidence to move quickly, especially on an unfamiliar codebase.

That extra process also carried real overhead. At my first job, we had librarians. They were responsible for helping developers find the necessary documents or code for a particular problem. Yes, their entire job was managing the documents we created. That overhead may sound like waste today, but it wasn’t. These people were instrumental in helping developers be productive.

We also had a much larger QA and test department than you would expect for a similar-sized organization today. That’s because so much of the testing was done by hand. And, of course, the test organization meticulously followed test plan documents. There was also the science of sampling those test plans for sniff tests and regression tests (and rotating the sample so that over time every test was executed).

Understand Upper-Level Management

The upper-level managers I talk with had similar first job experiences to mine. One big difference is that they often left the ranks of mainline developers to become managers many years ago. When they first hear about the lightweight process we use, they don’t see it as leveraging the tools. Instead, they see it as being sloppy. I have to frame that discussion in terms of moving fast and leveraging modern tools if I want to make any headway.

I can’t talk about the lack of test plans; I have to talk about the overwhelming number of automated tests.

I can’t talk about lack of documentation; I have to talk about documentation in the form of an Open Source project’s wiki, and IDE tools to browse, search, and learn about a codebase. Most importantly, I have to separate the act of creating a design (thinking and collaborating) from the act of writing a design document.

I can’t talk about lack of a firm plan; I have to talk about measuring velocity and responsiveness (even to change).

Most importantly, I have to explain in very clear terms how the artifacts we needed to create in the past are not necessary because the tools enable us to do more, do it faster, and become familiar with a project much faster.

I’m not saying that the people I’m referring to are dinosaurs, out of touch, or anything of the kind. I’m writing this to point out that they, like all of us, bring their own biases to every conversation. These same biases color our decisions. If you want to move these managers away from those processes they remember as necessary, you must explain why those processes are no longer needed.

Remember this anecdote when you find yourself trying to change a process that’s no longer bringing value:

A newlywed couple was working on a roast for dinner. The young bride cut the ends off the roast and put them into a second smaller pan. The young husband had never prepared a roast this way, so he asked why. The bride said she didn’t know why, but it’s what her mother always did. They called the bride’s mother and asked why. She said she didn’t know either, but it was what her mother had always done. So the young couple called the bride’s grandmother. The wise grandmother laughed and laughed. When she finally stopped laughing, she said “I can’t believe you’re still doing that. I only cut the ends off the roast because when your grandpa and I were young we didn’t have a roasting pan big enough for a roast that would feed all of us.”

Don’t continue to use a process that’s no longer necessary with modern tools. Equally important, remember that those processes once had a purpose, and you need to explain why modern tools render them obsolete.

Created: 8/8/2011 1:51:58 PM

Filed under “Career advice for new grads.” Experiences from working with our new hires.

One of the toughest challenges for young people in our industry is moving from an academic career to a professional career. While in school, especially in a computer science degree, you are trying to gain a deep understanding of the core computer science theory and principles. Every detail is important. If you are going to be successful in that environment, you need to obsessively focus on each detail, taking nothing for granted. That’s possible in school. Projects are scoped for a single semester, by a single person or a small group. That’s as it should be for an academic institution. The goal is to teach students. The structure is built around that goal. New grads carry that obsessive attention to every detail into their first assignment.

The professional world is different. The projects are larger. The teams are larger. Projects aren’t designed around semesters and small teams; projects are scoped to business needs. Projects often build on existing libraries (either from the current application, or third party components).

As a developer, you still must pay attention to detail: that’s part of what coding is about. But, you also need a new skill: You must ignore those details that don’t help you achieve your goal. You must learn to trust that libraries and components do what they claim. The codebases are too big to understand everything at a fine level of detail. The farther you get from the code you are writing or modifying, the less detail you need to understand.

It’s a big leap to change your thinking habits. However, it is critical to your success in the software industry. Knowing everything about everything is not possible. If you want to be successful, keep that attention to detail, but know when to turn it off. You need to trust when you are leveraging the work of other team members, or the industry at large.

Created: 8/1/2011 2:10:11 PM

Because of some meetings, I missed our company-wide standup for 3 straight days, finally attending again yesterday afternoon. I realized how important it is, and how much I missed it.

I don’t think anyone missed my contributions, but I missed hearing what everyone else was working on. A few statistics will explain. We have roughly 20 people participating in Standup. It’s almost always over in 10 minutes. That means you’re speaking for 30 seconds, and listening for 9:30.

That’s a pretty big listening to speaking ratio. And, that’s the important value in the company wide standup. The real value is listening to what everyone else is doing, and contributing help where you can.

Mary and Tom Poppendieck use the phrase “optimize the whole” to describe the importance of seeing the big picture, and making that better, even if it costs more locally. That’s a good way to look at standup: if you spend 30 minutes to save another project team hours, that helps optimize the whole. We’re all the better for it. Of course, one day, someone else will probably be able to save you hours by spending 30 minutes helping you.

That’s why we have continued these company-wide standups, even though we have grown to the point where we now have anywhere from 5 to 10 projects running concurrently. That cross-project communication helps all our projects. It’s also one way to evaluate readiness for leadership roles: participating across projects, and outside your own tasks, shows the global view that’s necessary for taking on more of a leadership role.

Created: 7/12/2011 4:16:28 PM

In an earlier post, I discussed that you should position new ideas as either incremental changes or disruptive new innovations. If you describe an idea the wrong way, you won’t get the adoption you want, and many of your users will be confused and annoyed by your new idea.

Distributed Version Control has walked into that problem.

I don’t think it was a conscious decision, nor am I blaming the DVCS community. However, the fact is that many people in the development community look at DVCS with more confusion and less acceptance than they should.

If you search for comparisons between a centralized VCS and DVCS, you’ll find one consistent answer:  “DVCS doesn’t need a central server.”  Unfortunately, that answer is correct, but unhelpful.

First of all, most successful uses of DVCS have a central server. If you’re going to ship code, you need to build it from some location. That’s probably a central VCS server. The claim that DVCS doesn’t need a central server makes users think it is really just an incremental change from other systems such as SVN.

That sets them up to fail when migrating from VCS to DVCS. In fact, Joel Spolsky has written a great article on “Subversion Re-Education” to address this problem. If you’re struggling with migrating from SVN to Hg, or your team is struggling, check out that article, and the entire tutorial.

How I think about DVCS

To me, DVCS is a major disruptive change from traditional centralized VCS systems. Where traditional VCS systems manage changes to files, DVCS manages changes to entire repositories. When you commit changes to a traditional VCS server, you are committing the current version of each file into the central repository. When you push changes in DVCS, you are applying all the changesets in your local repository to a shared repository. When you issue an update command in a traditional VCS system, you are getting the current version of every file from the central repository. When you issue a pull command in a DVCS system, you are merging all changesets in a shared repository into your local repository.

The differences in the terms above may seem small, but they have a large impact on the mindset you should have regarding operations using DVCS. It’s a disruptive change:  You have your own repository. Any sharing activity involves a merge operation between repositories.

This change helps you see why DVCS has so many proponents. It fits the way teams work now. You can make small changes, make small commits, and have many small incremental changes in your local repository. I do a commit after writing a test. I commit after making that test pass. I commit again after any refactorings. Rinse, repeat. I push (to share the code) after I finish a story card. Commit is at the granularity of a code change. Push is at the granularity of sharing with other team members. I can’t do that in a centralized VCS system, where “commit” and “share” are the same thing. That change is disruptive; it’s not incremental.

By thinking about DVCS as operations between repositories, you likely won’t accidentally create multi-headed repositories or hit any of the other issues that sometimes plague DVCS users. Once again, I won’t exhaustively list what can happen, but refer to Joel’s hgInit tutorial. If you are interested in migrating to DVCS, read that tutorial. It’s a great explanation of DVCS systems. If you’ve got experience in traditional VCS systems, remember this:

DVCS systems give each user their own repository. Sharing code with teammates means sharing repositories and history between repositories.

The Mercurial (or git) documentation is quite good, and will explain its commands in these terms.  It will be clear when you work through docs or examples whether a command works with your local repo, or between repositories.

Some Conclusions

I’ve written before that I do like DVCS, and I think it is a big improvement over traditional centralized VCS systems.  Anytime I’ve had trouble getting someone to appreciate DVCS, they’ve been viewing a DVCS system as an incremental improvement over traditional VCS systems. That mental model made them under-appreciate the advances DVCS offered. It also led them to use a traditional workflow that played to DVCS’s weaknesses rather than its strengths.

DVCS may not be right for every organization, or every team. But, viewing it as a traditional VCS without a central server will not lead to a successful adoption. Viewing it as a disruptive change that fits better with today’s team-oriented development processes will put you on a better path to success.

Created: 7/8/2011 7:40:42 PM

More and more, I’m convinced there are two islands where new products are successful:  They are either small incremental improvements over the status quo, or massive disruptive changes that represent an entirely new invention. The area in the middle is filled with failures. I think the reason has to do with how the human brain works and adapts to new ideas.

Small changes to something familiar are easy to assimilate. So much is familiar and feels comfortable. Those few small things represent a minimal change, and our minds can concentrate on those changes while still being productive doing familiar tasks in familiar ways. There are many examples of successful software products that have followed this model:  SVN felt familiar to CVS users.  Google Chrome feels familiar to Firefox or IE users. C# felt familiar to C++ and Java developers. Going back further, C++ felt familiar to C developers.

At the other end of the scale are those products that feel so different that they make you learn totally different workflows. You may be accomplishing the same goal, but the process is so different, you’re forced to adapt and change everything. The iPhone redefined the smart phone market. The word processor introduced totally different workflows from the typewriter. Google Maps and Bing Maps have completely different workflows from paper maps. Windows 8 promises a completely different shell than Windows 7.

Describe your product the right way

It’s only half the battle to understand that your customers are more comfortable farther from the middle in this continuum of familiarity and disruption. It’s just as important to market and sell your products in such a way that you push your users to expect either familiarity or newness.

Remember Steve Jobs’ introduction of the iPhone: “The phone will never be the same.” He set customer expectations that this was a disruptive product. At the other end of the spectrum, C# was introduced as a component-oriented language that would be familiar to C++ and Java developers. Microsoft understands this in how they market Windows: Windows 7 was described as familiar to Vista users, with higher quality and a few new features. Windows 8 is being described as revolutionary. They are setting expectations about how people should react and adapt to the new features.

Successful products can make changes in their design and their marketing to pull user expectations in one direction or the other.  Early digital phone systems included weights in the handsets. That’s because the analog equipment had iron cores in both the speakers and the microphones. They felt heavy. Early digital phones were marketed to be familiar to analog phone users. Yet, they felt too light and were thought ‘cheap’ by early adopters. Adding weight made them feel substantial. And, that was consistent with how they were described. Later, digital device manufacturers highlighted the differences, including the (lack of) weight as a differentiator. That set expectations differently for the customers.

Success in the Middle is harder

The products and innovations that are closer to the middle have a more difficult road to adoption. I’ll mention a couple examples.

Distributed Version Control (DVCS) systems have a quandary. They are quite different from a central VCS. And yet, the commands and the workflow feel similar. This tends to introduce confusion for users. Their experience tells them that using DVCS is “just like” using whatever VCS system they are already using. And yet, the workflow really is different: push and pull don’t have any analog in a central VCS. Two new commands aren’t enough for most people to appreciate how different DVCS is from traditional VCS. This introduces confusion and frustration.

Programming languages like Scala or C# 4 have the same problem. Scala is often presented as being easy to adopt for Java developers. C# has been presented as being easy for C++ and Java developers. As I mentioned earlier, it helped C# adoption that earlier versions of the language were very similar to familiar curly-brace languages. But idiomatic C# 4.0 is quite different from either C++ or Java: lambdas, query expressions, dynamic typing, covariant and contravariant generics, partial methods and partial classes. Those concepts are not small tweaks from other curly-brace languages. I think it would be a mistake to teach developers C# in 2011 by showing them a feature-by-feature comparison with Java or C++. It’s better to teach them idiomatic C# 4 as a new and disruptive concept that just happens to have similar syntax.
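
To make that concrete, here is a small sketch of idiomatic C# 4 (the sample data is invented purely for illustration): implicit typing, a query expression, an anonymous type, and a lambda, none of which have a one-to-one analog in Java or C++ of that era.

    using System;
    using System.Linq;

    class IdiomaticCSharp
    {
        static void Main()
        {
            var words = new[] { "lambda", "query", "dynamic", "covariance" };

            // A query expression projecting into an anonymous type:
            // everyday C# 4, with no direct analog in Java or C++ circa 2011.
            var byLength = from w in words
                           where w.Length > 5
                           orderby w.Length
                           select new { Word = w, w.Length };

            // A lambda passed to a higher-order method.
            byLength.ToList().ForEach(x =>
                Console.WriteLine("{0}: {1}", x.Word, x.Length));
        }
    }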

Similarly, Scala is approachable for Java developers. But idiomatic Scala is quite different from idiomatic Java. It would be a mistake to take developers on a feature-by-feature comparison of Java and Scala. That sets up false expectations for new developers. Better to teach it as new, and point to the occasional familiar construct.

Enough Rambling.  Do you have a point?

I actually have two points. First, when you are learning something new, decide if you should treat it like something familiar with a few changes, or something totally new that solves a similar problem. You’ll have a better frame of reference to tackle whatever new concepts you find once you’ve made that decision.

Second, if you are introducing new products or technology, decide how to present it: is it something new, or something familiar? Guide new users to the best way for them to learn the concepts.

Created: 5/5/2011 2:44:51 PM

On Friday May 27th, we’re hosting Richard Hale Shaw for two new seminars: “Habits of Successful Software Developers”, and “On Time and Under Budget: How to Stop Missing – and Start Meeting – Software Project Deadlines.”

Richard has been a good friend, colleague, and mentor for a long time. I’ve learned a lot from him over the years. He was a driving force behind the growth in Ann Arbor’s software development community, founded the Ann Arbor Computer Society, and helped many of us advance and accept new challenges. (He helped me get my first two writing contracts, including my first recurring column.)

I have always respected his opinions, and I’m thrilled to see him taking on software engineering and development process topics in these new seminars. These seminars will make you a better developer, regardless of the technology stack, programming language, or platform you choose.

From the conversations I’ve had with Richard, I’m convinced these seminars will help every developer do their job better. We’re happy to host Richard, and grateful that he’s coming back to Ann Arbor to do these seminars.

For more information, and to register, click here.

Created: 4/14/2011 5:48:13 PM

SRT Solutions is a very collaborative environment. We enable our employees to choose the best way to collaborate, with a minimum number of rules and processes. If you’re in the Ann Arbor area, stop by. We’d be happy to let you see what goes on inside. For those who can’t see us in person: you’d see some people pairing. You’d see some people pairing remotely with telecommuters. You’d see teams working together in larger groups. You’d also see people working on their own. Every one of those different styles is important.

  • People working in groups will likely be working on larger design or architecture tasks.
  • People working in pairs are probably implementing features or working on diagnosing issues.
  • People working on their own are doing detailed algorithm design, or implementing different features.

We use all these different working styles because no one configuration is perfect. Groups engender dynamic, thought-provoking ideas. We get brainstorming, new ideas, great innovations, and critical discussions. Pairing provides that extra pair of eyes, the constant review, and some knowledge transfer. Working alone is also important. Sometimes you need to think through a problem or a technique before you’re ready to share it with others.

An important result from working alone is that you gain deeper knowledge and confidence from performing tasks on your own. I see examples of this from my experience helping high school students with advanced math classes. My rule in any given session is that I do one problem, and only one problem for a student to explain a technique. After that one problem, they do all the work. I look over their shoulder, and provide some guidance if they make mistakes or get stuck, but I don’t ever take control and solve another problem.

That’s easy in a teaching session, because my only goal is that the student becomes self-sufficient. I have no deadline for finishing the problems. Sure, the student needs to turn in the homework, but I have no deliverables. I can look over a student’s shoulder for as long as it takes; my only goal is that the student grows her skills.

Contrast that relationship with two developers pairing on a feature. Because both developers have a shared deliverable (the feature), there is a natural motivation to put the keyboard in the hands of the person who will finish fastest. While one goal of pairing is to grow skills among team members, that goal is balanced against the goal of delivering software in a timely fashion. In fact, that second goal (delivering the software) often wins. The pair partner who has less knowledge about the task at hand can slowly become a passive partner.

That’s not to say that it always goes this way. Some people have strong enough personalities to ensure that they learn, and become proficient and productive without their pair partner. But, the risk is real, and it’s one reason why we allow people to work on their own some of the time, and to collaborate some of the time. You’ll learn, and grow different skills in each environment. 

We value both real collaboration, and real independent thought. That requires giving people the freedom to work collaboratively, the freedom to work independently, and the trust that everyone will recognize which is best at which time. It makes for a very dynamic workplace. People do collaborate much more often than they work alone. It also changes very quickly. Someone may work alone for a bit, and then as soon as they find they’d be more productive collaborating, they get up and find someone to join them.

Created: 3/28/2011 11:40:49 AM

One of the catch phrases surrounding software is that software people need to “understand the business”. I couldn’t agree more, but there’s quite a bit more to that simple statement.

The most successful people respect and learn about many different disciplines, while retaining a core expertise in their chosen field. One-trick ponies have very short careers. If you are going to succeed, you must have both depth in your chosen field, and breadth of knowledge in different disciplines. In addition, you must respect the people you interact with every day who are experts in other areas.

Teams that work effectively across disciplines are more successful, just like individuals that work across disciplines.

Creating those teams begins with respect for different disciplines. When everyone involved recognizes that someone with different skills is pulling for the same goal, the team gets stronger and creates greater success. Each individual has their own role to play, but unless the team functions across disciplines, the final result suffers.

Let me give you an example from outside the business world. I volunteer with our local high school’s drama productions. When they work well, it’s because they’ve built a team from kids with skills in different areas. There are kids on stage. There are kids doing the tech work (lights, stage direction, effects, and so on). There are other kids that help build sets, creating the illusion of the location. Musicals add more students in the pit band.

These students can create a great experience if they work together. The lighting and effects have to match the actors’ motions. The set needs to look right and support the actions and dialog. The actors must interact with each other, and respond to the technical cues. If any of these groups lose respect for each other, everything falls apart.

The same is true in business arrangements. We should have learned to respect people that have different strengths. And yet, we still fall into the same traps. Do your technical people respect those that have chosen marketing as a career? What about sales? Do they respect the technical team? Or does each group believe they are the one and only key to success, and everyone else on the team must support them?

When we start projects, we work hard to understand everything we can from our customer’s perspective. That’s the only way we can build what they really need. Then, we work hard to explain our ideas from our perspective. They need to understand what we do, and why it helps them reach their goals. We’ve found the most successful customers teach us about their business while we teach them about ours. Cross-company, cross-functional teams do great things. But only with mutual respect, and valuing different perspectives.

Created: 3/22/2011 2:11:57 AM

We’ll be hosting two seminars by Richard Hale Shaw, a longtime mentor and colleague, at SPARK Central on Friday May 27th (details below, registration link here.)

Habits of Successful Software Developers

For Software Developers, success comes in a variety of forms:

  • Shipping a new product – or a new version of it – on time and under budget, with an acceptable minimum number of defects;
  • Confidence in being able to define a problem and implement a solution;
  • Understanding – and internalizing – users’ requirements (even when they don’t know them themselves);
  • Writing easily understood, easily modified code with sufficient commenting, unit tests, and regular check-ins.

There’s more…but these are all traits of successful software development, the results that are produced by successful programmers. The question is: how do you get there? How do you become a developer that delivers this way?

The operative word: Habits.

In this highly interactive session, Richard will lead you through an iterative process of discerning many (but not all) of the traits or characteristics of successful software developers. We’ll look at why those characteristics are found and what habits had to be put into place to develop them. Then we’ll examine what those habits are, how you create them, and how you ensure that they take root and grow. To do so, Richard will draw on nearly 28 years of programming, development, team leadership, consulting, and other experience in the software industry.

If you’re unhappy with who you are as a programmer, or think that you can vastly improve your ability to perform as a software developer, you’ll not want to miss this session.

On Time and Under Budget: How to Stop Missing – and Start Meeting – Software Project Deadlines

What’s your biggest challenge as a software developer?

Maybe you think it’s learning and developing new skills, or keeping up with the latest technologies and tools? These can be tough – but an abundance of resources (such as books, training, and conferences – not to mention help from colleagues) is available. Or perhaps you’d say that requirements gathering and analysis is difficult? Granted, collecting, organizing and internalizing your understanding of users’ needs isn’t easy. But there are lots of great methodologies at hand that are designed to help you address just this issue.

So let me ask the question another way: if brought before a jury of your peers and accused of delivering your software projects on time, would there be enough evidence to convict you?

It’s likely that the answer is no – and that this may be your biggest challenge as a software developer.

Arguably the biggest problem facing every software developer – not to mention entire teams and perhaps the entire industry – is that of setting and meeting deadlines. The issue is a complex one: without deadlines, projects would likely languish; but deadlines are often set by almost anyone other than the developer or team that’s responsible for meeting them. And while deadlines are supposed to be inflexible, product feature sets appear to be highly flexible – and completely out of our control.

In this highly interactive session, Richard will help you look at deadlines as contracts – where a contract is an agreement by both parties, and not open to change by one party without the other’s consent. We’ll talk about why deadlines are valuable – and to whom – when you should set them (or at least, agree to them) and when you shouldn’t. We’ll look at how deadlines are set, how they’re changed, who gets to change them – and why.

Finally, we’ll look at a number of strategic solutions and tactics that you can implement, turning deadlines from impossible tasks into achievable goals.

Created: 3/14/2011 8:49:33 PM

I was catching up on some blog posts over the weekend, and two by Stephen Forte caught my eye. He wrote about his initial experience with Agile methodologies, and how he started breaking all the rules.

That made me start thinking about our own process, and how it has changed over the past several years. We don’t strictly follow any process brand, but we work hard to develop the best software we can as fast as we can. We work the way we do because of our team. We have the luxury of a wicked smart team, and a team that knows how much we respect them. I can distill our ongoing process changes down to four points:

  1. If it’s working, do more of it.
  2. If it’s not working, change it.
  3. Search for new ideas from the leaders in our industry.
  4. Constantly try, evaluate, and make improvements to the best new ideas.

Looking back at our company’s history, most of the practices we’ve adopted started with these goals. And they weren’t driven by either Dianne or me. They came from the people that work for us. Then they quickly move and change based on what’s working, and what isn’t. I’ll give three examples.

Standups

Early in our history, we did not have a daily standup. We didn’t need one. We were so small that we always knew what others were doing. Mike Woelmer suggested adding one. He is one of our first employees, and he recognized that as we added people, we lost information about what others were doing. We started a daily company-wide standup. That helped, but it wasn’t perfect. We’ve always prided ourselves on being flexible. Some days people work at home. That’s fine, but participating in standup is important. We added a free conference line, and asked people telecommuting, or at client sites, to call in. For those in the office, the phone is now our token for standup. Anyone working remotely chimes in in the order they joined the call.

As we continued to grow, we added project-specific standups. Customers are encouraged to join their specific standup. We still have the company-wide standup for all of us to learn about other projects. That just provides extra motivation to keep standups short. It may sound like a lot of meetings, but two standup meetings, both under 10 minutes, really increase communication and don’t have much of an impact on other tasks.

Oh, and if you can’t make standup because you are in a meeting at that time, email your standup report.

Room Swap

Our space has 5 offices, so we need to split up among the rooms. We still want everyone to get to know each other, and work together.  Chris Marinos suggested that we switch offices every two months. If you’ve come to visit, and wonder why people are not where you last saw them, that’s why. Every two months (or so), we switch offices and change who we share space with.

This is one of those ideas that changed quite a bit during the last few years. We found we wanted to keep project teams in the same room. This is critical as deadlines and deliverables near. We wanted to time moves so they didn’t disrupt anything for our customers. Many projects last more than two months. Some people may decide not to move on a single rotation. There were times when Dianne and I needed to be in the same room so that we could concentrate on company direction. Other times, we needed to spend more time with other developers in the company.

Sometimes we assign people randomly to rooms. Sometimes we assign people based on the project they are on. Sometimes we assign people because someone hasn’t shared a room with someone else yet or in quite a while. It’s always changing. It’s always getting better.

At this time, I can say that almost everyone in the company has spent at least two months sharing an office with everyone else in the company. The only exceptions are people that have been with the company a very short time.

Pairing

Pairing is an emotional subject. Some people love it. Some people hate it. Some people refuse to try it. We experiment with it all the time.

We’ve let anyone on the team work in pairs whenever they want. Sometimes people need time to think individually. Personally, I find I work on my own when I’m working on books, articles, and technical presentations. I get in a flow and create content. Any interruption throws me off. I don’t think I could pair on those tasks. Other times, I really need to bounce ideas off of others. I’ll pair with someone. Maybe using a computer and a screen, or maybe at a whiteboard. Either way, the collaboration is important. We have found ways to respect that both solitary time and collaborative time are important. We need to support both.

We’ve let people experiment with how and when they want to pair. That’s given us several new ideas.  I can’t name everyone, because so many ideas came from so many different people. Some people pair by hooking up multiple monitors, keyboards, and mice to one machine. They can sit across from each other, making eye contact while working together.

Other people have experimented by using a server as a shared development machine and remotely accessing the ‘pair computer’. This gives them the advantages of pairing, and the advantages of having another machine to drive for searching, finding answers, or looking at docs for the problem at hand.

Others have paired remotely, using shared desktop applications. This isn’t the same as being together, but is better than being isolated because you can’t make it to the office.

Was there anything that didn’t work?

Yes. I’m not going to give specific examples, because I don’t want to discourage anyone from suggesting new ideas just because a previous idea didn’t work the way we’d hoped.

But I will say that we’ve discarded some ideas after giving them a small pilot and deciding they didn’t help as much as we’d hoped. Even ideas Dianne or I suggest are given the same evaluation. If an idea helps, we’ll do more of it. If it doesn’t help, it goes away, or changes.

What comes next?

We’re happy with the process we have now. We think we do a great job creating software for our customers. But we’re also convinced we can do better. Everyone on our team reads, explores, and learns constantly. They bring back the ideas they like, and we try them. We refine them, and we make them our own.

Today at lunch, new ideas came out, and we’ll be experimenting with them shortly. We continue to evolve, and continue to improve.

I can’t wait to hear the next great idea.

Created: 3/10/2011 7:09:09 PM

Distributed Version Control (DVCS) is generating lots of interest, buzz, and rhetoric. You’ve heard of Git, Mercurial (Hg), and Bzr. You’ve probably been told that all the cool kids are using it. Unfortunately for these tools, and for you, their proponents too often do a terrible job of explaining the benefits (at least in my opinion). This post explains why I like DVCS, from several perspectives. I’ve used Bazaar in the past, and have switched to Mercurial for the past several months.

If I get to choose the version control for a project, I will choose Mercurial. The rest of this post explains why.

The ‘D’ in Distributed Version Control

DVCS is similar in many ways to classic VCS, where you have a single repository. You get code, you modify code, you check in code. All the DVCS tools I’ve used support the Update / Modify / Commit model used by other systems, like SVN or CVS.

The difference is that DVCS systems have multiple copies of the entire repository. Every team member that works with the code will have a local copy of the full repo. Instead of one copy of the full history, there are several. I think this supports new workflows that make everyone on the team more productive.

While many proponents say that DVCS means you don’t have a master server, I find that does a disservice to DVCS. You can have a master server, and in fact, all uses I’ve seen do. At some point, you’re going to build and deploy the software you’re creating. That build has to come from some single, known place. It’s the central server. You will replicate that central server, including all the history (see below). But that is a very different statement than saying there isn’t a central server.

DVCS adds two new commands you’ll need:  Push and Pull. Pull brings changes from a central repository into your local repository. Push takes the changes in your local repository and commits them (merging if necessary) into the central repository.

Key Point: DVCS, in practice, doesn’t mean “there’s no central server”. It means “The central repository can be replicated as many times as you need.”

You can use the Push and Pull commands to share changes among individual developers as well. I’ll discuss why I like that later.

Nerd Note: I know other commands exist as well. Notably, I’m ignoring clone. This isn’t meant to be a how-to or a tutorial, but rather a conceptual discussion around what I perceive as the benefits of DVCS. If you want a great tutorial on how DVCS works, I suggest this tutorial by our friends at EdgeCase.

As a Developer

DVCS has made it easier to experiment, re-start, try different designs, and not lose anything. As soon as I feel like I’m on the wrong road, I’ll check in my changes to my local repository. I won’t push those changes to the shared repository. I just want to make sure I don’t lose them. After that commit, I can delete the code that made me think I had taken a wrong turn, and start down a new path. I have my changes safely in the source archive, and I haven’t negatively affected others. It makes it easier for me to try different approaches, and eventually settle on the best one.

DVCS also means I can do small spikes with other developers. I can start on a feature, ask someone to pair, and share code between our local repositories. I’ll make a change, and my pair can pull those changes into her local repository. She’ll make changes, and I’ll pull those changes to my repo. Neither of us pushes to the central repository yet. One of us will do that when we’ve hit a delivery point (a finished task, a story, or whatever the smallest unit of integration into the main repository is).

In short, the best feature of a DVCS is that I can commit works-in-progress that aren’t ready to share with anyone, or with the whole team. Later, after committing and pushing the finished set of changes, all the interim steps are available to the team. They can see the missteps and trials as well as the finished version. It provides a more complete history of what happened.

As a Project Lead

DVCS, used properly, means the shared repository is broken less often. I want developers on a project to commit with the highest frequency possible. Those smaller changes enable faster integration. That leads to greater stability.

But concentrating on smaller changes makes it hard to investigate alternatives, try different designs, and make early-stage mistakes. People are too concerned with the fear of “breaking the build.” DVCS avoids that: commit, but don’t push. Push when you reach a slightly larger grained (but not too large) checkpoint that should integrate well.

Also, as I mentioned above, DVCS makes it trivial to create sub-teams to attack smaller problems.  They work on their own branch, and when that is ready to integrate, the new feature gets pushed into the main branch.

Here, using DVCS enables people to experiment, and enables sub-teams to collaborate for short spikes.

As a Company Owner

DVCS takes some of the pressure off the IT infrastructure. Every developer has a reasonably up to date copy of the entire source archive.

If our nightly backup failed AND the server storing the main repo failed AND a recent backup couldn’t be found elsewhere, we should still be able to put together a pretty reasonable version of the latest software. This is not to say that regular infrastructure isn’t important, but one more backup safeguard is a good thing.

In addition, using simple commands built into the DVCS, we can deliver the entire history of the project to our customers, in the case where we have work for hire agreements. It’s much easier than developing an out-of-band facility to move code and history from one organization to another.

Here, DVCS means I have created several low-friction backups, including copies and archives located offsite with the customer.

It’s not for everyone

I’ve been extolling the virtues of DVCS for several paragraphs. However, it’s not for everyone. DVCS systems use a workflow that often requires merging changes. The more frequently every member of the team syncs with the main archive, the less friction you’ll encounter.

That means if you have a team that contains people who hide for weeks at a time while developing new features, a DVCS will expose more pain. In practice, I’ve had very few merge conflicts that were not resolved automatically, and correctly. However, our teams practice quick, short cycles. I think almost everyone tries to sync up with the main repository at least daily. The longer you go between merges, the more painful they can get.

I’ve also found that large corporations are somewhat concerned about DVCS. The fact that every developer’s laptop contains the complete change history of some important project sends shivers down the corporate IT spine. What many small companies view as a great feature, these organizations view as a scary, irresponsible design decision.

As I said in my opening, every time I’ve used DVCS, I’ve had a central server. If your team can’t agree on a single location from which to build the deliverable code, you’re not ready for a DVCS. DVCS provides new ways to work, and enables people to experiment.  If your team is going to use it to hide changes from each other or the central build mechanism, then it’s a bad idea.

I Intend to use Mercurial on all my projects

I’ve spent this post explaining why I like DVCS in general. I find the benefits, when properly applied, greatly outweigh the concerns. The single biggest benefit is that all the small changes team members make get folded into the central repository. I view DVCS as a way to have a local repository AND a central repository AND have them work together. It’s not about ripping apart everything we like about classic Version Control Systems; it’s about supporting more workflows.

I’ve used Bzr, Mercurial, and Git. Of those three, Mercurial gave me the best experience. That’s what I’m using on every project that I can.

Created: 3/4/2011 4:40:49 PM

I recorded a DNRTV with Carl Franklin a little while ago, and one of the viewers, Kyle Szklenski, posted this comment:

I listened to the DNR TV today with Bill Wagner on dynamic typing. I was really, really, ecstatically happy to hear fellow champions of static typing. It's all too often that my coworkers or other people online tout the benefits of making everything in your application dynamic, and it makes me physically ill to have to listen to it. Hyperbole aside, I think that if you cannot design a system to work with static typing, then there's probably something wrong with your design sense. Thanks for the awesome presentation, and am looking forward to hearing Bill discuss the new async stuff!

I’m not sure I’d phrase it that strongly. It’s true that C# is a static language. Every feature in the language emphasizes a belief in the benefits of static typing. It is true that you could design any system in a static language, but that doesn’t mean it’s the best way to design every system. The features added in C# 4.0 demonstrate that. There are idioms where having dynamic typing is a great benefit. COM interop, runtime type checking, and language interop are obvious examples. Without the new language features, these idioms require pages of code.

That’s work to write the code.
That’s work to test the code.
That’s work to maintain the code.
That’s work to debug and extend the code.

Language features that shrink those pages to a few lines (with the same features) should be applauded.  We get done faster, with fewer bugs, and have more readable code.

However, adding dynamic types to your program has a cost. It is a very different style, and the rough edges where dynamic and static types meet can produce surprising behavior for those expecting the familiar. That causes disruption to our thinking when we try to understand programs. It can increase bugs.

For this reason, and many others, I tell customers to view dynamic as a very powerful feature, but one that must be kept in a pen. Use dynamic typing where it helps. Don’t make everything dynamic, and prefer narrow scopes for the variables you do declare as dynamic.
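
Here is a small sketch of what that pen can look like (the names are hypothetical, not from any customer’s code): the dynamic operations are confined to one small method, and everything around it stays statically typed.

    using System;

    static class ScoreReader
    {
        // dynamic is confined to this one method; callers get back a
        // statically typed double, so the dynamic code stays in its pen.
        public static double ReadScore(dynamic source)
        {
            // Resolved at runtime: works for any object with a Score property.
            return (double)source.Score;
        }
    }

    class Program
    {
        static void Main()
        {
            // An anonymous type stands in for data arriving from a dynamic
            // source (COM, a script engine, a deserialized payload, ...).
            var reading = new { Name = "sample", Score = 42.0 };
            Console.WriteLine(ScoreReader.ReadScore(reading));
        }
    }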

Created: 2/15/2011 8:37:01 PM

The Pex team has created a site that provides a great tool to exercise your brain, learn a bit of code, and see a bit about Pex.

To use it, just go to the PexForFun site. There are puzzles, learning exercises, and duels.  Register and you can create your own challenges. (I’ll be doing this over the next month or so.)

The puzzle structure is a great demonstration of Pex. When you start a new puzzle, you have an empty implementation. You can click Ask Pex to get some test results based on the hidden successful implementation. Pex then executes those tests on your code. You’ll see some failures, and you can fix your code to make the tests pass.

Then, you can click “Ask Pex” again, and see if a more extensive test suite still passes. Iterating this way gets you to write more code and your implementation gets closer to the expected solution. If you are not familiar with Test Driven Development, I highly recommend it. You’ll get a real feel for creating code to an executable specification instead of a written spec.
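
The shape of a puzzle is just a single static method that you keep refining; roughly like this hypothetical sketch (not one of the site’s actual challenges):

    using System;

    public class Program
    {
        // A PexForFun-style puzzle: start from a naive guess, click
        // Ask Pex, and let the generated failing inputs drive each
        // refinement toward the hidden reference implementation.
        public static int Puzzle(int x)
        {
            // First attempt: the simplest behavior that could possibly
            // be right. The failing tests show where it diverges.
            return x * x;
        }
    }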

All in all, there are several reasons I like playing with PexForFun:

  1. It’s practice for Test Driven Development. You start with an empty implementation, and see a few failing tests. Keep making a few pass and you’ll get more tests. After a few iterations you’ll have everything working.
  2. It will give you some practice writing tests. Pex generates tests by analyzing your code. It determines a number of interesting inputs to the code by analyzing its structure. You can read an overview here: http://www.pexforfun.com/Documentation.aspx#HowDoesPexWork. By seeing the inputs it chooses, you’ll get some ideas about how to write your own tests. By thinking about inputs it chooses to ignore, you can get better at writing useful tests instead of more tests.
  3. It exercises your brain. The puzzles range from introductory to rather complicated.
  4. You can pick different languages for your puzzles: C#, VB and F#.

Learn some new techniques, exercise your brain, and most of all: Have fun!

Created: 2/8/2011 4:14:55 PM

The question I get most often these days is “Should I invest in Windows Phone 7?” It’s a difficult question. The people asking me are making large bets on their future investment in technology. I don’t want our customers to leap into an area with no future, nor do I want them to miss what could be a major growth opportunity.

In the mobile market, success is not a simple yes or no question. The continued growth of the mobile market and the apps market are critical to the Windows Phone 7 success story.

For Windows Phone 7 to succeed, iPhone and Android do not have to fail. This is a point that often needs to be restated to businesspeople. The mobile phone market grew 17.9% in Q4 2010, according to research firm IDC (http://www.mobiletor.com/2011/02/03/idc-traces-17-9-growth-in-global-mobile-phone-market-for-q410/). With that kind of growth, Windows Phone 7 can achieve success without taking sales from other platforms.

Unlike a mature and stable market (like autos), one company’s gains do not necessarily imply another company’s losses. From that perspective, I don’t expect Windows Phone 7 to take conquest sales from iPhone or Android, at least not until users of those devices reach the natural upgrade cycle for their devices (roughly 2 years out). For some, especially those experienced in mature markets, that means Windows Phone 7 won’t succeed.

But that isn’t necessarily the correct perspective. Microsoft surely has sales goals for the platform. That’s their metric for success, which probably involves taking some market share from iPhone and Android. For the rest of us business owners, the question is whether or not to invest in applications for the Windows Phone 7 platform.

The apps story is critical because it drives device usage and customer loyalty. The “killer apps”—like Mobiata’s FlightTracker—drive mobile phone purchasing decisions. Microsoft needs to give developers compelling reasons to create applications for WP7, and more importantly, businesses need to have a market for those applications.

The presence of applications will feed market share, and vice versa. Windows Phone 7 will be a success if it attains enough market share by 2012 that mobile developers believe that it is a necessary target platform, along with iPhone and Android. Windows Phone 7 succeeds and gains momentum if it becomes one of the platforms that are mandatory for successful applications. However, if developers believe they can ignore the platform through the end of this year, it won’t succeed.

It’s that perspective that businesspeople need to understand. It won’t matter which platform has the most market share in your business (or personal) buying decisions. What will matter is which platforms are on the ‘must support’ list. That’s the metric of success that will drive your business. You can’t afford to ignore any of the top platforms.

So, where’s the recommendation?

Right now, Windows Phone 7 has a very small market share (relatively speaking). That share will go up, and the real numbers of devices will go up. I don’t know how much, and neither do you. What I recommend is for our customers to architect mobile applications carefully, so that the largest amount of code runs in the cloud and can be accessed from any device. Then, the incremental cost of supporting new devices is smallest. That enables our customers to watch, and still respond quickly to the actual market data, rather than making a decision prematurely.

Current Projects

I create content for .NET Core. My work appears in the .NET Core documentation site. I'm primarily responsible for the section that will help you learn C#.

All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.

I'm also the president of Humanitarian Toolbox. We build Open Source software that supports Humanitarian Disaster Relief efforts. We'd appreciate any help you can give to our projects. Look at our GitHub home page to see a list of our current projects. See what interests you, and dive in.

Or, if you have a group of volunteers, talk to us about hosting a codeathon event.