I’ve started working with a new client, and it’s re-affirmed my belief in the importance of asking the question “Why?”
Ask it again.
Follow up with more Why?
Make sure you get to the root why.
I’m not focusing on the “5 Whys” popularized by Toyota and now part of the Six Sigma process, although that technique is important. Rather, I want to focus on understanding why a process was put in place, and why it has been kept. More than anything else, it’s about listening.
In the current scenario, I’m helping a team that is near midway through a product release cycle. They’ve adopted a series of agile processes, build and deployment processes, and common practices for branching in git, deploying changes, and so on.
Like all teams halfway through a project, they aren’t sure if every process is working out well. This introduces friction and concern: Should the processes be changed? If so, to what?
This is where ‘why’ becomes important.
The first ‘why’ to ask is “Why did you adopt this process, tool, or guideline?” From this question, I learn what gains a team hoped to achieve: what problems existed, and how this change was intended to help. Once that’s known, it’s easier to discuss the benefits and any unseen costs that a new initiative has brought. On the whole, was it good?
The second “why” in this situation is “Why is this new initiative not generating the benefits you expected?” Is there more friction? Are you finding that adopting new techniques took more time and investment than you thought? Are you losing productivity because a technique is not familiar yet? Does the team fear that “we’re not doing it right?” This starts to get to the cause of this new discussion.
The third ‘why’ for the team is “Why change now?” One important goal for high-functioning teams is “continuous improvement”. And while it’s important to always look for opportunities to improve, it’s also important to pick the right time for change. That’s especially true if the proposed ‘change’ is at large scale. (Example: I’m not switching Source Code systems a month before release. Too much churn, not enough gain). Related to that, if a team did implement a major change before starting this release cycle, has it been fully explored?
OK, give me a little license here. I know I can’t ask questions of a software tool. But, the point is that sometimes teams adopt a tool, or toolset, and then fight that tool because they don’t want to understand why it works the way it does.
One example from a previous client is git. They had planned an initiative to move from SVN to git. However, they did not expect this to change their daily workflow, or their overall development process. They didn’t develop a branching strategy. They didn’t work through the distinction between commits and push/pull. Git failed badly for them. They were fighting against the toolset. This example is not meant to take a position on centralized vs. distributed source control, or on a particular vendor. Both workflows can be done well. Both workflows can fail. The point is to work with your tools, not against them.
Sometimes that means picking different tools. Sometimes that means adopting a different process because of the tools you want to use.
In addition to asking “why” a tool was designed the way it was, it’s important to ask “why” the team picked a certain toolset. Did they originally intend to adopt the mindset supported by the tool? Or were there other drivers?
Early in my career, I received one of the best pieces of advice from a mentor:
Some people really listen. Others simply wait for their turn to talk. Be the former.
That sums up what’s necessary to really bring about change and to really have a positive impact. If you listen to all the team members explain their issues, describe what is and what is not working, you will be in a much better position to make a positive impact. You’ll also be solving real problems. Problems that real team members have described.
Listen. Ask Questions. And remember that the most important question is “why?”
You’ll have a bigger impact.
One of our customers is grappling with how to manage a scrum process that involves multiple teams with multiple responsibilities. The larger organization produces a number of business applications, all built on a common library. They have teams for each application, and a framework library team.
I thought our recommendations would be of general interest, so I’m posting them here. This is still a work in progress, so I'd appreciate any thoughts from my readers.
We recommended moving away from a process where every team attended one large sprint meeting. Every team planned their next sprint during these large meetings. Our customer was concerned that there was very little energy in these marathon meetings. The main reason was that for many attendees, over 75% of the content was not interesting: it was about other applications. The level of detail was too deep for members of other teams. By scheduling project-specific sprint meetings, each meeting had fewer people, and those people were engaged in the activity.
To make sure that the voice of the customer was represented, we recommended that one member of each application team attend the framework team's sprint meetings.
To make sure that the framework team knew of any issues relating to each application, we recommended that one member of the framework team attend each application team meeting. (Although not necessarily the same person. That would be a lot of meetings).
We made this recommendation because sprint meetings are in depth, and project focused. We feel it's important to have all attendees engaged for the entire meeting. We wanted to reduce the waste associated with attending meetings where someone is not really contributing.
It did raise the energy in the project meetings. It also brought some concerns. Everyone felt this decreased the communication between application teams. That could lead to code duplication, missed opportunities for reuse, and silos of knowledge.
Standups are a mechanism to communicate to everyone any progress, and any issues. We recommended having all the application teams and the framework team attend the same daily standup. That’s about 30 people, but it can still be a short meeting, if everyone follows the rules. We're finding two main advantages to this process.
If someone is stuck, a larger group of peers hears about the issue. That increases the chance that someone will say "I can help." Issues get solved quicker.
The framework team is becoming much more efficient. Imagine someone on an application team makes a feature request of the framework team at one of these standups. One of three outcomes is possible:
It's working quite well so far.
What have you tried? How has it worked?
We often help our customers update their software development environments to more modern practices. Sometimes this involves languages and frameworks. Other times it involves updating a classic waterfall process with many of the practices from the lean and agile communities. At still other times, it involves an investment in the user experience.
This particular post is about introducing modern software development techniques, including scrum, into a large organization. I’ve been noticing how much the concerns and the values change as what we do becomes more and more visible to larger and larger parts of the organization.
Our first projects were small (for an enterprise organization). The stakeholders were key members of the user community, and maybe first or second level managers. Our scrum process worked quite well, and we had good success. The stakeholders understood the core values of our project: Let’s build useful software in a timely fashion. Everything else supported that goal. We wrote only enough documentation to support building software. Most (almost all) of the testing was automated. The automated tests were the test plan.
The projects were successful. Our stakeholders showed their management that our projects all achieved great ROI, and we showed a much better on time track record than their traditional process.
That was great, until it wasn’t.
See, we were successful. But, we weren’t ‘following the corporate standard.’ As more and more new managers and new departments were involved, the priorities changed. That core value of “building useful software in a timely fashion” was replaced by a core value of “following the process”.
I have to admit, following a waterfall process is seductive. It looks like you can accurately predict the future. You spec things, you build schedules, you ‘follow the plan’. Unfortunately, software projects (and real life) are not like that. No plan survives contact with reality. But, until those fallacies are exposed, the illusion is powerful. “These other teams have a plan. They’re organized. They’re following the process.”
My theory on why this happens is this: the farther removed a manager in a large enterprise is from a project, the more their goal shifts from the project succeeding to not being held responsible. The enterprise process models, in my opinion, make that easier. The team no longer ‘owns’ the success or failure of the project. The team ‘owns’ adherence to the process.
I hope that data, such as that provided by Mike Cohn, can help. Will the same people swayed by “THE PROCESS” be swayed by “THE DATA”? (Read the comments on Mike’s post; he does discuss some of the industry’s concerns with the Standish reports.)
I hate writing just ‘downer’ posts, so I must close with some wisdom from Mary & Tom Poppendieck. Whenever we find ourselves in one of these situations, we look at Principle 7 from “Implementing Lean Software Development: From Concept to Cash”: Optimize the Whole. Process has its place, and some communication is valuable. However, too much process means not optimizing the whole, and not delivering great software. We keep reminding our stakeholders that the point isn’t to have a pretty plan. It’s to have working software sooner. Some get it. They hopefully keep pulling others along.
Because of some meetings, I missed our company wide standup for 3 straight days, finally attending again yesterday afternoon. I realized how important it is, and how much I missed it.
I don’t think anyone missed my contributions, but I missed hearing what everyone else was working on. A few statistics will explain. We have roughly 20 people participating in Standup. It’s almost always over in 10 minutes. That means you’re speaking for 30 seconds, and listening for 9:30.
That’s a pretty big listening to speaking ratio. And, that’s the important value in the company wide standup. The real value is listening to what everyone else is doing, and contributing help where you can.
Mary and Tom Poppendieck use the phrase “optimize the whole” to describe the importance of seeing the big picture, and making that better, even if it costs more locally. That’s a good way to look at standup: If you spend 30 minutes to save another project team hours, that helps optimize the whole. We’re all the better for it. Of course, one day, someone else will probably be able to save you hours by spending 30 minutes helping you.
That’s why we have continued these company wide standups, even though we have grown to the point where we now have anywhere from 5 to 10 projects running concurrently. That cross-project communication helps all our projects. It’s also one way to evaluate readiness for leadership roles: Participating across projects, and outside your own tasks shows that global view that is necessary for taking on more of a leadership role.
Chris Smith wrote a great post last week about the importance of other people’s hard work. In general, he’s right. Our industry has a sad reputation of developers throwing away other work so they can do it themselves. It’s common enough to have an acronym: NIH for “Not Invented Here”. We all need to do a better job of reusing work. It’s a more efficient way to create software.
That being said, I like to discuss this issue with customers in a slightly different way. Code, like a building, is an asset. If cared for, it’s a smart decision to invest in its continued upkeep and enhancement. Put on an addition; remodel the kitchen. That’s money well spent. It’s cheaper than the alternative, and less disruptive.
On the other hand, if this asset has been left to languish, its value diminishes. If left in disrepair long enough, it becomes wiser to tear it down and build anew. With buildings, it’s a tough call, but the effects of that neglect are visible.
With software assets, it’s tougher to see the signs of neglect. But they do exist. How many tests no longer pass, while no one cares because “that test isn’t relevant anymore”? Or worse, how much of the codebase doesn’t have tests? Did this code base ever get released? Has it been refactored over its life to reflect modern idioms and techniques?
It’s a difficult problem, and I’m certain we don’t get it right every time. Chris accurately represents one time when leveraging other work would have been better than the path chosen. Other times, we’ve wondered if we should have scrapped a bad codebase right at the start. No matter what we choose, this will always be a difficult decision.
Therefore, the best advice I can give is for those of us that manage existing code bases: Treat them like the investment they are, and then it’s easier to convince everyone that you should continue to invest in them, rather than tear them down.
In an earlier post, I discussed that you should decide whether to promote new ideas as either incremental changes or disruptive new innovations. If you describe an idea the wrong way, you won’t get the adoption you want, and many of your users will be confused and annoyed by it.
Distributed Version Control has walked into that problem.
I don’t think it was a conscious decision, nor am I blaming the DVCS community. However, the fact is that many people in the development community look at DVCS with more confusion and less acceptance than they should.
If you search for comparisons between a centralized VCS and DVCS, you’ll find one consistent answer: “DVCS doesn’t need a central server.” Unfortunately, that answer is correct, but unhelpful.
First of all, most successful uses of DVCS have a central server. If you’re going to ship code, you need to build it from some location. That’s probably a central VCS server. The claim that DVCS doesn’t need a central server makes users think it is really just an incremental change from other systems such as SVN.
That sets them up to fail when migrating from VCS to DVCS. In fact, Joel Spolsky has written a great article on “Subversion Re-Education” to address this problem. If you, or your team, are struggling with migrating from SVN to Hg, check out that article, and the entire tutorial.
To me, DVCS is a major disruptive change over traditional centralized VCS systems. Where traditional VCS systems manage changes to files, DVCS manages changes to entire repositories. When you commit changes to a traditional VCS server, you are committing the current version of each file into the central repository. When you push changes in DVCS, you are applying all the changesets in your local repository to a shared repository. When you issue an update command in a traditional VCS system, you are getting the current version of every file from the central repository. When you issue a pull command in a DVCS system, you are merging all changesets in a shared repository into your local repository.
The differences in the terms above may seem small, but they have a large impact on the mindset you should have regarding operations using DVCS. It’s a disruptive change: You have your own repository. Any sharing activity involves a merge operation between repositories.
This change helps you see why DVCS has so many proponents. It fits the way teams work now. You can make small changes, make small commits, and have many small incremental changes in your local repository. I do a commit after writing a test. I commit after making that test pass. I commit again after any refactorings. Rinse, repeat. I push (to share the code) after I finish a story card. Commit is at the granularity of a code change. Push is at the granularity of sharing with other team members. I can’t do that in a centralized VCS system, where “commit” and “share” are the same thing. That change is disruptive; it’s not incremental.
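That rhythm can be sketched in a few commands. This is a hypothetical session (the commit messages and branch names are mine, not from any real project), shown with git; Mercurial’s hg commit and hg push split the same way.

```shell
# Commit locally at the granularity of a code change;
# push only when a story card is done.
git commit -am "Add failing test for order-total calculation"  # after writing a test
git commit -am "Make order-total test pass"                    # after making it pass
git commit -am "Refactor: extract total calculation helper"    # after refactoring
# ...rinse, repeat...
git push origin main   # share the finished story with the team
```

Each commit is private until the push, so the small, frequent checkpoints cost the rest of the team nothing.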
By thinking about DVCS as operations between repositories, you likely won’t accidentally create multi-headed repositories or any of the other issues that sometimes plague DVCS systems. Once again, I won’t exhaustively list what can happen, but refer to Joel’s hgInit tutorial. If you are interested in migrating to DVCS, read that tutorial. It’s a great explanation of DVCS systems. If you’ve got experience in traditional VCS systems, remember this:
DVCS systems give each user their own repository. Sharing code with teammates means sharing repositories and history between repositories.
The Mercurial (or git) documentation is quite good, and will explain its commands in these terms. It will be clear when you work through docs or examples whether a command works with your local repo, or between repositories.
I’ve written before that I do like DVCS, and I think it is a big improvement over traditional centralized VCS systems. Anytime I’ve had trouble getting someone to appreciate DVCS, they’ve been viewing a DVCS system as an incremental improvement over traditional VCS systems. That mental model made them under-appreciate the advances DVCS offered. It also led them to use a traditional workflow that played to DVCS’s weaknesses rather than its strengths.
DVCS may not be right for every organization, or every team. But, viewing it as a traditional VCS without a central server will not lead to a successful adoption. Viewing it as a disruptive change that fits better with today’s team-oriented development processes will put you on a better path to success.
SRT Solutions is a very collaborative environment. We enable our employees to choose the best way to collaborate, with a minimum number of rules and processes. If you’re in the Ann Arbor area, stop by. We’d be happy to let you see what goes on inside. For those that can’t visit in person: you’d see some people pairing. You’d see some people pairing remotely with telecommuters. You’d see teams working together in larger groups. You’d also see people working on their own. Every one of those different styles is important.
We use all these different working styles because no one configuration is perfect. Groups engender dynamic, thought-provoking ideas. We get brainstorming, new ideas, great innovations, and critical discussions. Pairing provides that extra pair of eyes, the constant review, and some knowledge transfer. Working alone is also important. Sometimes you need to think through a problem or a technique before you’re ready to share it with others.
An important result from working alone is that you gain deeper knowledge and confidence from performing tasks on your own. I see examples of this from my experience helping high school students with advanced math classes. My rule in any given session is that I do one problem, and only one problem for a student to explain a technique. After that one problem, they do all the work. I look over their shoulder, and provide some guidance if they make mistakes or get stuck, but I don’t ever take control and solve another problem.
That’s easy in a teaching session, because my only goal is that the student becomes self-sufficient. I have no deadline for finishing the problems. Sure, the student needs to turn in his homework, but I have no deliverables. I can look over a student’s shoulder for as long as it takes, my only goal is that the student grows her skills.
Contrast that relationship with two developers pairing on a feature. Because both developers have a shared deliverable (the feature), there is a natural motivation to put the keyboard in the hands of the person who will finish fastest. While one goal of pairing is to grow skills among team members, that goal is balanced against the goal of delivering software in a timely fashion. In fact, that second goal (delivering the software) often wins. The pair partner who has less knowledge about the task at hand can slowly become a passive partner.
That’s not to say that it always goes this way. Some people have strong enough personalities to ensure that they learn, and become proficient and productive without their pair partner. But, the risk is real, and it’s one reason why we allow people to work on their own some of the time, and to collaborate some of the time. You’ll learn, and grow different skills in each environment.
We value both real collaboration, and real independent thought. That requires giving people the freedom to work collaboratively, the freedom to work independently, and the trust that everyone will recognize which is best at which time. It makes for a very dynamic workplace. People do collaborate much more often than they work alone. It also changes very quickly. Someone may work alone for a bit, and then as soon as they find they’d be more productive collaborating, they get up and find someone to join them.
I was catching up on some blog posts over the weekend, and two by Stephen Forte caught my eye. He wrote about his initial experience with Agile methodologies, and how he started breaking all the rules.
That made me start thinking about our own process, and how it has changed over the past several years. We don’t strictly follow any process brand, but we work hard to develop the best software we can as fast as we can. We work the way we do because of our team. We have the luxury of a wicked smart team, and a team that knows how much we respect them. I can distill our ongoing process changes down to three points:
Looking back at our company’s history, most of the practices we’ve adopted started with these goals. And they weren’t driven by either Dianne or me. They came from the people that work for us. Then, they quickly move and change based on how well they’re working. I’ll give three examples.
Early in our history, we did not have a daily standup. We didn’t need one. We were so small that we always knew what others were doing. Mike Woelmer, one of our first employees, suggested adding one. He recognized that as we added people, we no longer knew what others were doing. We started a daily company-wide standup. That helped, but it wasn’t perfect. We’ve always prided ourselves on being flexible. Some days people work at home. That’s fine, but participating in standup is important. We added a free conference line, and asked people telecommuting, or at client sites, to call in. For those in the office, the phone is now our token for standup. Anyone working remotely chimes in in the order they joined the call.
As we continued to grow, we added project specific standups. Customers are encouraged to join their specific standup. We still have the company wide standup for all of us to learn about other projects. That just provides extra motivation to keep standups short. It may sound like a lot of meetings, but two standup meetings, both under 10 minutes, really increases communication and doesn’t have much of an impact on other tasks.
Oh, and if you can’t make standup because you are in a meeting at that time, email your standup report.
Our space has 5 offices, so we need to split up among the rooms. We still want everyone to get to know each other, and work together. Chris Marinos suggested that we switch offices every two months. If you’ve come to visit, and wonder why people are not where you last saw them, that’s why. Every two months (or so), we switch offices and change who we share space with.
This is one of those ideas that changed quite a bit during the last few years. We found we wanted to keep project teams in the same room. This is critical as deadlines and deliverables near. We wanted to time moves so they didn’t disrupt anything for our customers. Many projects last more than two months. Some people may decide not to move on a single rotation. There were times when Dianne and I needed to be in the same room so that we could concentrate on company direction. Other times, we needed to spend more time with other developers in the company.
Sometimes we assign people randomly to rooms. Sometimes we assign people based on the project they are on. Sometimes we assign people because someone hasn’t shared a room with someone else yet or in quite a while. It’s always changing. It’s always getting better.
At this time, I can say that almost everyone in the company has spent at least two months sharing an office with everyone else in the company. The only exceptions are people that have been with the company a very short time.
Pairing is an emotional subject. Some people love it. Some people hate it. Some people refuse to try it. We experiment with it all the time.
We’ve let anyone on the team work in pairs whenever they want. Sometimes people need time to think individually. Personally, I find I work on my own when I’m working on books, articles, and technical presentations. I get in a flow and create content. Any interruption throws me off. I don’t think I could pair on those tasks. Other times, I really need to bounce ideas off of others. I’ll pair with someone. Maybe using a computer and a screen, or maybe at a whiteboard. Either way, the collaboration is important. We have found ways to respect that both solitary time and collaborative time are important. We need to support both.
We’ve let people experiment with how and when they want to pair. That’s given us several new ideas. I can’t name everyone, because so many ideas came from so many different people. Some people pair by hooking up multiple monitors, keyboards, and mice to one machine. They can sit across from each other, making eye contact while working together.
Other people have experimented by using a server as a shared development machine and remotely accessing the ‘pair computer’. This gives them the advantages of pairing, and the advantages of having another machine to drive for searching, finding answers, or looking at docs for the problem at hand.
Others have paired remotely, using shared desktop applications. This isn’t the same as being together, but is better than being isolated because you can’t make it to the office.
Yes. I’m not going to give specific examples, because I don’t want to discourage anyone from suggesting new ideas just because a previous idea didn’t work the way we’d hoped.
But I will say that we’ve discarded some ideas after giving them a small pilot and deciding it didn’t help as much as we’d hoped. Even ideas Dianne or I suggest are given the same evaluation. If it helps, we’ll do more of it. If it doesn’t help, it goes away, or changes.
We’re happy with the process we have now. We think we do a great job creating software for our customers. But we’re also convinced we can do better. Everyone on our team reads, explores, and learns constantly. They bring back the ideas they like, and we try them. We refine them, and we make them our own.
Today at lunch, new ideas came out, and we’ll be experimenting with them shortly. We continue to evolve, and continue to improve.
I can’t wait to hear the next great idea.
Distributed Version Control (DVCS) is generating lots of interest, buzz, and rhetoric. You’ve heard of Git, Mercurial (Hg), and Bzr. You’ve probably been told that all the cool kids are using them. Unfortunately, too often the proponents of these tools do a terrible job of explaining the benefits (at least in my opinion). This post explains why I like DVCS, from several perspectives. I’ve used Bazaar in the past, and have switched to Mercurial for the past several months.
If I get to choose the version control for a project, I will choose Mercurial. The rest of this post explains why.
DVCS is similar in many ways to classic VCS, where you have a single repository. You get code, you modify code, you check in code. All the DVCS tools I’ve used support the Update / Modify / Commit model used by other systems, like SVN or CVS.
The difference is that DVCS systems have multiple copies of the entire repository. Every team member that works with the code will have a local copy of the full repo. Instead of one copy of the full history, there are several. I think this supports new workflows that make everyone on the team more productive.
While many proponents say that DVCS means you don’t have a master server, I find that claim does a disservice to DVCS. You can have a master server, and in fact, all the deployments I’ve seen do. At some point, you’re going to build and deploy the software you’re creating. That build has to come from some single, known place. It’s the central server. You will replicate that central server, including all the history (see below). But that is a very different statement than saying there isn’t a central server.
DVCS adds two new commands you’ll need: Push and Pull. Pull brings changes from a central repository into your local repository. Push takes the changes in your local repository and commits them (merging if necessary) into the central repository.
Key Point: DVCS, in practice, doesn’t mean “there’s no central server”. It means “The central repository can be replicated as many times as you need.”
You can also use the Push and Pull commands to share changes among individual developers as well. I’ll discuss why I like that later.
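As a sketch of those two commands (remote names and paths here are hypothetical; shown with git, though hg pull and hg push behave the same way):

```shell
# Bring the central repository's new changesets into your local repo:
git pull origin main
# Apply your local changesets to the central repository (merging if needed):
git push origin main
# The same verbs work directly between two developers' repositories --
# here pulling straight from a teammate's local clone:
git pull ../alice/project feature-work
```

The key idea is that pull and push are symmetric operations between any two repositories; the “central” one is central only by team convention.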
Nerd Note: I know other commands exist as well. Notably, I’m ignoring clone. This isn’t meant to be a how-to or a tutorial, but rather a conceptual discussion around what I perceive as the benefits of DVCS. If you want a great tutorial on how DVCS works, I suggest this tutorial by our friends at EdgeCase.
DVCS has made it easier to experiment, re-start, try different designs, and not lose anything. As soon as I feel like I’m on the wrong road, I’ll check in my changes to my local repository. I won’t push those changes to the shared repository; I just want to make sure I don’t lose them. After that commit, I can delete the code that made me think I had taken a wrong turn, and start a new path. I have my changes safely in the source archive, and I haven’t negatively affected others. It makes it easier for me to try different approaches, and eventually settle on the best one.
DVCS also means I can do small spikes with other developers. I can start on a feature, ask someone to pair, and share code between our local repositories. I’ll make a change; my pair can pull those changes into her local repository. She’ll make changes, and I’ll pull those changes into my repo. Neither of us pushes to the central repository yet. One of us will do that when we’ve hit a delivery point (finishing a task, a story, or whatever the smallest unit of delivery into the main repository is).
In short, the best feature of a DVCS is that I can submit works-in-progress that aren’t ready to share with anyone, or with the whole team. Later, after committing and pushing the finished set of changes, all the interim steps are available to the team. They can see the missteps and trials as well as the finished version. It provides a more complete history of what happened.
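One way that wrong-turn checkpoint might look in practice (commit messages and branch names are hypothetical; shown with git, though the same pattern works in Mercurial):

```shell
# Save the dead end locally -- it never has to be pushed:
git commit -am "Checkpoint: event-driven design, abandoning for now"
# Start down a different path from the last good commit:
git checkout -b polling-design HEAD~1
# The abandoned attempt stays safely in local history for later reference.
```

Nothing about the dead end reaches the shared repository until you decide to push, so experimentation is free.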
DVCS, used properly, means the shared repository is broken less often. I want developers on a project to commit with the highest frequency possible. Those smaller changes enable faster integration. That leads to greater stability.
But concentrating on smaller changes makes it hard to investigate alternatives, try different designs, and make early-stage mistakes. People are too concerned with the fear of “breaking the build.” DVCS avoids that: commit, but don’t push. Push when you reach a slightly larger grained (but not too large) checkpoint that should integrate well.
Also, as I mentioned above, DVCS makes it trivial to create sub-teams to attack smaller problems. They work on their own branch, and when that is ready to integrate, the new feature gets pushed into the main branch.
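A minimal version of that sub-team flow, with made-up branch and file names:

```shell
# A repository with a baseline commit (contents are illustrative).
git init -q repo
git -C repo config user.name "Dev"
git -C repo config user.email "dev@example.com"
echo "baseline" > repo/app.txt
git -C repo add app.txt
git -C repo commit -q -m "baseline"

# Remember the main line's branch name, whatever it is.
base=$(git -C repo symbolic-ref --short HEAD)

# The sub-team works on its own branch...
git -C repo checkout -q -b spike-feature
echo "new feature" >> repo/app.txt
git -C repo commit -q -am "spike: new feature"

# ...and when it's ready to integrate, it merges back into the main line.
git -C repo checkout -q "$base"
git -C repo merge -q spike-feature
```

Until that final merge, the main branch never sees the spike’s half-finished states.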
Here, using DVCS enables people to experiment, and enables sub-teams to collaborate for short spikes.
DVCS takes some of the pressure off the IT infrastructure. Every developer has a reasonably up to date copy of the entire source archive.
If our nightly backup failed AND the server storing the main repo failed AND a recent backup couldn’t be found elsewhere, we still should be able to put together a pretty reasonable version of the last software. This is not to say that regular infrastructure isn’t important, but one more backup safeguard is a good thing.
In addition, using simple commands built into the DVCS, we can deliver the entire history of the project to our customers, in cases where we have work-for-hire agreements. It’s much easier than developing an out-of-band facility to move code and history from one organization to another.
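One way to do that handoff in git is the built-in `bundle` command, which packs the whole repository into a single file the customer can clone from (the repository contents below are illustrative):

```shell
# A project repository with some history (contents are illustrative).
git init -q project
git -C project config user.name "Dev"
git -C project config user.email "dev@example.com"
echo "v1" > project/src.txt
git -C project add src.txt
git -C project commit -q -m "first deliverable"
echo "v2" > project/src.txt
git -C project commit -q -am "second deliverable"

# Pack every branch and every commit into one portable file.
git -C project bundle create ../handoff.bundle HEAD --all

# The customer clones from the bundle and gets the full history.
git clone -q handoff.bundle customer-copy
```

The bundle file travels like any other attachment, and the clone on the customer’s side is a complete repository, history and all.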
Here, DVCS means I have created several low-friction backups, including copies and archives located offsite with the customer.
I’ve been extolling the virtues of DVCS for several paragraphs. However, it’s not for everyone. DVCS systems use a workflow that often requires merging changes. The more frequently every member of the team syncs with the main archive, the less friction you’ll encounter.
That means if you have a team that contains people who hide for weeks at a time while developing new features, a DVCS will expose more pain. In practice, I’ve had very few merge conflicts that were not resolved automatically, and correctly. However, our teams practice quick, short cycles. I think almost everyone tries to sync up with the main repository on a daily basis, at the longest. The longer between merges, the more painful it can get.
I’ve also found that large corporations are somewhat concerned about DVCS. The fact that every developer’s laptop contains the complete change history of some important project sends shivers down the corporate IT spine. What many small companies view as a great feature, these organizations view as a scary, irresponsible design decision.
As I said in my opening, every time I’ve used DVCS, I’ve had a central server. If your team can’t agree on a single location from which to build the deliverable code, you’re not ready for a DVCS. DVCS provides new ways to work, and enables people to experiment. If your team is going to use it to hide changes from each other or the central build mechanism, then it’s a bad idea.
I spent this post saying why I like DVCS in general. I do find the benefits, when properly applied, greatly outweigh the concerns. The single biggest benefit is that all these small changes team members have made get folded into the central repository. I view DVCS as a way to have a local repository AND a central repository AND have them work together. It’s not about ripping apart everything we like about classic Version Control Systems, it’s about supporting more workflows.
I’ve used Bzr, Mercurial, and Git. Of those three, Mercurial gave me the best experience. That’s what I’m using on every project that I can.
We’ve repeatedly said that the best developers learn different technologies and different platforms. I think this is especially true in the RIA space, where your customers may be using a different device and a different platform than you typically use. I believe that if you are building RIA applications, it will be important for you to have some knowledge of both Silverlight and Adobe Flex.
To that end, SRT Solutions is hosting Adobe Evangelist James Ward for a 3-day Flex Jam. Learn more, and sign up, here: http://www.srtsolutions.com/flex-training
If your primary focus is Flex, James will help you learn more about that platform in short order. If your primary platform is Silverlight, you’ll learn some of the idioms employed by the other major RIA platform. That knowledge will make you a better Silverlight developer: you’ll learn some techniques that you can use in Silverlight, even though they are more natural in Flex.
All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.