My video series “C# Puzzlers” is live on both Safari and InformIT.
C# Puzzlers is a series of small puzzles that make you think about some specific facet of the C# language. I put together a set of exercises covering several different facets of the language. I think it’s an engaging way to get your mind around some of the areas of the language that commonly cause confusion for developers.
I don’t often review business books here, but “Good Strategy, Bad Strategy: The Difference and Why It Matters”, by Richard Rumelt, is an exception.
For my audience, the best feature is that Rumelt is an engineer by training. He explains strategy from an engineering and scientific perspective. He begins his discussion by going over examples where a mission statement, or a set of buzzword-heavy press releases, substitutes for strategy. From that moment on, you know this is not the normal business book. He picks apart such empty direction with the precision of an engineer, or a Dilbert cartoon.
That first part of the book is a quick read, and quite humorous. Its value, though, is in preparing you for the second and third parts of the book, where the real substance lies.
The second section, “Sources of Power”, describes where you can find the leverage, or power, that enables a well-thought-out strategy to succeed. You’ll learn to recognize drivers that can be part of a great strategy. Most of all, you’ll learn the elements of any successful strategy. On almost every page, I found something that I could apply immediately, and I’ve already found myself thinking through his examples and applying them to our current environment.
The final section, “Thinking Like a Strategist”, provides insight into how to use your brain to evaluate strategies, and to create your own good strategies. In addition to learning to recognize a good strategy, you’ll learn how to create and execute good strategy. You’ll also learn how to benchmark the results of executing a strategy. Most importantly, you’ll gain insight into when and how to consider modifying, or even replacing, a strategy that is not giving the results you hoped for.
It’s not often that I read a business book and walk away thinking “I can use this immediately”. This book gave me that feeling several times. I was constantly finding new ideas, new techniques, and tools to help me run our business. It’s well worth the read.
Over the holidays, I read “Driving Technical Change”, by Terrence Ryan. The subtitle provides a great abstract for the book: “Why People on Your Team Don’t Act on Good Ideas, and How to Convince Them They Should.” This book will give you tools and techniques that will help you get your technical recommendations adopted at your workplace.
Mr. Ryan divides the book into three main sections: Skeptic Patterns, Techniques, and Strategies.
In Skeptic Patterns, you’ll learn how to categorize the negative responses you receive to new ideas. This is the first technique you’ll need: Recognize why people resist your idea. You’ll meet “the uninformed”, “the burned”, “the time crunched”, and others, including my favorite, “the irrational”.
In Techniques, Mr. Ryan describes many techniques that you can use to drive acceptance of a new idea, cross-referencing each technique with the kinds of skeptics that are most swayed by it. The good news is that you probably already know most of the techniques. Mr. Ryan helps by motivating you to actually use them, and by pointing out which skeptics are most receptive to each.
Finally, in Strategies, Mr. Ryan proposes a plan that will, over time, move the greatest number of influential skeptics to your side of the argument.
He doesn’t promise that driving technical change will be easy. But Mr. Ryan will arm you with techniques to succeed far more often than you probably do now. Even though I’ve followed much of his advice for many years as a consultant, I found myself thinking I need to keep this book handy whenever I want to get customers to change. Its thoughtful, calm techniques will keep you focused on the end goal, and give you the tools to get there. If you are responsible for driving change, or wish you could drive change, you need this book.
During my recent vacation, I read the final print version of Essential LINQ, by Charlie Calvert and Dinesh Kulkarni.
Normally, I try to answer the question, “Who should read this book?” That answer eluded me on this book, due to the thorough treatment Charlie and Dinesh give the subject. Essential LINQ is approachable by developers that have minimal experience with LINQ, and yet those developers that have been using LINQ since day one will learn something from this book.
Everyone will learn something from two chapters in this book. “The Essence of LINQ” describes the principles behind the LINQ libraries and language enhancements. By understanding the goals of LINQ, you’ll immediately gain insight that will make LINQ more approachable and more productive for you.
In chapter 16, toward the end of the book, Charlie and Dinesh discuss some patterns you’ll run into while developing with LINQ. You’ll learn how to use LINQ to SQL entities as part of a large multi-tier application. You’ll learn how to improve performance in LINQ applications. You’ll learn how to separate concerns in LINQ-based applications. LINQ is too new to be considered complete in terms of ‘best practices’, and thankfully neither Charlie nor Dinesh approaches the subject with that kind of arrogance. Instead, they offer their recommendations, and invite discussion on the subject.
Throughout the rest of the book, Charlie and Dinesh explain LINQ to Objects and LINQ to SQL from a usage standpoint, and from an internal design standpoint. In other chapters, LINQ to XML is discussed. The authors provide examples of transforming data between XML and relational storage models as well. In every section, they tie together those features with the concepts discussed in “The Essence of LINQ”. That continuity helps to reinforce your understanding of LINQ through its design concepts.
After reading this book, you’ll be able to leverage LINQ in all your regular coding tasks. You’ll have a much better understanding of how LINQ works, and when you’ll encounter subtle differences in how different LINQ providers may behave.
It seems you can’t discuss LINQ without at least wading into the controversy of LINQ to SQL vs. Entity Framework. This book wades there as well (it was finished about the time the first EF release was made). More time is spent on LINQ to SQL, as it is more approachable from an internal design perspective. However, the chapters that cover EF build on that knowledge to help you understand how the two database LINQ technologies are more complementary than adversarial. In addition, they touch on when you should consider one over the other in your application.
This section is the least complete, but the most useful for looking into the future of the LINQ universe. It’s too easy to view LINQ as LINQ to Objects, LINQ to SQL, and LINQ to XML, and nothing more. This chapter gives you a brief view of some of the other providers people have created for other types of data stores. Looking at some of those providers (especially the IQToolkit) will give you a greater appreciation for how LINQ can be used with a wider variety of data sources than you ever imagined.
If you are interested in being more productive with LINQ, you should read this book. You’ll probably thumb through it again and again as you search for better ways to solve different problems.
I’m reviewing User Stories Applied and Agile Estimating and Planning together because I read both of them at the same time, and some of the content is intertwined in my own mind.
Both books provide a wealth of practical advice on succeeding with an agile process. Throughout both books, the depth of Mike’s coaching experience comes through. With every element of advice, Mike includes a discussion about why each recommendation is what it is. He also includes a lot of “you may be thinking of …” comments with reasons why straying from his advice may cost you and hurt your project.
Throughout both books I found myself almost thinking Mike was inside my head, helping me improve. When an author does that, he’s clearly succeeding.
The two books are aimed at different, but overlapping audiences. Maybe because I’m in both audiences, I found both useful.
User Stories Applied goes into more depth about the process of creating user stories that will help drive your project to success. You’ll find advice that you should share with your customers. It will help them learn what makes a good user story, and how to express their needs and feature requests in the form of user stories. In addition, you and your team will gain a better understanding of creating stories that show value and are neither too big nor too small to provide real value. You’ll learn how to break epics into measurable deliverables for your team. Those skills will help you succeed with agile. Software project success starts with getting good input from stakeholders and customers, regardless of which process you choose. Mike’s guidance in User Stories Applied will help your team, and your customers, get this crucial part correct.
Agile Estimating and Planning provides advice from a different angle. This book explains agile techniques from all angles: release planning, iteration planning, daily planning, and modifications when reality differs from estimates (you know, like it always does). The best feature of this book is how Mike seems to anticipate counterarguments from those in any organization that would be opposed to adopting agile. He counters those arguments with clear logic and solid explanations of why following his advice will achieve better results. You’ll find advice for developers, project leads, customers, and customer proxies. Whichever role you find yourself in, read the entire book. Knowing how other team members should be approaching the process will help build a functioning team across roles.
The key theme running through this book is that agile plans must be constantly revisited, because reality changes constantly. Agile planning is not something you do at the beginning of a project; it’s a series of ongoing activities throughout the project. That point is stressed repeatedly, and it’s worth the repetition.
If you are looking into agile, or you’ve tried an agile process and haven’t had the success you’d hoped for, you must read these books. They will help.
Wow. I went to Mary Poppendieck’s CodeMash Precompiler talk on Value Stream Mapping. That was great. It’s rare that I take 5 pages of notes during one conference session. To be fair, the precompiler sessions are three hours, but still, that’s much more value than I usually get out of a conference session.
I’ve condensed them, but here are my notes:
Value Stream Mapping comes from the Lean Business concepts. In most businesses, company value is measured using a balance sheet. Lean businesses think more about cash flow than balance sheet. That’s a major shift in thinking, and affects how you see everything.
Danger: major simplifications coming. I don’t want every MBA correcting my accounting analogy.
For example, your personal balance sheet is probably quite a bit better than your cash flow statement. Your personal balance sheet will include assets like your home, cars, the future value of your education (which allows you to make more money), cash value of investments, and so on. Of course, it will also include the liabilities against any of those assets: mortgage liability, car loans, student loans, etc.
However, as you go through your daily life, you’re much more likely to think about cash flow: do you have enough cash to make the payments on those liabilities, and pay for other items you want to buy.
The problem (for you as an individual) is that many of the assets on your balance sheet are not fungible. For example, you cannot immediately exchange your home for cash. You can’t use your house to buy groceries.
The analogy works for software as well: partially done work has value (it’s more than not-started software), but that value cannot be extracted until it works its way through the stream to a customer that desires it. Any value created but not unleashed represents waste.
Mary defined waste as “anything that depletes resources without adding customer value”. From a cash flow perspective, time wasted is a big problem. The longer it takes for value to move from concept to cash, the worse your organization is.
Look at that definition again. “Value” must be seen through the eyes of the customer. It’s not enough for you, internally, to see value. Customers must see the value, because it is customers that will pay for that value.
From this basis, Mary went on to discuss some of the most common forms of waste in software projects.
The biggest single waste is creating code that doesn’t get used.
According to a Standish Group study on custom software, 64% of specified features are rarely or never used. Only 20% are always or often used.
Therefore, the biggest possible savings in software projects comes from writing less code. The trick, of course, is to not write the code that doesn’t add value.
I asked how those statistics can be applied to product development. Mary explained that extra requests in product development usually come from Lazy Marketing: people request features for dubious reasons: competitors have it, it looks cool, we need to stack up. Instead, marketing should be asked how much revenue those new features would actually produce. To be fair, asking marketing to predict revenue generated by future features is as difficult as asking developers to predict the cost of those features. But it is critically important to try. To address these conditions, developers need to be actively engaged in marketing. And they need to push for more frequent deliveries of software. Delivery will change a customer’s perception of what’s important. More frequent delivery will lead to more accurate identification of the most important issues.
The second most important waste is work in progress.
The wastes noted above all represent queues, and queues represent waste. They contain work in progress: items that hold value that cannot be delivered. More importantly, if the early activities can proceed faster than the later ones, you quickly get into a state where your real expected delivery date for new features is “never”. If you haven’t studied queuing theory, take a brief look here, and ponder it the next time you’re in a traffic jam.
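To make the queuing effect concrete, here is a sketch using the textbook M/M/1 model. This is my own illustration, not something from Mary’s talk, and real development queues are messier than M/M/1; the point is simply that as utilization approaches 100%, time in the system explodes.

```python
# Sketch of the M/M/1 queuing result: the average time an item spends
# in the system (waiting plus service) is W = 1 / (mu - lambda), where
# mu is the service rate and lambda is the arrival rate.
# As utilization (lambda / mu) approaches 1, wait time explodes.

def time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Average time a work item spends in an M/M/1 queue, in the same
    time units as the rates (e.g. weeks if rates are items per week)."""
    if arrival_rate >= service_rate:
        return float("inf")  # queue grows without bound: delivery date is "never"
    return 1.0 / (service_rate - arrival_rate)

# Service rate: 10 work items per week. Watch what happens as arrivals climb.
for arrival_rate in (5, 8, 9, 9.9):
    utilization = arrival_rate / 10
    weeks = time_in_system(arrival_rate, 10)
    print(f"utilization {utilization:.0%}: {weeks:.2f} weeks in system")
```

Notice that going from 50% to 99% utilization multiplies the time in the system fiftyfold, which is exactly the traffic-jam effect.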
From this, a Value Stream Map describes the flow of activities that starts with a customer need and ends when that need is satisfied.
Look at the emphasis there: It starts with the customer need and it ends when the customer need is satisfied.
As developers, we too often sub-optimize our portion of the cycle: from feature request to build artifacts, or something similar. When you work with value stream maps, do not do that. You must work end to end in order to identify all the waste. If you don’t know how long a particular activity takes, you should make your best guess. As you present your results, others in the organization will correct any false assumptions and you’ll have a better map. However, if you don’t add a guess, those parts of the process will remain absent, you won’t see any waste there, and that means you won’t fix it. Later, as Mary reviewed our value stream maps, I found it very interesting that she spent much time on processes at either end of what was drawn. Sure enough, activities occurred before we thought the work started; other activities occurred after we thought we were done. Those activities also had waste in them.
After you map the activities, you assign average times to each activity, and to each waiting state. The goal is to minimize the wasted time as a request rolls through the system (queues again). Some of the examples showed significant opportunities for improvement: activities with 3 weeks of useful value-added tasks actually took more than a year. That sounds silly, and yet the map of the activities (without the times shown) looked completely reasonable. The act of writing down the map pointed to incredible waste that could be easily attacked, with incredible benefit.
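The arithmetic behind that example is simple enough to sketch. Lean practitioners call this ratio process cycle efficiency; the function name below is mine, and the 3-weeks-in-a-year figures are the ones from the session.

```python
# Process cycle efficiency: value-added time divided by total elapsed
# (lead) time. Everything outside the value-added time is queue time.

def process_cycle_efficiency(value_added_weeks: float, lead_time_weeks: float) -> float:
    """Fraction of the lead time actually spent adding value."""
    return value_added_weeks / lead_time_weeks

# Roughly 3 weeks of useful work inside a lead time of about a year:
pce = process_cycle_efficiency(3, 52)
print(f"Cycle efficiency: {pce:.1%}")
```

With these numbers, only about one week in seventeen adds value; the other 94% of the lead time is waiting, which is exactly the waste the map makes visible.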
I’m assuming that most of you reading this blog are engaged in software development activities. That means you may want to create three different value stream maps: one for product releases, one for feature requests, and one for emergency fixes. Mary said that you may find that the emergency fix is the least valuable: by the very nature of an emergency, you’ll find that process is already streamlined.
Mary closed with a very important takeaway for value stream mapping activities: The purpose of creating a value stream map is to identify the most significant source of waste in your process so that you can fix it. It is not to show how great you are. The big picture goal is to create “a reliable repeatable cycle time from need until that need is satisfied.”
If the rest of CodeMash continues to live up to this, it will be time very well spent.
I noticed that More Effective C# is available in a Kindle edition.
I don’t have a Kindle, and I’m curious what people think of the experience of reading a programming book in a Kindle edition. (Historically, the knock against the Kindle was that it was not a pleasant experience to read code examples on it.)
Has that been addressed?
Is the experience good?
I don’t think Working Effectively with Legacy Code gets the respect and readership that it should. I believe that’s because most of us have a working definition of legacy code that implies something we want to avoid: we want to work on the cool new stuff, not the old legacy stuff. It makes us conjure images of C, or FORTRAN, or worse, COBOL. Or maybe something newer, but still mature enough that you want to move on.
That’s not the definition Michael uses in his book. Michael defines legacy code as “Code without tests”. Based on that definition, do you work on legacy code? If you’re honest, you’ll say yes. Now, ask yourself if you want better techniques to work with code that doesn’t have tests.
If so, this is for you. You’ll learn several specific techniques that you can employ to take this code, make the absolute minimum number of modifications to get the code testable, and then you’ll feel safer applying your usual refactoring techniques.
I like the way the book is organized, with lengthy chapter titles that point to specific large-scale code problems you’ll often find in code that doesn’t have tests. Example titles are “My Application has no Structure”, or “I can’t get this Class into a Test Harness”. Do those sound like problems you encounter? In these and other chapters, Michael identifies several common practices that lead to untestable code: dependencies on other system resources, unavailability of public interfaces to support testing, lack of interfaces for mocking, and so on. Each chapter title is more or less a description of the current problem, and the chapter content is a set of techniques that will enable you to move that code into a more testable design. Once you can apply tests, you can add those tests and then go about your changes.
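As a taste of one of those techniques, here is a minimal sketch of a seam: a place where you can substitute one behavior for another without editing the code you’re testing. It’s in Python rather than the book’s C++/Java/C#, and all the class and method names are invented for illustration.

```python
# A minimal "seam" example. Imagine BillingReport originally constructed
# its database connection internally, making it untestable. Injecting the
# collaborator through the constructor (one dependency-breaking move)
# lets a test pass in a fake instead. All names here are invented.

class FakeDatabase:
    """Test double standing in for a real database connection."""
    def fetch_invoices(self):
        return [100.0, 250.0]

class BillingReport:
    def __init__(self, database):
        # Seam: the dependency is injected instead of hard-coded,
        # so tests never need to touch a real database.
        self._database = database

    def total(self) -> float:
        return sum(self._database.fetch_invoices())

# Under test, substitute the fake at the seam:
report = BillingReport(FakeDatabase())
print(report.total())  # 350.0
```

The change to the production class is deliberately tiny; the book’s advice is to make the absolute minimum modification needed to get the code under test before attempting any larger refactoring.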
Other chapters show how to write tests that help you understand the current behavior. While this can seem silly, it does help ensure you don’t make mistakes as you move the code forward.
Finally, the last section of the book is a set of techniques that help break dependencies between different parts of a legacy system so that it is easier to inject those tests.
I haven’t said anything about the languages used in the book for examples. That’s because there are several: C++, Java, and C# all appear. One section that is specific to moving from procedural to OO techniques includes C. However, if you use a different language, don’t let that turn you off. The techniques are language agnostic, and that is proven by mixing the samples in different sections with different languages.
This is one of those books that will always be handy, and it will be one of the resources I turn to often when I inherit that set of code that just doesn’t have any tests. If you find yourself staring at blocks of undecipherable code, you should do the same.
I was recently notified that the 3rd edition of the C# Programming Language is out.
This version is new in several ways. Obviously, it includes a description of all the new C# 3.0 language features.
In addition, a number of people were invited to provide annotations on the language specification. It’s an incredible group of smart people:
Brad Abrams and Krzysztof Cwalina (of Framework Design Guidelines fame)
Joseph Albahari (of C# in a nutshell fame)
Don Box (of Don Box fame)
Jesse Liberty (of Programming C# fame)
Eric Lippert (member of the C# team, with a fantastic blog)
Fritz Onion (of Essential ASP.NET fame)
Vladimir Reshetnikov (SDET on the C# team)
Chris Sells (of Chris Sells fame)
Oh, and they let me add my annotations as well.
Today is the official release date for More Effective C#.
Writing a book may seem to be a solitary activity, but nothing could be further from the truth. I have been lucky enough to work with fantastic editors, technical editors, and community members as I have put this together. If you read the acknowledgements, you’ll see what I mean.
OK, this is why my blogging activity has slowed to a crawl, or downright stopped lately.
I've been working quite a bit on my next book: More Effective C#. It's getting closer, and it's now available on Rough Cuts. Rough Cuts is a Safari Books Online service that provides you with pre-publication first access to upcoming books. Chapters 1 & 2 are up right now.
You can see more about the Rough Cuts program here: http://safari.informit.com/roughcuts
More Effective C# is here: http://safari.informit.com/9780321580481
Back to more editing.
A while back, I read Mary and Tom Poppendieck's "Implementing Lean Software Development: From Concept to Cash". That tag line, "From Concept to Cash", is the thesis for the book: By minimizing the time between receiving a customer request and delivering value (for cash), a company can obtain a long term competitive advantage.
This book is primarily aimed at management (where the Poppendiecks’ first book was aimed at the development team). If your biggest challenge is convincing your management to go lean, give them this book. It contains a number of case studies, from the Boeing 777 program to the Polaris missile program to PatientKeeper (a mid-size software development company), to open source projects. It uses those case studies to justify the claims that lean techniques will bring success.
Of course, as a developer, you may need to convince management that these techniques will pay off. If that's you, you should read this book. You'll be able to convince management why techniques like Test Driven Development, Continuous Integration, and Short Iterations will help the bottom line. That will get traction when a strictly technical argument won't.
A little more than a month ago, at CodeMash, I had the pleasure of spending quite a bit of time with Mary & Tom Poppendieck discussing software development, agile methods, the business value of software, and the general state of the industry. I came away very impressed by the breadth and depth of their knowledge, and their willingness to share it. Our organization can learn a lot from their strategy of software development. Dianne and I were impressed enough that we got a copy of each of their books. I started with Lean Software Development: An Agile Toolkit.
This is a fantastic book if you are trying to convince a resistant organization to go agile. Mary and Tom discuss 22 tools centered around 7 themes of software development agility. They show the business value of each of these tools. Sometimes it is shorter schedules, sometimes it is higher quality, still other times it is more profit for your business.
In every case, these tools give you the vocabulary, themes, and arguments to make to the business people. For example, when they discuss the tradeoffs between features and time, they build a profit and loss (P & L) model for the project. (Chapter 4) While developers may feel that a new feature request is a bad idea, and marketing may demand it because a major customer wants it, the P & L model will move that discussion forward: The added development time will mean a delay in getting to the market. That delay will mean lost sales. The model even predicts (and yes, I know predictions aren't 100% accurate) how much money will be lost by the delay. Now, the feature vs. time discussion can proceed in a logical manner. And, note that it's not an either / or discussion, if you're doing agile right. If you have delayed decisions to the last responsible moment (Chapter 3) you may find any number of lower priority feature requests that have not been started. These can be dropped in favor of the new requirement.
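As an illustration of how such a P & L model changes the conversation, here is a toy version. The numbers and names are all my own invention, not from the book; the point is that once delay has a price, it can be compared directly against a feature’s expected value.

```python
# Back-of-the-envelope version of the feature-vs-time tradeoff that a
# project P & L model makes explicit. All figures are invented for
# illustration; a real model would use the project's own numbers.

def cost_of_delay(delay_months: float, monthly_profit: float) -> float:
    """Profit lost by shipping late: months of slip times profit per month."""
    return delay_months * monthly_profit

feature_value = 120_000   # marketing's revenue estimate for the new feature
delay_cost = cost_of_delay(delay_months=2, monthly_profit=80_000)

print(f"Delay cost: ${delay_cost:,.0f} vs feature value: ${feature_value:,.0f}")
# With these numbers the two-month slip costs more than the feature is
# expected to earn, so the logical move is to cut or defer something else.
```

Neither estimate will be exact, but even rough numbers move the discussion from opinions to a comparison both developers and marketing can argue about productively.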
As a small software consulting company, I found the last item, on agile contracts, most interesting. The Poppendiecks discuss how to build a mutually beneficial contract around shared goals for the customer and the vendor. Whichever side of the contract you sign, that discussion is worth reading: they found that a more agile framework supported by an agile contract would enable a supplier to be more profitable while holding down the costs for the customer. Building a contract with some uncertainty is always a challenge, but it’s worth that perceived risk.
Lean Software Development is not a cookbook. You won’t find directives saying you must have pair programming, standup meetings, or test-driven development (although all those may be part of a winning agile strategy). Instead, you’ll find recommendations on seeing and eliminating waste, providing feedback loops throughout your value stream (customers, business managers, developers, testers), making decisions at the last responsible moment yet delivering interim releases as early and often as possible, and many others. Most importantly, as you try to convince your organization to adopt these methods, you’ll find the examples, case studies, and other evidence you’ll need to convince your stakeholders.
In closing, to recommend this book, let me just say that Dianne and I are buying copies for all our consultants, and for customers struggling with adopting a winning agile strategy.
I got yet another surprise from Addison Wesley in the mail yesterday. This one contained a copy of Effective C# in Hungarian:
If you prefer to read in Hungarian, you can purchase the book here:
I received a care package from Addison-Wesley the other day: My reviewer copy of the .NET Framework Standard Library Annotated Reference, Vol 2 (SLAR II).
This is a reference book, and as such, it is very dry reading. In fact, much of the text is verbatim from the ECMA standard for the pertinent portions of the .NET framework library (in this case, Networking, Reflection, and XML processing). The real value in this book is in the annotations. There are many annotations on the classes, members, and namespaces documented here. These comments range from historical notes, reasons for inclusion (or the exclusion of other possible features), and design goals, to the occasional apology where one of the framework designers felt they should have done better.
To close, I’ll quote myself from the inside cover: “The .NET Framework Standard Library Annotated Reference is the one reference you really need when you use the .NET Framework library. The annotations provide clear insight into the design choices that the library development team made when building the library. Those explanations will guide you to the best design choices for your own application code.”
All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.