Bill Blogs in C# -- me

Created: 12/16/2010 4:01:09 AM

If you are a developer, you need to keep your skills up to date. If you’re working in the .NET space, our local Microsoft office is going to help. They’ve just announced Windows Client Development bootcamps, and another tour for the Windows Azure Development bootcamps.

The Windows Azure Bootcamp contains updated content to reflect the 1.3 release of Windows Azure. If you attended the previous bootcamps, there is a wealth of new material to learn. If you’re new to Azure, the bootcamp will give you a head start on growing the skills you need to build applications that run on the Windows Azure platform. See for a list of dates and locations. I’m honored to be part of the group of people delivering the Windows Azure Bootcamp. I’ll be helping at a few of these events, primarily in Michigan and Illinois.

The Windows Client bootcamp concentrates on development for the Windows client platform. That includes Silverlight 4, Windows 7 features, and IE 9 as a development platform. The Windows bootcamp is a one-day event, concentrating on how you can leverage features in the Windows platform to create more compelling applications. See for cities and dates.

Created: 9/20/2010 7:27:53 PM

Jon Skeet wrote a post last week discussing the pitfalls we encounter when any of us try to create prescriptive guidance.  The hardest part of writing the Effective C# books is trying to write actionable advice for developers I’ve never met working on projects I’ve never seen.

There are two extremes I tried to walk between.  At one extreme, truly universal advice is so universal as to be almost trivial. (Think of your average beginner C# interview questions.) At the other extreme, corner cases aren’t that applicable in everyday development. In my own work, I’ve tried to handle this by constructing guidance to change behavior I see that causes problems later.

Let’s face it, none of us can predict the future. (If I could accurately predict the future, I’d be playing the stock markets much more aggressively.) Most of my experience in software tells me that I want to make the mistakes now that cost me the least in the future. That means deferring expensive work now that has little long-term benefit. It also means doing smaller tasks now that might pay off in large ways later.

I should also say that some of the guidance in both my books is aimed at developers that create components for other developers to use. They may be developers in your organization, or a customer organization. However, one important point is that these libraries need to have a release cadence different than the applications that use them. As Jon points out in his post, that puts some constraints on your design decisions that don’t exist if you can simply recompile all code (as you can in a single application scenario). That one fact can change the guidance significantly.

When writing the Effective books, I tried to modulate the advice based on how often it would apply, and when it might not. You can even tell this in the titles.  Some titles are meant to imply an almost universal guideline (Always Provide ToString()). Some are a bit weaker (Prefer Making Your Types Serializable).  Some are downright neutral (Consider Weak References for Large Objects). On the negative side, the titles also modulate the advice. Some are almost absolute (Never Overload Extension Methods); others are less universal (Avoid Conversion Operators in Your APIs).  And I tried to explain in the text when the guidance wouldn’t apply. I’m sure I didn’t manage to list all of the exceptions, but I hope I gave enough description that readers can make intelligent decisions about when to follow my guidelines, and when they don’t apply.

Of course, in the end, Jon is correct: It’s very difficult to give advice without more context.

Created: 4/1/2010 11:52:28 AM

As part of the ongoing goal of language parity, the C# team has quietly started a program to change some of the keywords and reserved tokens to provide better typing hand parity.

You see, Visual Basic has better parity in its most common keywords:

Sub:  2 keys with the left hand, 1 with the right
Begin: 3 keys with the left hand, 2 with the right
End: 2 keys with the left hand, 1 with the right
Dim: 1 key with the left hand, 2 with the right

C# is overwhelmingly right-handed:

{ and } are both right hand characters.
( ) right hand.
< > right hand.
the most egregious offender of them all: ; is a right hand character.

Well, the C# team is addressing this. But of course, this was a bit of a challenge, and the team rejected some initial proposals.

Alpha and Omega were considered as replacements for { and } respectively. They did achieve better handedness parity, but they were rejected by the community because of wordiness.  After all, C# is the language where the following is legal, and in some circles considered great form:

var label = (value == 1) ? "one" : (value == 2) ? "two" : (value == 3) ? "three" : (value == 4) ? "four" : "unknown";

Obviously, this community would not take to something as verbose as Alpha and Omega as keywords. One-letter replacements were considered.  “A” and “O” were rejected because “O” was too close in style to “0”. (See, you’re not sure if that zero is just another O, are you?)

That’s when the team settled on the “A” and “Z” characters.  Both are typed with the left hand, and they preserve that single-letter terseness embodied today in { and }.

Of course, all this hard work would not address the principal concerns if the ; weren’t addressed. That was the hardest of all. Most of the symbol characters are typed with the right hand, and the left-hand candidates are already used in the C# language: @, !, # and ~. (Hey, it’s not like the language designers weren’t trying to avoid this handedness problem.)

Finally, after much debate, the team settled on $ as the end of statement character.

That makes the typical HelloWorld program look like this:


using System$
namespace LeftHanded
    public static class Program
        public static void Main()
            Console.WriteLine(“Hello World”)$

Admittedly, this takes some getting used to. But, in the interest of fairness to lefties everywhere, it’s worth the sacrifice.

Now, before you write comments, look carefully at your calendar.

Created: 1/29/2010 7:26:23 PM

The January Visual Studio Magazine marks the first time the C# Corner is written by Patrick Steele. I’ve bowed out after a long run with the magazine and its predecessors.

The most important part is that the C# column is in great hands. Patrick is excellent at explaining concepts, and he’s going to bring a wealth of new ideas and concepts to the magazine. I feel much better walking away from the column knowing it is in such good hands.

It was hard to walk away after so much time with VSM and its predecessors. But it was time. I’ve written on so many C# topics that I was having trouble coming up with ideas that felt new. I felt like I was covering the same ground over and over.

That got me thinking about how long it had been, and what a long, strange trip it has been.

I started as the original C++ Fundamentals columnist for Visual C++ Developers Journal in the 1990s. That was a new magazine published by TPD, for the MFC / C++ developer. It was covering techniques to bridge the divide between 16 bit and 32 bit applications. There was this amazing new OS code-named ‘Chicago’ on the horizon.

A few years went by. I kept covering more topics related to C++ and Windows development. More MFC, ATL, language enhancements, and how the Microsoft C++ compiler tracked (or didn’t track) the C++ standard.

At some point in this period, Visual C++ Developers Journal was bought by Fawcette. TPD stopped being involved.

The turn of the century brought more changes.  This amazing new .NET platform and the C# language showed up. I started writing about C# and .NET fundamentals, instead of C++. The change was gradual, though: in the beginning, I was writing 2 C++ columns for every 1 C# column. Over time that kept changing.

Next came some initiatives to capture more of the web audience. Several of the columnists started writing a column online (at the rate of one a week). These weren’t all tech columns. Some were more opinion, or ‘tip’ columns. That felt like quite a grind. I was constantly under deadline pressure to come up with a new idea every week. It soon got harder: the readers liked the ‘how to’ columns most. I was asked to replace the ‘tips’ and ‘opinion’ entries with more regular columns.

I took a break and wrote a book (Effective C#, the first edition).

I wrote for a while, and then there were more changes.  Visual C++ Developers Journal merged with Visual Basic Programmers Journal to become Visual Studio Magazine. This was a challenging time to write for this magazine. The audiences for VCDJ and VBPJ were very different. And, most importantly, they were both afraid of losing content to the ‘other’ audience. C# aficionados were concerned that they’d lose coverage to the larger VB market. VB developers felt the same fear of losing coverage to the newer, and perceived to be cooler C#. That was a tough era. The editorial staff did a tremendous job to navigate a very tough set of market perceptions.

I stayed on break and wrote a second book (More Effective C#).

Then, I was approached to come back and write the C# Corner column for Visual Studio Magazine. Having finished the book, it was time to keep writing again. It was fun for a while. I was and still am impressed by the energy that 1105 media is bringing to the publication.  I had a blast over the past two years writing for a reenergized Visual Studio Magazine.

Then, while still trying to write the column, I updated Effective C#, covering C# 4.0, and other recent enhancements to the language.

I was running out of ideas for the column. Visual Studio Magazine deserves better content. That’s why I worked with Michael Desmond and the editorial team at VSM to turn over the column to Patrick. I’m glad it’s in good hands.

So what’s next?

I’m now writing content for the C# Developer Center. The cadence will be roughly once a month, and I’ll be writing on current and upcoming language features in the C# language.

Created: 1/20/2010 8:52:06 PM

I have enjoyed reading other posts about how much people enjoyed CodeMash. It is my favorite yearly conference. The people are brilliant, and energized about technology. It’s also one of very few events that embrace people with very different views.

I use CodeMash to learn about technologies I don’t get much time to use during my regular work.  This year, that meant Ruby and Silverlight.

First, Ruby.  I took a crazy route.  I went to the pre-compiler with Joe O’Brien and Jim Weirich on the Ruby Koans. But I did it with a twist:  instead of the reference Ruby implementation, I used IronRuby. After some initial hiccups, that worked quite well.  The test harness used by the Ruby Koans makes use of an END block in the test code. An END block (in Ruby) is code that is executed as the interpreter exits. It’s rather common as a test harness: load all the tests, and have the END block reflect on all the tests and execute them. It took me a while to get over that hump, but once I did, I learned quite a bit about Ruby during the rest of the morning.

During the main conference, I went to some other .NET and Ruby talks (see My CodeMash Schedule). I’m impressed with the integration story for dynamic languages and the rest of the .NET stack. I am not a Ruby expert by any means, but I’ve whetted my appetite for more Ruby learning. And now, I feel like I can be somewhat productive in the language.

I went to both of Jesse Liberty’s Silverlight talks. Going into a .NET topic may not seem like stretching my horizons, but I don’t have a lot of background in Silverlight (Mike Woelmer knows quite a bit more than I do).

Jesse gave two talks: one was very basic, and the other a bit more advanced. Jesse is a great speaker; he’s very engaging, and he provides a great amount of information in a small amount of time. He provided a great foundation for someone starting in Silverlight, with or without a .NET development background. I’m much more equipped to dive into the richness that is Silverlight. Jesse helped me get over that initial hurdle of working in a new environment.

It’s the rest of CodeMash that makes the conference special: the time outside of sessions was filled with great conversations about all kinds of technologies. It was such a mixing of different views that the Java Posse even invited Chris Smith and me to be on their CodeMash panel. It was CodeMash, so no one came to blows.

Of course, after the tech talk was over, I spent the weekend at the Kalahari with the family in the water park.

Created: 1/11/2010 7:13:39 PM

A new year means it’s time for CodeMash.  Tomorrow I begin the annual geek pilgrimage into the water park for tech knowledge.  I’m amazed at how much the conference has grown, and the strength of the session list. It was difficult to decide where to spend time, but here’s my current plan:

We’ve got great keynoters this year, and I’ll be attending all of those. I am attending the precompiler again this year. Here’s the current plan:

  • Wed AM:  The Ruby Koans (Joe O’Brien and Jim Weirich)
  • Wed PM:  Open. I’m not sure what yet.


  • Thursday:

    • 9:45:  Silverlight from Zero (Jesse Liberty)
    • 11:00: User Stories: Closing the Agile Loop (Barry Hawkins)
    • 1:45:  Ruby and Rails for the .NET Developer (Matt Yoho)
    • 3:35: Funky Java, Objective Scala (Dick Wall)
    • 4:45: Engineering vs. Design – How to Work Together (Joe Nuxoll)
  • Friday:

    • 9:30: Going Dynamic in C# (Bill Wagner). Of course, I have to be there.  But, I hate pimping my own talk, so if I weren’t speaking at this slot, I’d pick:
    • 9:30 (alternate) Being an Evil Genius with F# and .NET (Chris Smith)
    • 10:45: Restful Interfaces to 3rd Party Websites with Iron Python (Kevin Dahlhausen) (I’m picking this because I’ve seen Jim Holmes’ Leadership talk, and I need to learn Python better)
    • 1:45: Iron Python with ASP.NET (Chris Sutton)
    • 3:35: What’s new in Silverlight (Jesse Liberty)


    Some of this will likely change, but I think you can see the goals:  I want to learn more about dynamic languages in general. I also want to take the opportunity to learn as much Silverlight as I can. Jesse Liberty is a great teacher, and I can’t pass on this opportunity.

    Of course, like every CodeMash, I’ll probably make a number of changes to this one as well.

    Created: 1/1/2010 12:23:01 AM

    Before I discuss my 2010 predictions, it’s important to provide a disclaimer using my 2009 predictions.  One of those predictions was spot on.  At the end of that post, I said:

    “Of course, I’m probably wrong about many of these.”

    That one was right. The other four, less so. Cloud Computing has not achieved the market share I would have thought it would by now. I’m going to call that one “delayed”, not “wrong”. I still think Cloud Computing will be important; it is just taking longer. Rich Internet Applications was just plain wrong. While I think (almost) every application will make use of the internet, and people want rich applications, the term will lose favor. It won’t have meaning if everything is ‘rich’ and ‘internet’. “Multi-touch goes mainstream”. Not yet. I still think that will happen. The mouse as a metaphor is very long in the tooth. We do crave better ways to interact with our computers. We have it with some special purpose applications. We’ll get there with other apps as well. My final prediction, “Social Networking as a business tool”, was much better. In the last year, Detroit and Ann Arbor newspapers stopped printing. I didn’t miss a beat. All those papers have great web sites. Even more immediate, all those sources, and several of their reporters, are on twitter. I get news more quickly following them, and following their links, than I did reading the dead tree edition with my morning coffee.

    On to 2010 Predictions

    Armed with the confidence that I’m probably wrong, I’m ready to divulge my thoughts about 2010.

    Polyglot Programming Languages

    For the past few years, many industry leaders were saying that developers needed to know multiple languages. They were right, because learning multiple languages (if done correctly) meant that you needed to learn different programming idioms: procedural programming, object oriented programming, functional programming, and so on.

    But that’s painful. Why should I need to learn new syntax to use new idioms? Curly braces aren’t allowed in FP? Semicolons aren’t permitted in dynamic languages?

    Instead, why can’t a general purpose programming language support multiple programming idioms? C#, VB.NET, and C++ are starting to seriously support that. (Other languages may be doing this as well; I don’t know). All these languages have added (or are adding) lambda expressions which support functional programming concepts. (C++ has used Class Type Functors for this purpose for some time). C# is adding support for dynamic typing (as is VB.NET, in a more strict fashion than previously supported). Implicit typing is supported in C#, C++, and VB.NET as well.
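    To make that concrete, here is a small sketch (my own illustration, not from any particular column) showing three of those idioms side by side in one C# program: implicit typing, a functional-style lambda expression, and C# 4.0 dynamic typing:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // Implicit typing: the compiler infers List<int>.
        var numbers = new List<int> { 1, 2, 3, 4, 5 };

        // Functional idiom: a lambda expression passed to a higher-order method.
        var evens = numbers.Where(n => n % 2 == 0).ToList();

        // Dynamic typing (C# 4.0): operations resolved at runtime.
        dynamic value = 42;
        value = value + 1;

        Console.WriteLine(string.Join(",", evens)); // 2,4
        Console.WriteLine(value);                   // 43
    }
}
```

    One syntax, three idioms; you pick the idiom per task, not per language.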

    This trend will continue as more and more developers want to use the best programming idiom for a particular task without learning a totally different syntax. Any programming language that calls itself a “general purpose language” will support multiple idioms.

    Value Placed on Comp. Sci Skills

    For the past several years, the conventional wisdom has been that ‘business skills’ are more important than ‘core technical skills’. That trend is changing. While I don’t believe we’ll return to those days where anti-social behavior is tolerated (or worse, celebrated), I do believe that serious technical talent is valued more than it was. Even The Economist has predicted fewer MBA students in the coming year, saying it will “…cut off the supply of bullshit at the source.” 

    OK, MBAs aren’t that bad. Or, maybe they are. But the fact is that technical talent and skill is important. “Soft Skills” complement hard skills; they aren’t a substitute.

    In 2010, more companies will want Comp Sci or Software Engineering grads for their developer positions.

    Small, Focused Applications Rule

    In an interconnected world, large enterprises are no longer going to tolerate the “One Application to Rule Them All” concept. They will start looking for best-of-breed solutions to each individual problem. The suites that solve everything are going to decrease in value (or at least in customer value). They do everything, but not exactly the way a large enterprise wants. More importantly, they don’t have the integration points needed between these important processes.

    Finally, CIOs will be looking to avoid “vendor lock in”, whether that’s a real strategy, or just defensive posturing.

    But APIs are the Key Value

    Balancing that trend is the fact that too many business processes are stitched together from disparate programs that have no interoperability strategy. Businesses cannot compete if their disparate programs don’t coordinate automatically. It’s too much friction, and no added value. In fact, that’s what drove the trend toward the enterprise suites that claimed to do everything: it was a way to finesse the interoperability requests.

    Going forward, the single most important feature for business programs will be data import / export. That must be in a standard format, and it must be a programmable API. It may or may not be REST, SOAP, or some new made-up word. But I’m confident that any serious product in the business world will have a clear strategy to interoperate with other programs that solve other business problems.


    These last two predictions mean that future systems will be built using these concepts:

    1. Each process is a small, focused application that solves a single problem.
    2. The inputs and outputs of that process will be in a standard format.
    3. Enterprise developers will stitch together different commercial applications that handle *part* of their business processes.

    Notice how those last two also fit well with the cloud strategy? Processes in the cloud. You’ll interact with those small processes using a rich UI on whatever screen you have.

    Of course, I’m probably wrong again.

    Created: 10/27/2009 3:01:58 AM

    A friend of mine recently asked me for some pointers on giving technical talks. We had a great discussion, and I thought that some of the advice was general enough that I would share it here. Many of us are involved in user groups, and we (as a community) owe it to our other members to help everyone learn.

    Of course, I’m not the world’s authority on speaking, and I have quite a bit to learn. I want to hear your speaking tips as well.

    Tell a Story

    View your talk as a story, in the literary sense. This has a lot of implications in your talk.  First, think about books you really like (or even a movie). The very beginning hooks the audience immediately. You may not know much about the story yet, but you are hooked.  You need to know more. OK, I admit that very few tech talks have the same hook as “Raiders of the Lost Ark”. But still, work on the hook as much as you can.

    Next, continuing the ‘story’ analogy, consider the different components of your talk as characters in the story.  Your story (at least for this talk) revolves around one topic.  All other topics are supporting characters. They are part of the story, but they have supporting roles. Do not spend more time on these supporting roles than they deserve; that limits your ability to tell the main story. Cover these supporting characters only insofar as they help tell your main story.  Look through your slides and your samples for opportunities to focus on your main story rather than these supporting characters. For example, if your talk is about C# 4.0, you may use a Silverlight application to demonstrate the features. Concentrate on C# 4.0, not Silverlight. (Of course, if your talk is about Silverlight, you do exactly the opposite.)

    Everyone Needs Practice

    Listen to yourself. Do you have any verbal tics? ‘Um’, ‘So…’. Most of us have them. Work on minimizing those tics. They will hurt your delivery.

    Mix up the slides and demos. Almost all great speakers split up the cadence of their talk. They do some discussion. Then some demos. Back to slides, or a story, and onward. Staying in any one of those styles for too long will lose the audience.

    Demos need the Most Practice

    Let’s talk about demos. That is the hardest part.  It’s hard because we usually don’t speak while we are coding, and no one likes ‘dead air’. That’s why I use the trick of saying what I’m coding.  It avoids ‘dead air’ and I’ve never been able to code one thing while saying something else.  It’s usually something like:

    “I’ll new up a List of Employees so that I can run a few queries. We’ll have to initialize it with some data. (see note one below).  Now that we’ve got a collection let’s find the most expensive employees. That’s a query from emp in employees where salary is …”

    By talking through the code, you can keep your focus.
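    For concreteness, that narration corresponds to code along these lines (a hypothetical reconstruction of the demo; the Employee type, names, and salaries are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Employee
{
    public string Name { get; set; }
    public decimal Salary { get; set; }
}

class Program
{
    static void Main()
    {
        // "I'll new up a List of Employees so that I can run a few queries.
        //  We'll have to initialize it with some data."
        var employees = new List<Employee>
        {
            new Employee { Name = "Alice", Salary = 95000m },
            new Employee { Name = "Bob",   Salary = 60000m },
        };

        // "Now that we've got a collection, let's find the most expensive
        //  employees. That's a query: from emp in employees where salary is..."
        var expensive = from emp in employees
                        where emp.Salary > 75000m
                        select emp.Name;

        Console.WriteLine(string.Join(",", expensive)); // Alice
    }
}
```

    Each comment is a line of narration; each statement is what you type while saying it.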

    Note: some code is not relevant to your talk, like initializing a collection with a bunch of values. Use a snippet, or already have that written. When you practice, you’ll spot these. They are the bits of code that you don’t want to talk about, because they aren’t part of the story.  Just put them in place, and move on. It will give you more time to talk about your main story.

    Have the crutches handy

    There are two crutches I have ready all the time:

    1. Snippets for bits of code I forget. Everything I type is on the snippet clipboard, in case I forget something. I try not to just paste code, but it helps to know that I can grab those if something bombs.
    2. The finished demo somewhere on disk. I’ve often forgotten some small bit needed to make the demo work right. It’s easy to do on stage. And, when a demo doesn’t work, it’s easy to get really nervous really fast. Therefore, I always have a way to get to the finished working demo quickly. When the demo bombs, I’ll try to fix it once. If I can’t get it working with one quick session, I punt and say something like “I must have forgotten something, so let’s load the finished solution and look at it. We’ll find what I missed, and you’ll never forget to do that again. Especially if forgetting it means your demo doesn’t work and you look foolish on stage.” Then, I load the finished solution. I run it and prove to the audience that it works. Then, I look at the code and explain what’s in the pre-loaded solution that wasn’t in the demo that broke. (Trust me, it’s really easy to find.)

    Finally, you have to use the big font. And, you have to practice with it because that will make sure you format code so it looks good on the big screen.

    The more you practice, the better you’ll be. Talk to any experienced speaker, and you’ll hear stories of when that person gave a talk that felt like a failure. They are great speakers because they learned from that experience. Everyone can do it, but it does take practice.

    Created: 6/25/2009 3:56:56 PM

    Once again, we brought together a small group to discuss two more upcoming topics for the software industry. This time it was Natural User Interfaces and Social Media.

    The most interesting observation is that there is a bigger gap between the future-oriented Natural User Interfaces and the closer-to-the-mainstream Social Media topic. That colors this post: In this group we had much more discussion on Social Media than we did on Natural User Interfaces. That's a function of the group, and the relative adoption of both topics.

    Natural User Interfaces

    Natural User Interfaces are a rather nascent topic in our industry. However, I do think that over the next decade, they will become more and more important. The mouse and keyboard metaphor is more than 30 years old. Gaming has clearly gone way beyond mouse and keyboard.  (How long would you play Rock Band or Guitar Hero using a mouse and keyboard?) Flight simulators, racing games, and more use custom controllers that are supposedly more natural than the mouse and keyboard metaphor we’re forced to use in our regular line of business applications.

    We’re starting to see some more forward looking computing devices.  iPhone, iPods, Zune, Surface and Windows 7 multi-touch are all examples of a more natural user interface. In all cases, we’re using more natural motions to directly work with the device.

    The conversation lagged somewhat. We didn’t have many of the devices at our disposal that day. Also, it was a bit harder to create a dynamic discussion because it was harder to get conversations going about how these new devices might map to what we do.

    Social Media

    Social Media was completely different. This was very lively, and it’s clear how we can all make use of social media for our business and personal endeavors.

    The clearest point here is that the twitter model has won. It’s more interesting than Facebook, LinkedIn, or any other social media platform today.

    The reason is that it provides the best way to reach an authentic audience from an authentic perspective.

    Andy Seidl (@faseidl) has described twitter as ‘a world wide cocktail party’. That’s a very apt description. You can wade in at any time, you can choose to behave like a professional, or be that person in the corner with the lampshade on his head. You can leave conversations that aren’t interesting. But most importantly, you can engage in real conversations with those people with whom you interact.

    That’s the most interesting part: Social Media must be personal in order to be useful. You can only create an audience by engaging in real conversations. Once you engage in real conversations, you can get real results.

    Social Media also allows you to get around the major barriers to efficiency today.  Our corporate structure exists to allow specialization, which builds walls and silos around different areas, business units, and disciplines. With a twitter presence, I’ve found it easier to reach elected officials, vendors, customers, and collaborators. I haven’t used anything else that has made it easier to reach as many people in as many ways.

    Bud Gibson (@Innovators) has made significant investments in many areas of social media. He’s also claiming that twitter is a better investment than any other social media. In fact, he’s finding that twitter is better than ‘pay per click’ advertising for reaching new prospective customers.  It gets back to Andy’s cocktail party analogy: it’s all about the conversation.

    The bulk of this was about Twitter, although we did discuss other media as well. However, most of the other social media create more walls around different communities. They are ways to interact with people you already know. Facebook and LinkedIn are platforms to talk with people you know. Twitter starts a conversation, and as part of the platform, allows anyone to join and participate.

    Was there a call to action?

    There was less of a call to action here. NUI seems a bit farther in the future. The call to action was to ‘keep an eye on it’. It likely will be important, but not immediately for broad reach applications.

    For social media, the call to action was to develop a strategy about how to participate. Your customers are participating. Your competitors likely are too. That means you need to participate, and you need to differentiate yourself with your content.

    Oh, and you can follow my updates on twitter as well: @billwagner

    Created: 6/16/2009 7:28:24 PM

    Yesterday, we hosted a Software Development Jam at Automation Alley in Troy, MI (North suburbs of Detroit, MI for those readers not in the Michigan area).

    We billed the event as a way for developers to learn something about those software techniques that we are convinced are the important areas of growth in the future. Those areas were modern software engineering techniques (TDD, Continuous Integration, distributed version control), modern .NET (WPF, Silverlight), and modern Java platform tools (Scala, Jython, JRuby). We spent the morning going over some general software techniques, discussing what kinds of apps we’d want to build, what folks wanted to learn, and why. We spent the afternoon building samples and learning from each other.

    The folks that attended were an interesting mix. We had people with years of mainframe experience, people that had been teaching C++, and folks in the process of transitioning from IT Pro to developer track careers.

    This was the kind of event that wouldn’t have been possible only a few years in the past. The availability of tools, frameworks, and trial editions helped.  Jay Wren set up an SVN server in the meeting room on a spare laptop. Then, he helped everyone get access to it. He also set up a CC.NET server so that all the samples were building throughout the day.  That was a fantastic way to show all the attendees the power of these tools.  If anyone got stuck on a sample later, they could do an SVN update and see a working sample. Others could ask them to check in their failing build and get help from someone that wasn’t stuck at the time.  And the best part was that everyone could do an “SVN update” and get all the samples before leaving. Of course, everyone could see if anyone made mistakes by monitoring the CC.NET homepage on the internal LAN.

    Mike Woelmer led a WPF tutorial in the afternoon. He also gave an overview of the differences between WPF and Silverlight. That helped folks understand when to pick one or the other for a new project. I was coding in Mike’s group, helping folks that got stuck.  I also modified the sample so that much of the middle-tier logic used LINQ-style syntax rather than the classic imperative style. Later, Mike and I collaborated on an update to a multithreaded Windows Forms application, to give folks an idea of how to use multi-threaded concepts in WPF.
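    To illustrate the kind of change I mean (a hypothetical sketch, not the actual sample code from the Jam; the Order type and data are invented), here is the same middle-tier filter written both ways:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public string Customer { get; set; }
    public decimal Total { get; set; }
}

class Program
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { Customer = "Acme",   Total = 120m },
            new Order { Customer = "Acme",   Total = 80m },
            new Order { Customer = "Globex", Total = 40m },
        };

        // Classic imperative style: explicit loop and a mutable accumulator.
        var bigImperative = new List<string>();
        foreach (var order in orders)
        {
            if (order.Total > 50m)
                bigImperative.Add(order.Customer);
        }

        // The same logic in LINQ style: declarative, no mutable state.
        var bigLinq = orders
            .Where(o => o.Total > 50m)
            .Select(o => o.Customer)
            .ToList();

        Console.WriteLine(string.Join(",", bigImperative)); // Acme,Acme
        Console.WriteLine(string.Join(",", bigLinq));       // Acme,Acme
    }
}
```

    Both produce the same result; the LINQ version states what you want rather than how to accumulate it.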

    Meanwhile, Dianne Marsh was working with a different group building some of the Code Kata problems in Scala.

    All in all, we all learned quite a bit about how to apply skills used in one environment to another.  Many of the attendees were stretched to apply what they know to new environments. We were all stretched to explain these new environments in a way that would resonate with people coming from a very different perspective.

    We’ll definitely do more of these in the future. It was a great way to get information in front of a group of developers and to understand the challenges many developers face in adopting new technologies in their current environment. That makes it incumbent upon all of us to do a better job of explaining the value proposition of the latest advances in software development.

    Created: 1/19/2009 11:03:46 PM

    There is still time to register for the MSDN Developer Conference in Detroit, this coming Thursday.

    If you missed PDC, this is a chance to get a look at the major content announcements from that conference. You’ll get sessions on the Azure Services Platform for Cloud Computing, Client and Presentation Technologies, and Tools, Languages and Platforms. You’ll learn how to build applications for Azure, the Live Platform, and Live Mesh Services, and how to use SQL Data Services for storage in the cloud. You’ll see what’s next in ASP.NET, WPF, and Silverlight. You’ll learn what’s next for C# and VB.NET, what Oslo is, what the F# buzz is about, and what’s coming in VSTS 2010.

    That’s in addition to side events like the Community Courtyard, and Women Build.

    I’m giving the talk on the future of C# and Visual Basic, and I’m thrilled with the content.  You can register here:

    Created: 1/12/2009 4:04:55 PM

    I’ve just uploaded my code and slides from both talks at CodeMash.

    I was somehow lucky enough to have two talks at CodeMash:

    Extending and modeling the type system using Extension Methods

    Understanding Query Comprehensions

    The talk on Understanding Query Comprehensions was recorded. When the CodeMash tribe posts the recording, I’ll post a link to that as well.

    Created: 1/5/2009 4:05:57 AM

    It’s time to consider what 2009 will bring for the technology industry. It’s more than just an idle exercise: In order to stay relevant, I need to make some reasonable predictions about what I should be learning as new technology becomes available. So here’s where I’m investing my learning time, and why.

    Cloud Computing

    Sure, this one is easy.  The big players in our industry are spending millions to build cloud computing infrastructure. I doubt they are all wrong.

    Personally, I believe it’s too early to pick winners among the major players. In fact, they may all be winners. More importantly, I want to consider what cloud computing will enable us to build.

    ‘Computing in the cloud’ gives us scope we’ve never had before. Scope of CPU, scope of storage, scope of geography.

    CPU scope means that applications once out of our reach will now be possible. Consider genetic algorithms. Larger populations will be possible, because the cost function for subsets of the population can be distributed among more CPUs in the cloud. Applications like SETI@home can easily move to the cloud. Any application that can be scaled into multiple processes can be moved into the cloud, and can be scaled to solve larger and larger problems.
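    As a concrete sketch of that idea: evaluating a genetic algorithm’s cost function is embarrassingly parallel, so a pool of workers can score ever larger populations. Here the “cloud” is just a local process pool, and the bit-counting fitness function is invented for illustration, but the fan-out/fan-in shape of the computation is the same one a cloud platform would scale out across machines:

```python
from multiprocessing import Pool

def fitness(genome):
    # Hypothetical cost function: score a genome by its count of 1 bits.
    return sum(genome)

def evaluate_population(population, workers=4):
    # Score subsets of the population on separate CPUs. In the cloud,
    # the pool would be machines rather than local processes; the code
    # shape stays the same as the problem grows.
    with Pool(workers) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    population = [[i % 2, 1, (i >> 1) % 2] for i in range(8)]
    print(evaluate_population(population))
```

    Because each genome is scored independently, doubling the workers roughly doubles the population you can evaluate in the same wall-clock time.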

    Storage scope means that applications that need more and more storage can be considered in the cloud computing era. What can we do to solve world health problems if we can analyze data from everywhere in the world? Can we cure cancer? Can we analyze the spread of viral outbreaks with worldwide data? If we can analyze them, can we stop them? Or at least slow them?

    Clearly, CPU scope and storage scope are related. Many problems that require enormous amounts of data also require tremendous CPU resources. In both cases, we’re now going to create software that can address larger and larger problems.

    The elasticity of the available cloud computing platforms means that it will be easier to validate these new applications on smaller subsets of the data or algorithms. Then, as the algorithms are proven at small scale, more and more resources can be brought to bear on larger and larger problems.

    Rich Internet Applications (RIA) grow up

    Right now, the industry doesn’t have a consistent definition of an RIA. Is it a web application? A smart client? Something new? A mix of all the things we’ve already had?

    It seems that no one knows for sure. And yet, there’s clear direction from all of us who use software on the web. The occasionally connected, or casually connected, user is important. Web applications don’t work for them. RIAs must satisfy the user who isn’t always connected to the web. That means something different than a browser-based application. It means a UI that works offline. It means a data storage model that works offline. It means real state management. It means a push model: send data to the user rather than asking the user to poll for updates as an explicit action.

    Of course, not everyone will want, or be allowed, to install or run a rich client. And your users may sometimes be using borrowed machines. That means supporting a browser-based experience for broader reach.

    In my opinion, an RIA will come to mean an application that has several work models. The key features will be server-based (or cloud-based) data storage and multiple client experiences, every one of them as rich as possible given the constraints of the delivery platform. Is that Silverlight? Flash? Something new? A WPF application using WCF to communicate? Well, yes. It means all of those things.

    In addition, because the data storage will be based at the server, or in the cloud, RIA applications will be, by definition, highly collaborative. You’ll be sharing data, and sharing work, with many others. The RIA applications must be capable of handling that kind of collaborative workflow.

    Multi-touch goes Mainstream

    Surface? Windows 7 with multi-touch? Even the iPhone. It’s a start. The current UI metaphors are more than a decade old. It’s hard to believe that we’ve not come up with any new ways to communicate with our machines. I want the speech recognition from Star Trek. OK, that may be a bit too far in the future, but I do believe it’s time for something new.

    Direct manipulation on the screen seems to be the most promising.

    As with cloud computing, it’s more interesting to look at what these new metaphors might enable us to accomplish. Surface computing is highly collaborative. Many people can work on the same computing platform together in ways that just aren’t possible using the traditional PC (or Mac). Multi-touch is going to bring Surface-like metaphors to the mainstream. In fact, it may help drive adoption enough that the price point for a Surface comes down to where it becomes a mainstream computing device.

    I think that as more and more applications show up for these new and different platforms, there will be more and more demand for applications that make use of these new metaphors.

    Social Networking as a business tool

    Most businesses are afraid of social networking applications. The current thinking is that social networks (Facebook, Twitter, blogging) provide a megaphone for a business’s harshest critics. That’s true, but it’s only one facet of how social networks can affect businesses. It’s also not one of the better uses of social networks for businesses.

    Instead, I believe that this year, businesses will learn to leverage social networks to build customer communities, or enable customers to endorse products. In addition, businesses will learn to use social networks to keep their customers up to date on new and upcoming products, events, and more.

    It’s really just another way to communicate. What seems hard for businesses is that it’s more of a two-way conversation than traditional marketing efforts. The winners will be the companies that can learn how to promote what they do in the context of a two way conversation.


    Of course, I’m probably wrong about much of this. Predicting the future is very difficult. However, I do believe all four of these topics will be important over the next year, as businesses try to create value and compete for customers in a shrinking world economy. Software is going to command some resources, but in order to continue to grow, software will need to provide ever-increasing value for ever-decreasing costs. All of the topics I’ve mentioned above will help software companies create greater value.

    Created: 12/17/2008 2:52:05 AM

    Dianne Marsh and I have been sharing time writing a monthly column for the Ann Arbor Business Review magazine. This past month, I wrote about the trend of ‘inshoring’: outsourcing software development from either coast to the Midwest.

    Read the entire article here.

    Created: 12/12/2008 8:57:00 PM

    More Effective C# was one of the better-selling books at PDC, which prompted Alan Ashcraft to sit down and chat with me about C#, writing, helping customers, and being a generally nerdy person.

    The full interview is here:

    And, the interview contains links for a download of a portion of Chapter 3 in More Effective C#.

    Current Projects

    I create content for .NET Core. My work appears in the .NET Core documentation site. I'm primarily responsible for the section that will help you learn C#.

    All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.

    I'm also the president of Humanitarian Toolbox. We build Open Source software that supports Humanitarian Disaster Relief efforts. We'd appreciate any help you can give to our projects. Look at our GitHub home page to see a list of our current projects. See what interests you, and dive in.

    Or, if you have a group of volunteers, talk to us about hosting a codeathon event.