Bill Blogs in C#

Created: 8/4/2015 4:07:44 PM

Whenever I teach or present, people ask me why their copy of Visual Studio doesn’t look like mine. The reason is the extensions I have installed. I’m really impressed with Visual Studio’s extension model, and with how rich the ecosystem for Visual Studio extensions is.

It’s gotten even richer with the release of Visual Studio 2015 and Roslyn-based analyzers.


Here (in alphabetical order) are the extensions I use all the time:

The .NET Compiler Platform SDK: This extension provides all the tools and templates that I need to create Roslyn-based analyzers, diagnostics, and refactorings. As of RTM, this extension includes the Roslyn Syntax Visualizer, which was previously a separate install.

C# Essentials: This extension is a set of Roslyn-based analyzers and code fixes that help you adopt the new features of C# 6. If you fall into old habits and use old constructs when you want to adopt the new language syntax, this analyzer will help you build those new habits.

Chutzpah Test Adapter: This extension runs Jasmine-based unit tests for my JavaScript code in the Visual Studio Test Explorer window. Adding this extension means that my JavaScript tests run with every code change, just like my C#-based unit tests run on every build. It’s a great way to enforce TDD practices with your JavaScript client code.

Chutzpah Test Runner Context Menu Extension: Sometimes I want to run my Jasmine tests in the browser to use the browser based debugging tools. With this extension, I can right-click on a test, and Chutzpah will generate a test HTML file with JavaScript loaded to run the tests.

Code Cracker for C#: This is another Roslyn-based analyzer that finds and fixes many common coding errors. It’s also available as a NuGet package, if you’d rather install it on a project-by-project basis.

GitHub Extension for Visual Studio: I use this extension in addition to the standalone GitHub for Windows application. It makes it easy to stay inside Visual Studio while working with GitHub Flow.

Telerik JustDecompile extension: This requires you to install JustDecompile, which is free. I use this extension to understand what IL my C# compiles down to. It helps when I’m writing articles or looking through the C# spec.

PowerShell Tools for Visual Studio 2015: This extension helps when I’m writing or debugging PowerShell scripts.

Productivity Power Tools 2015: This extension provides many useful UI features: Presentation mode (switches font sizes), Tab Well enhancements, Visualizer enhancements in Solution Explorer, and many, many more.

Web Essentials: If you do web development, you need this. Period. It makes so many of the web programming tasks much, much easier.

If you compare my list to yours, note that the above is an incomplete list. I did not include any of the extensions that install as part of Visual Studio itself (like the ASP.NET templates or TypeScript support).

That’s my list. What’s yours?

Created: 7/23/2015 8:53:43 PM

This past Monday, Microsoft released the production version of Visual Studio 2015. Let’s get right to the lead: Visual Studio Community edition is free (for independent and Open Source developers). You have no excuse not to get this.

I’m not going to repeat all the information about the release and the total set of new features. Soma did that on his blog quite well. Instead, I’m going to focus on my areas of interest, and some of the resources I’ve written that can help you learn about these new features.

New Language Features

There are a number of new features in C# 6. Members of the team have been updating this page on GitHub with information about the new features.

I’ve written a number of articles about C# 6 for InformIT. The first four are live:

I recorded a Microsoft Virtual Academy Jump Start on C# 6 with Anthony D. Green, one of the program managers on the Managed Languages team.

And finally, I’ve written quite a few blog entries on the new language features. You can see a list here.

Language Service APIs

This version is particularly exciting because of the new compiler services that are part of Roslyn. These are a rich set of APIs that enable you (yes, you!) to write code that analyzes C# code, up to and including providing fixes for mistakes or other poor practices.

I’ve written an article for InformIT about the analyzers. You can read it here. I also did a Jump Start for Microsoft Virtual Academy with Jennifer Marsman.  If you missed the live event, watch the Microsoft Virtual Academy home page for updates. The recording should go live soon.

You can also learn more by exploring some of the open source projects that contain analyzers and code fixes:

  • Code Cracker. This project contains a number of different analyzers for common coding mistakes in C#.
  • DotNetAnalyzers. This contains a set of projects for different analyzers. Some address particular practices. Others enforce rules, similar to the StyleCop effort.
  • CSharp Essentials. This project is a collection of analyzers that make it easier to adopt C# 6. If you don’t want to build it yourself, you can install it as an extension in VS 2015.
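As a tiny taste of the compiler APIs described above, here’s a sketch that parses a source string and walks its syntax tree. It requires the Microsoft.CodeAnalysis.CSharp NuGet package; the source text and the choice to look for variable declarators are just illustrations, not anything from the projects listed above.

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class Program
{
    static void Main()
    {
        // Parse a snippet of C# source into a syntax tree.
        var tree = CSharpSyntaxTree.ParseText(
            "class C { void M() { int unused = 42; } }");

        // Walk the tree and report every local variable declaration.
        var locals = tree.GetRoot()
            .DescendantNodes()
            .OfType<VariableDeclaratorSyntax>();

        foreach (var local in locals)
            Console.WriteLine(local.Identifier.Text); // unused
    }
}
```

A real analyzer builds on the same syntax (and semantic) model, packaged in a DiagnosticAnalyzer so it runs live inside Visual Studio.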

Humanitarian Toolbox and the allReady app

Finally, I was thrilled at the contribution from members of the Visual Studio team to Humanitarian Toolbox. Several members from different parts of the Visual Studio team worked for three days to build the initial release of the allReady application. The application source is on GitHub, under the HTBox organization.

This application was requested by the Red Cross. It provides features to lessen the impact of disasters on families and communities. You can learn more about the project here on the Humanitarian Toolbox website.

The Visual Studio 2015 launch event included a profile of the developers and the process for building the initial code for allReady. You can watch the entire event on Channel 9. If you are interested in just the allReady app, and how it was built with Visual Studio 2015, look for the “In the Code” segments. There may be more In the Code episodes coming as the application grows.

All of us at Humanitarian Toolbox are grateful for the contribution from the Visual Studio team.

As a developer, I’m also grateful for the great new tools.

Created: 7/16/2015 5:05:29 PM

I was asked to review a bit of code for a friend the other day, and the result may be illustrative for others. I’ve stripped out much of the code to simplify the question. Examine the following small program:


public class SomeContainer
{
    public IList<int> SomeNumbers => new List<int>();
}

class Program
{
    static void Main(string[] args)
    {
        var container = new SomeContainer();

        var range = Enumerable.Range(0, 10);

        foreach (var item in range)
            container.SomeNumbers.Add(item);
    }
}

If you run this sample, you’ll find that container.SomeNumbers has 0 elements.


How is that possible? Why does the container not have 10 elements?


The problem is this line of code:


public IList<int> SomeNumbers => new List<int>();


The author had used an expression-bodied member when he meant to use an auto-property initializer.

An expression-bodied member evaluates its expression whenever the public member is accessed. That means that every time the SomeNumbers property of the container gets accessed, a new List<int> is allocated and returned; there is no hidden backing field holding a single list. It’s as though the author wrote this:


public IList<int> SomeNumbers { get { return new List<int>(); } }


When you see it using the familiar syntax, the problem is obvious.
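You can see the re-allocation directly. A minimal sketch, reusing the class from the sample above: every access to the expression-bodied property yields a distinct list, so anything added is lost on the next access.

```csharp
using System;
using System.Collections.Generic;

public class SomeContainer
{
    // Expression-bodied: the expression runs on *every* access.
    public IList<int> SomeNumbers => new List<int>();
}

class Demo
{
    static void Main()
    {
        var container = new SomeContainer();

        // Two accesses yield two different List<int> instances.
        Console.WriteLine(ReferenceEquals(
            container.SomeNumbers, container.SomeNumbers)); // False

        // The element goes into a list that is immediately discarded.
        container.SomeNumbers.Add(42);
        Console.WriteLine(container.SomeNumbers.Count);     // 0
    }
}
```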

The fix is also obvious. Use the syntax for an initializer:


public IList<int> SomeNumbers { get; } = new List<int>();

Notice the changes from the original code. SomeNumbers is now a read-only auto-property. It also has an initializer. This is the equivalent of writing:


public IList<int> SomeNumbers { get { return storage; } }

private readonly List<int> storage = new List<int>();


That expresses the design perfectly.


This just illustrates that as we get new vocabulary in our languages, we need to fully understand the semantics of the new features. We need to make sure that we correctly express our designs using the new vocabulary.

It’s really easy to spot when you make a mistake like this: every time you look at the property in the debugger, you’ll see that it re-initializes. That’s the clue that you’ve made this mistake.



Created: 7/5/2015 4:11:33 PM

Sometimes, despite everyone’s best planning, you can’t help having features collide a bit. Thankfully, when it happens, there is almost always some way to reorganize the code to make it work. The key is to understand why your original code causes issues.

I’ve written before about the new string interpolation feature in C# 6. I find it a welcome improvement over the previous incarnations.

Two goals of the feature are:

  1. It supports format specifiers for .NET types.
  2. It supports rich C# expression syntax between the braces.

Those two goals conflict because of the C# symbols used for each of them.

To specify a format for an expression, you place a colon ‘:’ after the expression; the remaining characters inside the braces are the format specifier. Here’s an example:


Console.WriteLine($"{DateTime.Now:MMM dd, yyyy}");


This will print the date in the form “Month day, year”, where day is always a two-digit number.
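A quick sketch with a fixed date (instead of DateTime.Now) makes the output deterministic; the variable name here is just an illustration:

```csharp
using System;

class Demo
{
    static void Main()
    {
        var independenceDay = new DateTime(2015, 7, 4);

        // Everything after the ':' inside the braces is a
        // standard .NET format string for the expression's type.
        Console.WriteLine($"{independenceDay:MMM dd, yyyy}");
        // Prints "Jul 04, 2015" under an English culture.
    }
}
```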

The C# compiler supports these format specifiers by interpreting the first ‘:’ character in a string interpolation expression as the start of the format specifier. That works great, except in one case: where you want the colon to mean something else, like part of a conditional expression:


Console.WriteLine($"When {condition} is true, {condition ? "it's true!" : "It's False"}");


You can see that the syntax highlighting gets a bit confused: it doesn’t parse the latter part of the string correctly. That’s because the compiler interprets the ‘:’ as the beginning of a format specifier.

It’s easy to fix this. Just put the conditional expression inside parentheses:


Console.WriteLine($"When {condition} is true, {(condition ? "it's true!" : "It's False")}");

That way, the C# parser views everything inside the parentheses as part of the expression, and does not view the : as the beginning of a format specifier.

See? It’s easy to get both features to work. You just have to know why the first expression is interpreted differently than you’d expect.

Created: 5/20/2015 6:18:47 PM

I was listening to Episode #1140 of .NET Rocks yesterday, and I chuckled as Carl started talking about XML literals and JSON literals.  It’s the canonical answer whenever anyone with a VB.NET background wants to bring up a feature that’s in VB that C# doesn’t have. We’re 6 years into the Co-Evolution vision for the two languages, and this disparity still exists.


And, why haven’t JSON literals been added to both languages?

Well, there’s the flippant answer:

VB.NET is 1-based, and C# is 0-based. Therefore, VB.NET made the mistake of adding built-in literals once, and C# made that mistake 0 times.

(thanks for the laughter. I’m here all week. Please tip your wait staff)

But seriously.

First, some historical perspective. VB.NET and C#, for two languages that are very similar in capabilities, have very different goals and audiences.

  • Visual Basic started life as a highly productive, very approachable, environment for people that wanted to develop applications, but may not have had a computer science background.
  • C# started life as the .NET platform’s answer to Java. It’s a C-based language, designed for developers coming from the C / C++ / Java communities.

The difference in goals and audiences forms the basis for having XML literals in VB.NET and not in C#. VB.NET has a vision of being super productive. XML literals support that productivity. It’s an incredibly concise way to create XML output. It’s consistent with other concepts where VB.NET has richer syntax than C# (more on that in a bit).

C#, on the other hand, eschewed XML literals. You use libraries (XElement, and LINQ to XML) to generate XML output in C#. To add XML literals to C# would mean tying the C# language syntax to another grammar (that of XML). What happens if the XML standard changes in the future? How should C# literals react? Should they be backwards compatible, or should they conform to the new standard? Well, better to not make a language feature that ties C# grammar to another standard grammar. I don’t believe the C# team will take up any request for XML literals in C# any time soon.  (Read the comment thread on the linked issue.)

But what about JSON literals?

My prediction is that C# will not get JSON literals. There’s reasonably ample evidence that the managed language teams will not add JSON literals to either VB.NET or C#. I also disagree with Carl’s statement that the C# community would welcome JSON literals (although I’m speaking mostly for myself here).

I believe the teams will create syntax and libraries more along the lines of the Pattern Matching and Record Type proposal written by Neal Gafter  and referenced in the March 4th C# design notes.

VERY IMPORTANT NOTE: These documents are in the proposal stage. The language teams are doing their design in the open. Comment on them and participate in the discussion. But realize that these are not promised features, nor are they scheduled for any particular release.

Personally, I like the Pattern Matching proposal. I liked the primary constructor feature that was initially part of C# 6 but was removed during the pre-release builds. The Pattern Matching proposal is more complete, with more useful features for similar scenarios. Neal has added a discussion item that talks about Patterns and Records for both C# and VB.NET here.

I believe the team is making the right decision by investing in a general solution that supports serialization to and from a variety of grammars rather than embedding those grammars into the languages. It’s true that XML literals are a great productivity enhancement. It’s also true that JSON has gained traction in the years since XML Literals were introduced. It’s even likely that some other format will replace JSON at some time in the future. Over time, a language would be burdened with embedding many different grammars. Better to stop at 1 (or 0).

Wait, you said VB.NET had richer syntax than C#?

I wondered if you’d notice that.

Yes, I did say that. In many ways, VB.NET is a more complicated language than C#. Again, that is because of the goals. C# is a small language, consistent with C, C++, and Java. “Small” is a relative term: all those languages have a small number of reserved words, they have strict grammars, and they rely on libraries for large parts of their feature sets.

VB.NET has more keywords. It has a more open grammar. It has more compiler options. And those options affect the grammar and the compiler’s output for many different constructs. Option Strict is just one example. VB.NET’s query syntax supports several clauses that are not supported in C# (Aggregate, Skip, Take, Skip While, Take While). C# developers must use the method call syntax for those queries.
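For example, where VB.NET query syntax has Skip and Take clauses, a C# developer writes the equivalent with method calls. A minimal sketch (the numbers are just an illustration):

```csharp
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        var numbers = Enumerable.Range(1, 100);

        // No query-syntax clause for paging in C#;
        // call the LINQ methods directly instead.
        var page = numbers.Skip(10).Take(5);

        Console.WriteLine(string.Join(", ", page)); // 11, 12, 13, 14, 15
    }
}
```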

And, just in case you wondered, I still prefer C#. But that’s just me.

So, Carl and Richard, thanks for all the great shows, and inspiring a blog post. Most of all, thanks for being a regular part of my work week.

Created: 4/7/2015 5:02:02 PM

I’m teaching a second .NET bootcamp in Detroit this spring. It’s quite a rewarding experience. Like the previous cohort, these students are all adults that are motivated to make a career shift. I always think I learn as much as the students do when I teach one of these groups. I’ve got four important lessons for all my readers based on the first few weeks of the bootcamp experience.

Lesson 1: Developers are in Demand

My first surprise was the experience that some of the students have coming into the class. Everyone has been successful in different fields, from business to medicine to finance. And they all want to be developers. Developers are in serious demand everywhere. This may be an exaggeration, but I believe the unemployment rate among developers is approaching 0. Every growing company I work with wants to hire skilled developers. It’s become a barrier to their growth.

Investing in yourself by learning to code will pay off. It opens doors.

There’s a corollary to this lesson: Having other skills also pays off. As we’ve been discussing next steps, we explore where everyone’s past experience will pay off. Several of the students have very strong backgrounds in different vertical businesses. Those skills will help to set them apart from other entry level developers.

Lesson 2: Anyone can Code.

I’ve been really happy to see this result. There are too many people that have the world view that someone is “born with” the skills or the mindset necessary to be a developer. These classes, and the students that have attended, prove that’s bunk. Most of the students enter with no programming experience at all.

8 weeks later, they can develop code, and feel comfortable with the .NET framework.

Now, I don’t want to overstate this: they are all still beginners, and ready for entry level jobs as developers. They don’t yet have the experience many of my typical readers do. But that’s a function of time, not innate ability. I was a beginner once, as were all of you, dear readers. These students will continue to grow, as they keep coding.

Anyone can learn to code. It takes motivation, some help, and a path. If you know someone interested in learning, get them involved. Point them in the direction of learning resources. Encourage them to try and build something. We’ve all enjoyed developing software. There’s plenty of room for more. And, anyone can learn.

There’s a corollary here: I continue to be impressed by just how fast new folks pick up the core skills. There are so many vocabulary terms and concepts that we work with. We have learned a lot and have a lot of experience behind us. I am truly impressed by how quickly these new developers learn and grow the skills we’ve already internalized. It does seem very frustrating for a day or two, until they get past that “Hello World” stage. Thankfully, within a week, they are building classes, understanding core concepts, and creating real code. It’s great to see.

Lesson 3: There are stages of understanding

This has been the most interesting piece to observe. There’s the famous quote from Joseph Joubert: “To teach is to learn twice”. I’m finding that students really go through four distinct phases of understanding: reading code, doing guided labs, working independently, and helping peers.

In that first phase, they can see code that I’ve written and begin to understand what it does. They don’t yet have the vocabulary, and they are somewhat unsure exactly what they’re reading. But they are certainly beginning to understand.

The second phase is where students can work with a guided lab, and understand what’s being added. They can follow the instructions, type in the code, and do the debugging and proofreading necessary to make a guided lab work.

The third phase is when they can create their own code and their own algorithms to build software that does something useful. It’s where a lot of entry level developers spend much of their time. Their code works, but they may not be able to completely understand and articulate how it works.

That fourth phase is the key to mastery: Once students get to the point where they can explain what they’ve built, how it works, and how it uses the underlying libraries, they have achieved a new level of mastery.

Well, what about you?

I’ve truly enjoyed working with new developers and helping them join this career. There are large numbers of people that want to write code. Can you help? It would be a great opportunity for you to learn twice. Maybe it’s not beginners; maybe it’s mentoring junior developers in your organization.

Created: 4/3/2015 12:50:42 AM

I’m thrilled to have been nominated and accepted as a member of the .NET Foundation Advisory Board.

I’m very excited about the role we can play in growing the Open Source ecosystem around .NET. We’ve just gotten started, so there is not a lot of progress to report, but I’m excited by the potential. Our role is to provide a channel between the .NET Foundation Board of Directors and the .NET developer community. We will be helping to refine policies to accept new projects, grow and nurture the projects under the .NET Foundation, and overall, make .NET Open Source Development better and richer for everyone.

Shaun Walker is the chairman of the .NET Foundation Advisory Board, and his announcement here is a great description of the rationale and thought process that went into creating the advisory board.

I’m excited to participate in growing Open Source development around .NET and the great languages and frameworks that are coming from the developer teams. This is a large and important initiative. It covers everything from the Roslyn compiler projects, to the TypeScript compiler to ASP.NET vNext to the Core CLR and core .NET Framework releases. And that’s just the major projects from inside Microsoft. There are so many tremendous projects (like ScriptCS, just to name one) that are part of a growing .NET ecosystem.

We’ve got quite a bit of work to do. The Foundation is a new organization, and we need to advise the board on everything from what kinds of projects we’ll accept, to the process for accepting new projects, to the governance of the advisory board. It’s a lot of work, but it’s also a lot of fun.

It’s an exciting time to be a .NET developer. I’m glad to be in the middle of it.

Created: 3/3/2015 4:34:42 PM

The more I work with C# 6 in projects, the more I find myself using ?. to write cleaner, simpler, and more readable code.  Here are four different uses I’ve found for the null-conditional operator.

Deep Containment Designs

Suppose I’m writing code that needs to find the street location for the home address of the contact person for a vendor. Maybe there’s an awesome event, and I need to program my GPS. Using earlier versions of C#, I’d need to write a staircase of if statements checking each property along the way:


var location = default(string);

if (vendor != null)
{
    if (vendor.ContactPerson != null)
    {
        if (vendor.ContactPerson.HomeAddress != null)
        {
            location = vendor.ContactPerson.HomeAddress.LineOne;
        }
    }
}


Now, using C# 6, this same idiom becomes much more readable:


var location = vendor?.ContactPerson?.HomeAddress?.LineOne;


The null-conditional operator short-circuits, so evaluation stops as soon as any property in the chain evaluates to null.
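A minimal sketch of that short-circuiting behavior, using hypothetical stand-ins for the Vendor types above, and pairing ?. with the ?? operator to supply a default when any link in the chain is null:

```csharp
using System;

class Address { public string LineOne { get; set; } }
class Person  { public Address HomeAddress { get; set; } }
class Vendor  { public Person ContactPerson { get; set; } }

class Demo
{
    static void Main()
    {
        var vendor = new Vendor(); // ContactPerson is null

        // Evaluation stops at the first null link; no
        // NullReferenceException, and the chain yields null.
        var location = vendor?.ContactPerson?.HomeAddress?.LineOne
            ?? "(no address on file)";

        Console.WriteLine(location); // (no address on file)
    }
}
```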

INotifyPropertyChanged and similar APIs

We’ve all seen code like this in a class that implements INotifyPropertyChanged:


public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

private string name;


I hope you are cringing now. This code will crash whenever it runs with no subscribers to the INotifyPropertyChanged.PropertyChanged event: with no subscribers, PropertyChanged is null, and invoking it throws a NullReferenceException.

When faced with that situation, many developers write something like the following:


public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

private string name;


OK, this is a little better, and will likely work in most production situations. However, there is a possible race condition lurking in this code. If a subscriber removes its handler between the ‘if’ check and the line that raises the event, this code can still crash. It’s the kind of insidious bug that may only show up months after deploying an application.  The proper fix is to create a temporary reference to the existing handler and raise the event through that reference, rather than racing on the public PropertyChanged event:


public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

private string name;


It’s more code, and it’s several details to remember every time you raise the PropertyChanged event. In a large program, someone forgets at least once.

C# 6 to the rescue!

In C# 6, the null-conditional operator implements all the checks I mentioned above. You can replace the extra checks and the local variable with ?. and a call to Invoke:


public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs("Name"));
        }
    }
}

private string name;


The shorter, more modern version reads more concisely and implements the proper idiom for raising events while subscribers may be added or removed.

Resource Management

Occasionally, we may find that one of our types owns another object that has certain capabilities. However, that object may implement other capabilities beyond those our class requires. Usually, that’s not an issue, but what if that object implements IDisposable? Consider the case of an Evil Genius that is done working with a henchman.  The code to retire a henchman might look like this:


public void RetireHenchman()
{
    var disposableMinion = Minion as IDisposable;
    if (disposableMinion != null)
        disposableMinion.Dispose();
    Minion = null;
}

The null-conditional operator can make this code more concise as well:


public void RetireHenchman()
{
    (Minion as IDisposable)?.Dispose();
    Minion = null;
}



LINQ Queries

There are two different uses I’ve found for this operator when I work with LINQ queries.  One very common use is after I create a query that uses SingleOrDefault(). I’ll likely want to access some property of the (possibly null) single object. That’s simple with ?.


var created = members.SingleOrDefault(e => e.name == "dateCreated")?.content;


Another use is to create a null output whenever the input sequence is null:


members?.Select(m => (XElement)XmlValue.MakeValue(m))


The addition of this feature has made me realize just how much of my code checks for null, and how much more concise and readable our code becomes with a compact syntax for checking null and taking a default action based on the ‘null-ness’ of a variable.

This feature has changed how I code every day. I can’t wait for the general release and getting more of my customers to adopt the new version.

Created: 2/27/2015 2:56:56 PM

I’m excited to announce that I’ve been renewed for the Microsoft Regional Director program. 


It’s an exciting time in technology, and especially in the Microsoft space. Since the last RD renewal cycle several big innovations have happened:

  • The C# and VB compilers have been rewritten in C# and VB.NET, respectively.
  • The C# and VB compilers have been released as Open Source projects with permissive cross-platform licenses.
  • The core CLR is being released under a permissive cross-platform Open Source license.
  • The core .NET Framework is being released under the same permissive Open Source license.
  • The Azure team keeps innovating at an amazing pace. (Too many updates to summarize in one blog post.)
  • Windows 10 and a unified core OS across phone, tablet, PC, and server is on the horizon.

It’s an amazing time. I’m excited that my continued involvement in the Regional Director program can help my customers navigate all these changes. I can’t wait to see what the next two years brings.

I’m also excited to welcome many new RDs to the program. This latest wave recognizes many changes in the global technology industry in recent years. As I look at the list of new RDs, I have a number of observations:

  • The program is more global. Many countries and regions that were underrepresented now have RDs in their locales. More folks are from Europe, Asia, Australia and New Zealand, and the Middle East.
  • The program represents the changing face of software and enterprise IT. Many of the new members in the program come from an IT Pro / DevOps background. This represents the changing needs of our customers. More and more businesses run more and more software that is strategic to their success. The data and system management needs have an increased importance.
  • The program represents broader platform knowledge. It’s a multi-device, multi-platform world. It’s an interconnected world. The program includes people with knowledge of multiple platforms, languages, and devices. Interconnected systems are increasingly important.
  • It represents a renewed community commitment. All the RDs are very engaged with their local communities, and the global community. We’re all committed to sharing what we know with our peers and colleagues. And, we’re always learning from other members of our community.

It’s been a great ride for my first 10 years in the program. I can’t wait to see what comes next.

Created: 2/24/2015 8:57:45 PM

During CodeMash, Stephen Cleary gave me a copy of his Concurrency in C# Cookbook.

It’s an essential reference for any C# developer using concurrency in their applications. These days, that hopefully means most C# developers.

The 13 chapters provide short recipes for many uses of concurrency. Because the recipes are short, they provide minimal breadth and depth on how each technique works. That’s great if you already have a good understanding of these different features and you’re looking to decide which recipe best fits your current challenge. However, if you’re looking for a tutorial or introduction to concurrency, this book will leave you with many questions.

Stephen’s material is clear, concise, and will help you follow the proper guidance in a variety of situations. It will give you more options for concurrent programming, and you will be better at using them. If you know some of the concurrent techniques available to modern .NET developers, you’ll quickly catch on to the style and you’ll be exposed to recipes you may not know. That will make you a better developer.

This book is not for developers that have no experience with concurrent programming. Stephen assumes you know the basics. His explanations assume a background in the tools used.

This book has earned a handy place on my shelf. In particular, the chapter on Data Flows helps me remember to use this handy library more often. I believe I’ll reach the point where I reference this book whenever I’m looking at how to build a concurrency related library or program.

Created: 2/5/2015 3:01:05 PM

There are some new naming conventions coming for MSSQL LocalDB when Visual Studio 2015 ships. I found them while working on the Humanitarian Toolbox Crisis Checkin project. As more developers start using machines with VS 2015, especially machines that didn’t have a previous version installed, you may run into the same problem.

The web.config for Crisis Checkin on a developer box contains this connection string (note the Data Source value):


<add name="CrisisCheckin" connectionString="Data Source=(localdb)\v11.0;AttachDbFilename=|DataDirectory|\crisischeckin.mdf;Integrated Security=True; MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />


When I ran this on my new Surface Pro III, using VS 2015 Preview, the application would not load. The app could not create and load the database.  After some investigation (creating apps using the new templates and comparing), I found that I needed to change the connection string as follows:

<add name="CrisisCheckin" connectionString="Data Source=(localdb)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\crisischeckin.mdf;Integrated Security=True; MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />

I asked around, and with the help of a couple team members, I now understand and know what to do.

Starting with VS 2015, the team is moving away from version-dependent connection strings. That means, once you adopt VS 2015, you have the option of using a version-independent connection string moving forward. That’s the first recommendation:

To fix this issue for now, and for future versions of Visual Studio, replace the version-dependent connection string (e.g. “v11.0”) with “MSSQLLocalDB”.

However, you may need to continue to work with team members that are still using VS 2013. If that’s the case, you can follow the second recommendation:

Install the Version 11 LocalDB, which is free. You can then use Visual Studio 2015 (preview), but your database engine will be MSSQL LocalDB version 11.0.

Personally, I’d gravitate toward using the version-independent connection strings as soon as possible. We will do that on Crisis Checkin once VS 2015 is officially released.

I hope this saves you some research when you first encounter this change.

Created: 2/2/2015 4:59:35 PM

I’m teaming up with Grand Circus to deliver another .NET developer bootcamp this Spring. This bootcamp will teach the core skills necessary to build modern web applications using ASP.NET and the core .NET framework.

For most of my regular readers, this bootcamp is probably too introductory for you. It does not assume that you have any prior knowledge of C# or ASP.NET. We start at the beginning, and move as fast as possible to learn the core skills needed.

However, if any of you have friends or associates that are interested in making a career change and learning to be a developer, this is a great way to start. During the 8 weeks, students learn the core skills needed to make that transition. In addition, Grand Circus will be providing placement assistance to help students land that first developer position.

If anyone you know wants to become a developer, and be part of Detroit’s Renaissance, tell them to sign up here. We had great success with the first bootcamp (over 80% of those students have landed developer jobs), and I’m anticipating the same outcome this time.

Created: 1/21/2015 7:46:39 PM

I’m a bit late this year, but here are my thoughts on the Software industry as we move into 2015.

It’s a great time to be a Developer

I’ve said it before: The talent war is over, and the talent won. Software developers are in high demand everywhere. My readers probably know this. You likely get as many recruiter emails as I do every week. I don’t see this changing. All the demographic and economic information available indicates that demand for software developers will continue to outpace the supply of people with those necessary skills.

But don’t be complacent

But, like all shortages, economics will change this situation as well. More and more people are entering the software field because there is such high demand for developers. But, unlike a generation ago, you will need to compete against people everywhere in the world. If you want to stay in demand, you need to have more skills besides core software development.

There are many directions to go here in addition to the traditional advice of learning the business and adding soft skills. Are you good at explaining software development to others? Help mentor teams. Do you have some of the specific skills that are in high-demand? (Read on for my thoughts on what those might be.) Are there particular verticals you want to explore?

Whatever it is, become a multi-dimensional asset. One day, “folks that can code” will not be as in demand as they are now. But, high-quality software people will still be in demand.

And with that, on to some more technology based thoughts.

Cloud and Big Data

I’m lumping these together because Big Data analysis requires a lot of resources, and cloud computing puts those resources in the hands of many more organizations.

I’m amazed at the amount of information that can be discovered using big data analysis. While it’s not an area I work in extensively, the innovations there are amazing. I expect this trend to continue as more and more data is online for analysis and research.

If it hasn’t happened already, 2015 is the year when Cloud Computing becomes mainstream. I’m firmly convinced that I will never buy a server again. Furthermore, I’m certain all my hosting will be at a cloud provider, not a traditional hosting service. My current choice is Azure, but this is an area of strong competition. I believe this trend will accelerate as companies need to retire and replace existing servers. That will drive more cloud adoption. Faced with the choice of buying a depreciating asset or migrating to the cloud, the cloud will win.

.NET and C# are on the rise (again)

To paraphrase Mark Twain, “The reports of .NET’s death have been greatly exaggerated.” The last year saw preview releases of the Roslyn compilers, Visual Studio 2015, ASP.NET vNext, and more. License and language changes, along with major investments from Xamarin make the .NET and C# ecosystems cross-platform.

There’s more interest in the C# language, including all the new features coming soon in C# 6. Now that the Roslyn codebase has gotten to the point where the existing C# features were supported, the language team is working hard to add new features to the language. It’s an incredibly powerful and expressive language, and getting more so every release.

Open Source all the things!

An important driver of that resurgence is that the .NET ecosystem is becoming Open Source. The Roslyn compilers are on GitHub, along with the core .NET framework, ASP.NET vNext, Entity Framework, and more. The licenses governing these releases have been updated to address platform concerns. (Namely, the new licenses allow these projects to be used on non-Windows platforms; that had previously been disallowed.)

At this time, C# and .NET provide a real cross-platform open source strategy for backend, mobile, web, and tablet applications.

That fact makes the .NET resurgence real, and important for all developers, not just those targeting Windows.

The programmable web

The web is programmable, and users, customers, and software decision makers expect modern applications to run in a browser.

That means JavaScript has become a core programming skill. Like it, tolerate it, or hate it, but you must have a solid understanding of the JavaScript language to be effective in this world.


Even though JavaScript is ubiquitous, it is not the only way to write programs for the web. CoffeeScript, Dart, and TypeScript all provide paths to write code in other languages, and compile that code to JavaScript for execution in any modern browser.

Of those three, my bets are on TypeScript. TypeScript is compatible with JavaScript, but extends it with important features that make large-scale development more productive.

I expect innovation to continue here, including changes to the ECMA standard for the core ECMAScript (JavaScript) language.

The big question mark: Mobile Strategy

It’s dangerous to make predictions, especially about the future. There’s one thing I’m confident in: No one is sure what the right mobile strategy is. There are just too many options. You can use browser based applications for everything. You can use a cross-platform web based framework like Cordova. You can use a cross-platform native framework like Xamarin. You can also create separate apps for each platform. They all have different advantages and disadvantages. And, different teams have different existing skillsets, which means that the cost of each strategy is different.

Personally, I’ve created apps using the first three. I really don’t know which I prefer. I like the native feel of using the Xamarin tools. And yet, the web only versions do mean more portability.

It’s still going to take time for better answers to become clear.

And a brief look back

I’ll close this with a brief look backward. Last year brought some pretty big changes, and I enjoyed every minute of the work. I had the pleasure of teaching developers in multiple countries, on different continents, and at different skill levels. I’m excited by the response so many people had to learning new skills.

And, at Humanitarian Toolbox, we kept building software to support humanitarian efforts. We got official 501(c)(3) approval, and we are now ready to build more this year.

Created: 12/22/2014 7:55:04 PM

For the past year, I’ve been creating GitHub repositories to store the code demos for my public presentations. I’ve found it has quite a few advantages. Now, some other speakers are starting to follow the same procedure (or even extending it). Earlier this week fellow MVP Joe Guadagno asked me about how it works, and then used it for his next presentation.

He also started a broader conversation about the technique on twitter. A number of other speakers chimed in with their thoughts. Twitter has great immediacy, but doesn’t let me explain how I do this at length. In this post, I’ll discuss why and how I use git repositories for code presentations.

Why use Source Control for Presentations?

I’d grown very dissatisfied with the common practice of having a zip file for the starter project and a zip file for the finished code. Way too much information was missing. My best presentations show attendees more than just what to do. I also discuss alternatives that don’t work correctly, and why. These small side steps are missing when the only assets are “before” and “after”, and so is the answer to “why choose this path?”

What was missing was any roadmap to “tried this, found these problems and changed directions.”

I can preserve all that history using a source control system. In fact, I’d been using my own internal source control for presentation demos for years. I could practice by rewinding the history, replaying the checkins and seeing each step along the way through the demos.

Github has now made that possible for anyone attending the presentation. It’s public, and I can represent each step of every demo as a branch in the repository.

The Process

My process starts after I’ve thought through the demonstration that I want to show during the talk. This hasn’t changed at all: what do I want people to learn, and how will I explain those concepts?

Once I have the story, and the arc of the demo, I’m ready to make the repository. I run through my practice sessions (using git at each step) and then I’m ready to record.

I make the “start” project for the demo. Maybe this is direct from File: New Project, or maybe it’s a starter project that already has some features. I’ll create a new git repository, and make regular commits as I get to the initial state. Once I’ve reached the initial state, I make a branch. For these repositories, every branch starts with a two-digit number, so that my talk attendees can easily follow along. The first branch is likely “01-Start” or something similar.

I continue to build the demo, by following my script. I use commits for small-grained changes that I want people to follow, if they are working on their own. I use branches for larger-grained changes that I discuss during the talk. You can see each section of the talk by checking out each labeled branch in turn. You can follow each commit to see all the smaller changes that make up each new feature.

One very powerful feature that I use for talk demos is the ability to show what happens down the wrong path. For example, I may make an async demo where I’ll explore the problems that happen from using async void methods. Letting the audience see those mistakes is important for their learning. Using Git, I can explore dead ends, and leave those branches as examples of problems. I can back up and check out an earlier branch, then show the correct path. Attendees can follow the straight-line path, or check out the dead ends to see why certain paths are avoided.

Once I’ve finished running through the script of my demos, I check the repo and the commits and branches in order one last time. I make sure each of the branches can be checked out in order and that each branch correctly represents the next checkpoint in the demo.
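As a concrete sketch, here’s what building one of these demo repositories might look like from the command line. Everything here is invented for illustration: the directory, file, branch names, and commit messages are not from a real talk repo.

```shell
# Illustrative sketch: a talk-demo repo with numbered checkpoint branches.
set -e
rm -rf /tmp/talk-demo && mkdir /tmp/talk-demo && cd /tmp/talk-demo
git init -q
git config user.email "speaker@example.com"
git config user.name "Speaker"

# Regular commits while building up the starter project.
echo "starter code" > Program.cs
git add . && git commit -q -m "File: New Project starter"

# Each checkpoint the audience should follow gets a two-digit branch.
git checkout -q -b 01-Start
echo "first feature" >> Program.cs
git commit -qam "Add the first feature"

# A deliberate dead end, preserved so attendees can study the mistake.
git checkout -q -b 02-AsyncVoid-DeadEnd
echo "async void pitfall" >> Program.cs
git commit -qam "Demonstrate the async void pitfall"

# Back up to the good path and continue to the finished state.
git checkout -q 01-Start
git checkout -q -b 03-Finished
echo "correct approach" >> Program.cs
git commit -qam "Finish with async Task"

git branch   # the checkpoint branches sort in talk order
```

Checking out 01-Start and then 03-Finished replays the straight-line path of the talk, while 02-AsyncVoid-DeadEnd stays available for anyone who wants to study the mistake.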

On Stage

Having a repository does not mean I don’t write code during the presentation. Before the talk, I checkout the starter branch. There are two rules I follow during the talk:

  1. No one wants to watch me type.
  2. It’s important to show the progress from start to finish.

You may notice that these two rules contradict each other.  I do show some of the changes as I move forward through the demo. But, I don’t type everything out. Often, I prefer doing a ‘git checkout’ to move the code to the next step.

If I don’t get any questions, comments, or feedback on the discussion, the demo goes forward from its starting branch to its final branch. I’ll mix typing some of the changes (or using the Visual Studio tools to make large-scale changes, depending on the demo), with using git to move forward by checking out the next branch.

Because I’ve done some typing, git complains when I want to roll forward from branch to branch.

One important tip is to remember to do a ‘git checkout -f’ so that you don’t have to stop, rewind anything you typed, and then move forward.
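Here’s a minimal sketch of why the -f matters (the repo, file, and branch names are invented): after live typing on the current checkpoint, the working tree is dirty, and a plain checkout of the next branch would stop with “Your local changes … would be overwritten by checkout”.

```shell
# Illustrative: rolling forward to the next checkpoint after live typing.
set -e
rm -rf /tmp/stage-demo && mkdir /tmp/stage-demo && cd /tmp/stage-demo
git init -q
git config user.email "speaker@example.com"
git config user.name "Speaker"

echo "starter" > Program.cs
git add . && git commit -q -m "01 start"
git branch -q 01-Start

# The next checkpoint changes the same file.
git checkout -q -b 02-Next
echo "finished feature" > Program.cs
git commit -qam "02 next checkpoint"

# Before the talk: start at the first checkpoint.
git checkout -q 01-Start

# During the talk: live typing dirties the working tree.
echo "typed on stage" >> Program.cs

# A plain 'git checkout 02-Next' would refuse to overwrite the edits;
# -f discards the stage typing and lands cleanly on the next checkpoint.
git checkout -qf 02-Next
```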

I can also go off script to answer questions or respond to different feedback. This changes the demo, but I can get it back on track by checking out the next branch.

I don’t have hard and fast rules for when I type vs. when I roll forward in git. My heuristic is that when attendees will learn from seeing the code typed, I type it in. When that transformation is not instructive to the topic, I roll forward. I’m not certain I always get this right, but I like the guidance so far.

At the end of the talk, I tell attendees the URL of the repository, and tell them to explore.

After the talk

Github provides several tools to continue the conversation with attendees after the talk. I’ve used all of these after different talks.

  1. Code Review Comment (or question): Attendees can use github to view any commit, and ask questions about the code. Those questions may be about the topic of the demo, or can be unrelated questions. I like handling those unrelated questions this way, because it doesn’t change the fundamental story of the talk, and attendees do get their questions answered.
  2. Fork and Pull Request: Sometimes I’ll have attendees ask why I wrote code one way vs. another. They can fork the repo, and show me their idea in a pull request. Sometimes I explain my reasons and the attendee learns something new. Sometimes the attendee has a better idea, and I learn something new. Either way, it’s great.
  3. Fork and Branch: Sometimes an attendee wants to take the project in a new direction. They can do that by forking the repository, and extending it on their own. It’s no longer about the original talk, and I’m happy to see that it inspired someone to do something new.

This process is still somewhat new, and I’m sure I’ll continue to refine and change how I do it in the coming years. However, I really like where it is heading.

What do you think?

Created: 12/4/2014 8:53:23 PM

One of the new features in C# 6 is expression bodied members. This feature enables us to create members (either methods, property get accessors, or get methods for indexers) that are expressions rather than statement blocks. Consider this class, and note the Distance property:


public class Point
{
    public double X { get; set; }

    public double Y { get; set; }

    public double Distance
    {
        get
        {
            return Math.Sqrt(X * X + Y * Y);
        }
    }
}

That’s quite a bit of typing for a very simple method.  C# 6 lets you replace that accessor with a lambda expression:


public double Distance => Math.Sqrt(X * X + Y * Y);


The body of the get accessor is replaced by the simple expression that returns the distance of the point from the origin.

As I mentioned above, expression bodied members can also be used for methods:

public override string ToString() => string.Format("{{{0}, {1}}}", X, Y);


Lest you worry that this feature only works with non-void methods, here’s a TranslateX method that moves the point in the X direction by x units. It has a void return:

public void TranslateX(double x) => X += x;

It’s one of the features I like a great deal in C# 6. When you use it with simple, one-line methods, it increases the readability and clarity of a class.
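Putting those pieces together, here is a sketch of the complete Point class written with expression-bodied members throughout (the doubled braces in the format string are escapes for literal braces):

```csharp
using System;

public class Point
{
    public double X { get; set; }
    public double Y { get; set; }

    // Read-only property: the get accessor is a single expression.
    public double Distance => Math.Sqrt(X * X + Y * Y);

    // Void method: a single statement works as the expression body.
    public void TranslateX(double x) => X += x;

    // Non-void method: the expression's value becomes the return value.
    // "{{" and "}}" escape literal braces, so (3, 4) prints as "{3, 4}".
    public override string ToString() => string.Format("{{{0}, {1}}}", X, Y);
}
```

With this class, new Point { X = 3, Y = 4 } has a Distance of 5, and ToString() returns "{3, 4}".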

When I speak on C# 6, I always get reservations from people about how this feature could be abused. Developers are concerned that someone (not them, mind you, but other developers they work with) would abuse this feature and write methods that are pages long using this syntax.

Don’t worry. The compiler won’t allow it. Suppose I tried to extend the TranslateX method to take two parameters and translate the point in both the X and Y directions. The compiler does not support expression bodied members with block statements. None of these attempts to write a method with more than one statement as an expression bodied member will compile:

// Does not compile.

public void Translate(double x, double y) => X += x; Y += y;


// Does not compile.

public void Translate(double x, double y) => { X += x; Y += y; };


// Does not compile.

public void Translate(double x, double y) => if (true) { X += x; Y += y; };


At this point, even the most evil of your co-developers will give up and use the standard syntax for a method that has more than one statement.

Overall, the new syntax leads to clearer code, and the restriction that the body must be a single expression ensures that it can’t be abused (much).

Current Projects

I create content for .NET Core. My work appears in the .NET Core documentation site. I'm primarily responsible for the section that will help you learn C#.

All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.

I'm also the president of Humanitarian Toolbox. We build Open Source software that supports Humanitarian Disaster Relief efforts. We'd appreciate any help you can give to our projects. Look at our GitHub home page to see a list of our current projects. See what interests you, and dive in.

Or, if you have a group of volunteers, talk to us about hosting a codeathon event.