Bill Blogs in C# -- C#

Created: 1/21/2016 3:35:52 PM

I was honored to speak at NDC London last week. It’s an awesome event, attended by a very skilled group of developers.

I gave two talks at NDC London. The first was a preview of the features that are currently under discussion for C# 7. C# 7 marks a sea change in the evolution of the C# language. It’s the first version where all the design discussions are taking place in the open, on the Roslyn GitHub repository. If you are interested in the evolution of the C# language, you can participate in those discussions now.

Instead of posting the slides from that talk, I’ll be blogging about each of the features, with references to the issue on GitHub. I’ll cover the current thinking and some of the open issues relating to each of the features.

Watch for those posts coming over the next month. I’ll mix in forward-looking posts along with language information you can use now.

My second talk was on building Roslyn based Analyzers and Code Fixes. You can find the slides, and demos here on my GitHub repository: https://github.com/BillWagner/NonVirtualEventAnalyzer

If you have any questions on the samples, the code, or concepts, please make an issue at the repository, and I’ll address it there.

Created: 1/5/2016 4:41:43 PM

This is my second post about the changing year. In this post, I look at the areas where I will invest my time in the coming year.

.NET Everywhere

The .NET Core platform is a version of the .NET stack that runs on OSX and Linux in addition to Windows platforms. This is a pre-release project at this time, but there is enough there to begin experimenting.

As this platform matures, .NET truly becomes cross-platform. You’ll be able to use C#, F#, VB.NET, and the bulk of the .NET Base Class Libraries to create software that runs on multiple platforms. The Roslyn compiler platform provides one of the key building blocks by running on OSX and Linux. That means the managed portions of the .NET Framework can be compiled on these platforms, and will run effectively. The remaining work is primarily to make the unmanaged portions of the CLR and the libraries run on other platforms.

Some of the work here is technical in nature. Other work is to ensure that the licenses used by the .NET Framework components are compatible with a cross-platform strategy. Most of the components are released under either the Apache 2.0 license or the MIT license. Check each library for details. (Notably, the Windows-only restriction in the earlier MS-PL licenses has been removed from anything related to .NET Core.)

While this work is going on, there is a parallel effort to build out learning materials for .NET Core. This project is also open source, and accepting contributions. (I’ve had a couple PRs merged here already).

I’m really looking forward to the date later in 2016 when a production ready .NET environment runs on Windows, Linux, and OSX.

C# 7 designing in the Open

This will be a very interesting version of C#. The team is using GitHub issues to openly discuss language features and their ramifications. Go ahead and participate in the discussions. It is really exciting to see the community participate with such passion for the different features they would like to see in their favorite language.

As I discussed in the last post, the compiler is Open Source. If you want to experiment with some of the new features, you can try. There are experimental branches for some of the proposed features. (The maturity of the implementation varies.) It’s also important to understand that the features haven’t been formally committed to. Do not use those branches in production applications (yet).

Clouds, Containers, and Devices

We’ve entered a world where our web-based applications are managing more data, and scaling to ever larger numbers of users.

OK, this trend has been in place for a while, but it’s only accelerating and growing.

We have new ways to deliver and scale web-based software. We have cloud platforms that enable us to change the number of instances running. We have Docker containers. And, in a related trend, we can offload more and more processing to the client device. That may mean platform-specific mobile applications, or SPA-style browser-based applications.

We will be doing more and more work with software that needs to scale by running multiple copies of an application in different configurations. There are many different options for this, and wise developers will learn a bit about each of them.

Rise of the Machine (Learning)

Machines are good at examining very large datasets. Machines are also good at running lots of different scenarios.

Put the two together, and I see a trend for more machine learning in the future. The applications we use generate tremendous amounts of data every day. As machines observe and analyze that data, more insights can be gained than ever before. Machine learning and related algorithms can enable us to play ‘what if’ with more and more different scenarios and make better decisions based on larger and larger data sets.

The Era of Big Data means Small Applications

Our data is a bigger and bigger part of our lives. The software necessary for us to interact with that data is smaller (in relation). This trend affects the ‘stickiness’ of any given platform.

This trend has a huge impact on how network effects work for modern systems.

As an example, consider music applications. Every platform has an app that can play music. But what’s important to users is that they can play the music they’ve already purchased. If you already have an iTunes subscription, you want to play your iTunes music. If you have a Microsoft Groove subscription, you want to play your Groove music.

The important feature is access to the data (music) that you’ve already purchased (or subscribed to, or created).

That impacts the ‘stickiness’ of a platform. Finding and installing programs for a new device takes a fraction of the time (and possibly money) of updating every subscription to a new platform. I believe this portends an interesting trend in the network effects for different applications. Namely, changing device platforms will be a trivial exercise compared to changing the provider of cloud-based data.

I believe this means that future platform stickiness will be based on your cloud-based data, not your device. Do you want to switch between mobile platforms? That will be easy, as long as your data can come along. If that means recreating or resubscribing to a service, it’s going to be a non-starter.

That leads me to the conclusion that the cloud is more critical than the device.

What’s next for HTBox?

In the coming year, we’ll be focusing on reaching the 1.0 milestone (not beta) for the apps currently in development. We’ll also be adding new applications as the year progresses.

I’m excited for the progress, and I’d be happy to see more participation. If you are interested, head over to our home page on GitHub and check out our projects.

Created: 12/31/2015 8:09:58 PM

This is the first of two posts on my thoughts for the coming year. This is a mixture of personal and global ideas. It’s my perspective, based on my experiences. In this post, I look back at the important trends and events I saw and experienced. (Part II will look at the year to come.)

The Growth of Humanitarian Toolbox

I’m very happy with all that has happened with Humanitarian Toolbox in the past year. We’ve continued to work on two different applications: Crisis Checkin and AllReady. The .NET Developer Community has really come together to help us achieve important milestones.

Crisis Checkin is being enhanced to support Operation Dragon Fire, which will provide better data sharing during crises. Watch the GitHub repo for updates on new feature requests to support this effort.

While Crisis Checkin has been moving along at a reasonable pace, AllReady has been moving incredibly fast. I need to thank our friends and colleagues on the Microsoft Visual Studio team for the incredible contributions they’ve been making to our effort.

The Visual Studio team started development of AllReady as a showcase for many of the new features that shipped with Visual Studio 2015. They recorded a series of videos that documented that initial effort. HTBox took over the code and open sourced it shortly thereafter. We continued to work with community members over the summer, at ThatConference, and remotely to add features. Fall came, and we worked with Microsoft after the MVP Summit to get the application ready for Beta. You can see some of the experience at that sprint here.

We successfully hit our beta milestones, and our next step has been a pilot with the Red Cross in Chicago. The pilot has been successful, and we’ve been generating new tasks and feature requests from the pilot.

The success we’ve had building software has also brought an increase in contributions. We’re by no means a large charity, but we’re past the bootstrap phase and well on our way to a successful startup venture.

We owe a lot to everyone that has contributed:

  • The .NET Developer Community members that have contributed to our projects.
  • The Microsoft Visual Studio Team that adopted and helped launch development of AllReady.
  • The donors that have made us a financially viable organization.
  • But most of all, my fellow board members at HTBox. I’ve never worked with a stronger and more dedicated leadership team.

I’m confident that we’ll continue this momentum over the next year.

C# 6 / Visual Studio 2015

Earlier this year, we saw the release of Visual Studio 2015, and with it the 6.0 version of the C# language. This is the first release using the Roslyn Codebase. I’m super excited about the rejuvenation of the C# and .NET community as the team reached this important milestone.

We have the Roslyn APIs to create analyzers, code fixes, and refactorings.

We have numerous new language features to support modern development.

We have an IDE and compiler that share more code, and thereby use less memory and have better performance.

And….

Open Source all the things

The code for the C# and VB.NET compilers is Open Source (Apache 2.0 license) and hosted on GitHub. Want to learn more about how it works? Look in the source. Want to participate? Grab an up-for-grabs issue and submit your fix. Want to experiment? Grab the source and try it out.

It’s also very instructive to see just how many tests and gates there are for the source code in the compilers.

But it doesn’t stop there. In addition to the Roslyn source, many other parts of the .NET Framework are currently, or are planned to be, Open Source as well. The full list is quite long; view it here.

The model of Open Source development is becoming the norm. Apple even responded by making Swift Open Source, also with a permissive license.

I believe the Open Source model is one of the key reasons for the rejuvenation of the .NET ecosystem.

Which brings me to…

.NET Foundation

The .NET Foundation is an independent organization to support Open Source development in the .NET ecosystem.

While it was announced in 2014, its growth really started in 2015. (I’m biased, as I’m on the .NET Foundation Advisory Board).

The foundation now includes parts of the .NET Framework, with source originally from the Microsoft product teams. It also includes projects that started in the community and have been brought under the .NET Foundation umbrella.

And, this post was written with the newest .NET Foundation project: Open Live Writer.

In my next post, I’ll talk about the topics that I think will be key in 2016.

Created: 11/23/2015 4:28:37 PM

This past weekend, I was honored to speak at Boston Code Camp 24. I had two different talks.

The first was on creating Angular 1.x applications using TypeScript. It was a great audience, and I enjoyed the conversations that went along with the talk. The slides and demos for this talk are available here on GitHub. As always, the branches are labeled using the steps in the demo. You can follow along as the application grows.

The second talk was on creating Diagnostics and CodeFixes using the Roslyn APIs. This one had a smaller audience (which I expected). I was quite happy that the people who did attend were very interested in the topic. Slides and demos are here (on GitHub again). The branches walk through the tasks of building a diagnostic and a codefix. The specific diagnostic was for Item 24 in More Effective C# (“Create only Non-Virtual Events”).

That brings me to my call to action for attendees. I’ll repeat it here: Would you be interested in a Roslyn based analyzer that enforced the rules in Effective C# and More Effective C#? Leave me a note here, and let me know what you think.

Tags: C#
Created: 10/13/2015 12:31:01 PM

Recently, I was asked a question about performance in event handlers. One of my regular readers had been told that using lambda syntax created an event handler that would execute more slowly than using the ‘classic’ delegate syntax and defining a separate method for the event handler.

TL;DR version:

Yes, but you probably don’t care. And if you do, you might want to reconsider your design.

A thorough explanation.

Dear readers, don’t accept these kinds of assertions just because someone says it’s true. Let’s write a bit of code, and let’s look at what’s generated.

Let’s start by creating a simple example that demonstrates both of the available options in action:

    using System;
    using System.Threading.Tasks;

    class EventSource : Progress<int>
    {
        public async Task<int> PerformExpensiveCalculation()
        {
            var sum = 0;
            for (int i = 0; i < 100; i++)
            {
                await Task.Delay(100);
                sum += i;
                this.OnReport(sum);
            }
            return sum;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var source = new EventSource();

            EventHandler<int> handler = (_, progress) => Console.WriteLine(progress);
            source.ProgressChanged += handler;
            Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= handler;

            source.ProgressChanged += ProgressChangedMethod;
            Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= ProgressChangedMethod;
        }

        // Included here so the sample compiles; shown again in the
        // measurement versions below.
        private static void ProgressChangedMethod(object sender, int e)
        {
            Console.WriteLine(e);
        }
    }


This program exercises the event source two different ways: once where the event handler is connected and disconnected using lambda syntax, and once where a separate method is defined for the delegate. Let’s think about how we would run performance tests to tell the difference. Where might any performance difference exist? Will it be in processing the event, or in connecting and disconnecting the event handler? The answer matters for how we test these two versions. If it’s in processing the event, this version would be fine to instrument. However, if the performance differences are in connecting and disconnecting the event handlers, this application won’t be useful: we are unlikely to measure the difference when connecting and disconnecting the handler only once.

Before making extensive measurements, let’s look at the generated IL. Here’s the IL from VersionOne (which uses the lambda syntax):

IL_0007: ldsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__0_0'
IL_000c: dup
IL_000d: brtrue.s IL_0026

IL_000f: pop
IL_0010: ldsfld class blogExample.Program/'<>c' blogExample.Program/'<>c'::'<>9'
IL_0015: ldftn instance void blogExample.Program/'<>c'::'<Main>b__0_0'(object,  int32)
IL_001b: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
IL_0020: dup
IL_0021: stsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__0_0'

IL_0026: stloc.1
IL_0027: ldloc.0
IL_0028: ldloc.1
IL_0029: callvirt instance void class [mscorlib]System.Progress`1<int32>::add_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
IL_002e: nop
IL_002f: ldloc.0
IL_0030: callvirt instance class [mscorlib]System.Threading.Tasks.Task`1<int32> blogExample.EventSource::PerformExpensiveCalculation()
IL_0035: callvirt instance int32 class [mscorlib]System.Threading.Tasks.Task`1<int32>::get_Result()
IL_003a: call void [mscorlib]System.Console::WriteLine(int32)
IL_003f: nop
IL_0040: ldloc.0
IL_0041: ldloc.1
IL_0042: callvirt instance void class [mscorlib]System.Progress`1<int32>::remove_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)


There are 5 instructions that set the event handler (IL_0010 to IL_0029). There is one instruction to disconnect the handler (IL_0042).


Let’s compare that to the version where the method is declared as a function member of the class:

IL_004a: ldftn void blogExample.Program::ProgressChangedMethod(object,  int32)
IL_0050: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
IL_0055: callvirt instance void class [mscorlib]System.Progress`1<int32>::add_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
IL_005a: nop
IL_005b: ldloc.0
IL_005c: callvirt instance class [mscorlib]System.Threading.Tasks.Task`1<int32> blogExample.EventSource::PerformExpensiveCalculation()
IL_0061: callvirt instance int32 class [mscorlib]System.Threading.Tasks.Task`1<int32>::get_Result()
IL_0066: call void [mscorlib]System.Console::WriteLine(int32)
IL_006b: nop
IL_006c: ldloc.0
IL_006d: ldnull
IL_006e: ldftn void blogExample.Program::ProgressChangedMethod(object,  int32)
IL_0074: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
IL_0079: callvirt instance void class [mscorlib]System.Progress`1<int32>::remove_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)


Here, there are 3 lines of IL for attaching the event handler (IL_004A to IL_0055), and also 3 lines of IL for disconnecting (IL_006e to IL_0079). That does seem to indicate a little extra IL for the lambda version, which has to check and populate its cached delegate field. Could that add up?

Well, not really. But let’s measure to be sure.

I’ll modify the program to perform some measurements. I want to measure adding and removing the event handler, not the whole process. So, I commented out the calls to PerformExpensiveCalculation(), and I’m simply adding and removing the event handler:


using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        for (int repeats = 10; repeats <= 1000000; repeats *= 10)
        {
            VersionOne(repeats);
            VersionTwo(repeats);
        }
    }

    private static void VersionOne(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            EventHandler<int> handler = (_, progress) => Console.WriteLine(progress);
            source.ProgressChanged += handler;
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= handler;
        }
        timer.Stop();
        Console.WriteLine($"Version one: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void VersionTwo(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            source.ProgressChanged += ProgressChangedMethod;
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= ProgressChangedMethod;
        }
        timer.Stop();
        Console.WriteLine($"Version two: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void ProgressChangedMethod(object sender, int e)
    {
        Console.WriteLine(e);
    }
}


And, here are the results:

Version one: 10 add/remove takes 0ms
Version two: 10 add/remove takes 0ms
Version one: 100 add/remove takes 0ms
Version two: 100 add/remove takes 0ms
Version one: 1000 add/remove takes 0ms
Version two: 1000 add/remove takes 0ms
Version one: 10000 add/remove takes 1ms
Version two: 10000 add/remove takes 1ms
Version one: 100000 add/remove takes 9ms
Version two: 100000 add/remove takes 13ms
Version one: 1000000 add/remove takes 79ms
Version two: 1000000 add/remove takes 93ms

So, if you are adding and removing a million event handlers during the normal execution of your program, you can save roughly 14ms.

However, if you are adding and removing more than 1,000,000 event handlers in your program, I would suggest you re-examine your overall design.

A common mistake.

Before I leave you, I am going to make one final change to the test program that has a huge impact on performance. Note the updated code for the lambda syntax (VersionOne):

using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        for (int repeats = 10; repeats <= 1000000; repeats *= 10)
        {
            VersionOne(repeats);
            VersionTwo(repeats);
        }
    }

    private static void VersionOne(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            source.ProgressChanged += (_, progress) => Console.WriteLine(progress);
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= (_, progress) => Console.WriteLine(progress);
        }
        timer.Stop();
        Console.WriteLine($"Version one: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void VersionTwo(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            source.ProgressChanged += ProgressChangedMethod;
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= ProgressChangedMethod;
        }
        timer.Stop();
        Console.WriteLine($"Version two: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void ProgressChangedMethod(object sender, int e)
    {
        Console.WriteLine(e);
    }
}


Instead of declaring a local variable for the lambda expression, I write it inline in the add and remove statements for the event handler. Here are the performance results from this version:

Version one: 10 add/remove takes 1ms
Version two: 10 add/remove takes 0ms
Version one: 100 add/remove takes 0ms
Version two: 100 add/remove takes 0ms
Version one: 1000 add/remove takes 99ms
Version two: 1000 add/remove takes 0ms
Version one: 10000 add/remove takes 6572ms
Version two: 10000 add/remove takes 1ms
Version one: 100000 add/remove takes 803267ms
Version two: 100000 add/remove takes 13ms


You’ll notice that the lambda version is much, much slower. Note that I stopped execution after 100,000 add/remove sequences because of the time it was taking. Why? Well, let’s look at the generated IL for the code inside the loop in VersionOne():

.loop
{
    IL_0018: nop
    IL_0019: ldloc.1
    IL_001a: ldsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_0'
    IL_001f: dup
    IL_0020: brtrue.s IL_0039

    IL_0022: pop
    IL_0023: ldsfld class blogExample.Program/'<>c' blogExample.Program/'<>c'::'<>9'
    IL_0028: ldftn instance void blogExample.Program/'<>c'::'<VersionOne>b__1_0'(object,  int32)
    IL_002e: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
    IL_0033: dup
    IL_0034: stsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_0'

    IL_0039: callvirt instance void class [mscorlib]System.Progress`1<int32>::add_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
    IL_003e: nop
    IL_003f: ldloc.1
    IL_0040: ldsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_1'
    IL_0045: dup
    IL_0046: brtrue.s IL_005f

    IL_0048: pop
    IL_0049: ldsfld class blogExample.Program/'<>c' blogExample.Program/'<>c'::'<>9'
    IL_004e: ldftn instance void blogExample.Program/'<>c'::'<VersionOne>b__1_1'(object,  int32)
    IL_0054: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
    IL_0059: dup
    IL_005a: stsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_1'

    IL_005f: callvirt instance void class [mscorlib]System.Progress`1<int32>::remove_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
    IL_0064: nop
    IL_0065: nop
    IL_0066: ldloc.2
    IL_0067: stloc.3
    IL_0068: ldloc.3
    IL_0069: ldc.i4.1
    IL_006a: add
    IL_006b: stloc.2

    IL_006c: ldloc.2
    IL_006d: ldarg.0
    IL_006e: clt
    IL_0070: stloc.s V_4
    IL_0072: ldloc.s V_4
    IL_0074: brtrue.s IL_0018
}

Focus on the two cached delegate fields above: ‘<>9__1_0’ and ‘<>9__1_1’. Notice that the event handler added is not the same as the event handler removed! Removing an event handler that is not attached does not generate any errors, but it also doesn’t do anything. So, what happens is that over the course of the looping test, VersionOne adds more than 1,000,000 event handlers to the delegate. They are all the same handler, but they still consume memory and CPU resources.


I think this may be where the misconception arises that event handlers written as lambda expressions are slower than handlers written as regular functions. If you don’t add and remove handlers correctly using the lambda syntax, that error quickly shows up in performance metrics.
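You can see the failed removal directly without reading IL. Here’s a minimal sketch (the Source class and its HandlerCount helper are mine, added for illustration) showing that removing a second, textually identical lambda leaves the original handler attached:

```csharp
using System;

class Source
{
    public event EventHandler<int> Changed;

    // Helper added for illustration: counts the attached handlers.
    public int HandlerCount => Changed?.GetInvocationList().Length ?? 0;
}

class Demo
{
    static void Main()
    {
        var source = new Source();

        // Two textually identical lambdas compile to two different
        // generated methods, so the delegates never compare equal.
        source.Changed += (_, progress) => Console.WriteLine(progress);
        source.Changed -= (_, progress) => Console.WriteLine(progress);

        // The -= was a silent no-op: the handler is still attached.
        Console.WriteLine(source.HandlerCount); // prints 1
    }
}
```

Because delegate equality compares the target and the method, and each lambda expression gets its own compiler-generated method, the `-=` never finds a match.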

Created: 9/24/2015 2:24:33 PM

It’s stated as conventional wisdom that in .NET a throw statement must throw an object of a type that is System.Exception, or derived from System.Exception. Here’s the language from the C# specification (Section 8.9.5):

A throw statement with an expression throws the value produced by evaluating the expression. The expression must denote a value of the class type System.Exception, of a class type that derives from System.Exception, or of a type parameter type that has System.Exception (or a subclass thereof) as its effective base class.

Let’s play with the edge cases. What will this do:

throw default(NotImplementedException);


The expression is typed correctly (it is a NotImplementedException), but it’s null. The answer to this is in the next sentence of the C# Specification:

If evaluation of the expression produces null, a System.NullReferenceException is thrown instead.

That means this is also legal:

throw null;

It will throw a NullReferenceException, as specified above.
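A quick sketch (my own, not from the spec) confirms the substitution at runtime:

```csharp
using System;

class ThrowNullDemo
{
    static void Main()
    {
        try
        {
            // The expression evaluates to null, so the runtime
            // substitutes a NullReferenceException when it is thrown.
            throw default(NotImplementedException);
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("caught NullReferenceException"); // this runs
        }
    }
}
```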

Now, let’s see what happens if we work with a type that can be converted to an exception:

struct Exceptional
{
    public static implicit operator Exception(Exceptional e)
    {
        return null;
    }
}

An Exceptional can be converted (implicitly) to an Exception. But this results in a compile-time error:

throw new Exceptional();

The compiler reports that the type thrown must be derived from System.Exception. So, let’s rewrite the code so that it is:

Exception e = new Exceptional();
throw e;


The first line creates an Exceptional struct. Assigning it to a variable of type Exception invokes the implicit conversion. Now, it is of type Exception, and can be thrown.
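One wrinkle worth noting: because this particular conversion returns null, the two spec rules combine. The Exception variable is null by the time the throw statement runs, so what actually surfaces is a NullReferenceException. A small sketch of that interaction:

```csharp
using System;

struct Exceptional
{
    public static implicit operator Exception(Exceptional e)
    {
        // This sample conversion returns null, as in the article.
        return null;
    }
}

class Demo
{
    static void Main()
    {
        Exception e = new Exceptional(); // implicit conversion runs here; e is null
        try
        {
            throw e; // throwing null substitutes a NullReferenceException
        }
        catch (NullReferenceException)
        {
            Console.WriteLine("caught NullReferenceException"); // this runs
        }
    }
}
```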

Finally, what about this expression:

dynamic d = new Exceptional();
throw d;


In most places in the language, where an expression must evaluate to a specific type, an expression of type dynamic is allowed.

Here we have the joy of edge cases. The spec (as quoted above) doesn’t speak to this case. Where the spec is silent, developers sometimes interpret it differently. The classic compiler (pre-Roslyn) accepts throwing an expression that is dynamic. The Roslyn (VS 2015) compiler does not. I expect this may change, and the spec may get updated to explicitly state the behavior.

Created: 8/4/2015 4:07:44 PM

Whenever I teach or present, people ask me why their copy of Visual Studio doesn’t look like mine. The reason is the extensions I have installed. I’m really impressed with Visual Studio’s extension model, and with how rich the ecosystem for Visual Studio extensions is.

It’s gotten even richer with the release of Visual Studio 2015 and Roslyn based analyzers.


Here (in alphabetical order) are the extensions I use all the time:

The .NET Compiler Platform SDK: This extension provides all the tools and templates that I need to create Roslyn based analyzers, diagnostics, and refactorings. As of RTM, this extension includes the Roslyn syntax visualizer, which was previously a separate install.

C# Essentials: This extension is a set of Roslyn based analyzers and code fixes that help you adopt the new features of C# 6. If you find yourself falling into old habits and using old constructs when you want to adopt the new language syntax, these analyzers will help you build new habits.

Chutzpah Test Adapter: This extension runs Jasmine based unit tests for my JavaScript code in the Visual Studio Test explorer window. Adding this extension means that my JavaScript tests run with every code change, just like my C# based unit tests run on every build. It’s a great way to enforce TDD practices with your JavaScript client code.

Chutzpah Test Runner Context Menu Extension: Sometimes I want to run my Jasmine tests in the browser to use the browser based debugging tools. With this extension, I can right-click on a test, and Chutzpah will generate a test HTML file with JavaScript loaded to run the tests.

Code Cracker for C#: This is another Roslyn-based analyzer that finds and fixes many common coding errors. It’s also available as a NuGet package, if you’d rather install it on a project-by-project basis.

GitHub Extension for Visual Studio: I use this extension in addition to the standalone GitHub for Windows application. It makes it easy to stay inside Visual Studio while working with GitHub Flow.

Telerik JustDecompile extension: This requires you to install JustDecompile, which is free. I use this extension to understand what IL my C# compiles down to. It helps me when I’m writing articles, or looking through the C# spec.

PowerShell Tools for Visual Studio 2015: This extension helps when I’m writing or debugging PowerShell scripts.

Productivity Power Tools 2015: This extension provides many useful UI features: Presentation mode (switches font sizes), Tab Well enhancements, Visualizer enhancements in Solution Explorer, and many, many more.

Web Essentials: If you do web development, you need this. Period. It makes so many of the web programming tasks much, much, easier.

If you compare my list to yours, note that the above is an incomplete list. I did not include any of the extensions that install as part of the Visual Studio install (like ASP.NET templates, TypeScript support).

That’s my list. What’s yours?

Created: 7/23/2015 8:53:43 PM

This past Monday, Microsoft released the production version of Visual Studio 2015. Let’s get right to the lead: Visual Studio Community edition is free (for independent and Open Source developers). You have no excuse not to get this.

I’m not going to repeat all the information about the release and the total set of new features. Soma did that on his blog quite well. Instead, I’m going to focus on my areas of interest, and some of the resources I’ve written that can help you learn about these new features.

New Language Features

There are a number of new features in C# 6. Members of the team have been updating this page on Github with information about the new features.

I’ve written a number of articles about C# 6 for InformIT. The first four are live:

I recorded a Microsoft Virtual Academy Jump Start on C# 6 with Anthony D. Green, one of the program managers on the Managed Languages team.

And finally, I’ve written quite a few blog entries on the new language features. You can see a list here.

Language Service APIs

This version is particularly exciting because of the new compiler services that are part of Roslyn. These are a rich set of APIs that enable you (yes, you!) to write code that analyzes C# code, up to and including providing fixes for mistakes or other poor practices.

I’ve written an article for InformIT about the analyzers. You can read it here. I also did a Jump Start for Microsoft Virtual Academy with Jennifer Marsman.  If you missed the live event, watch the Microsoft Virtual Academy home page for updates. The recording should go live soon.

You can also learn more by exploring some of the open source projects that contain analyzers and code fixes:

  • Code Cracker. This project contains a number of different analyzers for common coding mistakes in C#.
  • DotNetAnalyzers. This contains a set of projects for different analyzers. Some address particular practices. Others enforce rules, similar to the StyleCop effort.
  • CSharp Essentials. This project is a collection of analyzers that make it easier to adopt C# 6. If you don’t want to build it yourself, you can install it as an extension in VS 2015.

Humanitarian Toolbox and the allReady app.

Finally, I was thrilled at the contribution from members of the Visual Studio team to Humanitarian Toolbox. Several members from different parts of the Visual Studio team worked for three days to build the initial release of the allReady application. The application source is on Github, under the HTBox organization.

This application was requested by the Red Cross. It provides features to lessen the impact of disasters on families and communities. You can learn more about the project here on the Humanitarian Toolbox website.

The Visual Studio 2015 launch event included a profile of the developers and the process for building the initial code for allReady. You can watch the entire event on Channel 9. If you are interested in just the allReady app, and how it was built with Visual Studio 2015, look for the “In the Code” segments. There may be more In the Code episodes coming as the application grows.

All of us at Humanitarian Toolbox are grateful for the contribution from the Visual Studio team.

As a developer, I’m also grateful for the great new tools.

Created: 7/16/2015 5:05:29 PM

I was asked to review a bit of code for a friend the other day, and the result may be illustrative for others. I’ve stripped out much of the code to simplify the question. Examine the following small program:

 

public class SomeContainer
{
    public IList<int> SomeNumbers => new List<int>();
}

class Program
{
    static void Main(string[] args)
    {
        var container = new SomeContainer();
        var range = Enumerable.Range(0, 10);

        foreach (var item in range)
            container.SomeNumbers.Add(item);

        Console.WriteLine(container.SomeNumbers.Count);
    }
}

If you run this sample, you’ll find that container.SomeNumbers has 0 elements.

 

How is that possible? Why does the container not have 10 elements?

 

The problem is this line of code:

 

public IList<int> SomeNumbers => new List<int>();

 

The author had used an expression-bodied member when he meant to use an auto-property initializer.

An expression-bodied member evaluates its expression whenever the member is accessed. There is no hidden backing field: every time the SomeNumbers property of the container gets accessed, a new, empty List<int> is allocated and returned. It’s as though the author wrote this:

 

public IList<int> SomeNumbers { get { return new List<int>(); } }

 

When you see it using the familiar syntax, the problem is obvious.

This fix is also obvious. Use the syntax for an initializer:

 

public IList<int> SomeNumbers { get; } = new List<int>();

Notice the changes from the original code. SomeNumbers is now a read-only auto-property. It also has an initializer. This is the equivalent of writing:

 

public IList<int> SomeNumbers { get { return storage; } }
private readonly List<int> storage = new List<int>();

 

That expresses the design perfectly.
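The difference between the two forms is easy to observe in a small, self-contained program. The class names here are my own, chosen only for illustration:

```csharp
using System;
using System.Collections.Generic;

public class ExpressionBodied
{
    // The expression runs on every access: each read returns a fresh list.
    public IList<int> Numbers => new List<int>();
}

public class Initialized
{
    // The initializer runs once, when the object is constructed.
    public IList<int> Numbers { get; } = new List<int>();
}

public static class Demo
{
    public static void Main()
    {
        var a = new ExpressionBodied();
        Console.WriteLine(ReferenceEquals(a.Numbers, a.Numbers)); // False: two different lists

        var b = new Initialized();
        Console.WriteLine(ReferenceEquals(b.Numbers, b.Numbers)); // True: the same list
    }
}
```

Reading the property twice gives two different objects in the first class, and the same object in the second. That’s the whole bug in two lines of output.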

 

This just illustrates that as we get new vocabulary in our languages, we need to fully understand the semantics of the new features. We need to make sure that we correctly express our designs using the new vocabulary.

It’s really easy to spot when you make a mistake like this: every time you look at this property in the debugger, you’ll see that accessing it creates a new value. That’s the clue that you’ve made this mistake.

 

 

Created: 7/5/2015 4:11:33 PM

Sometimes, despite everyone’s best planning, you can’t help having features collide a bit. Thankfully, when it happens, there is almost always some way to reorganize the code to make it work. The key is to understand why your original code causes issues.

I’ve written before about the new string interpolation feature in C# 6. I find it a welcome improvement over the previous incarnations.

Two goals of the feature are:

  1. It supports format specifiers for .NET types.
  2. It supports rich C# expression syntax between the braces.

Those two goals conflict because of the C# symbols used for each of them.

To specify a format for an expression, you place a colon ‘:’ after the expression, and then the remaining characters inside the braces consist of the format specifier. Here’s an example:

 

Console.WriteLine($"{DateTime.Now:MMM dd, yyyy}");

 

This will print the date in the form “Month day, year”, where day is always a two-digit number.

The C# compiler supports these format specifiers by interpreting the first ‘:’ character in an interpolation expression as the start of the format specifier. That works great, except in one case: when you want the colon to mean something else, like part of a conditional expression:

 

Console.WriteLine($"When {condition} is true, {condition ? "it's true!" : "It's False"}");

 

You can see that the syntax highlighting gets confused: it doesn’t color the rest of the string correctly. That’s because the compiler interprets the ‘:’ as the beginning of a format specifier.

It’s easy to fix this. Just put the conditional expression inside parentheses:

 

Console.WriteLine($"When {condition} is true, {(condition ? "it's true!" : "It's False")}");

That way, the C# parser views everything inside the parentheses as part of the expression, and does not view the : as the beginning of a format specifier.

See? It’s easy to get both features to work. You just have to know why the first expression is interpreted differently than you’d expect.
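Both features can live in the same program. Here’s a short, self-contained sketch (the values are mine, chosen only for illustration):

```csharp
using System;

class InterpolationDemo
{
    static void Main()
    {
        bool condition = true;

        // Parentheses keep the ':' inside the conditional expression,
        // so the parser does not treat it as the start of a format specifier.
        Console.WriteLine($"When {condition} is true, {(condition ? "it's true!" : "It's False")}");

        // Without parentheses, ':' introduces a format specifier.
        Console.WriteLine($"{42:D4}");   // prints 0042
    }
}
```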

Created: 4/7/2015 5:02:02 PM

I’m teaching a second .NET bootcamp in Detroit this spring. It’s quite a rewarding experience. Like the previous cohort, these students are all adults that are motivated to make a career shift. I always think I learn as much as the students do when I teach one of these groups. I’ve got four important lessons for all my readers based on the first few weeks of the bootcamp experience.

Lesson 1: Developers are in Demand

My first surprise was the experience that some of the students have coming into the class. Everyone has been successful in different fields, from business to medicine to finance. And they all want to be developers. Developers are in serious demand everywhere. This may be an exaggeration, but I believe the unemployment rate among developers is approaching 0. Every growing company I work with wants to hire skilled developers. It’s become a barrier to their growth.

Investing in yourself by learning to code will pay off. It opens doors.

There’s a corollary to this lesson: Having other skills also pays off. As we’ve been discussing next steps, we look at where everyone’s past experience can help. Several of the students have very strong backgrounds in different vertical businesses. Those skills will help to set them apart from other entry-level developers.

Lesson 2: Anyone can Code.

I’ve been really happy to see this result. There are too many people that have the world view that someone is “born with” the skills or the mindset necessary to be a developer. These classes, and the students that have attended, prove that’s bunk. Most of the students enter with no programming experience at all.

8 weeks later, they can develop code, and feel comfortable with the .NET framework.

Now, I don’t want to overstate this: they are all still beginners, and ready for entry-level jobs as developers. They don’t yet have the experience many of my typical readers do. But that’s a function of time, not innate ability. I was a beginner once, as were all of you, dear readers. These students will continue to grow, as they keep coding.

Anyone can learn to code. It takes motivation, some help, and a path. If you know someone interested in learning, get them involved. Point them in the direction of learning resources. Encourage them to try and build something. We’ve all enjoyed developing software. There’s plenty of room for more. And, anyone can learn.

There’s a corollary here: I continue to be impressed by just how fast new folks pick up the core skills. There are so many concepts and so much vocabulary that we work with. We have learned a lot and have a lot of experience behind us. I am truly impressed by how quickly these new developers learn and grow the skills we’ve already internalized. It does seem very frustrating for a day or two, until they get past that “Hello World” stage. Thankfully, within a week, they are building classes, understanding core concepts, and creating real code. It’s great to see.

Lesson 3: There are stages of understanding

This has been the most interesting piece to observe. There’s the famous quote from Joseph Joubert: “To teach is to learn twice”. I’m finding that students really go through four distinct phases of understanding: reading code, doing guided labs, working independently, and helping peers.

In that first phase, they can see code that I’ve written and begin to understand what it does. They don’t yet have the vocabulary, and they are kind of unsure exactly what they are reading. But they are certainly beginning to understand.

The second phase is where students can work with a guided lab, and understand what’s being added. They can follow the instructions, type in the code, and do the debugging and proofreading necessary to make a guided lab work.

The third phase is when they can create their own code and their own algorithms to build software that does something useful. It’s where a lot of entry level developers spend much of their time. Their code works, but they may not be able to completely understand and articulate how it works.

That fourth phase is the key to mastery: Once students get to the point where they can explain what they’ve built, how it works, and how it uses the underlying libraries, they have achieved a new level of mastery.

Well, what about you?

I’ve truly enjoyed working with new developers and helping them join this career. There are large numbers of people that want to write code. Can you help? It would be a great opportunity for you to learn twice. Maybe it’s not beginners; maybe it’s mentoring junior developers in your organization.

Created: 4/3/2015 12:50:42 AM

I’m thrilled to have been nominated and accepted as a member of the .NET Foundation Advisory Board.

I’m very excited about the role we can play in growing the Open Source ecosystem around .NET. We’ve just gotten started, so there is not a lot of progress to report, but I’m excited by the potential. Our role is to provide a channel between the .NET Foundation Board of Directors and the .NET developer community. We will be helping to refine policies to accept new projects, grow and nurture the projects under the .NET Foundation, and overall, make .NET Open Source Development better and richer for everyone.

Shaun Walker is the chairman of the .NET Foundation Advisory Board, and his announcement here is a great description of the rationale and thought process that went into creating the advisory board.

I’m excited to participate in growing Open Source development around .NET and the great languages and frameworks that are coming from the developer teams. This is a large and important initiative. It covers everything from the Roslyn compiler projects, to the TypeScript compiler to ASP.NET vNext to the Core CLR and core .NET Framework releases. And that’s just the major projects from inside Microsoft. There are so many tremendous projects (like ScriptCS, just to name one) that are part of a growing .NET ecosystem.

We’ve got quite a bit of work to do. The Foundation is a new organization, and we need to advise the board on everything from what kinds of projects we’ll accept, to the process for accepting new projects, to the governance of the advisory board. It’s a lot of work, but it’s also a lot of fun.

It’s an exciting time to be a .NET developer. I’m glad to be in the middle of it.

Created: 3/3/2015 4:34:42 PM

The more I work with C# 6 in projects, the more I find myself using ?. to write cleaner, simpler, and more readable code.  Here are four different uses I’ve found for the null-conditional operator.

Deep Containment Designs

Suppose I’m writing code that needs to find the street location for the home address of the contact person for a vendor. Maybe there’s an awesome event, and I need to program my GPS. Using earlier versions of C#, I’d need to write a staircase of if statements checking each property along the way:

 

var location = default(string);
if (vendor != null)
{
    if (vendor.ContactPerson != null)
    {
        if (vendor.ContactPerson.HomeAddress != null)
        {
            location = vendor.ContactPerson.HomeAddress.LineOne;
        }
    }
}

 

Now, using C# 6, this same idiom becomes much more readable:

 

var location = vendor?.ContactPerson?.HomeAddress?.LineOne;

 

The null-conditional operator short-circuits: evaluation stops, and the entire expression evaluates to null, as soon as any property in the chain is null.
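A self-contained sketch shows the short-circuit behavior. The minimal Vendor, Person, and Address types here are my own stand-ins, not from any real codebase:

```csharp
using System;

public class Address { public string LineOne { get; set; } }
public class Person { public Address HomeAddress { get; set; } }
public class Vendor { public Person ContactPerson { get; set; } }

class ShortCircuitDemo
{
    static void Main()
    {
        var vendor = new Vendor();   // ContactPerson is null

        // Evaluation stops at the first null link; no NullReferenceException.
        var location = vendor?.ContactPerson?.HomeAddress?.LineOne;
        Console.WriteLine(location ?? "(no location)");
    }
}
```

The same expression works whether the null appears at the vendor, the contact person, or the home address: the result is simply null.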

INotifyPropertyChanged and similar APIs

We’ve all seen code like this in a class that implements INotifyPropertyChanged:

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

I hope you are cringing now. This code will crash if it’s used in a situation where no code subscribes to the INotifyPropertyChanged.PropertyChanged event: the event field is null, so raising it throws a NullReferenceException.

When faced with that situation, many developers write something like the following:

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

OK, this is a little better, and will likely work in most production situations. However, there is a possible race condition lurking in this code. If a subscriber removes a handler between the ‘if’ check and the line that raises the event, this code can still crash. It’s the kind of insidious bug that may only show up months after deploying an application.  The proper fix is to create a temporary reference to the existing handler, and raise the event on that object rather than allowing the race condition on the PropertyChanged public event:

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

It’s more code, and it’s a few different techniques to remember every time you raise the PropertyChanged event. In a large program, it seems like someone forgets at least once.

C# 6 to the rescue!

In C# 6, the null-conditional operator implements all the checks I mentioned above. You can replace the extra checks and the local variable with ?. and a call to Invoke:

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

The shorter, more modern version reads more concisely, and implements the proper idioms for raising events and managing subscribers being added or removed.
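Here’s a minimal, runnable sketch showing that the ?.Invoke form is safe whether or not anyone has subscribed. The Model class name is mine, chosen only for this demo:

```csharp
using System;
using System.ComponentModel;

public class Model : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            {
                name = value;
                // Safe even when PropertyChanged is null (no subscribers).
                PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
            }
        }
    }
}

class EventDemo
{
    static void Main()
    {
        var model = new Model();
        model.Name = "first";   // no subscribers: nothing raised, no crash

        model.PropertyChanged += (s, e) => Console.WriteLine($"Changed: {e.PropertyName}");
        model.Name = "second";  // prints "Changed: Name"
    }
}
```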

Resource Management

Occasionally, we may find that one of our types owns another object that has certain capabilities. However, that object may implement other capabilities beyond those our class requires. Usually, that’s not an issue, but what if that object implements IDisposable? Consider the case of an Evil Genius that is done working with a henchman.  The code to retire a henchman might look like this:

 

public void RetireHenchman()
{
    var disposableMinion = Minion as IDisposable;
    if (disposableMinion != null)
        disposableMinion.Dispose();
    Minion = null;
}

 

The null-conditional operator can make this code more concise as well:

 

public void RetireHenchman()
{
    (Minion as IDisposable)?.Dispose();
    Minion = null;
}
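A small sketch (with hypothetical Minion types of my own invention) confirms that Dispose runs only when the runtime type actually implements IDisposable:

```csharp
using System;

public class Minion { }

public sealed class DisposableMinion : Minion, IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() { Disposed = true; }
}

public class EvilGenius
{
    public Minion Minion { get; set; }

    public void RetireHenchman()
    {
        // The 'as' cast produces null for non-disposable minions; ?. skips Dispose.
        (Minion as IDisposable)?.Dispose();
        Minion = null;
    }
}

class RetireDemo
{
    static void Main()
    {
        var henchman = new DisposableMinion();
        var genius = new EvilGenius { Minion = henchman };
        genius.RetireHenchman();
        Console.WriteLine(henchman.Disposed);     // True
        Console.WriteLine(genius.Minion == null); // True
    }
}
```

Retiring a plain Minion is just as safe: the cast yields null, and the ?. quietly skips the call.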

 

LINQ Queries

There are two different uses I’ve found for this operator when I work with LINQ queries. One very common use is after I create a query that uses SingleOrDefault(), when I want to access some property of the (possibly null) single object. That’s simple with ?.

 

var created = members.SingleOrDefault(e => e.name == "dateCreated")?.content;

 

Another use is to create a null output whenever the input sequence is null:

 

members?.Select(m => (XElement)XmlValue.MakeValue(m))
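Both query patterns can be exercised with a small in-memory sketch. The Member type here is hypothetical, standing in for the XML elements in the original code:

```csharp
using System;
using System.Linq;

class Member
{
    public string Name { get; set; }
    public string Content { get; set; }
}

class QueryDemo
{
    static void Main()
    {
        var members = new[]
        {
            new Member { Name = "dateCreated", Content = "2015-07-05" },
            new Member { Name = "author", Content = "Bill" }
        };

        // SingleOrDefault may return null; ?. guards the property access.
        var created = members.SingleOrDefault(e => e.Name == "dateCreated")?.Content;
        Console.WriteLine(created ?? "(missing)");   // prints 2015-07-05

        var missing = members.SingleOrDefault(e => e.Name == "nope")?.Content;
        Console.WriteLine(missing ?? "(missing)");   // prints (missing)

        // A null input sequence produces a null query, not an exception.
        Member[] empty = null;
        var projected = empty?.Select(m => m.Name);
        Console.WriteLine(projected == null);        // prints True
    }
}
```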

 

The addition of this feature has made me realize just how much code checks for null, and how much more readable our code becomes with a concise syntax for checking against null and taking some default action based on the ‘null-ness’ of a variable.

This feature has changed how I code everyday. I can’t wait for the general release and getting more of my customers to adopt the new version.

Created: 2/24/2015 8:57:45 PM

During CodeMash, Stephen Cleary gave me a copy of his Concurrency in C# Cookbook.

It’s an essential reference for any C# developer using concurrency in their applications. These days, hopefully that means most C# developers.

The 13 chapters provide short recipes for many uses of concurrency. Because the recipes are short, they provide minimal breadth and depth on how each technique works. That’s great if you have a good understanding of how these different features work and you’re looking to decide which recipe is the best one for your current challenge. However, if you are looking for a tutorial or introduction to concurrency, this book will leave you with many questions.

Stephen’s material is clear, concise, and will help you follow the proper guidance in a variety of situations. It will give you more options for concurrent programming, and you will be better at using them. If you know some of the concurrent techniques available to modern .NET developers, you’ll quickly catch on to the style and you’ll be exposed to recipes you may not know. That will make you a better developer.

This book is not for developers that have no experience with concurrent programming. Stephen assumes you know the basics. His explanations assume a background in the tools used.

This book has earned a handy place on my shelf. In particular, the chapter on Data Flows helps me remember to use this handy library more often. I believe I’ll reach the point where I reference this book whenever I’m looking at how to build a concurrency related library or program.

Created: 2/5/2015 3:01:05 PM

There are some new naming conventions coming for MSSQL LocalDB when Visual Studio 2015 ships. I found them while working on the Humanitarian Toolbox Crisis Checkin project. As more developers start using machines with VS 2015, especially machines that didn’t have a previous version installed, you may run into the same problem.

The web.config for Crisis Checkin on a developer box contains this connection string (highlight added):

 

<add name="CrisisCheckin" connectionString="Data Source=(localdb)\v11.0;AttachDbFilename=|DataDirectory|\crisischeckin.mdf;Integrated Security=True; MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />

 

When I ran this on my new Surface Pro III, using VS 2015 Preview, the application would not load. The app could not create and load the database.  After some investigation, creating apps using the new templates, and comparing, I found that I needed to change the connection string as follows:

<add name="CrisisCheckin" connectionString="Data Source=(localdb)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\crisischeckin.mdf;Integrated Security=True; MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />

I asked around, and with the help of a couple team members, I now understand and know what to do.

Starting with VS 2015, the team is moving away from version-dependent connection strings. That means, once you adopt VS 2015, you have the option of using a version-independent connection string moving forward. That’s the first recommendation:

To fix this issue for now, and for future versions of Visual Studio, replace the version-dependent connection string (e.g. “v11.0”) with “MSSQLLocalDB”.

However, you may need to continue to work with team members that are still using VS 2013. If that’s the case, you can follow the second recommendation:

Install the Version 11 LocalDB, which is free. You can then use Visual Studio 2015 (preview), but your database engine will be MSSQL LocalDB version 11.0.

Personally, I’d gravitate toward using the version-independent connection strings as soon as possible. We will do that on Crisis Checkin once VS 2015 is officially released.

I hope this saves you some research when you first encounter this change.

Current Projects

I create content for .NET Core. My work appears in the .NET Core documentation site. I'm primarily responsible for the section that will help you learn C#.

All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.

I'm also the president of Humanitarian Toolbox. We build Open Source software that supports Humanitarian Disaster Relief efforts. We'd appreciate any help you can give to our projects. Look at our GitHub home page to see a list of our current projects. See what interests you, and dive in.

Or, if you have a group of volunteers, talk to us about hosting a codeathon event.