Bill Blogs in C# -- Roslyn

Created: 4/12/2016 8:48:30 PM

//Build changes Everything

I started this series after giving a presentation at NDC London on the potential features up for discussion in C# 7. That information was based on the public design discussions on GitHub. Now that //build has happened, we know a bit more about the plans. There is even a public preview of Visual Studio 15 available for download. There are two versions: a ‘classic’ installer, and a new lightweight installer that runs much more quickly but has fewer scenarios supported.

I have installed both versions of Visual Studio 15 on my machine. They install side-by-side, and my machine works well. (It is also a machine with Visual Studio 2015 on it, and that has been unaffected.) This is still pre-release software, and you should proceed with some caution.

All of which means there are important updates to this series. The plans have changed: the team announced at //build that the languages will move to a faster release cadence than before. That’s great news, but it comes at a price. Some of the features that were slated for C# 7 are likely to be pushed to the release that follows it. Private protected is one of those features. Non-nullable reference types (covered in my NDC talk, but not yet in this blog series) may be another.

Immutable Types and With Expressions

Now that those announcements are made, let’s discuss the addition of ‘with expressions’ to make it easier to work with immutable types.

Immutable types are becoming a more common part of our design toolkit. They make it easier to manage multithreaded code: shared data does not create issues when that data can’t change.

However, working with immutable types can become very cumbersome. Making any change means making a new object, and initializing it with all the properties of the original object, except the one you want to change.

With expressions are meant to address this issue. In their most basic use, consider that you have created an immutable Person object:

var scott = new Person("Scott", "Hanselman");

Later, you find you need an object that’s almost the same, but must have a different last name:

var coolerScott = scott with { LastName = "Hunter" };

This simple example shows the syntax, but doesn’t provide great motivation for using the feature. It’s almost as simple to create a new object and explicitly set both properties by calling the constructor. With expressions become much more useful in real-world scenarios where more properties are needed to initialize the object. Imagine a more extensive Person class that included employer, work address, and so on. When a Person accepts a new role, the code to create the new object becomes much more heavyweight, and it’s all boilerplate that adds minimal value. In those cases, With expressions are your friend.
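As a sketch of that boilerplate (the extra property names here are hypothetical, not part of any proposal), the copy-and-change code would look something like this today:

// Every property must be copied by hand just to change the one that matters.
var movedScott = new Person(
    scott.FirstName,
    scott.LastName,
    "Microsoft",        // the only value that actually changed
    scott.WorkAddress,
    scott.HomeAddress);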

This feature will leverage a convention found throughout the Roslyn APIs. You may have seen that many types have .With() methods that create a new object by copying an existing object and replacing one property. The proposed feature would use a With() method if one was available. If not, one proposal would generate a call to a constructor and explicitly set all the properties. Another concept would only support types that had an appropriate With() method.
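To make the convention concrete, here is one way such a With() method might be written by hand today. The shape is illustrative only; it is not the code any proposal would generate:

// Illustrative hand-written With() method following the Roslyn-style convention.
public class Person
{
    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    public string FirstName { get; }
    public string LastName { get; }

    // Copy this object, replacing only the values the caller supplies.
    public Person With(string firstName = null, string lastName = null) =>
        new Person(firstName ?? FirstName, lastName ?? LastName);
}

// A with expression such as 'scott with { LastName = "Hunter" }' could then
// translate to a call like: scott.With(lastName: "Hunter");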

The syntax for With expressions was originally proposed for record types (which I will cover in a future blog post). Record types are a new feature, and the compiler can generate all the necessary code to support new syntax like With expressions. The current proposal would specify that Record types would generate With() methods that would support this language feature.

When With Expressions are applied to record types, the compiler-generated With() method is a good example of how such a method can support many permutations of With Expressions. That proposal minimizes the work necessary to support With Expressions for every combination of updated properties.

Open Questions

In the previous section, I said that one proposal would fall back to a constructor if a With() method was not available. The advantage to that design is that With Expressions would work with all existing types. The advantage of requiring a With() method is that it enables richer support for positional and name mapping.

But there are more questions. In the scenario above, suppose the Person type was a base class for other types: Teacher, Student, Teaching Assistant, Tutor, Advisor. Should a With Expression that uses a variable of type ‘Person’ work correctly on any derived type? There’s a goal to enable those scenarios. You can read about the current thinking in the February C# Design Notes.

With Expressions are one language feature that will make working with immutable types more pleasant and natural in C#. These features will make it easier to create the designs we want to support. It’s part of that “Pit of Success” design goal for C#: Make it easier to do the proper design.

Most importantly, these issues are still being discussed and debated. If you have ideas, visit the links I’ve put in place above. Participate and add your thoughts.

Created: 4/6/2016 2:48:30 PM

Private Protected access likely comes back in C# 7.

My readers are likely familiar with the four access modifiers in C#: public, protected, internal, and private. Public access means accessible by any code. Protected access enables access for all derived classes. Internal access enables access from any code in the same assembly. Private access is limited to code in the same class. C# also supports “protected internal” access. Protected Internal access enables access from all code in the same assembly, and all derived classes.

What was missing was a more restrictive access: enable access only for code that is both in the same assembly AND derived from this class. The CLR has supported this for some time, but it was not legal in C#. The team wanted to add it in C# 6, using the keywords “private protected”. That generated a tremendous amount of feedback. While everyone liked the feature, there was a lot of negative feedback on the syntax. Well, after much discussion, thought, and experimentation, it’s back. It’s back with the same syntax.
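Here is a quick sketch of what that accessibility would mean in practice (the type and member names are mine, purely for illustration):

// Assembly A
public class Widget
{
    // Visible only to derived classes that are also compiled into Assembly A.
    private protected void Initialize() { }
}

public class LocalWidget : Widget       // same assembly, derived: allowed
{
    public void Setup() => Initialize();
}

// Assembly B
public class RemoteWidget : Widget      // derived, but in a different assembly: not allowed
{
    // public void Setup() => Initialize();   // compile-time error
}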

Let’s explain some of the thinking behind this.

One overriding goal for the team was that this feature should not require a new keyword that could potentially break code. New keywords might be used in existing code as identifiers (variables, fields, methods, class names, and so on). In fact, the C# language design team has managed to avoid adding any new global keywords since C# 2.0. All the features for LINQ, dynamic, async and await, and more have been implemented using contextual keywords. Contextual keywords have special meaning only when used in a particular context. That enabled the language designers to add new features with less concern that they could be breaking existing code.

Using contextual keywords is very hard when you are talking about access modifiers. Remember that access modifiers are optional. Members of a class have a default access: private. Therefore, when the language parser looks at a method declaration, the first token may be an optional access modifier, or it may be the return type. So, some new keyword for the new restrictive access would have the potential to break code: if some developer had created a type with the name of the proposed modifier, that code would break.
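To see why, imagine the team had chosen a brand-new modifier word (the word below is invented purely for illustration). Any existing code that already used that word as a type name would become ambiguous:

// Existing code somewhere in the wild:
class assemblyprotected { }     // a (badly named) user-defined type

class Service
{
    // Today this parses as "a method whose return type is assemblyprotected".
    // If "assemblyprotected" became a global modifier keyword, the parser could no
    // longer tell whether the first token is an access modifier or the return type.
    assemblyprotected CreateWidget() => new assemblyprotected();
}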

So, new keywords are out. That removes any suggestions like “protected or internal” and “protected and internal”. Those would be great suggestions, were it not for the breaking code problem.

However this feature was going to be implemented, it needed to use a combination of the current keywords. This new access is more restrictive than the current “protected internal” access. The modifier used should reflect that. The design question now becomes what combination of access modifier keywords would reflect a more restrictive access, and yet express that both internal and protected modifiers are in play?

Let’s reject out of hand the suggestion that the current ‘protected internal’ access should be repurposed for this feature, and a new combination of keywords used for the existing feature. That would break way too much code, and there’s no way for tools to know if you meant the old meaning, or the new meaning.

The other possible suggestion was to keep the current meaning for “protected internal” and have “internal protected” take on the new meaning. Well, that’s also a breaking change. In today’s world, you can type the ‘protected’ and ‘internal’ keywords in either order, and the meaning is the same. That fails the breaking-change test as well.

Of the possible combinations, “private protected” comes out best. Along with “private internal”, it is one of the only combinations of two access modifiers that makes sense and isn’t already in use. One other option could be “private protected internal”, but that’s a lot of extra typing.

Overall, there are a lot of requests for adding the feature and enabling this accessibility. The proposed syntax is still the best way to express it. The language design team thought through alternatives, polled the community, and asked in public. This is still the best expression for this feature.

I’m glad it’s back.

Created: 3/2/2016 4:30:52 PM

Let’s discuss another of the features that may be coming to the next version of the C# language: Local Functions.

This post discusses a proposed feature. This feature may or may not be released. If it is released, it may or may not be part of the next version of C#. You can contribute to the ongoing discussions here.

Local functions would enhance the language by enabling you to define a function inside the scope of another function. That supports scenarios where, today, you define a private method that is called from only one location in your code. A couple scenarios show the motivation for the feature.

Suppose I created an iterator method that was a more extended version of Zip(). This version puts together items from three different source sequences. A first implementation might look like this:

public static IEnumerable<TResult> SuperZip<T1, T2, T3, TResult>(IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    var e1 = first.GetEnumerator();
    var e2 = second.GetEnumerator();
    var e3 = third.GetEnumerator();
    while (e1.MoveNext() && e2.MoveNext() && e3.MoveNext())
        yield return Zipper(e1.Current, e2.Current, e3.Current);
}

 

This method would throw a NullReferenceException in the case where any of the source collections was null, or if the Zipper function was null. However, because this is an iterator method (using yield return), that exception would not be thrown until the caller begins to enumerate the result sequence.
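Here is a small sketch of the caller’s view (assuming second, third, and a valid Zipper delegate exist):

IEnumerable<int> first = null;

// Nothing throws here, even though 'first' is null:
// the iterator body has not started executing yet.
var zipped = SuperZip(first, second, third, (a, b, c) => a + b + c);

// The NullReferenceException surfaces only when enumeration begins,
// possibly far away from the call that supplied the bad argument.
foreach (var item in zipped) { }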

That can make it hard to work with this method: errors may be observed in code locations that are not near the code that introduced the error. As a result, many libraries split this into two methods. The public method validates arguments. A private method implements the iterator logic:

public static IEnumerable<TResult> SuperZip<T1, T2, T3, TResult>(IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    if (first == null)
        throw new NullReferenceException("first sequence cannot be null");
    if (second == null)
        throw new NullReferenceException("second sequence cannot be null");
    if (third == null)
        throw new NullReferenceException("third sequence cannot be null");
    if (Zipper == null)
        throw new NullReferenceException("Zipper function cannot be null");

    return SuperZipImpl(first, second, third, Zipper);
}

private static IEnumerable<TResult> SuperZipImpl<T1, T2, T3, TResult>(IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    var e1 = first.GetEnumerator();
    var e2 = second.GetEnumerator();
    var e3 = third.GetEnumerator();
    while (e1.MoveNext() && e2.MoveNext() && e3.MoveNext())
        yield return Zipper(e1.Current, e2.Current, e3.Current);
}

 

This solves the problem. The arguments are validated, and if any are null, an exception is thrown immediately. But it isn’t as elegant as we might like. The SuperZipImpl method is called only from the SuperZip() method. Months later, it may be harder to understand what was originally written, and to see that SuperZipImpl is referenced from only this one location.

Local functions make this code more readable. Here would be the equivalent code using a Local Function implementation:

 

public static IEnumerable<TResult> SuperZip<T1, T2, T3, TResult>(IEnumerable<T1> first,
    IEnumerable<T2> second,
    IEnumerable<T3> third,
    Func<T1, T2, T3, TResult> Zipper)
{
    if (first == null)
        throw new NullReferenceException("first sequence cannot be null");
    if (second == null)
        throw new NullReferenceException("second sequence cannot be null");
    if (third == null)
        throw new NullReferenceException("third sequence cannot be null");
    if (Zipper == null)
        throw new NullReferenceException("Zipper function cannot be null");

    IEnumerable<TResult> Iterator()
    {
        var e1 = first.GetEnumerator();
        var e2 = second.GetEnumerator();
        var e3 = third.GetEnumerator();

        while (e1.MoveNext() && e2.MoveNext() && e3.MoveNext())
            yield return Zipper(e1.Current, e2.Current, e3.Current);
    }

    return Iterator();
}

 

Notice that the local function does not need to declare any parameters. All the arguments and local variables of the outer function are in scope. This minimizes the number of arguments that must be passed to the inner function, and it also minimizes errors. The local Iterator() method can be called only from inside SuperZip(), and it is easy to see that all the arguments have been validated before Iterator() is called. If the iterator were a private method in a large class, it would take more work to provide that guarantee.

This same idiom would be used for validating arguments in async methods.

This example method shows the pattern:

 

public static async Task<int> PerformWorkAsync(int value)
{
    if (value < 0)
        throw new ArgumentOutOfRangeException("value must be non-negative");
    if (value > 100)
        throw new ArgumentOutOfRangeException("You don't want to delay that long!");

    // Simulate doing some async work
    await Task.Delay(value * 500);

    return value * 500;
}

 

 

This exhibits the same issue as the iterator method. This method doesn’t synchronously throw exceptions, because it is marked with the ‘async’ modifier. Instead, it will return a faulted task. That Task object contains the exception that caused the fault. Calling code will not observe the exception until the Task returned from this method is awaited (or its result is examined).

In the current version of C#, that leads to this idiom:

 

public static Task<int> PerformWorkAsync2(int value)
{
    if (value < 0)
        throw new ArgumentOutOfRangeException("value must be non-negative");
    if (value > 100)
        throw new ArgumentOutOfRangeException("You don't want to delay that long!");
    return PerformWorkImpl(value);
}

private static async Task<int> PerformWorkImpl(int value)
{
    await Task.Delay(value * 500);
    return value * 500;
}

 

Now, the programming errors cause a synchronous exception to be thrown (from PerformWorkAsync2) before calling the async method that leverages the async and await features. This idiom is also easier to express using local functions:

 

public static Task<int> PerformWorkAsync(int value)
{
    if (value < 0)
        throw new ArgumentOutOfRangeException("value must be non-negative");
    if (value > 100)
        throw new ArgumentOutOfRangeException("You don't want to delay that long!");

    async Task<int> AsyncPart()
    {
        await Task.Delay(value * 500);
        return value * 500;
    }

    return AsyncPart();
}

 

The overall effect is a clearer expression of your design. It’s easier to see that a local function is scoped to its containing function, and that the local function and its containing method are closely related.

This is just a small way where C# 7 can make it easier to write code that more clearly expresses your design.

Created: 2/9/2016 4:47:17 PM

I’m continuing my discussion on proposed C# 7 features. Let’s take a brief look at Slices. The full discussion is here on GitHub. Please add your thoughts on that issue, rather than commenting directly to me. That ensures that your thoughts directly reach the team.

Let’s start with the problem that Slices are designed to solve. Arrays are a very common data structure (not just in C#, but in many languages). That said, you often work with a subset of an entire array. You’ll likely want to send a subset of an array to some method for changes, or for read-only processing. That leaves you with two sub-optimal choices. You can either copy the portion of the array that you want to send to another method, or you can pass the entire array along with indices for the portion that should be used.

The first approach often requires copying sub-arrays for use by APIs. The second approach means trusting the called method not to move outside the bounds of the sub-array.
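As a sketch of those two work-arounds in today’s C# (the Process methods are hypothetical):

// Option 1: copy the interesting range into a new array before calling the API.
var portion = new Person[6];
Array.Copy(people, 3, portion, 0, 6);
Process(portion);

// Option 2: pass the whole array plus indices, and trust the callee to stay in bounds.
Process(people, startIndex: 3, length: 6);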

Slices would provide a third approach: one that enforces the boundaries of the proper subset of an array, without requiring copies of the original data structure.

Feature Overview

The feature requires two pieces. First, there is a Slice&lt;T&gt; class that would represent a slice of an array. There is also a related ReadOnlySlice&lt;T&gt; class that would represent a read-only slice of an array.

In addition, the C# Language would support features designed to create those slice objects from another array:

 

Slice&lt;Person&gt; slice = people.GetSlice(3, 9);

byte[] bytes = GetFromNative();

There is quite a bit of discussion about the final implementation, including whether or not CLR support would be needed. Some of that discussion moved to the C# Language Design Notes here.

This post is somewhat light on details, because the syntax and the full feature implementation are still under discussion. You can look at one implementation here: https://github.com/dotnet/corefxlab/tree/master/src/System.Slices

I will write new posts as the feature list, the syntax, and the implementation become more concrete.

Created: 1/28/2016 5:12:15 PM

This is the first of a series of blog posts where I discuss the upcoming feature proposals for C# 7. At the time that I am writing these posts, these are all proposals. They may change form, or may not be delivered with C# 7, or ever. Each post will include links to the proposal issue on GitHub so that you can follow along with the ongoing discussions on the features.

This is an interesting time for C#. The next version of the language is being designed in the open, with comments and discussion from the team members and the community as the team determines what features should be added to the language.

Ref Returns and Ref Locals are described in the Roslyn repository on GitHub, in Issue #118.

What are Ref Returns and Ref Locals?

This proposal adds C# support for returning values from methods by reference. In addition, local variables could be declared as ‘ref’ variables. A method could return a reference to an internal data structure. Instead of returning a copy, the return value would be a reference to the internal storage:

 

ref PhysicalObject GetAsteroid(Position p)
{
    int index = p.GetIndex();
    return ref Objects[index];
}

ref var a = ref GetAsteroid(p);
a.Explode();

 

Once the language defines ref return values, it’s a natural extension to also have ref local variables, to refer to heap allocated storage by reference:

    ref PhysicalObject obj = ref Objects[index];

This enables scenarios where you want to pass references to internal structures without resorting to unsafe code (pointers to pinned memory). Those mechanisms are both unsafe and inefficient.

Why is it useful?

Returning values by reference can improve performance in cases where the alternative is either pinned memory or copying resources. This feature enables developers to continue to use verifiably safe code, while avoiding unnecessary copies.

It may not be a feature you use on a daily basis, but for algorithms that require large amounts of memory for different structures, this feature can have a significant positive impact on the performance of your application.

Verifying Object Lifetimes

One of the interesting design challenges around this feature is to ensure that the reference being returned remains valid after being returned. The compiler will ensure that the object returned by a ref return continues to be reachable after the method has exited. If the object would be unreachable, and subject to garbage collection, that will cause a compiler error.

Effectively, this means you would not be able to return a ref to a local variable, or to a parameter that was not a ref parameter. There is a lengthy discussion in the comments on GitHub that goes into quite a bit of detail on how the compiler can reason about the lifetime of any object that would be returned by reference.
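Here is a sketch of the kind of code the compiler would have to reject, because the referent does not outlive the method:

ref int GetValue()
{
    int local = 42;
    return ref local;   // error: 'local' no longer exists once the method returns
}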

Created: 1/21/2016 3:35:52 PM

I was honored to speak at NDC London last week. It’s an awesome event, with a very skilled set of developers attending the event.

I gave two talks at NDC London. The first was a preview of the features that are currently under discussion for C# 7. C# 7 marks a sea change in the evolution of the C# language. It’s the first version where all the design discussions are taking place in the open, on the Roslyn GitHub repository. If you are interested in the evolution of the C# language, you can participate in those discussions now.

Instead of posting the slides from that talk, I’ll be blogging about each of the features, with references to the issue on GitHub. I’ll cover the current thinking and some of the open issues relating to each of the features.

Watch for those posts coming over the next month. I’ll mix in a forward-looking post along with language information you can use now.

My second talk was on building Roslyn based Analyzers and Code Fixes. You can find the slides, and demos here on my GitHub repository: https://github.com/BillWagner/NonVirtualEventAnalyzer

If you have any questions on the samples, the code, or concepts, please make an issue at the repository, and I’ll address it there.

Created: 9/24/2015 2:24:33 PM

It’s stated as conventional wisdom that in .NET a throw statement must throw an object of a type that is System.Exception, or derived from System.Exception. Here’s the language from the C# specification (Section 8.9.5):

A throw statement with an expression throws the value produced by evaluating the expression. The expression must denote a value of the class type System.Exception, of a class type that derives from System.Exception, or of a type parameter type that has System.Exception (or a subclass thereof) as its effective base class.

Let’s play with the edge cases.  What will this do:

throw default(NotImplementedException);

 

The expression is typed correctly (it is a NotImplementedException). But it’s null. The answer to this is in the next sentence of the C# specification:

If evaluation of the expression produces null, a System.NullReferenceException is thrown instead.

That means this is also legal:

throw null;

It will throw a NullReferenceException, as specified  above.

Now, let’s see what happens if we work with a type that can be converted to an exception:

struct Exceptional
{
    public static implicit operator Exception(Exceptional e)
    {
        return null;
    }
}

 

An Exceptional can be converted (implicitly) to an Exception. But this results in a compile time error:

 

throw new Exceptional();

 

The compiler reports that the type thrown must be derived from System.Exception. So, let’s rewrite the code so that it is:

Exception e = new Exceptional();
throw e;

 

The first line creates an Exceptional struct. Assigning it to a variable of type Exception invokes the implicit conversion. Now, it is of type Exception, and can be thrown.

Finally, what about this expression:

dynamic d = new Exceptional();
throw d;

 

In most places in the language, where an expression must evaluate to a specific type, an expression of type dynamic is allowed.

Here we have the joy of edge cases. The spec (as I’ve quoted above) doesn’t speak to this case. Where the spec is silent, compiler writers sometimes interpret it differently. The classic compiler (pre-Roslyn) accepts throwing an expression that is dynamic. The Roslyn (VS 2015) compiler does not. I expect this may change, and the spec may get updated to explicitly state the behavior.

Created: 7/23/2015 8:53:43 PM

This past Monday, Microsoft released the production version of Visual Studio 2015. Let’s get right to the lead: Visual Studio Community edition is free (for independent and Open Source developers). You have no excuse not to get this.

I’m not going to repeat all the information about the release and the total set of new features. Soma did that on his blog quite well. Instead, I’m going to focus on my areas of interest, and some of the resources I’ve written that can help you learn about these new features.

New Language Features

There are a number of new features in C# 6. Members of the team have been updating this page on Github with information about the new features.

I’ve written a number of articles about C# 6 for InformIT. The first four are live.

I recorded a Microsoft Virtual Academy Jump Start on C# 6 with Anthony D. Green, one of the program managers on the Managed Languages team.

And finally, I’ve written quite a few blog entries on the new language features. You can see a list here.

Language Service APIs

This version is particularly exciting because of the new compiler services that are part of Roslyn. These are a rich set of APIs that enable you (yes, you!) to write code that analyzes C# code, up to and including providing fixes for mistakes or other poor practices.

I’ve written an article for InformIT about the analyzers. You can read it here. I also did a Jump Start for Microsoft Virtual Academy with Jennifer Marsman.  If you missed the live event, watch the Microsoft Virtual Academy home page for updates. The recording should go live soon.

You can also learn more by exploring some of the open source projects that contain analyzers and code fixes:

  • Code Cracker. This project contains a number of different analyzers for common coding mistakes in C#.
  • DotNetAnalyzers. This contains a set of projects for different analyzers. Some manage particular practices. Others enforce rules, similar to the Style Cop effort.
  • CSharp Essentials. This project is a collection of analyzers that make it easier to adopt C# 6. If you don’t want to build it yourself, you can install it as an extension in VS 2015.

Humanitarian Toolbox and the All Ready app.

Finally, I was thrilled at the contribution from members of the Visual Studio team to Humanitarian Toolbox. Several members from different parts of the Visual Studio team worked for three days to build the initial release of the allReady application. The application source is on Github, under the HTBox organization.

This application was requested by the Red Cross. It provides features to lessen the impact of disasters on families and communities. You can learn more about the project here on the Humanitarian Toolbox website.

The Visual Studio 2015 launch event included a profile of the developers and the process for building the initial code for allReady. You can watch the entire event on Channel 9. If you are interested in just the allReady app, and how it was built with Visual Studio 2015, look for the “In the Code” segments. There may be more In the Code episodes coming as the application grows.

All of us at Humanitarian Toolbox are grateful for the contribution from the Visual Studio team.

As a developer, I’m also grateful for the great new tools.

Created: 7/16/2015 5:05:29 PM

I was asked to review a bit of code for a friend the other day, and the result may be illustrative for others. I’ve stripped out much of the code to simplify the question. Examine the following small program:

 

public class SomeContainer
{
    public IList<int> SomeNumbers => new List<int>();
}

class Program
{
    static void Main(string[] args)
    {
        var container = new SomeContainer();
        var range = Enumerable.Range(0, 10);
        foreach (var item in range)
            container.SomeNumbers.Add(item);

        Console.WriteLine(container.SomeNumbers.Count);
    }
}

 

If you run this sample, you’ll find that container.SomeNumbers has 0 elements.

 

How is that possible? Why does the container not have 10 elements?

 

The problem is this line of code:

 

public IList<int> SomeNumbers => new List<int>();

 

The author had used an expression-bodied member when he meant to use an auto-property initializer.

An expression-bodied member evaluates its expression every time the public member is accessed. That means that every time the SomeNumbers property of the container is accessed, a new, empty List&lt;int&gt; is allocated and returned. There is no backing field, so the list that received the Add() calls is simply discarded. It’s as though the author wrote this:

 

public IList<int> SomeNumbers { get { return new List<int>(); } }

 

When you see it using the familiar syntax, the problem is obvious.

This fix is also obvious. Use the syntax for an initializer:

 

public IList<int> SomeNumbers { get; } = new List<int>();

Notice the changes from the original code. SomeNumbers is now a read-only auto-property, and it has an initializer. This would be the equivalent of writing:

 

public IList<int> SomeNumbers { get { return storage; } }

private readonly List<int> storage = new List<int>();

 

That expresses the design perfectly.

 

This just illustrates that as we get new vocabulary in our languages, we need to fully understand the semantics of the new features. We need to make sure that we correctly express our designs using the new vocabulary.

It’s really easy to spot when you make a mistake like this: every time you look at this property in the debugger, you’ll see that it re-initializes the property. That’s the clue that you’ve made this mistake.

 

 

Created: 7/5/2015 4:11:33 PM

Sometimes, despite everyone’s best planning, you can’t help having features collide a bit. Thankfully, when it happens, there is almost always some way to reorganize the code to make it work. The key is to understand why your original code causes issues.

I’ve written before about the new string interpolation feature in C# 6. I find it a welcome improvement over the previous incarnations.

Two goals of the feature are:

  1. It supports format specifiers for .NET types.
  2. It supports rich C# expression syntax between the braces.

Those two goals conflict because of the C# symbols used for each of them.

To specify a format for an expression, you place a colon ‘:’ after the expression, and then the remaining characters inside the braces consist of the format specifier. Here’s an example:

 

Console.WriteLine($"{DateTime.Now:MMM dd, yyyy}");

 

This will print the date in the form “Month day, year”, where the day is always a two-digit number.

The C# compiler supports these format specifiers by interpreting the first ‘:’ character in a string interpolation expression as the start of the format specifier. That works great, except in one case: where you want the colon to mean something else, like part of a conditional expression:

 

Console.WriteLine($"When {condition} is true, {condition ? "it's true!" : "It's False"}");

 

You can see that the syntax highlighting gets a bit confused and doesn’t track the rest of the string correctly. That’s because the compiler interprets the ‘:’ as the beginning of a format specifier.

It’s easy to fix this. Just put the conditional expression inside parentheses:

 

Console.WriteLine($"When {condition} is true, {(condition ? "it's true!" : "It's False")}");

That way, the C# parser views everything inside the parentheses as part of the expression, and does not view the : as the beginning of a format specifier.

See? It’s easy to get both features to work. You just have to know why the first expression is interpreted differently than you’d expect.

Created: 4/3/2015 12:50:42 AM

I’m thrilled to have been nominated and accepted as a member of the .NET Foundation Advisory Board.

I’m very excited about the role we can play in growing the Open Source ecosystem around .NET. We’ve just gotten started, so there is not a lot of progress to report, but I’m excited by the potential. Our role is to provide a channel between the .NET Foundation Board of Directors and the .NET developer community. We will be helping to refine policies to accept new projects, grow and nurture the projects under the .NET Foundation, and overall, make .NET Open Source Development better and richer for everyone.

Shaun Walker is the chairman of the .NET Foundation Advisory Board, and his announcement here is a great description of the rationale and thought process that went into creating the advisory board.

I’m excited to participate in growing Open Source development around .NET and the great languages and frameworks that are coming from the developer teams. This is a large and important initiative. It covers everything from the Roslyn compiler projects, to the TypeScript compiler to ASP.NET vNext to the Core CLR and core .NET Framework releases. And that’s just the major projects from inside Microsoft. There are so many tremendous projects (like ScriptCS, just to name one) that are part of a growing .NET ecosystem.

We’ve got quite a bit of work to do. The Foundation is a new organization, and we need to advise the board on everything from what kinds of projects we’ll accept, to the process for accepting new projects, to the governance of the advisory board. It’s a lot of work, but it’s also a lot of fun.

It’s an exciting time to be a .NET developer. I’m glad to be in the middle of it.

Created: 3/3/2015 4:34:42 PM

The more I work with C# 6 in projects, the more I find myself using ?. to write cleaner, simpler, and more readable code. Here are four different uses I’ve found for the null-conditional operator.

Deep Containment Designs

Suppose I’m writing code that needs to find the street location for the home address of a vendor’s contact person. Maybe there’s an awesome event, and I need to program my GPS. Using earlier versions of C#, I’d need to write a staircase of if statements checking each property along the way:

 

var location = default(string);
if (vendor != null)
{
    if (vendor.ContactPerson != null)
    {
        if (vendor.ContactPerson.HomeAddress != null)
        {
            location = vendor.ContactPerson.HomeAddress.LineOne;
        }
    }
}

 

Now, using C# 6, this same idiom becomes much more readable:

 

var location = vendor?.ContactPerson?.HomeAddress?.LineOne;

 

The null-conditional operator short-circuits, so evaluation stops as soon as any single property evaluates as null.

INotifyPropertyChanged and similar APIs

We’ve all seen code like this in a class that implements INotifyPropertyChanged:

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

I hope you are cringing now. This code will crash if it’s used in a situation where no code subscribes to the INotifyPropertyChanged.PropertyChanged event. It raises that event even when there are no listeners.

When faced with that situation, many developers write something like the following:

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

OK, this is a little better, and will likely work in most production situations. However, there is a possible race condition lurking in this code. If a subscriber removes a handler between the ‘if’ check and the line that raises the event, this code can still crash. It’s the kind of insidious bug that may only show up months after deploying an application.  The proper fix is to create a temporary reference to the existing handler, and raise the event on that object rather than allowing the race condition on the PropertyChanged public event:

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

It’s more code, and it’s a few different techniques to remember every time you raise the PropertyChanged event. In a large program, it seems like someone forgets at least once.

C# 6 to the rescue!

In C# 6, the null-conditional operator implements all the checks I mentioned above. You can replace the extra checks and the local variable with ?. and a call to Invoke():

 

public string Name {
    get { return name; }
    set {
        if (name != value)
        {
            name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
private string name;

 

The shorter, more modern version reads more concisely, and implements the proper idioms for raising events and managing subscribers being added or removed.

Resource Management

Occasionally, we may find that one of our types owns another object that has certain capabilities. However, that object may implement other capabilities beyond those our class requires. Usually, that’s not an issue, but what if that object implements IDisposable? Consider the case of an Evil Genius that is done working with a henchman.  The code to retire a henchman might look like this:

 

public void RetireHenchman()
{
    var disposableMinion = Minion as IDisposable;
    if (disposableMinion != null)
        disposableMinion.Dispose();
    Minion = null;
}

 

The null-conditional operator can make this code more concise as well:

 

public void RetireHenchman()
{
    (Minion as IDisposable)?.Dispose();
    Minion = null;
}

 

LINQ Queries

There are two different uses I’ve found for this operator when I work with LINQ queries. One very common use is after I create a query that uses SingleOrDefault(). I’ll likely want to access some property of the (possible) single object. That’s simple with ?.

 

var created = members.SingleOrDefault(e => e.name == "dateCreated")?.content;

 

Another use is to create a null output whenever the input sequence is null:

 

members?.Select(m => (XElement)XmlValue.MakeValue(m))

 

The addition of this feature has made me realize just how much code checks for null, and how much more concise and readable our code becomes with a compact syntax for checking against null and taking some default action based on the ‘null-ness’ of a variable.

This feature has changed how I code everyday. I can’t wait for the general release and getting more of my customers to adopt the new version.

Created: 12/4/2014 8:53:23 PM

One of the new features in C# 6 is expression-bodied members. This feature enables us to create members (methods, property get accessors, or get methods for indexers) that are expressions rather than statement blocks. Consider this class, and note the highlighted portion:

 

public class Point
{
    public double X { get; set; }
    public double Y { get; set; }
    public double Distance
    {
        get
        {
            return Math.Sqrt(X * X + Y * Y);
        }
    }
}

 

That’s quite a bit of typing for a very simple computation. C# 6 lets you replace that accessor with an expression-bodied member:

 

public double Distance => Math.Sqrt(X * X + Y * Y);

 

The body of the get accessor is replaced by the simple expression that returns the distance of the point from the origin.

As I mentioned above, expression bodied members can also be used for methods:

public override string ToString() => string.Format("{{{0}, {1}}}", X, Y);

 

Lest you worry that this feature only works with non-void methods, here’s a TranslateX method that moves the point in the X direction by x units. It has a void return:

public void TranslateX(double x) => X += x;

It’s one of the features I like a great deal in C# 6. When you use it with simple, one – line methods, it increases readability and clarity of a class.

When I speak on C# 6, I always get reservations from people about how this feature could be abused. Developers are concerned that someone (not them, mind you, but other developers they work with) would abuse this feature and write methods that are pages long using this syntax.

Don’t worry. The compiler won’t allow it. Suppose I tried to extend the TranslateX method to take two parameters and translate the point in both the X and Y directions. The compiler does not support expression-bodied members with block statements. None of these attempts to write a method with more than one statement as an expression-bodied member will compile:

// Does not compile.

public void Translate(double x, double y) => X += x; Y += y;

 

// Does not compile.

public void Translate(double x, double y) => { X += x; Y += y; };

 

// Does not compile.

public void Translate(double x, double y) => if (true) { X += x; Y += y; };

 

At this point, even the most evil of your co-developers will give up and use the standard syntax for a method that has more than one statement.

Overall the new syntax leads to clearer code, and the restrictions on the syntax that the method must be an expression ensures that it can’t be abused (much).

Created: 11/24/2014 3:55:52 PM

The preview of Visual Studio 2015 is public, and the Roslyn APIs are stabilizing. A group of language MVPs have begun working on a series of code analyzers and code fix tools that will help you write better code. To facilitate this work, we’ve created an organization on GitHub, .NET Analyzers, that contains the repositories for the analyzers, code fix projects, and refactorings that we’ve created.

We’ve got seven code repositories in place already. There’s also a Proposals repository that has potential ideas for new code fix or analyzers projects.

The code repositories are varied, both in completeness and in scope. Some are for a single refactoring. Others will grow to a full suite of analyzers.  Some are nearing completion. Others haven’t had any code uploaded yet.

Mark Michaelis and I started this group while at the MVP summit. Since then, I’ve been very impressed with the response. In particular, Sam Harwell, Tugberk Ugurlu,  and Adam Speight have been working very diligently on several of the analyzers and code fix algorithms. There have been many others that have contributed and offered suggestions and help. I’m singling those three out because they have really spent significant time working on this.

We’d love to see other people get involved as well.

If you want to join us, check out the organization on GitHub. Look at the issues in the code repositories and contribute. Make your modifications, and submit a pull request. We are generally following Github flow, but that’s not a firm requirement. If you are new to Github, just ask one of us via Github messaging, and we’ll help walk you through it.

If you have an idea for something you want to implement, please add it as an issue to the Proposals repository. Mention that you want to work on it. We’ll help you define the requirements, and suggest if the new feature should go in an existing repository, or if it should be a new repository.

If you have ideas, but aren’t sure how to implement them, add them as issues to the Proposals repository. We’ll discuss it and prioritize it with you. If you want to implement it, but need help, just ask.

Special note for the VB.NET community: We’re interested in having support for important VB idioms as well. However, we need the VB community to help us. Most of us are more involved in the C# community than the VB.NET community.

We want to see this grow and become an indispensable set of extensions for .NET developers. Please join us. Let’s see what we can build together.

Created: 4/9/2014 3:00:03 PM

By now, I’m sure you’ve heard the big news: There’s a new public preview of the Roslyn C# and VB.NET Compilers.  And, even bigger news: The Roslyn Compilers are Open Source.

This is an important milestone for these projects. The C# compiler is now written in C#; the VB.NET compiler is written in VB (see below). They also now have full support for all the features in C# version 5 (and then some). The rest of this blog post walks through some of the important ideas in the Roslyn ecosystem.

TL;DR version: Go to https://roslyn.codeplex.com/ and get started.

Using the Roslyn Compilers (and IDE updates)

You can get the Roslyn compiler from a NuGet package, or by signing up and downloading the End User Preview. You will want to do this before you explore the Roslyn Source. That’s because the Roslyn compiler source makes use of new language features only available in the Roslyn compiler. That means you can’t build the Roslyn source with the compilers that ship with VS 2013.

Once you have the End User Preview, you’ll find quite a few new features in the IDE. Most of these are covered in the //build talks, available online. My one recommendation: train yourself to look for the light bulb in the left margin of source. There’s lots of new information there.

If you want to dive deeper into Roslyn, get the SDK Preview. That provides templates and examples for writing extensions based on the Roslyn Language services API. One design goal for the Roslyn compilers is to provide the same language services to the compiler, the IDE, and 3rd Party add-ins. Those APIs let you view program source at its syntactic and semantic tree level, and manipulate those trees. If you are interested in analysis tools, or Refactorings, that’s where you start.
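For a taste of those APIs, here is a minimal sketch that parses a snippet of C# and walks its syntax tree (using the Microsoft.CodeAnalysis packages from the preview):

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class SyntaxDemo
{
    static void Main()
    {
        SyntaxTree tree = CSharpSyntaxTree.ParseText(
            "class C { void M() { int x = 42; } }");

        // Walk the tree and report every method declaration it contains.
        foreach (var method in tree.GetRoot().DescendantNodes().OfType<MethodDeclarationSyntax>())
        {
            Console.WriteLine(method.Identifier.Text);
        }
    }
}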

Learning about the source (and more about the languages)

The Roslyn source is hosted on CodePlex. It is a git-based repository. You can clone it using your favorite git tools. In my case, I cloned it from the command line using posh-git. Once cloned, I added it to my list of local repositories in GitHub for Windows. That’s worked well for me.

Once you’ve gotten the source, follow the instructions here to build it. Congratulations, you’ve built a compiler.

Feel like learning more? Get involved in the discussions. First, read the language design notes for C# (or VB if that’s your favorite) and then check the Language Feature Implementation status. The language design notes will give you the history and background for the current language features. It’s very illuminating. Personally, I’m thrilled that the teams have the confidence to share this level of detail with the public. You’ll learn an incredible amount by reading about the alternative implementations and the thoughts and discussions around each decision.

For many of the proposed new features, discussions are ongoing. In particular, the discussion on the null propagating operator is rather lively. If you have an opinion, join in.

Do you have ideas?

Well, we all do. And here’s where life gets very tricky. The Roslyn compilers are licensed under the Apache v2 license.  You, me, and everyone else on the planet can fork them and add our own new language features. On the one hand, it’s a fantastic move to release the compilers as open source. On the other hand, it could lead to chaos.

On the plus side, academics (and their students) can teach and learn compiler construction by working with a world class commercial compiler for two of the most popular programming languages on the planet.

Also on the plus side, community members can prototype ideas for language features and submit them for review. That will do a lot to keep the C# and VB communities vibrant and involved.

And, by looking at the compiler source, you’ll better understand how to build extensions and addins that leverage the Roslyn APIs.

But there is a dark side.

Part of the success of VB.NET and C# is that they are consistent languages. The Microsoft and Xamarin compilers both track the standard very closely, and have very few divergences. That’s a positive for the language.

I worry that it’s possible to have several dialects of C# in the future that all behave differently. If that happens, this global experiment to open source Roslyn will have been a mistake.

Rather than worry, here’s my recommendation to anyone looking at creating new language features: Get involved in the feature discussions first. Talk with the team. They are already active in the discussion forums. Get buy-in for the new feature.

If that happens, this is a marvelous new era. If community members create and discuss great new features, and those features become part of the standard C# Language, the global experiment to open source Roslyn will come to be seen as one of the defining positive moments in computing history. That’s what I want to see happen.

Current Projects

I create content for .NET Core. My work appears in the .NET Core documentation site. I'm primarily responsible for the section that will help you learn C#.

All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.

I'm also the president of Humanitarian Toolbox. We build Open Source software that supports Humanitarian Disaster Relief efforts. We'd appreciate any help you can give to our projects. Look at our GitHub home page to see a list of our current projects. See what interests you, and dive in.

Or, if you have a group of volunteers, talk to us about hosting a codeathon event.