Bill Blogs in C#


Created: 2/9/2016 4:47:17 PM

I’m continuing my discussion on proposed C# 7 features. Let’s take a brief look at Slices. The full discussion is here on GitHub. Please add your thoughts on that issue, rather than commenting directly to me. That ensures that your thoughts directly reach the team.

Let’s start with the problem that Slices are designed to solve. Arrays are a very common data structure (not just in C#, but in many languages). That said, you often work with only a subset of an entire array. You’ll likely want to send a subset of an array to some method for changes, or for read-only processing. That leaves you with two sub-optimal choices. You can either copy the portion of the array that you want to send to another method, or you can pass the entire array, along with indices for the portion that should be used.

The first approach often means copying sub-arrays for use by APIs. The second approach means trusting the called method not to move outside the bounds of the sub-array.
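
To make the trade-off concrete, here is a minimal sketch in today’s C# (the program and names are mine, purely for illustration) showing both options:

using System;

class SliceMotivation
{
    // Option 2: pass the whole array plus indices; the caller must trust
    // this method to stay inside [start, start + count).
    static int Sum(int[] data, int start, int count)
    {
        int sum = 0;
        for (int i = start; i < start + count; i++)
            sum += data[i];
        return sum;
    }

    static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

        // Option 1: copy the interesting range first, then work on the copy.
        int[] copy = new int[6];
        Array.Copy(data, 3, copy, 0, 6);
        Console.WriteLine(Sum(copy, 0, copy.Length));   // extra allocation and copy

        // Option 2: no copy, but nothing stops Sum from reading outside 3..9.
        Console.WriteLine(Sum(data, 3, 6));
    }
}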

Slices would provide a third approach: one that enforces the boundaries of the proper subset of an array, without requiring copies of the original data structure.

Feature Overview

The feature requires two pieces. First, there is a Slice&lt;T&gt; class that would represent a slice of an array. There is also a related ReadOnlySlice&lt;T&gt; class that would represent a read-only slice of an array.

In addition, the C# Language would support features designed to create those slice objects from another array:

 

Slice&lt;Person&gt; slice = people.GetSlice(3, 9);

Slice&lt;byte&gt; bytes = GetFromNative();
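
To see where that helps an API author, here is a hypothetical sketch of consuming a slice. The Slice() helper, the Length property, and the indexer shown here are assumptions based on the corefxlab prototype; the final shape may differ:

// A callee that can only see the elements it was given; indexing outside
// the slice throws rather than silently reading neighboring elements.
static int Sum(Slice<int> values)
{
    int sum = 0;
    for (int i = 0; i < values.Length; i++)
        sum += values[i];
    return sum;
}

// The caller exposes a window over the original array; no copy is made.
int[] data = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
int total = Sum(data.Slice(3, 6));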

There is quite a bit of discussion about the final implementation, including whether or not CLR support would be needed. Some of that discussion moved to the C# Language Design Notes here.

This post is somewhat light on details, because the syntax and the full feature implementation are still under discussion. You can look at one implementation here: https://github.com/dotnet/corefxlab/tree/master/src/System.Slices

I will write new posts as the feature list, the syntax, and the implementation become more concrete.

Created: 2/5/2016 5:50:41 PM

As I’ve been working with developers looking to grow their careers, I’ve been asked one question (or variations on it) repeatedly:

How do I grow my career while remaining technical? I don’t want to be a manager, I want to be a senior technical leader.

That question has prompted me to think about my own career, and what it means to be a senior technical leader. I’ve come to this conclusion:

Growing and having more influence means scaling your reach.

Let me go into some detail about what I mean by that statement.

No matter how good you are, there’s only so much code you can write. Being more senior doesn’t mean you code faster. You don’t finish designs exponentially faster. You are still a single individual, and you will still only get so much done in so much time.

Managers scale because they have significant influence on their staff. To grow as a technical leader, you need to answer the question of how you scale. If you continue to view yourself as an “individual contributor”, you will find that you reach a point where you cannot continue to grow. (It’s even more important for your management to have a good answer for how you scale in the organization.)

Your value as a technical leader is measured by the positive influence you have on the other members of the team.

Scaling your Technical Skills

There are many ways that your technical skills can scale across your team. They all have a common theme: As a technical leader, your responsibility is to raise the technical skill level of the development team as a whole.

Here are a number of examples that you could take on to have a positive influence:

  • Pair Programming with Junior Staff: You can help junior developers learn good habits, better coding skills, frameworks, language techniques. You can also ensure they are following proper practices (as defined by your organization, be that TDD, formal specifications or other processes).
  • Start an in-house learning program: This could be a lunch and learn series, where developers present ideas to the team. You can take the role of program chair, and drive the content. You can present your ideas, and thoughts on software engineering and development.
  • Mentor Developers: Work with your management to make mentoring junior developers an explicit task. Measure progress by showing how you are helping these junior developers achieve their growth targets.
  • Proactively review designs and code: Reach out to the other developers on your team, and offer to help. Look at what they are doing, and offer feedback. Make sure that the outcomes get noticed: Did you positively influence the design? Are the team members growing as a result of working with you?
  • Propose and Implement training programs: There are a lot of options available for those that want to improve. Everything from online offerings, to libraries of content, to conferences to in person training programs. Work with your team, get proposals, get budget, and make it happen. Measure outcomes: show management that the team is improving.
  • Build something on the side: Work with team members to build proof-of-concept implementations for new libraries and new tools you want to use. You can even consider working with a charity (like HTBox) to build software that can help their organization.

It’s About Your Growth too!

If you follow all those suggestions, you may become a very respected senior leader for your tech team. Or, you may be seen as that jerk who criticizes everyone.

It’s about how you deliver your ideas.

The best senior technical leaders have learned how to critique, teach, and help people grow. They are able to make suggestions in a way that people can hear and act on. They are able to have people take pride in their work, and yet still want to improve it.

If you want to take on this role, you need to spend a lot of time working on how you deliver your ideas to your peers, and to the team members you want to see you as a technical leader. You’re not a manager; you have no real authority. You need to work on your interpersonal skills, and grow the muscle to effect change with everyone on your teams.

Yes, I’m talking about soft skills.

One of the best bits of advice I ever received was this:

There are few people that listen. Most are just waiting for their turn to talk. Be a listener.

These developers you want to mentor: they have knowledge you don’t. They’ve thought about the problems at hand. They are doing the best work they can today. Your job as a mentor is not to tell them they are wrong, but to augment their thinking with your insight. That’s only possible when you listen to the thought process behind their design and code. You have to learn as much as you can about the problem, the proposed solution, and the thinking behind the process.

Only after you have listened, and learned as much as you can, will your guidance carry weight.

What about the company?

The last piece of the puzzle is to determine if your current organization’s culture values the role of a technical leader. Some do; some do not. You need to work with your managers to see if that role exists in your organization.

If not, you can work with your managers to create it.

And, if your company and your management team aren’t interested in a role you want, you need to decide if it’s the right fit for you.

Some Closing Thoughts

More and more, it is possible to remain in a technical role and achieve career growth. You need to learn how to make technical skill scale across an organization. You need to convince the team and management of its worth. Most importantly, you need to grow the skills needed to be viewed as a technical leader.

Created: 1/28/2016 5:12:15 PM

This is the first of a series of blog posts where I discuss the upcoming feature proposals for C# 7. At the time that I am writing these posts, these are all proposals. They may change form, may not be delivered with C# 7, or may never be delivered at all. Each post will include links to the proposal issue on GitHub so that you can follow along with the ongoing discussions on the features.

This is an interesting time for C#. The next version of the language is being designed in the open, with comments and discussion from the team members and the community as the team determines what features should be added to the language.

Ref Returns and Ref Locals are described in the Roslyn repository on GitHub, in Issue #118.

What are Ref Returns and Ref Locals?

This proposal adds C# support for returning values from methods by reference. In addition, local variables could be declared as ‘ref’ variables. A method could return a reference to an internal data structure. Instead of returning a copy, the return value would be a reference to the internal storage:

 

ref PhysicalObject GetAsteroid(Position p)
{
    int index = p.GetIndex();
    return ref Objects[index];
}

ref var a = ref GetAsteroid(p);
a.Explode();

 

Once the language defines ref return values, it’s a natural extension to also have ref local variables, to refer to heap allocated storage by reference:

    ref PhysicalObject obj = ref Objects[index];

This enables scenarios where you want to pass references to internal structures without resorting to unsafe code (pointers to pinned memory). Those mechanisms are both unsafe and inefficient.

Why is it useful?

Returning values by reference can improve performance in cases where the alternative is either pinned memory or copying resources. This feature enables developers to continue to use verifiably safe code, while avoiding unnecessary copies.

It may not be a feature you use on a daily basis, but for algorithms that require large amounts of memory for different structures, this feature can have a significant positive impact on the performance of your application.
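
To make that concrete, here is a hedged sketch using the proposed syntax (the types and names are mine, and the final syntax may change). Without the feature, reading an element of an array of large structs through a method copies the whole struct; with it, the caller works on the stored value directly:

struct Sample                       // stands in for a much larger struct
{
    public double X, Y, Z;
    public long Timestamp;
}

class SampleBuffer
{
    private Sample[] samples = new Sample[1000000];

    // Today: every call copies the entire struct out of the array.
    public Sample Get(int index)
    {
        return samples[index];
    }

    // With the proposal: the caller receives a reference to the element,
    // so nothing is copied, and the stored value can be updated in place.
    public ref Sample GetByRef(int index)
    {
        return ref samples[index];
    }
}

A caller could then write ref var s = ref buffer.GetByRef(i); and read or update s without a copy ever being made.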

Verifying Object Lifetimes

One of the interesting design challenges around this feature is to ensure that the reference being returned is reachable after being returned. The compiler will ensure that the object returned by a ref return continues to be reachable after the method has exited. If the object would be unreachable, and subject to garbage collection, that will cause a compiler error.

Effectively, this means you would not be able to return a ref to a local variable, or to a parameter that was not a ref parameter. There is a lengthy discussion in the comments on GitHub that goes into quite a bit of detail on how the compiler can reason about the lifetime of any object that would be returned by reference.

Created: 1/21/2016 3:35:52 PM

I was honored to speak at NDC London last week. It’s an awesome event, with a very skilled set of developers attending the event.

I gave two talks at NDC London. The first was a preview of the features that are currently under discussion for C# 7. C# 7 marks a sea change in the evolution of the C# language. It’s the first version where all the design discussions are taking place in the open, on the Roslyn GitHub repository. If you are interested in the evolution of the C# language, you can participate in those discussions now.

Instead of posting the slides from that talk, I’ll be blogging about each of the features, with references to the issue on GitHub. I’ll cover the current thinking and some of the open issues relating to each of the features.

Watch for those posts coming over the next month. I’ll mix in forward-looking posts along with language information you can use now.

My second talk was on building Roslyn based Analyzers and Code Fixes. You can find the slides, and demos here on my GitHub repository: https://github.com/BillWagner/NonVirtualEventAnalyzer

If you have any questions on the samples, the code, or concepts, please make an issue at the repository, and I’ll address it there.

Created: 1/10/2016 2:16:24 AM

I spent this past week at the 10th annual CodeMash conference in Sandusky, OH. Every single event has been enjoyable, invigorating, and a great way to kick-start the year.

The event has changed dramatically over the past decade, but it still has the same core values from when it was started. It’s a group of people passionate about technology in many incarnations, and willing to share and learn from each other. Looking back at 10 years of CodeMash, several larger trends appear.

Early on, the languages discussed most were Java, C#, VB.NET, and Python. Over time, more and more interest in Ruby grew. Java waned for a time. Functional languages like F#, Haskell, and Erlang became more popular. There were a few Scala sessions.

More recently, the modern web became a focus: JavaScript, CSS and modern design techniques are now mainstays.

Interestingly for me, C# and .NET have always had a presence, and continue to be relevant and modern. Some of the most popular talks this year were on C# 7 and the upcoming ASP.NET 5 release. I’ve given a talk at every CodeMash. I’ve spoken on C# 3, LINQ, dynamic support in C# 4, async and await in C# 5, TypeScript, and C# 6 features. I’ve had a great time, and I hope that the attendees have learned from and enjoyed my talks.

I remember some of the amazing people that have been speakers at CodeMash: from Scott Guthrie, to Bruce Eckel, to Mads Torgersen, to Scott Hanselman, to Jon Skeet, to Jesse Liberty, to Lino Tadros, to Kathleen Dollard. I’m also sure I’m missing several prominent people that have spoken over the years. Carl Franklin and Richard Campbell have recorded several .NET Rocks! shows while in Sandusky.

The attendance has grown from around 200 to roughly 2000. That still amazes me. It’s lost some of the feel of those early events. In the first few years, it always felt like you knew everyone. But the current event still has the same culture, the same caring community, and the core group of friends is still there year after year.

Over the years, I’ve driven through ice, snow, and, in some years, spring-like weather for CodeMash. My family has enjoyed the waterpark on the weekend after the event.

I want to congratulate the crazy dreamers that started CodeMash, and made it happen. It’s amazing to see what you’ve built, and how much fun the first 10 years have been.

Created: 1/5/2016 4:41:43 PM

This is my second post about the changing year. In this post, I look at the areas where I will invest my time in the coming year.

.NET Everywhere

The .NET Core platform is a version of the .NET stack that runs on OSX and Linux in addition to Windows. This is a pre-release project at this time, but there is enough there to begin experimenting.

As this platform matures, .NET truly becomes cross-platform. You’ll be able to use C#, F#, VB.NET, and the bulk of the .NET Base Class Libraries to create software that runs on multiple platforms. The Roslyn compiler platform provides one of the key building blocks by running on OSX and Linux. That means the managed portions of the .NET Framework can be compiled on these platforms, and will run effectively. The remaining work is primarily to make the unmanaged portions of the CLR and the libraries run on other platforms.

Some of the work here is technical in nature. Other work is to ensure that the licenses used by the .NET Framework components are compatible with a cross-platform strategy. Most of the components are released under either the Apache 2.0 license or the MIT license. Check each library for details. (Notably, the earlier MS-PL license restrictions limiting support to Windows have been removed from anything related to .NET Core.)

While this work is going on, there is a parallel effort to build out learning materials for .NET Core. This project is also open source, and accepting contributions. (I’ve had a couple PRs merged here already).

I’m really looking forward to the date later in 2016 when a production ready .NET environment runs on Windows, Linux, and OSX.

C# 7 designing in the Open

This will be a very interesting version of C#. The team is using GitHub issues to openly discuss language features and their ramifications. Go ahead and participate in the discussions. It is really exciting to see the community participate with such passion for the different features they would like to see in their favorite language.

As I discussed in the last post, the compiler is Open Source. If you want to experiment with some of the new features, you can. There are experimental branches for some of the proposed features. (The maturity of the implementations varies.) It’s also important to understand that the features haven’t been formally committed to. Do not use those branches in production applications (yet).

Clouds, Containers, and Devices

We’ve entered a world where our web-based applications are managing more data, and scaling to ever larger numbers of users.

OK, this trend has been in place for a while, but it’s only accelerating and growing.

We have new ways to deliver and scale web-based software. We have cloud platforms that enable us to change the number of instances running. We have Docker containers. And, on a related trend, we can offload more and more processing to the client device. That may mean platform-specific mobile applications, or SPA-style browser-based applications.

We will be doing more and more work with software that needs to scale by running multiple copies of your application in different configurations. There are many different options for this, and wise developers will learn a bit about each of them.

Rise of the Machine (Learning)

Machines are good at examining very large datasets. Machines are also good at running lots of different scenarios.

Put the two together, and I see a trend for more machine learning in the future. The applications we use generate tremendous amounts of data every day. As machines observe and analyze that data, more insights can be gained than ever before. Machine Learning and related algorithms can enable us to play ‘what if’ with more and more scenarios and make better decisions based on larger and larger data sets.

The Era of Big Data means Small Applications

Our data is a bigger and bigger part of our lives. The software necessary for us to interact with that data is smaller (in relation). This trend affects the ‘stickiness’ of any given platform.

This trend has a huge impact on how network effects work for modern systems.

As an example, consider music applications. Every platform has an app that can play music. But what’s important to users is that they can play the music they’ve already purchased. If you already have an iTunes subscription, you want to play your iTunes music. If you have a Microsoft Groove subscription, you want to play your Groove music.

The important feature is access to the data (music) that you’ve already purchased (or subscribed to, or created).

That impacts the ‘stickiness’ of a platform. Finding and installing programs for a new device takes a fraction of the time (and possibly money) of updating every subscription for a new platform. I believe this portends an interesting trend in the network effects for different applications. Namely, changing device platforms will be a trivial exercise compared to changing the provider of cloud-based data.

I believe this means that future platform stickiness will be based on your cloud-based data, not your device. Do you want to switch between mobile platforms? That will be easy, as long as your data can come along. If that means recreating or resubscribing to a service, it’s going to be a non-starter.

That leads me to the conclusion that the cloud is more critical than the device.

What’s next for HTBox?

In the coming year, we’ll be focusing on reaching the 1.0 milestone (not beta) for the apps currently in development. We’ll also be adding new applications as the year progresses.

I’m excited for the progress, and I’d be happy to see more participation. If you are interested, head over to our home page on GitHub and check out our projects.

Created: 12/31/2015 8:09:58 PM

This is the first of two posts on my thoughts for the coming year. This is a mixture of personal and global ideas. It’s my perspective, based on my experiences. In this post, I look back at the important trends and events I saw and experienced. (Part II will look at the year to come.)

The Growth of Humanitarian Toolbox

I’m very happy with all that has happened with Humanitarian Toolbox in the past year. We’ve continued to work on two different applications: Crisis Checkin and AllReady. The .NET Developer Community has really come together to help us achieve important milestones.

Crisis Checkin is being enhanced to support Operation Dragon Fire, which will provide better data sharing during crises. Watch the GitHub repo for updates on new feature requests to support this effort.

While Crisis Checkin has been moving along at a reasonable pace, AllReady has been moving incredibly fast. I need to thank our friends and colleagues on the Microsoft Visual Studio team for the incredible contributions they’ve been making to our effort.

The Visual Studio team started development as a showcase for many of the new features that shipped with Visual Studio 2015. They recorded a series of videos that documented that initial effort. HTBox took over the code and open sourced it shortly thereafter. We continued to work with community members over the summer, at ThatConference, and remotely to add features. Fall came, and we worked with Microsoft after the MVP Summit to get the application ready for Beta. You can see some of the experience at that sprint here.

We successfully hit our beta milestones, and our next step has been a pilot with the Red Cross in Chicago. The pilot has been successful, and we’ve been generating new tasks and feature requests from the pilot.

The success we’ve had building software has also brought an increase in contributions. We’re by no means a large charity, but we’re past the bootstrap phase and well on our way to a successful startup venture.

We owe a lot to everyone that has contributed:

  • The .NET Developer Community members that have contributed to our projects.
  • The Microsoft Visual Studio Team that adopted AllReady and helped launch its development.
  • The donors that have made us a financially viable organization.
  • But most of all, my fellow board members at HTBox. I’ve never worked with a stronger and more dedicated leadership team.

I’m confident that we’ll continue this momentum over the next year.

C# 6 / Visual Studio 2015

Earlier this year, we saw the release of Visual Studio 2015, and with it the 6.0 version of the C# language. This is the first release using the Roslyn Codebase. I’m super excited about the rejuvenation of the C# and .NET community as the team reached this important milestone.

We have the Roslyn APIs to create analyzers, code fixes, and refactorings.

We have numerous new language features to support modern development.

We have an IDE and compiler that share more code, and thereby use less memory and have better performance.

And….

Open Source all the things

The code for the C# and VB.NET compilers is Open Source (Apache 2.0 license) and stored on GitHub. Want to learn more about how it works? Look in the source. Want to participate? Grab an up-for-grabs issue, submit your fix. Want to experiment? Grab the source and try it out.

It’s also very instructive to see just how many tests and gates there are for the source code in the compilers.

But it doesn’t stop there. In addition to the Roslyn source, many other parts of the .NET Framework are currently, or are planned to be, Open Source as well. The full list is quite long; view it here.

The model of Open Source development is becoming the norm. Apple even responded by making Swift Open Source, also with a permissive license.

I believe the Open Source model is one of the key reasons for the rejuvenation of the .NET ecosystem.

Which brings me to…

.NET Foundation

The .NET Foundation is an independent organization to support Open Source development in the .NET ecosystem.

While it was announced in 2014, its growth really started in 2015. (I’m biased, as I’m on the .NET Foundation Advisory Board).

The foundation now includes parts of the .NET Framework, with source originally from the Microsoft product teams. It now also includes projects that started in the community and have been brought under the .NET Foundation umbrella.

And, this post was written with the newest .NET Foundation project: Open Live Writer.

In my next post, I’ll talk about the topics that I think will be key in 2016.

Created: 11/23/2015 4:28:37 PM

This past weekend, I was honored to speak at Boston Code Camp 24. I had two different talks.

The first was on creating Angular 1.x applications using TypeScript. It was a great audience, and I enjoyed the conversation that went along with the discussions. The slides and demos for this talk are available here on github. As always, the branches are labeled using the steps in the demo. You can follow along as the application grows.

The second talk was on creating Diagnostics and CodeFixes using the Roslyn APIs. This one had a smaller audience (which I expected). I was quite happy that the people who did attend were very interested in the topic. Slides and demos are here (on github again). The branches walk through the tasks of building a diagnostic and a codefix. The specific diagnostic was for Item 24 in More Effective C# (“Create only Non-Virtual Events”).

That brings me to my call to action for attendees. I’ll repeat it here: Would you be interested in a Roslyn based analyzer that enforced the rules in Effective C# and More Effective C#? Leave me a note here, and let me know what you think.

Created: 11/18/2015 6:17:00 PM

One of the major challenges we faced with the AllReady app was building a custom Kudu deployment script. There is incredible power in this system, but it takes a bit of research, and a bit of help, to get all the pieces working.

Let’s start with the simple goal: to make testing easier, we wanted to deploy to a live site automatically whenever we merged a pull request into master.

Azure supports this by default, including for asp.net 5 sites. Using Kudu was an obvious choice.

Adding Web Jobs

Life got complicated when we added a web job to the project. We added a web job because one of the actions the AllReady application performs is sending messages to all registered volunteers. We don’t want to tie up a thread in the asp.net worker process for the duration of sending messages through a third-party service. That could take quite some time.

Instead, we want to queue up the emails on a separate webjob so that the site remains responsive.
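
This is not the actual AllReady code, just a minimal sketch of the pattern using the Azure WebJobs SDK: the web site drops a message on a storage queue, and the web job processes it on its own schedule. The queue name and message shape here are hypothetical:

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Runs in the web job whenever a message appears on the (hypothetical)
    // "notifications" queue; the web site only has to enqueue the message.
    public static void ProcessNotification(
        [QueueTrigger("notifications")] string message,
        TextWriter log)
    {
        log.WriteLine("Sending notification: " + message);
        // ... call the third-party messaging service here ...
    }
}

public class Program
{
    public static void Main()
    {
        // Keeps the web job process alive, listening for queue messages.
        var host = new JobHost();
        host.RunAndBlock();
    }
}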

Complicating the Deployment

That’s where deployment got complicated. You see, when we started, web jobs weren’t supported under asp.net 5 yet. The web job builds using asp.net 4.6 and Visual Studio 2015. We also have one interface assembly that contains the types shared between the web application (asp.net 5) and the web job (asp.net 4.6).

So our build now includes:

1. Build the assemblies that are part of the web application using DNX.

2. Build the assemblies that are part of the web job using MSBuild. (Note that this means building one assembly twice)

3. Deploy the web site.

4. Deploy the web job.

Those extra steps require creating a custom deployment script.

Creating the Custom Deployment Script

Here are the steps we needed for adding our own custom Kudu script. There were several resources that helped create this. First, this page explains the process in general. It is a shortened version of this blog series (link is to part 1).

The first task was to create a custom build script that performed exactly the same process that the standard build script process performs. I downloaded the azure tools for deployment, and generated a deployment.cmd that mirrored the standard process.

You need to have node installed so you can run the azure-cli tool. Then, install the azure-cli:

npm install azure-cli -g

Then, run the azure CLI to generate the script. In my case, that was:

azure site deploymentscript --aspWAP allreadyApp/Web-App/AllReady/AllReady.xproj -s allready.sln

Notice that I’m directing azure cli to generate a script based on my xproj file. But, notice that this does not build the .csproj for the web jobs.

Before modifying the script, I wanted to verify that the generated script worked. It’s a good thing I did, because the default generated script did not work right away. The script generator assumes that the root of your GitHub repository is the directory where your .sln file lives. That’s not true for AllReady: we have a docs directory and a code directory under the root of the repo.

So, the first change I had to make was to modify the script to find the solution file in the sub-directory. After that, the script worked to deploy the website (but not the web jobs). Adding the web jobs required three changes. First, I needed to restore all the NuGet packages for the .csproj-style projects:

call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\NotificationsProcessor\packages.config" -SolutionDirectory "%DEPLOYMENT_SOURCE%" -source https://www.nuget.org/api/v2/

Getting this right got me stuck for a long time. In fact, I needed to get some product team support during our coding event from David Fowler. The version of nuget running in Azure needs to use the V2 feed when it’s restoring packages for a .csproj based project. *Huge* thanks to David for helping us find that.

Next, we needed to build the .csproj-based projects:

call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\AllReady.Models\AllReady.Models.csproj"
IF !ERRORLEVEL! NEQ 0 goto error
call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\NotificationsProcessor\NotificationsProcessor.csproj"
IF !ERRORLEVEL! NEQ 0 goto error

The final step is to deploy the webjobs. This meant copying the web jobs into the correct location. This step happens after the build, and before the Kudu Sync process:

mkdir "%DEPLOYMENT_TEMP%\wwwroot\app_data\jobs\continuous\notificationsprocessor\"
call xcopy /S "%DEPLOYMENT_SOURCE%\NotificationsProcessor\bin\debug" "%DEPLOYMENT_TEMP%\wwwroot\app_data\jobs\continuous\notificationsprocessor\"

The deployment script and the .deployment config are in our repository, so if you want to explore, check it out. Our repository is here: http://www.github.com/htbox/allready. And, if you want to help, check out the issues and send us a pull request.

Tags: C#
Created: 10/13/2015 12:31:01 PM

Recently, I was asked a question about performance in event handlers. One of my regular readers had been told that using lambda syntax creates an event handler that executes more slowly than using the ‘classic’ delegate syntax and defining a separate method for the event handler.

TL;DR version:

Yes, but you probably don’t care. And if you do, you might want to reconsider your design.

A thorough explanation.

Dear readers, don’t accept these kinds of assertions just because someone says it’s true. Let’s write a bit of code, and let’s look at what’s generated.

Let’s start by creating a simple example that demonstrates both of the available options in action:

    class EventSource : Progress<int>
    {
        public async Task<int> PerformExpensiveCalculation()
        {
            var sum = 0;
            for (int i = 0; i < 100; i++)
            {
                await Task.Delay(100);
                sum += i;
                this.OnReport(sum);
            }
            return sum;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var source = new EventSource();

            EventHandler<int> handler = (_, progress) => Console.WriteLine(progress);
            source.ProgressChanged += handler;
            Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= handler;

            source.ProgressChanged += ProgressChangedMethod;
            Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= ProgressChangedMethod;
        }

        // The method-based handler referenced above.
        private static void ProgressChangedMethod(object sender, int e)
        {
            Console.WriteLine(e);
        }
    }

 

This program exercises two different versions of the event source driver: one where the event handler is connected and disconnected using lambda syntax, and one where a separate method is defined for the delegate. Let’s think about how we would run performance tests to tell the difference. Where might any performance difference exist? Would it be in processing the event, or in connecting and disconnecting the event handler? The answer matters when testing these two versions. If the difference is in processing the event, this program would be fine to instrument. However, if the performance differences are in connecting and disconnecting the event handlers, this program won’t be useful; we are unlikely to measure a difference when the handler is connected and disconnected only once.

Before making extensive measurements, let’s look at the generated IL. Here’s the IL from VersionOne (which uses the lambda syntax):

IL_0007: ldsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__0_0'
IL_000c: dup
IL_000d: brtrue.s IL_0026

IL_000f: pop
IL_0010: ldsfld class blogExample.Program/'<>c' blogExample.Program/'<>c'::'<>9'
IL_0015: ldftn instance void blogExample.Program/'<>c'::'<Main>b__0_0'(object,  int32)
IL_001b: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
IL_0020: dup
IL_0021: stsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__0_0'

IL_0026: stloc.1
IL_0027: ldloc.0
IL_0028: ldloc.1
IL_0029: callvirt instance void class [mscorlib]System.Progress`1<int32>::add_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
IL_002e: nop
IL_002f: ldloc.0
IL_0030: callvirt instance class [mscorlib]System.Threading.Tasks.Task`1<int32> blogExample.EventSource::PerformExpensiveCalculation()
IL_0035: callvirt instance int32 class [mscorlib]System.Threading.Tasks.Task`1<int32>::get_Result()
IL_003a: call void [mscorlib]System.Console::WriteLine(int32)
IL_003f: nop
IL_0040: ldloc.0
IL_0041: ldloc.1
IL_0042: callvirt instance void class [mscorlib]System.Progress`1<int32>::remove_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)

 

There are 5 instructions that set the event handler (IL_0010 to IL_0029). There is one instruction to disconnect the handler (IL_0042).

 

Let’s compare that to the version where the method is declared as a function member of the class:

IL_004a: ldftn void blogExample.Program::ProgressChangedMethod(object,  int32)
IL_0050: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
IL_0055: callvirt instance void class [mscorlib]System.Progress`1<int32>::add_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
IL_005a: nop
IL_005b: ldloc.0
IL_005c: callvirt instance class [mscorlib]System.Threading.Tasks.Task`1<int32> blogExample.EventSource::PerformExpensiveCalculation()
IL_0061: callvirt instance int32 class [mscorlib]System.Threading.Tasks.Task`1<int32>::get_Result()
IL_0066: call void [mscorlib]System.Console::WriteLine(int32)
IL_006b: nop
IL_006c: ldloc.0
IL_006d: ldnull
IL_006e: ldftn void blogExample.Program::ProgressChangedMethod(object,  int32)
IL_0074: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
IL_0079: callvirt instance void class [mscorlib]System.Progress`1<int32>::remove_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)

 

 

Here, there are 3 lines of IL for attaching the event handler (IL_004A to IL_0055), and also 3 lines of IL for disconnecting (IL_006e to IL_0079). Well, that does seem to indicate one extra line of IL for the first version. Could that add up?

Well, not really. But let’s measure to be sure.

I’ll modify the program to perform some measurements. I want to measure adding and removing the event handler, not the whole process. So, I commented out the calls to PerformExpensiveCalculation(), and I’m simply adding and removing the event handler:

 

class Program
{
    static void Main(string[] args)
    {
        for (int repeats = 10; repeats <= 1000000; repeats *= 10)
        {
            VersionOne(repeats);
            VersionTwo(repeats);
        }
    }

    private static void VersionOne(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            EventHandler<int> handler = (_, progress) => Console.WriteLine(progress);
            source.ProgressChanged += handler;
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= handler;
        }
        timer.Stop();
        Console.WriteLine($"Version one: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void VersionTwo(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            source.ProgressChanged += ProgressChangedMethod;
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= ProgressChangedMethod;
        }
        timer.Stop();
        Console.WriteLine($"Version two: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void ProgressChangedMethod(object sender, int e)
    {
        Console.WriteLine(e);
    }
}

 

And, here are the results:

Version one: 10 add/remove takes 0ms
Version two: 10 add/remove takes 0ms
Version one: 100 add/remove takes 0ms
Version two: 100 add/remove takes 0ms
Version one: 1000 add/remove takes 0ms
Version two: 1000 add/remove takes 0ms
Version one: 10000 add/remove takes 1ms
Version two: 10000 add/remove takes 1ms
Version one: 100000 add/remove takes 9ms
Version two: 100000 add/remove takes 13ms
Version one: 1000000 add/remove takes 79ms
Version two: 1000000 add/remove takes 93ms

So, if you are adding and removing at least 1 million event handlers during the normal execution of your program, you can save roughly 14ms.

However, if you are adding and removing more than 1,000,000 event handlers in your program, I would suggest you re-examine your overall design.

A common mistake.

Before I leave you, I am going to make one final change to the test program that has a huge impact on performance. Note the updated code for the lambda syntax (VersionOne):

class Program
{
    static void Main(string[] args)
    {
        for (int repeats = 10; repeats <= 1000000; repeats *= 10)
        {
            VersionOne(repeats);
            VersionTwo(repeats);
        }
    }

    private static void VersionOne(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            source.ProgressChanged += (_, progress) => Console.WriteLine(progress);
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= (_, progress) => Console.WriteLine(progress);
        }
        timer.Stop();
        Console.WriteLine($"Version one: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void VersionTwo(int repeats)
    {
        var timer = new Stopwatch();
        timer.Start();
        var source = new EventSource();
        for (int i = 0; i < repeats; i++)
        {
            source.ProgressChanged += ProgressChangedMethod;
            // Console.WriteLine(source.PerformExpensiveCalculation().Result);
            source.ProgressChanged -= ProgressChangedMethod;
        }
        timer.Stop();
        Console.WriteLine($"Version two: {repeats} add/remove takes {timer.ElapsedMilliseconds}ms");
    }

    private static void ProgressChangedMethod(object sender, int e)
    {
        Console.WriteLine(e);
    }
}

 

Instead of declaring a local variable for the lambda expression, I write it inline in the add and remove statements for the event handler. Here are the performance results from this version:

Version one: 10 add/remove takes 1ms
Version two: 10 add/remove takes 0ms
Version one: 100 add/remove takes 0ms
Version two: 100 add/remove takes 0ms
Version one: 1000 add/remove takes 99ms
Version two: 1000 add/remove takes 0ms
Version one: 10000 add/remove takes 6572ms
Version two: 10000 add/remove takes 1ms
Version one: 100000 add/remove takes 803267ms
Version two: 100000 add/remove takes 13ms

 

You’ll notice that the lambda version is much, much slower. Note that I stopped execution after 100,000 add/remove sequences because of the time taken. Why? Well, let’s look at the generated IL for the code inside the loop in VersionOne():

.loop
{
    IL_0018: nop
    IL_0019: ldloc.1
    IL_001a: ldsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_0'
    IL_001f: dup
    IL_0020: brtrue.s IL_0039

    IL_0022: pop
    IL_0023: ldsfld class blogExample.Program/'<>c' blogExample.Program/'<>c'::'<>9'
    IL_0028: ldftn instance void blogExample.Program/'<>c'::'<VersionOne>b__1_0'(object,  int32)
    IL_002e: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
    IL_0033: dup
    IL_0034: stsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_0'

    IL_0039: callvirt instance void class [mscorlib]System.Progress`1<int32>::add_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
    IL_003e: nop
    IL_003f: ldloc.1
    IL_0040: ldsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_1'
    IL_0045: dup
    IL_0046: brtrue.s IL_005f

    IL_0048: pop
    IL_0049: ldsfld class blogExample.Program/'<>c' blogExample.Program/'<>c'::'<>9'
    IL_004e: ldftn instance void blogExample.Program/'<>c'::'<VersionOne>b__1_1'(object,  int32)
    IL_0054: newobj instance void class [mscorlib]System.EventHandler`1<int32>::.ctor(object,  native int)
    IL_0059: dup
    IL_005a: stsfld class [mscorlib]System.EventHandler`1<int32> blogExample.Program/'<>c'::'<>9__1_1'

    IL_005f: callvirt instance void class [mscorlib]System.Progress`1<int32>::remove_ProgressChanged(class [mscorlib]System.EventHandler`1<!0>)
    IL_0064: nop
    IL_0065: nop
    IL_0066: ldloc.2
    IL_0067: stloc.3
    IL_0068: ldloc.3
    IL_0069: ldc.i4.1
    IL_006a: add
    IL_006b: stloc.2

    IL_006c: ldloc.2
    IL_006d: ldarg.0
    IL_006e: clt
    IL_0070: stloc.s V_4
    IL_0072: ldloc.s V_4
    IL_0074: brtrue.s IL_0018
}

Focus on the ldsfld and ldftn instructions above: the add path uses the compiler-generated field '&lt;&gt;9__1_0' (IL_001a and IL_0028), while the remove path uses a different field, '&lt;&gt;9__1_1' (IL_0040 and IL_004e). The event handler being added is not the same as the event handler being removed! Removing an event handler that is not attached does not generate any errors, but it also doesn’t do anything. So, over the course of the looping test, VersionOne adds more than 1,000,000 event handlers to the delegate. They are all the same handler, but they still consume memory and CPU resources.
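
The fix is simply to keep the lambda in a variable, so the instance you remove is the very instance you added; a short recap of the difference:

// Correct: one delegate instance, added and later removed.
EventHandler<int> handler = (_, progress) => Console.WriteLine(progress);
source.ProgressChanged += handler;
source.ProgressChanged -= handler;

// Incorrect: two different delegate instances; the -= removes nothing,
// and the handlers pile up on the event.
source.ProgressChanged += (_, progress) => Console.WriteLine(progress);
source.ProgressChanged -= (_, progress) => Console.WriteLine(progress);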

 

I think this may be where the misconception arises that event handlers written as lambda expressions are slower than handlers written as regular functions. If you don’t add and remove handlers correctly using the lambda syntax, that error quickly shows up in performance metrics.

Created: 9/24/2015 2:24:33 PM

It’s stated as conventional wisdom that in .NET a throw statement must throw an object of a type that is System.Exception, or derived from System.Exception. Here’s the language from the C# specification (Section 8.9.5):

A throw statement with an expression throws the value produced by evaluating the expression. The expression must denote a value of the class type System.Exception, of a class type that derives from System.Exception, or of a type parameter type that has System.Exception (or a subclass thereof) as its effective base class.

Let’s play with the edge cases.  What will this do:

throw default(NotImplementedException);

 

The expression is typed correctly (it is a NotImplementedException). But it’s null. The answer to this is in the next sentence of the C# Specification:

If evaluation of the expression produces null, a System.NullReferenceException is thrown instead.

That means this is also legal:

throw null;

It will throw a NullReferenceException, as specified above.

Now, let’s see what happens if we work with a type that can be converted to an exception:

struct Exceptional
{
    public static implicit operator Exception(Exceptional e)
    {
        return null;
    }
}

 

An Exceptional can be converted (implicitly) to an Exception. But this results in a compile time error:

 

throw new Exceptional();

 

The compiler reports that the type thrown must be derived from System.Exception. So, let’s rewrite the code so that it is:

Exception e = new Exceptional();
throw e;

 

The first line creates an Exceptional struct. Assigning it to a variable of type Exception invokes the implicit conversion. Now the expression is of type Exception, and it can be thrown. (Because our conversion operator returns null, the throw will actually produce a NullReferenceException at runtime, per the rule quoted above.)

Finally, what about this expression:

dynamic d = new Exceptional();
throw d;

 

In most places in the language, where an expression must evaluate to a specific type, an expression of type dynamic is allowed.

Here we have the joy of edge cases. The spec (as I’ve quoted above) doesn’t speak to this case. Where the spec is silent, sometimes developers interpret them differently. The classic compiler (pre-Roslyn) accepts throwing an expression that is dynamic. The Roslyn (VS 2015) compiler does not. I expect this may change, and the spec may get updated to explicitly state the behavior.

Created: 9/8/2015 3:54:31 PM

I’ve started working with a new client and it’s re-affirmed my belief in the importance of asking the question “Why”?

Ask Why?

Ask it again.

Follow up with more Why?

Make sure you get to the root why.

I’m not focusing on the “5 Whys” popularized by Toyota and now part of the 6 Sigma process, although that is important. Rather, it is to focus on understanding why a process was put in place, and why these processes have been established. More than anything else, it’s about listening.

Why’s for the team

In the current scenario, I’m helping a team that is nearly midway through a product release cycle. They’ve adopted a series of agile processes, build and deployment processes, and common practices for branching in git, deploying changes, and so on.

Like all teams, half way through a project, they aren’t sure if every process is working out well. This introduces friction and concern: Should the processes be changed? If so, to what?

This is where ‘why’ becomes important.

The first ‘why’ to ask is “Why did you adopt this process | tool | guideline?” From this question, I learn what gains a team hoped to achieve, what problems had existed, and how the change was intended to help. Once that’s known, it’s easier to discuss the benefits and any unseen costs that a new initiative has brought. On the whole, was it good?

The second “why” in this situation is “Why is this new initiative not generating the benefits you expected?” Is there more friction? Are you finding that adopting new techniques took more time and investment than you thought? Are you losing productivity because a technique is not familiar yet? Does the team fear that “we’re not doing it right?” This starts to get to the cause of this new discussion.

The third ‘why’ for the team is “Why change now?” One important goal for high-functioning teams is “continuous improvement”. And while it’s important to always look for opportunities to improve, it’s also important to pick the right time for change. That’s especially true if the proposed ‘change’ is at large scale. (Example: I’m not switching Source Code systems a month before release. Too much churn, not enough gain). Related to that, if a team did implement a major change before starting this release cycle, has it been fully explored?

 

Why’s for the tools

OK, give me a little license here. I know I can’t ask questions of a software tool. But, the point is that sometimes teams adopt a tool, or toolset, and then fight that tool because they don’t want to understand why it works the way it does.

One example from a previous client is git. They had planned an initiative to move from SVN to git. However, they did not expect this to change their day-to-day workflow, or their overall development process. They didn’t develop a branching strategy. They didn’t work through the distinction between commits and push/pull. Git failed badly for them. They were fighting against the toolset. This example is not meant to take a position on centralized vs. distributed source control, or on a particular vendor. Both workflows can be done well. Both workflows can fail. The point is to work with your tools, not against them.

Sometimes that means picking different tools. Sometimes that means adopting a different process because of the tools you want to use.

In addition to asking “why” a tool was designed the way it was, it’s important to ask “why” the team picked a certain toolset. Did they originally intend to adopt the mindset supported by the tool? Or were there other drivers?

The key skill: Listening

Early in my career, I received one of the best pieces of advice from a mentor:

Some people really listen. Others simply wait for their turn to talk. Be the former.

That sums up what’s necessary to really bring about change and to really have a positive impact. If you listen to all the team members explain their issues, describe what is and what is not working, you will be in a much better position to make a positive impact. You’ll also be solving real problems. Problems that real team members have described.

What’s the point?

Listen. Ask Questions. And remember that the most important question is “why?”

You’ll have a bigger impact.

Created: 9/1/2015 11:35:22 AM

Yesterday, this blog post documented how a bug accidentally cost Carlo $6500 in a few hours because his AWS keys were compromised. Please read the whole post.

Synopsis: He was using the GitHub Extension for Visual Studio. He published a new repository that he *thought* would be private, but due to a bug, he created a public repository. That repository contained his Amazon Access Key. Scanners found it and created lots of resources.

First, some good news and a task for all of you: This issue was reported on the GitHub project for that extension (https://github.com/github/VisualStudio/issues/62). That page shows the quick response by the team involved with the extension, and a fix has already been implemented and deployed to the Visual Studio Extension Gallery: https://visualstudiogallery.msdn.microsoft.com/75be44fb-0794-4391-8865-c3279527e97d

 

A task for all of you:  If you are using the GitHub for Visual Studio Extension, update it now. You’ll get a notification from Visual Studio that there is an update available. Install it. Now. (This extension does not auto-update, so you will need to perform this task.)

A bit of background on how this happened so fast

One fact I learned yesterday was that there are organizations that constantly monitor Github commits to see if any contain credentials to AWS or other cloud providers. That is the price of success. A tremendous amount of code is added or modified on github.com every day. It’s an attractive target.

As soon as a public repository gets created, or updated, that information will be captured. Be careful.

What habits could have prevented this?

Clearly, hindsight is 20/20. I’m writing this to share a few tips that might help you. I’m also interested in hearing what techniques you use to manage these kinds of secrets. My experience is with the open source projects for Humanitarian Toolbox. Those projects are deployed on Azure, and are all Open Source projects on GitHub. That means we must keep secrets out of the repository. Here are the recommendations we use:

No Credentials in *.config

The default web.config files that are in the open source repositories are set for a localDB on the developer’s drive. There are no keys for our deployed resources.

[Screenshot: the Azure Portal application settings blade, where the secrets for our sites are configured]

Instead, we use the Azure Portal to configure any and all secrets for our sites. This does mean that we, as an organization, must protect our Azure Portal settings. And folks like me must make sure to obscure the secrets in images like the one above.
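
On the code side nothing changes: the application keeps reading its settings the same way, and the values entered in the portal win at runtime. A minimal sketch for a classic ASP.NET project (the setting name is hypothetical; our asp.net 5 projects do the equivalent through the new configuration system):

using System.Configuration;

public static class Secrets
{
    // Locally this resolves to the harmless placeholder in web.config;
    // on Azure, the value configured in the portal's App Settings is used.
    public static string NotificationApiKey =>
        ConfigurationManager.AppSettings["NotificationApiKey"];
}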

Deploy Directly from Source Control.

We deploy our applications directly from GitHub source control. The staging site deploys from the master branch, whenever we merge pull requests. If you are one of our contributors, you can see your changes live as soon as we merge the pull request.

For release builds, we use the Release branch in GitHub. One of the commit team must merge changes from master to the release branch for those to be live. We do that after our partners have had the chance to validate the changes on Master.

Consider Visual Studio Online

For my projects that are not planned to be open source, but will remain private, I use Visual Studio Online. (https://www.visualstudio.com/en-us/products/what-is-visual-studio-online-vs.aspx) It supports Git, and has some pull request support for teams. (I’d like a model where I could use Pull Requests even with my shared repositories, but I haven’t found that feature yet.)

VSO is designed around a team model, not an Open Source model. It’s less likely (well, almost impossible) that I accidentally make a public repository on VSO.

What next?

The key takeaway for me is that the era of Open Source requires all of us to learn some new skills and take care of new potential risks. These security breaches can and do happen to experienced people. Teamwork, care, and multiple levels of defense are needed.

Most importantly, professionals did their job: I’m truly impressed with the speed and professionalism of the response to this issue by team members from both Microsoft and GitHub.

Created: 8/18/2015 6:39:52 PM

Last weekend, Humanitarian Toolbox held a very successful coding event at That Conference. We got quite a bit done on both the crisis checkin and the All Ready applications. Thanks to the organizers of ThatConference, and everyone that attended during the weekend. I’m always impressed by how many developers join us there, and by how much they contribute.

OK, enough of the public service announcement. I wrote this post to help with a common issue: Pull Requests vs. Commit Privileges.

All the Humanitarian Toolbox projects use the Fork & Pull model for development. It enables us to keep the core contributor teams small, and yet enables anyone that wants to contribute to make changes and submit them. I, sadly, didn’t announce that clearly enough to all the volunteers when we started our event. Many of the volunteers thought we were using the Shared Repository Model.

That meant that later in the event, I had a number of developers come to me with issues because they could not commit to the main repository. That’s because they didn’t have the rights.

Announcement: Don’t fix this by just giving commit privileges. It’s not necessary.

Announcement II: Don’t fix this by trying to copy your changes and merging by hand. It’s not necessary either.

Thankfully, this is really easy to fix, once you understand how git works. It does also require using the git command line. The git command line isn’t that hard, and it’s important to learn as you do more with git.  If you ever run into this issue, follow the instructions below to fix it.

What Happened to Cause this Problem?

Before I explain the steps to fix the problem, let me describe what happened, and why it’s a problem. Look at the image below. It’s a portion of the main window in my Github for Windows application. You can see that I have two copies of Crisis Checkin on my local machine. The top one is the clone of a fork that I made in my account. The bottom one is the clone of the master HTBox repository. These two copies have different remote repositories.

 

[Screenshot: GitHub for Windows showing two local copies of crisischeckin: a clone of my fork, and a clone of the main HTBox repository]

 

If I run ‘git remote -v’ in both directories, you can see the difference. Here’s the output from my fork:

 

C:\Users\billw\Documents\GitHub\TheBillWagner\crisischeckin [master]> git remote -v
origin  https://github.com/BillWagner/crisischeckin.git (fetch)
origin  https://github.com/BillWagner/crisischeckin.git (push)

Note the difference when I run the same command in the main repository:

C:\Users\billw\Documents\GitHub\HTBox\crisischeckin [master]> git remote -v
origin  https://github.com/HTBox/crisischeckin.git (fetch)
origin  https://github.com/HTBox/crisischeckin.git (push)

When you execute a ‘git push’, you’ll send your changes to the git repository identified by origin (by default).  If you cloned the main repository, git will try to push to that main repository. If you forked, and then cloned your fork, git will try to push to that forked repository.

How to Fix the Problem (and not lose your work)

The goal is to push the changes you made in your desktop to a fork of the repository where you have commit rights. The drawing below shows the relationship between the repositories, and the commands used to create each one.

[Diagram: the main HTBox repository, your fork on GitHub, and your local working copy, with the commands that create each one]

To save your work, and create a pull request, you’ll need to create a fork, and push the work you’ve already done to that fork. The specific commands are as follows:

Create your fork:

I usually create a fork from the github.com website. (Just press the “fork” button in the upper right corner). That creates a copy of the main repository in your account. This copy is also located on the github servers, not on your drive.

This fork is where you want to push your changes.

Add Your Fork as a remote

Now, you need to configure your fork as a remote repository for your working directory. Looking at the image above, you want to push from your desktop (where you have made changes) to the fork (the upper right repository).

You add a remote by typing:

‘git remote add fork https://github.com/BillWagner/crisischeckin.git’

Replace ‘fork’ and the URL with a name you want to use and the URL of your fork. I use ‘fork’ as the name, because it’s easy for me to remember.

You can add new remotes as long as the additional remotes are related to the origin remote (unless you start forcing git to do things). That means you can’t accidentally push your crisis checkin changes to a fork of the Roslyn project (for example).

Now, your local copy is configured with two remotes: The source for the application (owned by HTBox) and your fork (owned by you). These two remotes are named ‘origin’ and ‘fork’.

Push to your fork.

Now, you need to push your changes from your working area to your fork. That’s just a git push, and specify the fork as the remote:

‘git push fork’.

By adding the extra parameter to the push command, you specify the remote repository where your changes should go.

It’s that easy.

Unless it isn’t. If it has been a while since you cloned the original repository, you may need to sync your working directory with upstream changes. That’s documented here. It’s a variation of what I just described, but you should be able to follow it.

Open the Pull Request

After you’ve pushed your changes to your fork, you can open a pull request to discuss your changes and get them merged into the main repository.

After I’ve finished this work, I will often make a new clone of my fork for a given repository. That way, the default remote (referred to by ‘origin’) points to my fork, rather than the main project.

I hope that helps.

Created: 8/4/2015 4:07:44 PM

Whenever I teach or present, people ask me why their copy of Visual Studio doesn’t look like mine. The reason is the extensions I have installed. I’m really impressed with Visual Studio’s extension model, and with how rich the ecosystem of Visual Studio extensions is.

It’s gotten even richer with the release of Visual Studio 2015 and Roslyn-based analyzers.

 

Here (in alphabetical order) are the extensions I use all the time:

The .NET Compiler Platform SDK:  This extension provides all the tools and templates that I need to create Roslyn based analyzers, diagnostics, and refactorings. As of RTM, this extension includes the Roslyn syntax visualizer, which was previously a separate install.

C# Essentials: This extension is a set of Roslyn-based analyzers and code fixes that help you adopt the new features of C# 6. If you’re falling into old habits and using old constructs when you want to adopt the new language syntax, this analyzer will help you build those new habits.

Chutzpah Test Adapter: This extension runs Jasmine based unit tests for my JavaScript code in the Visual Studio Test explorer window. Adding this extension means that my JavaScript tests run with every code change, just like my C# based unit tests run on every build. It’s a great way to enforce TDD practices with your JavaScript client code.

Chutzpah Test Runner Context Menu Extension: Sometimes I want to run my Jasmine tests in the browser to use the browser based debugging tools. With this extension, I can right-click on a test, and Chutzpah will generate a test HTML file with JavaScript loaded to run the tests.

Code Cracker for C#: This is another Roslyn-based analyzer that finds and fixes many common coding errors. It’s also available as a NuGet package, if you’d rather install it on a project-by-project basis.

Github extension for Visual Studio: I use this extension in addition to the standalone GitHub for Windows application. It makes it easy to stay inside Visual Studio while working with GitHub Flow.

Telerik JustDecompile extension: This requires you to install JustDecompile, which is free. I use these extensions to understand what IL my C# compiles down to. It helps me when I’m writing articles, or looking through the C# spec.

Powershell tools for Visual Studio 2015: This extension helps when I’m writing or debugging Powershell scripts.

Productivity Power Tools 2015: This extension provides many useful UI features: Presentation mode (switches font sizes), Tab Well enhancements, Visualizer enhancements in Solution Explorer, and many, many more.

Web Essentials: If you do web development, you need this. Period. It makes so many of the web programming tasks much, much, easier.

If you compare my list to yours, note that the above is an incomplete list. I did not include any of the extensions that install as part of the Visual Studio install (like ASP.NET templates, TypeScript support).

That’s my list. What’s yours?

Current Projects

I create content for .NET Core. My work appears in the .NET Core documentation site. I'm primarily responsible for the section that will help you learn C#.

All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.

I'm also the president of Humanitarian Toolbox. We build Open Source software that supports Humanitarian Disaster Relief efforts. We'd appreciate any help you can give to our projects. Look at our GitHub home page to see a list of our current projects. See what interests you, and dive in.

Or, if you have a group of volunteers, talk to us about hosting a codeathon event.