Welcome to 2013. This promises to be a year of many new adventures, and continuing many of my current activities.
Let's start with the existing activities, and the changes in those programs.
I've been renewed in the RD program, which is starting 2013 with a new logo.
The new logo is cleaner, more modern, and represents the direction of the Microsoft Regional Director program moving forward to stay at the forefront of technology. The RD program represents a strong community of independent technology experts that have key positions in companies around the globe. It's an amazing community.
There will be more news about the program, and the RDs over the course of the year. You'll hear more from this group of amazing people as time goes on.
Also, yesterday I received notification that I was renewed as a C# MVP. This marks my 8th year in the program. This is another highly technical community. The difference is that this community is focused on the C# language and related technologies. The unifying traits of the C# MVPs are that they are very knowledgeable in C# development, and they have a strong desire to share this knowledge with the software community. Sometimes that's the local community; sometimes it's the regional community; and sometimes it's the global community. That brings me to the next item.
This year, we're launching classes to teach professional software developers the techniques we use to create software for our customers. We've already launched the .NET Developer Infusion Series. We're doing our part to help address the skills gap for software talent in our area.
Watch for classes in other technology stacks to launch by March.
For myself, I need to focus more time on my own learning. An amazing set of tools, libraries, and technology stacks has become popular in the last year or so. I need to learn more about a variety of up-and-coming skills for building software. Like so many times before, this is a great time to be a software developer, and to learn new ways to create great value for our customers.
That means exciting times for SRT Solutions. Over the course of the coming year, we are planning continued growth, and continued commitment to creating great software that helps our customers achieve success in their chosen markets. Throughout the course of the year, watch for announcements of new applications, new customers, and new people joining SRT Solutions.
Which gets to my renewed commitment to communicate more. Looking back, I can see that my blogging activity dropped off last year. There were a variety of reasons. One was that there were so many new things to learn that I wasn't sure I had the right answers. Looking through the blog archives, though, I see that it's more important to start the conversations here than to be perfectly correct every time. This year, I will be blogging more. My goal is twice a week. As in the past, many of the posts will be on topics that I am currently learning more about (like Windows 8 development). I need to have those conversations, and hear what you, dear reader, think about the same topics. I'm hoping that by putting my (sometimes half-baked) ideas out there, we'll learn more together.
And, there are a few new surprises coming. Keep your eyes and RSS readers open.
I've now spent two full weeks with my WinRT Surface. I've tried to spend as much time as possible with this device while I’m forming my opinions.
I bought the Surface with Windows RT, along with both the touch cover (red, to match the SRT logo) and the type cover. I wanted to try both to decide which I liked better.
WinRT is the version of Windows 8 that runs on the ARM processor. It comes with an ARM version of Microsoft Office Home and Student (that version includes Word, Excel, PowerPoint, and OneNote). Other than those applications, you must install applications from the Windows Store. You cannot install applications from any other location.
The WinRT Surface has a lot to recommend it. Because of the ARM processor, the battery life is amazing. I've been able to take notes through an entire day of meetings without hunting for a cord or power outlet. The form factor is comfortable to work with. The screen is very clear and bright. And the device is light enough to carry comfortably all day.
As I mentioned above, I purchased both the touch cover and the type cover. I have a very strong preference for the type cover. I tried the touch cover for a few days, and I found it too different from a normal typing experience. The type cover is a much better experience, at least for me.
The restriction that all apps must come from the Store was really not a problem. I've been able to find great apps for all my non-developer tasks. I'm using Tweeterlight for my Twitter client. I've got IM+ to manage my messaging accounts. I've got Skype installed. I use FeedReader for my RSS reader. I don't mean this to be an exhaustive list; it's just to give you some indication of the main applications I'm using. There are still one or two holes that I need filled, but overall I've been happy with what's available.
Of course, it's not perfect. I'm ambivalent about the kickstand. It's at a great angle when I place the Surface on a desk or a table and I'm sitting in front of it. However, the angle of the kickstand is not adjustable. It's not at the right angle for me when the Surface is in my lap, or when I'm working at my standing desk. In some of those cases, I can work with the Surface lying flat. My one suggestion for a future hardware enhancement would be to build a couple of different angles into the kickstand. With that said, Microsoft clearly put a lot of thought into the design: the angle of the kickstand is great at a table for me, my wife, and my daughter (all of quite different heights).
I do think that a Surface with Windows RT (on the ARM processor) is a great device for students, home users, and many business users. If you can find the apps you need in the Windows store, you will be very happy with this device on WinRT.
There are a few classes of users for whom the Surface with Windows RT is not the best choice for a primary device. Serious gamers will not be happy, because they will not be able to install games designed for Windows 7. Developers cannot use the ARM-based devices as their main machine (though they will want an ARM device for testing; I'm writing another blog post on this topic). And users with specialized application needs may not find their specific applications ready for Windows 8 yet.
Overall, I'm impressed with the Surface running WinRT. I'll use it regularly because of the battery life. It's going to be my primary machine while travelling. I'll also be adding two more of these devices to our family. Both my daughter and my wife are planning on new Surface machines.
I’m finishing up my calendar and planning for next week. It’s a big week for developers here in Southeast Michigan. There are three big events you should attend:
This coming Wednesday, our local .NET developer group, AADND, is hosting an install fest / launch event for Windows 8 and Visual Studio 2012. You can come and install Windows 8 and Visual Studio 2012 on your box. A number of people that have already worked with Windows 8 and VS2012 will be on hand to help, and to provide guidance. I’ll be there to help and to discuss the Open Source environment for Windows developers.
SRT Solutions is teaming up with the Association for Competitive Technology and the Mobile Technology Association of Michigan to raise awareness of privacy issues as they relate to mobile and app development. We’ve got development experts, FTC officials, and legal and policy experts to help navigate what can be a complicated landscape. It is important information for developers to have at their disposal. I’ll be delivering some opening remarks, and helping with Q and A on Windows 8 development.
And the week ends with a full day of development goodness at Cobo Hall in Detroit. Dave McKinnon and Dave Giard have put together a strong program (well, I’m speaking too), and it promises to be a great day. I am speaking on async / await features in C# 5 and what that means for .NET developers.
The first two events are free, and a great way to learn and grow your development skills. 1DevDayDetroit is $99.00. Check them out. All three events do require pre-registration.
I often find myself in conversations with customers and prospective customers regarding how we build software. How do we build software? What’s our process? How are we tracking tasks? What documentation do we create?
With some customers, we get a lot of pushback on the lean, fast process we use. According to these people, we don’t generate enough documentation. We don’t do enough manual testing. We start coding too soon. I’ve observed an interesting quality to these conversations: the people questioning our process gained most of their software development experience at least 20 years ago. Equally importantly, they no longer develop software, either professionally or casually.
As in all conversations, everyone brings their own perspective. A couple of decades ago, our tools were different. That meant a different process was better. These customers bring that natural bias to the conversation, and it comes out in their questions. Knowing where that bias starts makes it easier to explain the differences in a positive way.
Let’s start with how today’s tools support and enable our current process. Suppose I were handed a large codebase we’ve created, and I needed to make some changes. The first thing I would do is get that project from source control, build it, and run the automated tests. I’d expect all the tests to pass. If they didn’t, we’d have a serious problem. I’d expect the tests to cover a strong cross-section of the codebase. (I wouldn’t expect 100% coverage, but I’d expect something in the high 70s, at a minimum.)
Already, I’ve got a level of confidence. If I start making changes to this codebase, without doing any deeper investigation, I would expect that the automated tests would alert me if I’d made any change that introduced a regression bug.
Read that sentence again, because it’s a huge confidence builder for a new developer on any project: As a developer, I will know if I’ve broken anything even before I check the code into source control.
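To make that concrete, here's the kind of small automated test that forms the safety net (NUnit syntax; the OrderCalculator class and its Total method are invented for illustration, not from an actual SRT codebase):

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void DiscountIsAppliedToLargeOrders()
    {
        var calculator = new OrderCalculator();

        // A $150 order with a 10% discount should total $135.
        decimal total = calculator.Total(150m, 0.10m);

        Assert.AreEqual(135m, total);
    }
}
```

Hundreds of small, fast tests like this one run on every build. Break the discount logic, and a red test tells you before the code ever leaves your machine.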
Next, I’d start trying to find the area of the code I need to modify. I’d use the search function in my IDE to find classes, methods, or modules with names that make sense. I’d pay special attention to tests that exercise the feature I’m going to work on. I’d read the test code. I’d use the “go to definition” feature of my IDE to find the code being exercised. I’d learn about the sections of code I’d need to modify.
Once I felt a bit comfortable, I’d start writing new tests to express the changes I planned to make. I’d leverage IntelliSense to explore the capabilities of the types I’m using. I’d expect reasonable method names to help me understand what’s already implemented.
Overall, I’d feel reasonably comfortable making changes and adding features within a day. I’d probably be quicker if the codebase was smaller.
Many open source developers learn a new project the same way. They look, they learn about the project using the source, they dive in.
But it wasn’t always like this. Decades ago, we didn’t have powerful IDEs. In those dark ages, we started by reading voluminous documents. Those documents gave developers the roadmap and that important first handle into a large codebase. Next, a developer would read and digest detailed specs on the modules to be modified. Then, and only then, would someone make the first tentative steps into modifying the code.
Unit tests weren’t standard practice yet. Developers approaching a new codebase didn’t have the security of knowing that mistakes would be caught early. Any mistake would escape to a QA department. Even worse, it might make it into production or distribution.
The common languages we used were very different. C was by far the most common language. The other main players were Pascal and FORTRAN. Today’s common idioms for encapsulation were rare. You had to be careful about modifying global variables. Global variables weren’t a code smell; they were necessary. That introduced more risk into each change. That increased risk meant more process.
The extra process was necessary because the tools weren’t able to support greater speed. The costs of mistakes were high; we didn’t have a high probability of catching problems before customers did. Worse, we couldn’t give new developers the confidence to move quickly, especially on an unfamiliar codebase.
That extra process also carried real overhead. At my first job, we had librarians. They were responsible for helping developers find the necessary documents or code for a particular problem. Yes, their entire job was managing the documents we created. That overhead may sound like waste today, but it wasn’t. These people were instrumental in helping developers be productive.
We also had a much larger QA and test department than you would expect for a similarly sized organization today. That’s because so much of the testing was done by hand. And, of course, the test organization meticulously followed test plan documents. There was also the science of sampling those test plans for sniff tests and regression tests (and rotating the sample so that over time every test was executed).
The upper level managers I talk with had similar first job experiences to mine. One big difference is that they often left the ranks of main line developers to become managers many years ago. When they first hear about the lightweight process we use, they don’t see it as leveraging the tools. Instead, they see it as being sloppy. I have to frame that discussion in terms of moving fast and leveraging modern tools if I want to make any headway.
I can’t talk about the lack of test plans; I have to talk about the overwhelming number of automated tests.
I can’t talk about lack of documentation; I have to talk about documentation in the form of an Open Source project’s wiki, and IDE tools to browse, search, and learn about a codebase. Most importantly, I have to separate the act of creating a design (thinking and collaborating) from the act of writing a design document.
I can’t talk about lack of a firm plan; I have to talk about measuring velocity and responsiveness (even to change).
Most importantly, I have to explain in very clear terms how the artifacts we needed to create in the past are not necessary because the tools enable us to do more, do it faster, and become familiar with a project much faster.
I’m not saying that the people I’m referring to are dinosaurs, out of touch, or anything of the kind. I’m writing this to point out that they, like all of us, bring their own biases to every conversation. Those same biases color our decisions. If you want to move these managers away from those processes they remember as necessary, you must explain why they are no longer necessary.
Remember this anecdote when you find yourself trying to change a process that’s no longer bringing value:
A newlywed couple was working on a roast for dinner. The young bride cut the ends off the roast and put them into a second smaller pan. The young husband had never prepared a roast this way, so he asked why. The bride said she didn’t know why, but it’s what her mother always did. They called the bride’s mother and asked why. She said she didn’t know either, but it was what her mother had always done. So the young couple called the bride’s grandmother. The wise grandmother laughed and laughed. When she finally stopped laughing, she said “I can’t believe you’re still doing that. I only cut the ends off the roast because when your grandpa and I were young we didn’t have a roasting pan big enough for a roast that would feed all of us.”
Don’t continue to use a process that’s no longer necessary with modern tools. Equally important, remember that those processes once had a purpose, and you need to explain why modern tools render them obsolete.
I’m thrilled to be enjoying a new experience at CodeMash 2013: I’ll be hosting a precompiler workshop on C# 5.0 async programming techniques.
I’ve spoken at every previous CodeMash, but this is the first time I’m running a half day workshop. I like the challenge of preparing a half day of async content, and taking participants on a larger journey. That’s what this is about: My goal, if you give me four hours, is to teach you to see async, await, Task<T> and related types the same way you see for, if, and foreach: tools you use every day. There’s quite a bit to cover, and I’m looking forward to every minute of it. I hope you’ll join me.
You can see all the sessions here: http://www.codemash.org/sessions. It’s a menu of awesome that’s better than a bacon bar.
Thanks again to Carl and Richard for inviting me along to Omaha to join the awesome community there. I continue to believe that the strongest development communities are in the middle of the country. There are always strong crowds, engaged people, and good old mid-western friendliness. I’d love to see more of the big name conferences try locations in the Midwest. There’s a huge untapped community of developers that would attend these conferences if not for the extra-large travel expenses. And these developers are so energized that they are starting their own conferences in many of these locations.
Carl, Richard and I had a great discussion that is available here on .NET Rocks. We discussed how the software we create continues to change the world around us. My talk before the recording was around the concept that as developers our career is about changing the world and creating new possibilities. Software changes the way we do everything:
Our jobs are helping our customers create and adopt new workflows and new ways of leveraging software to make things better.
I talked about looking for different ideas, and being aware that often the best ideas are dismissed. As an example, I mentioned Steve Jobs’s fight to get the iPod released. His board fought him on that, saying that it had no market. The interview with Steve Jobs and Bill Gates that I reference is here. It’s long (2 hours or more), but it’s worth your time to listen to these giants of our industry discuss how they navigated long careers, starting with home-built computers on to the modern devices we now use.
The final point was why WinRT needs iTunes (or something like it). So many people use iTunes and an iPod (of some flavor) as their music source. Personally, I find much better value from Zune Pass (now Xbox Music). However, with the demise of the Zune player device, those files aren’t portable. Music you download using your subscription is DRMed and can only be played on a linked device. I can’t plug my Windows 8 slate into my car. I can’t use it while I work out, or while I’m walking the dog. I need to get my music onto an extremely portable device.
Today, that means iTunes and an iPod.
If I can’t do that on a WinRT device, I need another computer to manage my music library. That means I’m more likely to buy an Intel Windows 8 device, even if it’s only to manage my music. If I had a way to get my Zune music on a portable device (besides my phone), I’m set.
We released our first Windows Store app earlier this week: A calculator app that supports multiple skins.
We’re concentrating on the art and user experience that make the app better for our customers. The first skin is a steampunk skin. It’s a novel look at a simple application:
The second skin is a kids’ calculator skin. Note that this skin has fewer functions than the steampunk skin. We wanted to make an app you’d be happy to let small children use to check their homework:
We also added history and sharing features. If you use the calculator in snap mode, or portrait mode, the screen has 5 lines of display instead of 1. Also, you can share from this app, putting all the calculation history as text in an email message, or a document.
We’re working on other skins and more functions. Please use the comments to let me know what you’d want to see.
I received a very interesting question from a reader earlier this week:
I have a question for you about Item 12 in Effective C# (2nd edition), "Prefer Member Initializers to Assignment Statements". Here is my problem:
I recently inherited a very complicated application in which I had to find and fix a bug. It was incredibly painful. The application contained many classes that contained other classes that contained other classes that inherited from other classes that inherited from other classes. I think you get the idea. I found the member initializers particularly painful. When code was about to create a new object, and I'd expect to step into its constructor, instead I'd end up stepping through a maze of member initializers for other classes, with no idea where I was, or why I was there, before I'd get to the constructor of the class I expected to be in.
After that experience, I swore that I would not use member initializers ever, and always initialize members in my constructors, so that when stepping through the code there would be some sense of chronological order of object creation that made sense.
Item 12 tells me to prefer member initializers. You give 2 reasons why: to minimize the risk of omitting an initialization with multiple constructors, and to initialize variables as early in time as possible. I guess my question is why is it important to initialize variables as early in time as possible??? I'd like to purposely delay initialization until I'm in the constructor to help debugging seem more natural, but if there is a real benefit of initializing earlier, I'd rather do that. Are there any memory use or compiler optimization benefits to using member initializers?
This is a great question, because it does highlight language features, coding practices, and tools.
First, the language features: the C# designers decided that variables in an object are initialized before any code in that object executes. You can initialize a variable with a field initializer, or accept the default initialization of the 0 bit pattern. That means field initializers execute before any constructor code. As I wrote in Effective C#, initializing object fields using field initializers ensures that all fields are definitely initialized as early as possible.
I recommended initializing fields as early as possible, because it minimizes the chance of null reference exceptions. If you initialize fields before any code executes, you cannot write code that accesses those fields before you initialize them. The more code (even in constructors) that may execute before the fields are initialized, the more likely you are to introduce those kinds of bugs. As code bases grow, developers may add new constructors, or add new method calls in constructors. Any of those additions can dereference uninitialized fields, causing bugs. I expand on this in C# Puzzlers, where I discuss virtual method calls in constructors.
However, this coding practice can make it hard to debug. There’s a little-known feature that really helps: You can set breakpoints on field initializers. It can be confusing, unless you remember the order of initialization. That’s why I went into extensive detail in Effective C# on the initialization order of objects created using C# (and interacting with VB.NET, where the order is different). It can be somewhat tricky, especially with very deep hierarchies. But, hopefully, you have to debug initialization code less often.
Finally, note that the title says “Prefer Member Initializers”. It’s not an absolute rule: some fields are much more naturally initialized in constructors. But, absent a good reason to pull the code into a constructor, the member initializer is preferred.
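To make the initialization order concrete, here's a small sketch (the classes are hypothetical) that traces when field initializers run relative to constructor bodies, including the derived-before-base ordering that can surprise you in the debugger:

```csharp
using System;

class Base
{
    // Runs before Base's constructor body:
    private string baseField = Trace("Base field initializer");

    public Base()
    {
        Trace("Base constructor");
    }

    protected static string Trace(string message)
    {
        Console.WriteLine(message);
        return message;
    }
}

class Derived : Base
{
    // Runs first of all: derived field initializers execute before
    // the base field initializers and the base constructor.
    private string derivedField = Trace("Derived field initializer");

    public Derived()
    {
        Trace("Derived constructor");
    }
}

// new Derived() writes:
//   Derived field initializer
//   Base field initializer
//   Base constructor
//   Derived constructor
```

That ordering is exactly why stepping into `new Derived()` lands you in field initializers before any constructor body, which is the maze the reader described.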
I was honored to be invited to speak at the Portland, ME user group last week. It was a great group of developers, and we had a lively discussion about the new async and await features in C# 5. We went through some of the current thinking on how to leverage async in your applications.
And, I explained how async and await relate to Dr. Who.
Slides can be downloaded here.
Demos can be downloaded here.
Public Service Announcement: I interrupt our regularly scheduled technology content for a discussion of government policy. If that’s of no interest, just skip this post. Regular tech posts will resume shortly.
I spent the early part of this week in Washington, D.C. in meetings with legislative and executive staff on issues relating to the software industry. I’m glad you’re still with me. What our government does (or doesn’t do) affects us. We should pay attention.
ACT Online wrote about the event here. Morgan summarized the focus of the event quite well. Rather than repeat the main concepts, I’ll discuss my own experience from the perspective of a developer and business owner in Michigan.
The software industry is one of the main bright spots in our economy. It’s continuing to grow. The jobs the software industry creates are high-paying jobs. I went to Washington to help ensure that our policy makers understand that they have both a positive and a negative impact on our industry. The software industry is still young, and we are constantly creating new applications and new technology ideas that open up new areas of innovation. This industry has been especially instrumental in Michigan’s recovery. New businesses, like ours, have grown over the past 10 years. Larger companies are making new investments in software. The renewed investment in Detroit is almost completely due to software companies. Public policy affects the environment for new businesses, influences society and education, and creates the regulations that govern our commercial interactions.
Overall, our message is that government has an important role to play by allocating public resources, like spectrum. It also has an important role to play in regulation. We want consumers to feel safe when they interact with software companies, whether those companies are large corporations or small one person startups. Finally, and most importantly, we know that government has an important role to play in creating a society where people have the skills and talent to help the software industry keep growing.
That last item is the most important, in my mind. We had a fantastic meeting Monday morning with Danny Weitzner, President Obama’s Deputy Chief Technology Officer for Internet Policy. (As an aside, when 30 tech people have a meeting at the White House, they all get some impressive bragging rights on FourSquare). One of my key messages there involved education and building the talented workforce we need. There is a lot of discussion around STEM fields, and ensuring that more young people become interested in these fields. Much of that discussion misses the mark. There seems to be an emphasis on training for a specific job (Java programmer, iOS programmer) rather than careers (software application development). Our software programs are created by teams with diverse skills: artists, designers, User Experience experts, developers and more. The game industry includes script writers, musicians and engineers. While we need people with specific skills, we need creative people that can come up with new ideas and implement them even more. Changes to our education system must address math and science. More than that, it must create life-long learners that have the breadth of knowledge and the creative skills to create the next Instagram, Facebook, Google or Microsoft. When we hire people, we ask questions that are outside the realm of simple coding questions. We want to know if you can think about problems, come up with new ideas, make new products that people haven’t even thought about wanting. I was really excited to see Mr. Weitzner latch onto that concern.
Overall, it was a great trip, and a great way to take my perspective on our industry to our governmental leaders.
This post doesn't have much code, but there's a few important points to remember for working with WinRT apps and application suspend and activation.
When your application starts, it becomes the active app. Your application will receive events that indicate your application is being started, loaded and activated.
If you've used Windows 8, you've probably noticed that you don't have any explicit way to terminate an application. (Yes, you can close an app, using the swipe action from the top to the bottom, but that doesn't actually stop the application from running.)
The environment will close your application if there's memory or CPU pressure on the machine. First, Windows will suspend your application if the system needs more resources. Your application will receive a Suspend event, will have 5 seconds to store its state, and then the system will suspend it.
If the system still needs more resources, the system will terminate suspended applications to free up more resources. Since your application was already suspended, your application won't get a new event for termination; rather the system will simply unload your application.
Your users have expectations around suspending, resuming, terminating, and restarting your app. When the system suspends your application, you must save state so that you can resume exactly where the user left off. You need to respond to those events, saving and restoring state correctly.
First, suspend: suspend is easy. Just save your state. Do it async, and don't take more than a few seconds. App storage will back up your state to the user’s Live account storage. It will automatically be visible across Win8 machines for that user.
Next, resume: you probably don't need to do anything here. Your app was still in memory, and can simply be resumed. However, some applications with time-sensitive data (think news or weather apps) should look at the elapsed time and consider whether they should update content. In addition, you may store state in the cloud. When your app resumes, you should consider that you may need to update the content because the user worked in your app on another machine (think EverNote). Once again, remember that the storage will be available across Win8 machines.
Finally, consider Activated: you will get this event when your application starts. One of the properties in the event arguments is the previous execution state. If this is the enum value Windows.ApplicationModel.Activation.ApplicationExecutionState.Terminated, that means your application ran previously, saved state on suspend, and was then terminated by the system to free resources. You should restore state, following the same guidance I just gave for resume.
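As a sketch of how those pieces fit together in an app's App.xaml.cs (the SaveStateAsync and RestoreStateAsync helpers are placeholders for your own persistence code, not framework APIs):

```csharp
using Windows.ApplicationModel;
using Windows.ApplicationModel.Activation;

protected override async void OnLaunched(LaunchActivatedEventArgs args)
{
    if (args.PreviousExecutionState == ApplicationExecutionState.Terminated)
    {
        // The app was suspended, then terminated by the system.
        // Restore the state saved in OnSuspending before showing UI.
        await RestoreStateAsync();
    }
    // ... normal frame and window setup ...
}

private async void OnSuspending(object sender, SuspendingEventArgs e)
{
    // Take a deferral so the async save completes before the
    // system suspends the app; remember the 5 second budget.
    var deferral = e.SuspendingOperation.GetDeferral();
    await SaveStateAsync();
    deferral.Complete();
}
```

The deferral is the important detail: without it, the system may suspend your app before your async save finishes.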
Of course, if you've got to code these capabilities, you need to be able to test and debug these scenarios. Visual Studio 11 gives you the capabilities to do that. Unfortunately, the necessary commands are a bit hidden. It took me several minutes to find and turn on the right commands. One of the new toolbars in the VS 11 beta is the “Debug Location” toolbar. You need to customize your toolbar and turn on this feature; it’s not on by default. The “Debug Location” toolbar (shown below) contains three new commands to suspend, resume, and suspend-then-terminate the app being debugged:
Once you've turned on that toolbar, simply debug your app. You'll have buttons on the toolbar to suspend your app, or suspend and terminate your app. You can walk through the code that handles these features and ensure that your users will get the proper behavior.
This is another in my post series on WinRT programming for C# developers. For review, I’m blogging my notes on running and modifying each of the samples from this page: http://code.msdn.microsoft.com/windowsapps/Windows-8-Modern-Style-App-Samples
The next sample provides an overview of WinRT data binding. This sample actually has seven different scenarios; it's one of the biggest samples so far.
However, I'm only going to cover this sample briefly. TL;DR version: if you've used data binding in WPF or Silverlight, it is very familiar.
For those interested in more:
You can support one-way, two-way, and one-time bindings:
You can create converters, similar to those in WPF or Silverlight:
Other scenarios in this sample show how to respond to data source changes (it looks like INotifyPropertyChanged), bind to elements in a collection (just like in WPF or Silverlight), bind to color properties (just like WPF and Silverlight), modify collections (like WPF and Silverlight), and navigate collections (like in WPF or Silverlight).
In short, it's really familiar code, if you've done anything with WPF or Silverlight.
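For instance, here's a minimal converter sketch (a hypothetical upper-casing converter; note that the WinRT signature passes a language string where WPF passes a CultureInfo):

```csharp
using System;
using Windows.UI.Xaml.Data;

public class UpperCaseConverter : IValueConverter
{
    public object Convert(object value, Type targetType,
        object parameter, string language)
    {
        return value == null ? null : value.ToString().ToUpper();
    }

    public object ConvertBack(object value, Type targetType,
        object parameter, string language)
    {
        // This converter is one-way only.
        throw new NotImplementedException();
    }
}
```

You'd reference it in XAML as a resource and set it on a binding's Converter property, exactly as you would in WPF or Silverlight.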
All of these projects are Open Source (using the Creative Commons license for content, and the MIT license for code). If you would like to contribute, visit our GitHub Repository. Or, if you have questions, comments, or ideas for improvement, please create an issue for us.