We released our first Windows Store app earlier this week: A calculator app that supports multiple skins.
We’re concentrating on art and user experience to make a better product for our customers. The first skin is a steampunk skin. It’s a novel look at a simple application:
The second skin is a kids’ calculator. Note that this skin has fewer functions than the steampunk skin; we wanted to make an app you’d be happy to let small children use to check their homework:
We also added history and sharing features. If you use the calculator in snap mode or portrait mode, the screen shows 5 lines of display instead of 1. You can also share from this app, putting the entire calculation history as text into an email message or a document.
We’re working on other skins and more functions. Please use the comments to let me know what you’d want to see.
I was honored to be invited to speak at the Portland, ME user group last week. It was a great group of developers, and we had a lively discussion about the new async and await features in C# 5. We went through some of the current thinking on how to leverage async in your applications.
And, I explained how async and await relate to Dr. Who.
Slides can be downloaded here.
Demos can be downloaded here.
I received a great question that relates to using exceptions to indicate contract failures. I thought it would be of general interest, so I am sharing the answer and discussion here:
I am reading your book More Effective C# and just finished Ch3, Item25, Use Exceptions to Report Method Contract Failures. It all makes good sense but I'm left wondering what to do in the case of a Data Access Object. Supposing I have the following class:
public class OrderDao
{
    public Order GetOrder(int id) { /* Query database */ }
    public bool OrderExists(int id) { /* Query database */ }
}
What happens if the order is not in the database? Should the GetOrder method return null or throw an exception? We can call the OrderExists method before calling GetOrder but then that would involve up to two database queries, which could be more expensive than just calling GetOrder and handling the exception. What do you think is the best strategy for this?
He correctly realizes that he does not want to make two round trips to the database. Of course, you don’t want to use exceptions as a general flow control mechanism either.
I don’t have enough context to give a definitive answer on this sample. However, that gives me the chance to discuss why you’d make different design decisions.
It may be that this class is correct exactly as shown. Suppose the data model is such that any code searching for an Order does so through some other table, and a missing ID would indicate a data integrity error. If that is the scenario for this application, the class above has the correct behavior: a query for an order whose ID does not exist is a data integrity error. When you encounter a data integrity error, you almost certainly want to stop processing further requests. Something should trigger a full check and recovery of the database.
Of course, because you asked the question, this likely isn’t your scenario. It’s easy to come up with one that requires a different contract. Suppose your application is a customer service app. Customers call and give you their order number. A customer service rep keys in the order number, and your code queries the database to find that order. There are lots of ways that can fail without being at all exceptional: a customer may give the wrong ID, may hear it wrong, the rep may type it wrong, and so on.
The answer is to change the contract so that one method will check the database and either correctly return the record or tell you that it doesn’t exist. Suppose that instead of GetOrder() you followed the LINQ convention and wrote GetOrderOrDefault(). Now your class (in pseudocode) looks like this:
public class OrderDao
{
    public Order GetOrderOrDefault(int id)
    {
        Order order = null;
        // Query database.
        // If the order is found, set order = returned value;
        // else order remains null.
        return order;
    }

    public bool OrderExists(int id) { /* Query database */ }
}
You are still making only one database query, and the method returns the answer when the record is found while still fulfilling its contract when it is not.
What’s important about this answer is that you, as a class designer, create the contracts your class adheres to. You can write those contracts in such a way that under any reasonable condition, your methods can fulfill their contracts. In this instance, the important concern is that you do not want your users paying for extra database trips. You need to create a contract that enables your users to call your methods in the way they want, and still performs the way you want internally.
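A minimal sketch of that contract, with an in-memory dictionary standing in for the database (the storage and test values here are my own illustration, not from the original question):

```csharp
using System;
using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }
}

public class OrderDao
{
    // An in-memory dictionary stands in for the database here;
    // in real code this would be a single query against the Orders table.
    private readonly Dictionary<int, Order> store = new Dictionary<int, Order>
    {
        { 42, new Order { Id = 42 } }
    };

    // One round trip: returns the order, or null when it does not exist.
    // "Not found" is part of the contract, not an exceptional condition.
    public Order GetOrderOrDefault(int id)
    {
        Order order;
        store.TryGetValue(id, out order);
        return order;
    }
}

public static class Program
{
    public static void Main()
    {
        var dao = new OrderDao();
        Console.WriteLine(dao.GetOrderOrDefault(42) != null); // True
        Console.WriteLine(dao.GetOrderOrDefault(7) == null);  // True
    }
}
```

The caller makes one query and handles the missing-record case with an ordinary null check rather than an exception handler.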
I spent all day yesterday working with VS2010 RC. (MSDN Subscribers could download late on Monday. It becomes public today).
First impression: It is much faster, and more stable than the beta 2 build.
Caveats: It’s only been one day. If I were making a ship decision, I’d say VS2010 RC needs a lot more soak time.
I put VS2010 RC on the PDC Laptop. (The Windows Rating is a 3.2, due to the Windows Aero score). On that machine, the VS2010 Beta was rather painful. It worked, but I could type noticeably faster than VS2010 could display characters. Even worse, slewing characters would cause the display to fall more than a screen behind in drawing.
None of those effects occur in the VS2010 RC. (The RC uses hardware rendering on this machine; see Jason Zander’s discussion of perf changes between Beta 2 and the RC for more information.) I spent all day coding on the PDC laptop, and it kept up during the entire day. Scrolling, slewing keys, Intellisense: it’s downright zippy.
As an extra added bonus: The Add Reference dialog appears in a heartbeat.
As of now, I am doing all development in VS2010 RC.
This is going to be fun. It’s a bit of LINQ, a bit of academic Computer Science, and a bit of meteorology.
Euler Problem 14 concerns sequences referred to as hailstone numbers.
Hailstone sequences are generated by applying one of two functions to a number in order to produce the next number. In this case, the rules are:
n –> n / 2 (n is even)
n –> 3n + 1 (n is odd)
When n is even, the next number in the sequence is smaller; when n is odd, the next number is larger. It is believed (though unproven; this is the Collatz conjecture) that every starting number produces a sequence that oscillates for some time and eventually converges to a minimum. (In this case, that minimum is 1.)
These sequences are called hailstone numbers because they (very simplistically) act like hail in a storm: oscillating up and down in a thunderhead before eventually falling to earth.
This particular problem asks you to find the starting number, under one million, that produces the longest chain.
The brute-force method will take your computer a very long time. The problem asks for the longest sequence, and you’ve got a million of them to compute.
Stack size is a related problem. You could write a method that computes the next value, and then finds the sequence size recursively:
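A naive recursive sketch (my reconstruction; the names are my own) might look like this. Note that some chains run past 500 steps, which is why stack depth matters:

```csharp
using System;

public static class Hailstone
{
    // Next value in the sequence: n / 2 when n is even, 3n + 1 when n is odd.
    // long, not int: intermediate values overflow int for some starting numbers.
    public static long Next(long n)
    {
        return (n % 2 == 0) ? n / 2 : 3 * n + 1;
    }

    // Recursive chain length, counting the starting number itself.
    public static int SequenceLength(long n)
    {
        if (n == 1)
            return 1;
        return 1 + SequenceLength(Next(n));
    }

    public static void Main()
    {
        // 13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1: ten terms.
        Console.WriteLine(SequenceLength(13)); // 10
    }
}
```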
A LINQ query gives you the answer:
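A sketch of such a query (my reconstruction, with my own method names), pairing each starting number with its chain length and keeping the longest:

```csharp
using System;
using System.Linq;

public static class Euler14
{
    static long Next(long n) { return (n % 2 == 0) ? n / 2 : 3 * n + 1; }

    static int SequenceLength(long n)
    {
        return (n == 1) ? 1 : 1 + SequenceLength(Next(n));
    }

    public static void Main()
    {
        // Find the starting number under one million with the longest chain.
        var answer = (from n in Enumerable.Range(1, 999999)
                      select new { Start = n, Length = SequenceLength((long)n) })
                     .Aggregate((best, next) => next.Length > best.Length ? next : best);
        Console.WriteLine(answer.Start); // 837799
    }
}
```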
This works, and does give the correct answer.
But I’m not really satisfied, because it takes more than 15 seconds (on my PDC laptop) to finish.
There are two steps to making this faster. The final version uses a technique called memoization, which enables you to avoid computing the same result more than once. Pure functions have a useful property: their output depends on their inputs and nothing else. Therefore, once you’ve computed the sequence length for, say, 64, you should never need to compute it again. It’s always going to be the same.
Memoization means storing the result of a computation and returning the stored result rather than doing the work again. This can provide significant savings in a recursive algorithm like this one. For example, memoizing the result for 64 (which is 7) means saving the computations for 64, 32, 16, 8, 4, 2, and 1. Memoizing the results for longer sequences means correspondingly larger savings.
You could modify the Generate method to provide storage for previously computed results. Here’s how you would do that:
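A sketch of that one-off approach (the names are mine, not the post’s original code), with a private dictionary caching each chain length:

```csharp
using System;
using System.Collections.Generic;

public static class Euler14Cached
{
    // Cache of previously computed chain lengths; 1 seeds the recursion.
    private static readonly Dictionary<long, int> lengths =
        new Dictionary<long, int> { { 1, 1 } };

    public static int SequenceLength(long n)
    {
        int cached;
        if (lengths.TryGetValue(n, out cached))
            return cached;   // already computed: no more work to do

        long next = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        int result = 1 + SequenceLength(next);
        lengths[n] = result; // remember it for the next caller
        return result;
    }

    public static void Main()
    {
        Console.WriteLine(SequenceLength(13)); // 10
        Console.WriteLine(SequenceLength(27)); // 112
    }
}
```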
But, what’s the fun in that? There’s no reuse. It’s a one off solution.
I want to write a generic Memoize function that lets me memoize any function with one variable. Wes Dyer’s post explains this technique in detail. Memoize is a generic method that executes a function, abstracting away the types for both the parameter type and the result type:
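A version in the spirit of Wes Dyer’s post (a sketch; the dictionary name and the demo function are my own):

```csharp
using System;
using System.Collections.Generic;

public static class Memoizer
{
    // Wraps func in a closure that caches each result, keyed by the argument.
    public static Func<A, R> Memoize<A, R>(this Func<A, R> func)
    {
        var previousResults = new Dictionary<A, R>();
        return a =>
        {
            R result;
            if (!previousResults.TryGetValue(a, out result))
            {
                result = func(a);
                previousResults.Add(a, result);
            }
            return result;
        };
    }
}

public static class Program
{
    public static void Main()
    {
        int calls = 0;
        Func<int, int> square = x => { calls++; return x * x; };
        var fast = square.Memoize();
        Console.WriteLine(fast(12)); // 144, computed
        Console.WriteLine(fast(12)); // 144, from the cache
        Console.WriteLine(calls);    // 1: the wrapped function ran once
    }
}
```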
The first question you may have is how the previous-results dictionary actually works. It’s a local variable, not static storage. How can it possibly live beyond the scope of the method?
Well, that’s the magic of a closure. Memoize doesn’t return a value; it returns a func that enables you to find the value later. That func contains the dictionary. I go into this technique in Items 33 and 40 in More Effective C#.
In order to use this, you need to change Generate() from a regular method into a lambda expression (even if it is a multi-line lambda):
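A sketch of that lambda form, with a local Memoize included so the fragment stands alone (the names are my own):

```csharp
using System;
using System.Collections.Generic;

public static class Euler14Memoized
{
    static Func<A, R> Memoize<A, R>(Func<A, R> func)
    {
        var previousResults = new Dictionary<A, R>();
        return a =>
        {
            R result;
            if (!previousResults.TryGetValue(a, out result))
            {
                result = func(a);
                previousResults.Add(a, result);
            }
            return result;
        };
    }

    public static void Main()
    {
        // Declared and set to null first, so the lambda body can call it
        // recursively; otherwise the compiler reports an unassigned local.
        Func<long, int> GenerateSequenceSize = null;
        GenerateSequenceSize = Memoize<long, int>(n =>
        {
            if (n == 1)
                return 1;
            long next = (n % 2 == 0) ? n / 2 : 3 * n + 1;
            return 1 + GenerateSequenceSize(next);
        });

        Console.WriteLine(GenerateSequenceSize(13)); // 10
    }
}
```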
Now we have the generate function in a form we can memoize. The only trick is that you have to set GenerateSequenceSize to null before assigning the lambda to it. Otherwise, the compiler complains about using an unassigned variable where the lambda body refers to itself recursively.
You can extend the Memoize function to methods with more than one input, but for now, I’ll leave that as an exercise for the reader.
The most important part is that the C# column is in great hands. Patrick is excellent at explaining concepts, and he’s going to bring a wealth of new ideas and concepts to the magazine. I feel much better walking away from the column knowing it is in such good hands.
It was hard to walk away after so much time with VSM and its predecessors. But it was time. I’ve written on so many C# topics that I was having trouble coming up with ideas that felt new. I felt like I was covering the same ground over and over.
That got me thinking about how long it had been, and what a long, strange trip it has been.
I started as the original C++ Fundamentals columnist for Visual C++ Developers Journal in the 1990s. That was a new magazine published by TPD for the MFC / C++ developer. We were covering techniques to bridge the divide between 16-bit and 32-bit applications. There was this amazing new OS, code-named ‘Chicago’, on the horizon.
A few years went by. I kept covering more topics related to C++ and windows development. More MFC, ATL, language enhancements and how the Microsoft C++ compiler tracked (or didn’t track) the C++ standard.
At some point in this period, Visual C++ Developers Journal was bought by Fawcette. TPD stopped being involved.
The turn of the century brought more changes. This amazing new .NET platform and the C# language showed up, and I started writing about C# and .NET fundamentals instead of C++. The change was gradual: in the beginning, I was writing two C++ columns for every one C# column. Over time, that ratio kept shifting.
Next came some initiatives to capture more of the web audience. Several of the columnists started writing a column online (at the rate of one a week). These weren’t all tech columns; some were more opinion, or ‘tip’, columns. That felt like quite a grind: I was constantly under deadline pressure to come up with a new idea every week. It soon got harder. The readers liked the ‘how to’ columns most, and I was asked to replace the ‘tips’ and ‘opinion’ entries with more regular columns.
I took a break and wrote a book (Effective C#, the first edition).
I wrote for a while, and then there were more changes. Visual C++ Developers Journal merged with Visual Basic Programmers Journal to become Visual Studio Magazine. This was a challenging time to write for the magazine. The audiences for VCDJ and VBPJ were very different, and, most importantly, both were afraid of losing content to the ‘other’ audience. C# aficionados were concerned they’d lose coverage to the larger VB market. VB developers felt the same fear of losing coverage to the newer, and perceived-to-be-cooler, C#. That was a tough era. The editorial staff did a tremendous job navigating a very tough set of market perceptions.
I stayed on break and wrote a second book (More Effective C#).
Then, I was approached to come back and write the C# Corner column for Visual Studio Magazine. Having finished the book, I was ready to start writing again. It was fun for a while. I was, and still am, impressed by the energy that 1105 Media is bringing to the publication. I had a blast over the past two years writing for a reenergized Visual Studio Magazine.
Then, while still trying to write the column, I updated Effective C#, covering C# 4.0, and other recent enhancements to the language.
I was running out of ideas for the column. Visual Studio Magazine deserves better content. That’s why I worked with Michael Desmond and the editorial team at VSM to turn over the column to Patrick. I’m glad it’s in good hands.
So what’s next?
I’m now writing content for the C# Developer Center. The cadence will be roughly once a month, and I’ll be writing on current and upcoming language features in the C# language.
As promised, here are the slides and demos from my CodeMash Talk: Going Dynamic in C#.
Please note that the demos are compatible with VS2010 Beta 2. They will not load (or run) on VS2008. I believe they will be compatible with future VS2010 builds, but predicting the future is very hard.
Thanks to the CodeMash committee for letting me speak again. It’s a great experience, and I’m proud to be a small part of the conference.
In my last post, I wrote about the new items in the second edition of Effective C#, and those items that were removed to make room for the new items. Now, let’s discuss what happened to the items that I carried over from the previous edition.
Every item received a rather significant update for this new version. However, you won’t see that from looking at the table of contents on InformIT, because the advice is very similar to the earlier edition. The way you implement the advice, though, has changed significantly. As I mentioned in the last post, the C# language has gained many significant enhancements in the years since the first edition was published. We have many different tools at our disposal to express our designs. That means we have many new techniques we can use to achieve the same goals: better software, and clearly communicating our designs to other developers.
In the new edition, I re-wrote all the samples to use the latest version of C#, taking advantage of all the features in C# 4.0. That does not mean I use C# 4.0 syntax in every single item. It does mean that I thought about how to express an idea using the full palette of features available in C# 4. In many cases, that meant using features available in C# 3, or even C# 2. In other cases, the samples will include some of the latest C# features. In all cases, I updated the justifications for the advice, and how to implement the goals, in the context of C# 4.0.
Even if you have no experience with earlier versions of C#, you can use the advice in the second edition. Furthermore, you can use much of the advice even if you have not updated your environment to C# 4.0, and .NET 4.0.
The 2nd edition of Effective C# is now available on Rough Cuts. With that, I’ve started to get questions via email about how I decided which items to add, and which items to drop.
It should be clear from the additional content what’s new: I added coverage of the significant C#4.0 features like dynamic invocation and named / optional parameters. New library additions like PLINQ are also covered.
It’s much harder to see how I decided which items to drop. There are 15 completely new items in the 2nd edition, so that meant finding 15 items to drop. (Several other items have the same title, but were significantly rewritten – that will be the subject of another blog post.) Here’s how I decided which items to remove:
Items that are less important now. A number of the items in the first edition discussed techniques that were much more important before generics were available: boxing and unboxing, the 1.x collection classes, and the DataSet class. All of those techniques and libraries were far more useful in C# 1.x than in the current .NET universe.
Items that have become common practice. C# has been around for almost a decade, and the community is much more mature than it was in 2004, when Effective C# was published. Some of the items are simply part of the conventional wisdom now.
Items that assumed your last language was C++ or Java. The early adopters of C# were developers that came from C++ (on the Windows platform) along with some developers that came from Java. That’s no longer true. College grads (since 2002 or so) are using C# for their first professional programming experience. Others are coming from VB, Ruby, Python, or PHP. (I’m not claiming that C# is grabbing market share from all those languages; the migration happens in all directions.) It just wasn’t right to assume that every C# developer has C++ or Java experience anymore.
The poster child for dropped items is the original Item 41, where I advocated using DataSets rather than implementing IBindingList yourself. I didn’t rewrite this item because the obvious answer now is to use BindingList&lt;T&gt; when you need the IBindingList capability. If you were using DataSets for some other reason, pick another generic collection type; there are many, and the options grew again in .NET 4.0. Those generic collections have better APIs (the type parameter means the compiler ensures type correctness) and better performance (boxing / unboxing doesn’t apply). It’s not often that it’s trivial to get both better performance and a better chance at correctness. Even in the 1.x days, I didn’t advocate using DataSets as part of a service API. That was, and still is, a painful choice.
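For illustration (my example, not one from the book), BindingList&lt;T&gt; gives you the IBindingList behavior with compile-time type safety:

```csharp
using System;
using System.ComponentModel;

public class Order
{
    public int Id { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // BindingList<T> implements IBindingList. The type parameter means
        // the compiler rejects anything that isn't an Order, and no boxing occurs.
        var orders = new BindingList<Order>();
        orders.ListChanged += (s, e) => Console.WriteLine(e.ListChangedType);
        orders.Add(new Order { Id = 1 }); // raises ListChanged: ItemAdded
        Console.WriteLine(orders.Count);
    }
}
```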
There have also been many enhancements in the .NET Framework that provide better data solutions. LINQ, along with the query syntax pattern (see Item 36 in More Effective C#), gives you much better ways to work with data in .NET 4.0; Chapters 4 and 5 of More Effective C# discuss these important techniques. The Entity Framework has matured, and is a better way to handle data transfer between layers and across machine boundaries. (I still need to look more closely at the latest EF; I know some of the changes, but not all.)
All in all, I’m happy that the second edition did preserve quite a bit of the original advice from the first edition. The C# language has grown, and there are better tools in the C# toolset. It was clearly time for an update that represented the changes in the C# language, the .NET framework, and the C# community at large.
I received the following question (paraphrased) this week:
“You are an advocate of converting imperative style programming into re-usable frameworks as you did in your November VS Magazine column. I agree from a developer’s perspective. But, one thought that came to mind is the imperative style is quicker to grasp from a maintainer’s perspective because the code is in one place in contrast to several little functions that need to be pieced together. Is your thinking that once the framework of functions is setup and re-used, then a maintainer only has to learn it once and subsequently the idiom is quickly recognized? Effectively, there is a high cost the first time the idiom is encountered but then the future cost is lower?”
Actually, I think the imperative code seems easier to maintain because we’ve all seen so much more of it. That’s probably even more true for maintenance developers. Only some of our universities discuss declarative programming languages, and there’s even less declarative programming in products and enterprise projects these days. Declarative programming is simply unfamiliar to many of the professional developers writing code today.
In our regular day-to-day jobs, it’s not that what we do is incredibly hard. What makes some tasks feel hard is that we are unfamiliar with the tools or the task. Declarative programming is one of those tools that most developers avoid because it is so unfamiliar. That’s a shame (in my opinion), because forcing yourself to think in different ways makes you a better software engineer. Furthermore, the problem you describe will go away: the more exposure developers have to declarative programming, the easier it will be to understand and maintain.
On your specific concern, I honestly think that the declarative style makes it clearer to see exactly what each function does. Those smaller, simpler functions are easier to understand. In my opinion, that makes it easier to understand larger algorithms composed of these smaller functions.
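As an illustration of that point (my own example, not from the original column), compare an imperative loop with the same logic composed from small query operators:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Program
{
    public static void Main()
    {
        var numbers = new[] { 5, 12, 7, 20, 3 };

        // Imperative: the "how" and the "what" are interleaved in one place.
        var squaredEvens = new List<int>();
        foreach (var n in numbers)
            if (n % 2 == 0)
                squaredEvens.Add(n * n);
        squaredEvens.Sort();

        // Declarative: each small step has one obvious job, and the
        // composition reads as a description of the result.
        var squaredEvens2 = (from n in numbers
                             where n % 2 == 0
                             orderby n * n
                             select n * n).ToList();

        Console.WriteLine(string.Join(",", squaredEvens2)); // 144,400
    }
}
```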
Kirill Osenkov provides some details on this feature, which was added in Beta 2 of C# 4.0. I must admit a certain ambivalence toward it. On the one hand, this feature will make working with Office COM APIs even easier (as Kirill shows). On the other hand, I fear a rising chorus asking for the ability to define indexed properties in C#. That would be a mistake. It would end up muddying the waters of which objects ‘own’ other objects and properties.
An expression like foo.SomeProperty[3] could represent two very different object models. Is SomeProperty a type that implements an indexer? Or is SomeProperty a property of foo that requires an index? That starts to matter quite a bit when you think about LINQ queries and other ways to access whatever data is behind SomeProperty.
I’m close to thinking that I’d have preferred this change wasn’t made. It’s a bit troubling to think that the language accepted changes solely to support interop with COM style APIs. But as long as the indexed properties remain in a box, and are only used for that particular scenario, it won’t be a serious concern.
You probably noticed that Visual Studio 2010 Beta 2 was released for download today (for MSDN subscribers). The general release will be Wednesday (Oct 21).
I’ve had limited time (obviously) to work with this, but I’m already impressed. The WPF editor has shown lots of progress. It’s much more responsive than in earlier beta builds. The language features (at least for C#) are coming along well.
That bodes well for the announced release date of March 22, 2010. Yes, they’ve placed a stake in the ground, and this release has an official launch date.
In addition, Microsoft made some announcements about MSDN licensing and pricing. Microsoft has the full announcement here. There are a couple of interesting items that are very important in this announcement:
1. Every Visual Studio Premium license includes Team Foundation Server with one CAL. That means if your team has VS Premium, you can use TFS right out of the box.
2. Windows Azure “Development and Test Use”. Visual Studio Premium (and above) will include compute hours and data storage in Windows Azure for test purposes. (UPDATE: The full terms are here.) VS2010 with Premium MSDN will initially get 750 hours of compute time per month, 10 gigabytes of storage, and more.
That promises to be a very exciting 2010!