Bill Blogs in C# -- Azure

Created: 11/18/2015 6:17:00 PM

One of the major challenges we faced with the AllReady app was building a custom kudu deployment script. There is incredible power in this system, but it takes a bit of research, and a bit of help to get all the pieces working.

Let’s start with the simple goal: to make testing easier, we wanted to deploy to a live site automatically whenever we merged a pull request into master.

Azure supports this by default, including for ASP.NET 5 sites. Using Kudu was an obvious choice.

Adding Web Jobs

Life got complicated when we added a web job to the project. We added a web job because one of the actions the AllReady application performs is sending messages to all registered volunteers. We don’t want to tie up a thread in the ASP.NET worker process for the duration of sending messages through a third-party service. That could take quite some time.

Instead, we want to queue up the emails and send them from a separate web job, so the site remains responsive.
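
As an illustration of that pattern (a minimal sketch, not the actual AllReady code), a continuous web job built with the WebJobs SDK could look like this. The "notifications" queue name and message contents are hypothetical:

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Runs whenever a message lands on the (hypothetical)
    // "notifications" storage queue.
    public static void ProcessQueueMessage(
        [QueueTrigger("notifications")] string message, TextWriter log)
    {
        // Send the email through the third-party service here.
        log.WriteLine("Sending notification: " + message);
    }
}

class Program
{
    static void Main()
    {
        // Continuous web job host: polls the queue and dispatches
        // messages to the triggered function above.
        var host = new JobHost();
        host.RunAndBlock();
    }
}

The web application only has to drop a message on the queue; the web job picks it up on its own schedule, so no request thread waits on the email service.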

Complicating the Deployment

That’s where deployment got complicated. You see, when we started, web jobs weren’t yet supported under ASP.NET 5. The web job builds using ASP.NET 4.6 and Visual Studio 2015. We also have one interface assembly that contains the types shared between the web application (ASP.NET 5) and the web job (ASP.NET 4.6).

So our build now includes:

1. Build the assemblies that are part of the web application using DNX.

2. Build the assemblies that are part of the web job using MSBuild. (Note that this means building one assembly twice)

3. Deploy the web site.

4. Deploy the web job.

Those extra steps require creating a custom deployment script.

Creating the Custom Deployment Script

Here are the steps we needed for adding our own custom Kudu script. There were several resources that helped create this. First, this page explains the process in general. It is a shortened version of this blog series (link is to part 1).

The first task was to create a custom build script that performed exactly the same process that the standard build script performs. I downloaded the Azure tools for deployment, and generated a deployment.cmd that mirrored the standard process.

You need to have node installed so you can run the azure-cli tool. Then, install the azure-cli:

npm install azure-cli -g

Then, run the azure CLI to generate the script. In my case, that was:

azure site deploymentscript --aspWAP allreadyApp/Web-App/AllReady/AllReady.xproj -s allready.sln

Notice that I’m directing the azure-cli to generate a script based on my .xproj file. Note, however, that this does not build the .csproj for the web jobs.

Before modifying the script, I wanted to verify that the generated script worked. It’s a good thing I did, because the default generated script did not work right away. The script generator assumes that the root of your GitHub repository is the directory where your .sln file lives. That’s not true for AllReady: we have a docs directory and a code directory under the root of the repo.

So the first change I had to make was to modify the script to find the solution file in the sub-directory. After that, the script deployed the website (but not the web jobs). Building and deploying the web jobs required three changes. First, I needed to restore all the NuGet packages for the .csproj-style projects:

call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\NotificationsProcessor\packages.config" -SolutionDirectory "%DEPLOYMENT_SOURCE%" -source https://www.nuget.org/api/v2/

Getting this right had me stuck for a long time. In fact, I needed some product team support from David Fowler during our coding event. The version of NuGet running in Azure needs to use the V2 feed when it restores packages for a .csproj-based project. *Huge* thanks to David for helping us find that.

Next, we needed to build the .csproj-based projects:

call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\AllReady.Models\AllReady.Models.csproj"
IF !ERRORLEVEL! NEQ 0 goto error
call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\NotificationsProcessor\NotificationsProcessor.csproj"
IF !ERRORLEVEL! NEQ 0 goto error

The final step was to deploy the web jobs, which meant copying them into the correct location. This step happens after the build, and before the Kudu Sync process:

mkdir "%DEPLOYMENT_TEMP%\wwwroot\app_data\jobs\continuous\notificationsprocessor\"
call xcopy /S "%DEPLOYMENT_SOURCE%\NotificationsProcessor\bin\debug" "%DEPLOYMENT_TEMP%\wwwroot\app_data\jobs\continuous\notificationsprocessor\"

The deployment script and the .deployment config are in our repository, so if you want to explore, check them out. Our repository is here: http://www.github.com/htbox/allready. And if you want to help, look through the issues and send us a pull request.

Created: 4/8/2014 12:21:12 PM

Let’s start with the biggest story:

A public Roslyn Preview that’s Open Source.

The next versions of C# and VB.NET have progressed far enough that there’s now a new public preview. The Roslyn compilers support all the existing features (C# 5, VB.NET 12). In fact, they have even added some prototype features for the proposed C# 6 and VB.NET vNext releases. (VB.NET would be at version 13, but I haven’t seen a published version number.)

Best of all, you can see the development and you can participate in the ongoing language design discussions. The Roslyn compilers are Open Source. You can view them here. I’ll blog more about the compilers and the new and proposed language features in the coming months. 

And TypeScript, don’t forget TypeScript.

In more language news, TypeScript 1.0 has been released. Its development has been public and Open Source for some time. It’s integrated into Visual Studio 2013 Update 2, which is available in Release Candidate form. I’ve been working with TypeScript for a while, and I’ll be covering it more here. In particular, this discussion on the null propagating operator is very lively.

In addition, there’s now the new .NET Foundation, which is the curator of several .NET related Open Source projects. You can see a number of .NET libraries and components in the foundation already, and I only expect that trend to continue. The .NET Foundation is a great example of how Microsoft is changing. Look at the current board members of the .NET Foundation. The board already includes prominent community members that do not work for Microsoft. I expect that to continue and grow as time goes on.

Both the Roslyn compilers and TypeScript are already part of the .NET Foundation’s assets.

Visual Studio Online

The Humanitarian Toolbox has been using the preview versions of Visual Studio Online. It’s a tremendous step forward for team collaboration in the cloud. The tools help you gather more information about your application, and Application Insights provides a nice extra. Quick edits in the cloud, automatic deployments, insights, and more. Oh, and it’s integrated with your local Visual Studio experience.

Azure Updates

This probably deserves several blog posts on its own. It’s a full-time job just to keep up with all the new tools released by the Azure team. There’s a new management portal, new mobile services, new resource management, Java VMs in Azure, and more.

I need to dive in more, because it’s hard to keep up with what Guthrie’s team produces.


Windows 8.1 update

Most of these new features are for end users, not developers. They do reflect a lot of feedback, and they represent a good direction. For enterprise developers there are a couple of great features. The enterprise side-loading story is much better: modern apps are much more economical for enterprises to deploy than with previous 8.x releases. The IE Enterprise Mode will also help enterprises deal with legacy web-based applications. However, I would still recommend that most enterprises consider any application that needs Enterprise Mode to be a technology risk.

Universal Apps, and everything that comes with them

Microsoft has been talking about unifying the Windows Phone, Windows 8, and Windows Server platforms for some time. That’s now gotten even more ambitious. Universal apps can also include iOS and Android devices, using the Xamarin tools. The idea that a common API set could be available across Windows 8, Windows Phone, iPhone, iPad, Android phones, and Android tablets is really innovative. Also, these projects work across different IDEs: Visual Studio, Xamarin Studio, and (where someone has written the plug-ins) Eclipse.

It means that C#, along with JavaScript, is a language you can use to develop applications across all the popular computing platforms today. .NET and C# truly are a cross-platform development environment.

There are more changes that make this cross-device story compelling. Changes in the Microsoft Store make it easier and more advantageous to produce apps for both Windows 8 and Windows Phone. You’ll get more visibility across the stores, and more touch points with your potential customers.

Windows Phone and Cortana

I’m impressed with the potential of Cortana. You’ll see lots of people saying that Cortana is “Siri for Windows Phone”. That’s an oversimplification. Yes, you can talk to Cortana and it will answer you. But what’s really interesting over time is how Cortana integrates with the applications on your phone. Voice recognition and voice interaction is a great step. More interesting is how those capabilities will work across applications when integrated with Cortana. Can my scrum app alert me if I’m too aggressive on deadlines because of other commitments? Could a nutrition app integrate with a fitness app to change calorie allocations because of upcoming events, or days off? There’s a lot of potential. There’s also risk: can those cross-application capabilities be added while respecting users’ privacy? There’s a lot of potential here, and I can’t wait to learn more and dive in to create new, useful applications.

Conclusion: Microsoft is reaching out to developers in a big way

I know this post had less detailed technical content than my typical post. There’s a lot of information that came from //build. I’ll drill into many of the areas for some time.

The big picture of all the announcements and the reveals at //build is this: Microsoft is reaching out to developers again. And not just the developers that have traditionally worked within the Microsoft space. They are reaching out to Open Source developers, mobile developers who concentrate on non-Microsoft platforms, and more. It’s a smart move. They brought out lots of technology that makes sense.

The platform, the languages, and the tools are first rate. I’m glad to see Microsoft reaching out to developers and changing their direction to match today’s developer community.

Created: 8/17/2011 7:33:54 PM

I really enjoy the ASP.NET MVC Route Debugger that Phil Haack wrote. It’s even cooler now that it’s deployed as a NuGet package.

I was having trouble getting error redirection working in an ASP.NET MVC site deployed on Azure.  I could not figure out why I could still get to the yellow screen of death.

I added the route debugger, and the answer was quickly obvious. I’d forgotten an Index method on my ErrorController class. I added that method, redirecting to my error page method, and all was great. I turned off the route debugger in web.config and deployed to Azure.
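
For context, the missing piece was roughly this shape (a minimal sketch; the "General" action and view names are hypothetical, not the site's actual code):

using System.Web.Mvc;

public class ErrorController : Controller
{
    // The missing default action: requests routed to /Error
    // now redirect to the existing error page action.
    public ActionResult Index()
    {
        return RedirectToAction("General");
    }

    // The pre-existing error page action.
    public ActionResult General()
    {
        return View();
    }
}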

It didn’t work at all!

I’d originally written the site using MVC 2, and the route debugger needed a newer version of ASP.NET MVC (which was installed on my developer box). To fix the problem, I needed to deploy the latest version of ASP.NET MVC to Azure along with my site. There are a few posts that explain how you can do this. However, for ASP.NET MVC now, there is an easier way. Right-click on the web site node in Solution Explorer, and select “Add Deployable Dependencies”. You’ll see the following dialog:

[Screenshot: the Add Deployable Dependencies dialog]

Just check both boxes, and the add-in configures a _bin_deployableAssemblies directory with everything you need. I deployed this version to Azure, and everything is fine. I can modify my web.config online and see any route errors, either on the Azure fabric or locally.

When you’re working in Azure, you need to remember that you almost certainly won’t have the same environment you have on your local box. You’ll be missing dependencies, and you need to manage those when you first deploy to Azure.

Created: 2/18/2011 3:17:51 PM

I was lucky enough to be invited to speak at the Windows Azure Boot Camp in Grand Rapids. I presented along with Dennis Burton and Jason Follas. I covered Azure Queues, Azure AppFabric, and the closing session on migration strategies into the cloud. (You can download the materials at the boot camp site.)

I enjoyed the trip. There was a lot of enthusiasm for Azure, and for moving applications into the cloud. I’m thrilled to see our region building momentum behind cloud and Azure adoption.

Here are my key takeaways from the sessions I delivered:

Azure Queues

Azure Queues are conceptually like a queue data structure. Obviously, most developers don’t need a full session to discuss a queue data structure, so there must be more here. In the case of Azure queues, the additions are features for redundancy and persistence, and a protocol to ensure that messages are always processed: never started and then dropped.

Azure queues are persistent, redundant data storage (like blobs and tables). All the instances of your application’s roles can access the logical queue. If a node storing a physical queue is rebooted (for a service upgrade, or some other hardware or software failure), the queue does not lose any of the messages. Other copies are still available, and the redundant node will come back, or migrate to a new VM if the node has a serious hardware failure.

The protocol for processing messages requires you to write some defensive code. Message processing is a three-step process:

  1. Your worker role calls GetMessage() to retrieve a message. This retrieves a message from the queue, and marks it as in-process. That means the queue storage will not hand that message to another worker role.
  2. Your worker role does the work represented by the message.
  3. Your worker role calls DeleteMessage() to remove the message from the queue. This permanently removes the message. It’s done.

This three-step process allows Azure queues to help you ensure that all messages are processed completely. If your worker role fails to finish processing, and doesn’t call DeleteMessage() within the specified timeout, that message moves back from the in-process state to the waiting state. The message will then be processed by another worker role.

Point to remember:  DeleteMessage() must be the last method call you make when you process a message.

Of course, a catastrophic failure isn’t the only reason a queue message might not get completely processed. Your worker role may simply exceed the timeout. In that case, the queue still marks the message as un-processed, and hands it to another worker role. This means it is entirely possible for queue messages to be processed more than once.

Point to remember:  Azure queue messages will be processed at least once.  They may be processed more than once, due to timeouts. Ensure that your message processing code is idempotent. Processing a message twice (or more) must produce the same result as processing a message once.
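
To make the protocol concrete, here’s a minimal sketch of a worker loop using the StorageClient library from the SDK of this era. The development-storage setup and "orders" queue name are hypothetical; a real worker role reads its connection settings from the role configuration:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueWorker
{
    static void Main()
    {
        // Hypothetical setup against development storage.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExist();

        while (true)
        {
            // Step 1: GetMessage marks the message as in-process. It stays
            // invisible to other workers until the visibility timeout expires.
            CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(1));
            if (msg == null)
            {
                Thread.Sleep(1000); // queue is empty; back off briefly
                continue;
            }

            // Step 2: do the work. Keep this idempotent: the same message
            // can be delivered again after a timeout or a failure.
            Console.WriteLine("Processing: " + msg.AsString);

            // Step 3: DeleteMessage is the last call. If the role crashes or
            // times out before this, the message becomes visible again.
            queue.DeleteMessage(msg);
        }
    }
}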

Windows Azure AppFabric

AppFabric can be hard to describe. There are a lot of nuanced features under the AppFabric umbrella.  There are many different ways to use it to produce applications that are a combination of on-premise and in the cloud services. I find this session one of the harder ones to discuss. There’s just so much and so many different scenarios. It’s easy to give this session and leave attendees with spinning heads at all the possibilities.

I finally came up with a quick phrase that, while not strictly accurate, does get your head in the right space:

AppFabric is the Conjunction Junction of Windows Azure: its job is hooking up services, and making them run right.

Of course, that is obviously a simplification. But, it is a useful way to think about AppFabric. If your design calls for services running in different locations, and you want to have those services connect to each other, AppFabric is the right tool. AppFabric also helps with connecting peer-to-peer services. It’s got components and features that help with authentication and authorization.

In general, when different services need to find each other, and the end goal is having those two services speak directly to each other, starting the conversation by having AppFabric connect the two services is the right choice.
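
As one small example of that role, the Service Bus relay lets an on-premises WCF service and a remote client find each other through the cloud. The sketch below shows the general shape; the "yourNamespace" namespace is hypothetical, and I’ve omitted the credential configuration, which has changed between CTP releases:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // The relay address: sb://yourNamespace.servicebus.windows.net/echo
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yourNamespace", "echo");

        // Expose the local service through the relay. Both the service and
        // its clients connect outbound to AppFabric, which introduces them.
        var host = new ServiceHost(typeof(EchoService));
        host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);
        host.Open();

        Console.WriteLine("Listening on the relay. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}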

This is also a tough session, because AppFabric is still in CTP mode, and new features and changes are coming quickly. I did have trouble practicing one of the demos because of the updated AppFabric portal release. I like the new portal much better, but I haven’t found all the features in it yet.

Strategies to migrate applications to Azure

The final session discussed ways to migrate your existing applications and services to Azure. It covered different ways to take large enterprise applications, decompose them into services, and migrate those services where you will get the most return quickly.

The key point: the Azure platform includes many ways to communicate between services on premises and in the cloud. The best way to get your applications into the cloud is to pick the service with the best ROI and move it. Rinse, and repeat with the next component or service. I do think this is one of the most important differentiators between Azure and Amazon’s EC platform. The same migration strategy requires much more work on the EC platform than it does on Azure: you’ll need to build the communications infrastructure that is already part of Azure.

Dennis and I will be involved in two more Azure Boot Camps: April 13-14 in Southfield, MI (near Detroit), and April 20-21 in Downers Grove, IL (near Chicago).

Created: 12/16/2010 4:01:09 AM

If you are a developer, you need to keep your skills up to date. If you’re working in the .NET space, our local Microsoft office is going to help. They’ve just announced Windows Client Development bootcamps, and another tour for the Windows Azure Development bootcamps.

The Windows Azure Bootcamp contains updated content to reflect the 1.3 release of Windows Azure. If you attended the previous bootcamps, there is a wealth of new material to learn. If you’re new to Azure, the bootcamp will give you a head start on growing the skills you need to build applications that run on the Windows Azure platform. See http://www.windowsazurebootcamp.com for a list of dates and locations. I’m honored to be part of the group of people delivering the Windows Azure Bootcamp. I’ll be helping at a few of these events, primarily in Michigan and Illinois.

The Windows Client bootcamp concentrates on development for the Windows client platform. That will include Silverlight 4, Windows 7 features, and IE 9 as a development platform. The Windows bootcamp is a one day event, concentrating on the features you can use to leverage features in the Windows platform to create more compelling applications. See http://www.windowsdevbootcamp.com for cities and dates.

Created: 10/19/2009 5:39:00 PM

You probably noticed that Visual Studio 2010 Beta 2 was released for download today (for MSDN subscribers). The general release will be Wednesday (Oct 21).

I’ve had limited time (obviously) to work with this, but I’m already impressed.  The WPF editor has shown lots of progress. It’s much more responsive than in earlier beta builds. The language features (at least for C#) are coming along well.

That bodes well for the announced release date of March 22, 2010. Yes, they’ve placed a stake in the ground, and this release has an official launch date.

In addition, Microsoft made some announcements about MSDN licensing and pricing.  Microsoft has the full announcement here. There are a couple of interesting items that are very important in this announcement:

1. Every Visual Studio Premium license includes Team Foundation Server with one CAL. That means if your team has VS Premium, you can use TFS right out of the box.

2. Windows Azure “Development and Test Use”. Visual Studio Premium (and above) will include compute hours (and data storage) in Windows Azure for test purposes. (UPDATE: The full terms are here.) VS2010 with Premium MSDN will initially get 750 hours of compute time per month, 10 gigabytes of storage, and more.

That promises to be a very exciting 2010!

Created: 6/19/2009 9:43:22 PM

Last Tuesday, we hosted our first executive briefing on upcoming technology trends.  The first two topics were Cloud Computing and Rich Internet Applications (RIA). We chose those topics because examples of RIA applications are already around, and cloud computing is on the horizon. It was a great mix of present and future discussion.

Nerd Note: My regular readers that are looking for core technology content won’t find it here. Our discussions were on the business issues around these technologies. No code samples here, but there were fantastic architecture and design discussions.

Cloud Computing

I’ll start by saying that I’m personally excited about Cloud Computing. I’m spending time building applications on Azure and Live Framework.

The opening discussion centered around separating the buzzword “Cloud Computing” from the substantive advantages of using a cloud platform. The ‘Cloud Computing’ buzzword has been attached to many different activities: Google Docs can be considered cloud computing. So can LiveMail, GMail, or the Zune Marketplace. LiveMesh is clearly a cloud-based application. However, for most of us in the software development community, cloud computing means having our own applications running in the cloud. That means Windows Azure, Amazon’s Elastic Computing platform, or the Google App Engine. Even though this was an executive type of briefing, we are all in the software industry. We build things, so we look at tools in terms of what we can build.

From there, we discussed the risks to moving toward cloud computing. There are many.

Current Investments: Many large enterprises already have a significant investment in their own datacenters. This changes the economics of moving to the cloud. Should an enterprise retire its datacenter? If so, at what cost? Having already spent the money to build a data center, an enterprise will take much longer to see ROI from cloud computing.

Sensitive Data: Others have discussed this as well, but the main concern here is fear. Many companies have entered into a trust relationship with their customers that involves how the vendor handles the customers’ data. Most vendors are concerned about offloading that trust relationship to a 3rd party. Regardless of how much trust they may already place in that 3rd party, they still have concerns. Cloud platforms increase this concern because not only is the data offsite, the data is somewhere unknown. You know, ‘it’s in the cloud’.

Spotty Connectivity: We developers tend to locate where we have great connectivity. It’s almost a prerequisite for where we choose to live and work. However, the same isn’t true for all our customers. Some of them still must live in locations where connectivity is not a given. Or, even if there is great connectivity, it may not have high enough uptime.

But of course, we are excited about building software for the cloud, so what are the drivers for building software there? Why are we excited about cloud computing?

Once again, there are a lot of great business drivers for cloud computing.  However, almost all of them turn into one statement:

Cloud Computing is Elastic.

That implies several drivers for moving applications into the cloud. Economics is the greatest force: under current models, your data center must be built to handle your peak load. You pay for that infrastructure at all times, even when your application is at minimum load. For applications with seasonal peaks (retail, tax applications, the Olympics, etc.) that can be a huge savings.

There are other drivers as well.  Scale is a big one.  We work with some researchers that generate terabytes of data every month. That’s an incredible expense, and cloud based computing can lower the storage costs.

After discussing some of the positives, we did a little comparing of the major announced cloud platforms. Our index-card-sized comparison is this:

  • Amazon’s Elastic Computing is the most immediate on-ramp to the cloud.
  • Google App Engine is a great platform for scripted web applications.
  • Windows Azure is a new OS optimized for cloud computing.

Here’s why we came to that conclusion: Amazon’s platform is based on the concept of renting a virtual machine image (either Linux or Windows Server). That makes it the smallest distance from any current application architecture. Google App Engine is optimized for scripting web applications; it’s simple to create web apps there. Windows Azure is set up a bit differently, and enables you to think of an app running ‘in the cloud’ as opposed to running ‘on N servers in the cloud’. That makes it a bit more work to take advantage of the capabilities, but it could be a bigger win once you’re there.

We finished with an interesting question from one of our customers: “Are we recommending cloud computing, and if so, whose cloud?”

We all weighed in with positive comments. My own view was that unless there were overwhelming negatives (such as resistance to data location), I’d recommend cloud-based solutions almost exclusively. Which cloud is trickier. I believe in the Azure vision: even though you can leverage much of your existing skills, it does make you think about running in a cloud. Therefore, for customers invested in .NET, or open to moving there, it’s my choice. However, it’s not a good idea to recommend porting a large codebase from other platforms (Java, PHP) just to move to Azure. I know there are some strategies for running those applications in Azure, but I don’t have enough experience there to place big bets on it. And too many customers with Java or PHP apps have a dependency on MySQL, which doesn’t run in Azure. When those barriers are in place, Amazon’s Elastic Computing platform makes more sense.

There’s also a lot of interest from larger companies in running the Azure stack in their own data centers. Interestingly, while I understand some of the justification, I think it’s a short-term concern. Why would you run your own version of the Azure stack instead of pushing your application onto the immense scale of a world-wide cloud?

Rich Internet Applications

One great thing about conducting small seminars is that we can set the agenda and veer off that agenda as energy takes us in different directions. 

On Tuesday we spent more than 3/4 of our time on cloud computing, and finished with a brief look at Rich Internet Applications.

Or rather, we talked about RunKeeper (www.runkeeper.com). If you run, and you have an iPhone, this is incredibly cool. Take your iPhone on your run, and it maps the distance and all the elevation changes. We were discussing things like having it monitor your heartbeat and your speed (which it may already do; I forget which features it has, and which we wished it had).

This one was a bit less ‘game changing’ than cloud computing. The overall consensus was that computer users want more: better interaction, data from any device, and freedom from the classic browser/forms metaphor. That means we may be entering a time when browser-based applications will be considered ‘legacy applications’ unless they make use of Ajax, Silverlight, Flex, or similar tools. It’s fun to develop applications that have different capabilities.

Closing

This was a wonderful experience. It was fantastic to spend a morning with a tremendous set of brain power, all participating in thoughtful discussions about emerging technologies and how to leverage them to bring more value and more capabilities to our customers.  The best part is that some problems which were considered ‘out of scope’ are suddenly in view.  That’s cool.

Created: 6/1/2009 6:53:38 PM

And, it’s got support for VS2010 Beta 1.

This is amazingly cool. I downloaded and installed the Azure May CTP, both the SDK and the Visual Studio tools (link here).

Being the ‘let’s see if we can break this’ kind of person, I ran the same sample in VS2008 and VS2010 on the fabric on my box. Both ran concurrently without problems. The developer fabric kept each copy in its own sandbox.

After you get the Azure toolkit, you can get the samples (separate download for both VS2010 and VS2008) here.
