Free SSL encryption on Azure with Cloudflare

I have a small web application (running ASP.NET Core) hosted on Microsoft Azure under a shared plan. Azure shared plans are budget friendly whilst providing features like custom domain names. Until recently the site was HTTP only, which was initially fine for my use case, but HTTPS is increasingly a minimum standard on most sites and even browsers now warn users when navigating to insecure sites, so it was time to move to HTTPS.


The Problem

Whilst setting up HTTPS certificates has become easier and cheaper, there is still usually a trade-off between effort and cost. I could have added SSL to my Azure subscription, but I couldn’t justify that cost for the site in question. So I needed an easy and cheap solution – and I definitely found it with Cloudflare!

The Solution

I was amazed at how easy it was to set up SSL fronting of my existing site using Cloudflare.

  1. Create an account at Cloudflare.com
  2. Enter your domain name where the site is hosted.
  3. Confirm the pricing plan (choose FREE)
  4. Cloudflare scans the domain’s DNS entries and asks you to confirm which entries you want proxied. Once done you’ll be given the new nameservers, which you’ll need to update on your domain registrar’s site.
  5. Once the nameservers have synced (this can take anywhere from a few minutes to 48 hours) you are done!
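The proxying decision in step 4 can be sketched as a small rule of thumb: Cloudflare can only proxy record types that carry web traffic (A, AAAA, CNAME); other record types such as MX or TXT remain DNS-only. The record names below are purely illustrative:

```python
# Rough sketch of the step-4 proxying rule: only record types that carry
# web traffic can sit behind Cloudflare's proxy; the rest stay "DNS only".
PROXYABLE_TYPES = {"A", "AAAA", "CNAME"}

def proxy_decision(record_type: str) -> str:
    """Return 'proxied' for record types Cloudflare can front, else 'DNS only'."""
    return "proxied" if record_type.upper() in PROXYABLE_TYPES else "DNS only"

# Illustrative records only - your scan results will differ.
for name, rtype in [("www", "CNAME"), ("@", "A"), ("mail", "MX")]:
    print(f"{name:5} {rtype:6} -> {proxy_decision(rtype)}")
```

Records left as DNS-only still resolve as before; they just bypass Cloudflare entirely.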

There are a few configuration modes for their SSL offering, e.g. Full (SSL from end users to Cloudflare and on to the origin server) or Flexible (SSL stops at Cloudflare and traffic is not encrypted between Cloudflare and your origin server). The option you choose will depend on your setup and your requirements. As my site is hosted on Azure I got Full SSL encryption by default (using Azure’s SSL certificate to encrypt traffic from my web server to Cloudflare).
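The choice between modes boils down to what your origin server can support. This is a simplified sketch of my understanding, not Cloudflare’s official logic:

```python
def recommended_ssl_mode(origin_has_cert: bool, cert_is_trusted: bool = False) -> str:
    """Pick the strongest Cloudflare SSL mode the origin can support.

    Flexible      - HTTPS from browser to Cloudflare only; origin traffic is plain HTTP.
    Full          - encrypted to the origin, but the certificate is not validated.
    Full (strict) - encrypted to the origin with a trusted, valid certificate.
    """
    if not origin_has_cert:
        return "Flexible"
    return "Full (strict)" if cert_is_trusted else "Full"

# An Azure web site already serves HTTPS with a valid certificate,
# which is why encryption all the way to the origin works out of the box.
print(recommended_ssl_mode(origin_has_cert=True, cert_is_trusted=True))
```

If your host serves no certificate at all, Flexible is the only option, at the cost of unencrypted traffic between Cloudflare and your server.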

You don’t need an Azure hosted site for this SSL goodness to work, as Cloudflare will front your service wherever it’s hosted.

This process was so easy I should have done it months ago. For more info, check out Cloudflare’s own resources.


Cheap Azure Hosting via Static Web Sites

Something that is pretty cool and not that well known is that you can now host a static web site in the cloud with Microsoft Azure straight from your Azure storage account. The functionality is currently in preview, but it’s functional enough to get up and running quickly if you have an Azure account.

Why host a static site?

Whilst it does depend on your requirements, many sites are quite capable of being static, with no server-side processing. The classic example is a blog, whereby the site could just serve up static HTML, images and JavaScript straight from disk, as the content changes fairly infrequently.

The growth in JavaScript libraries and the functionality of frameworks like React.js make static sites even more viable. Using the power of JavaScript it’s possible to create rich, powerful web applications that don’t need server-side processing. There has been an explosion of static site generators over recent years that will take text or markdown files and generate a complete static site for you. Two very popular generators of note are Gatsby (React.js based) and Jekyll (Ruby), but there are literally hundreds of others, as can be seen in this online directory: staticgen.com.

Hosting a static site in Azure

Of course you could always host a static site in Azure inside a full-featured web site (via a hosted VM or an Azure web site), but the beauty of hosting a static-only site is that you can serve it straight out of a storage area, so you don’t need to pay for any compute power, which makes it extremely cheap (and even free). You just pay standard Azure storage rates, which include a generous data transfer allowance (about 5 GB a month).

If you think about it, hosting a static web site is just a natural extension for a cloud offering like Azure, which already hosts files and binary content on public URLs in Azure Storage. This new functionality makes it more explicit, though, and enables web-site-like features such as custom error pages. It is also possible to add your custom domain name to the site and link up SSL (although unfortunately at the moment SSL requires use of an Azure CDN, which adds to the cost).

So how do you host your site? Well, follow the official instructions here.

Once you have a web page being served by the default Azure storage URL you can proceed to add your own custom domain name using these steps.

Now you should have a fully working site, but to keep costs even lower we can utilise caching of our static content to encourage the client browser to cache the files, thus reducing our data transfer costs. Luckily it is easy to set cache control settings on Azure Blob storage items. This blog post by Alexandre Brisebois covers doing it in code, but if you are just testing, or have a site that doesn’t change much, you can do it manually via the Azure Portal. To do so, enter the Azure Portal, browse to your Storage Account, then using Storage Explorer find the files you want to set caching for and open their properties. In the Properties dialog you can set the Cache-Control value in the HTTP header to something like:

"public, max-age=86400"
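That value is just a string, so if you later script the process the same header can be generated consistently. A minimal sketch (the helper name is my own):

```python
def cache_control(max_age_seconds: int, public: bool = True) -> str:
    """Build a Cache-Control header value like the one entered in the portal."""
    scope = "public" if public else "private"
    return f"{scope}, max-age={max_age_seconds}"

ONE_DAY = 24 * 60 * 60  # 86400 seconds: browsers may reuse the file for a day
print(cache_control(ONE_DAY))  # -> public, max-age=86400
```

A longer max-age means fewer repeat downloads (and lower transfer costs), at the price of slower propagation when a file changes.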

There are other alternatives to Azure for hosting static files and some offerings are very cheap or free. Some of these are more advanced than the current Azure offering and provide additional features such as integrated SSL and contact forms. One such vendor is netlify.com but there are others.

In summary, if you want to host a site cheaply and you don’t really need server-side processing then consider hosting a static site, and if you’re already using Azure then it’s a simple step to give it a go.


Setting a Custom Domain Name on an Azure Web Site

I recently decided to add a custom domain name to a free Azure web site that I use for development purposes. As the FREE Azure web site model doesn’t support custom domains (a shame, but hard to complain as it’s FREE) I needed to upgrade the site to the ‘Shared’ mode. This is easily done via the Scaling option in the Azure portal.

Firstly, however, I needed to move my current Azure web site under a different subscription to the one I used to set it up. The problem is that you cannot move sites between subscriptions yet (please fix this, Microsoft). To get around this I needed to create a new web site under the correct subscription and then publish my web site code to it. Luckily this is easy to do as it’s just a basic web site, but I can imagine that this could be painful if you have a bunch of storage accounts or a database to re-create.

Using the Azure Portal, creating a new site is a simple process: click +NEW at the bottom of the portal for the menu shown below:

[screenshot: the +NEW menu in the Azure portal]

Once created, all I needed to do was download a publish profile (see this tutorial link for how to publish to Azure) for the new site for Visual Studio to use. Once downloaded, I opened my VS2012 solution, brought up the Publish dialog, pointed it at the new publish profile file and clicked Publish. In just a few seconds I had a new Azure web site up and running with my existing MVC web application. This was very smooth, with no change to config or code required, and the sheer simplicity impressed me as I was short on time.

Next I needed to allocate my custom domain which, as previously mentioned, is not available for FREE web sites, so I needed to upgrade to SHARED mode. From the Azure portal > web site configuration > scale > click SHARED (remember this mode incurs a cost).

[screenshot: the scale configuration with SHARED mode selected]

Once upgraded I could then immediately select DOMAINS and set up my CNAME and A record references; for more information see this useful link (configuring a custom domain name for a Windows Azure web site). It’s worth reading the comments on the post too, as they cover issues with registering the domain without the WWW subdomain.
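The record setup described above can be summarised as a tiny sketch: a CNAME for the www subdomain pointing at the Azure-hosted site, and an A record for the bare domain. The site name and IP address below are placeholders, not real values:

```python
def dns_records(site_name: str, apex_ip: str) -> list[tuple[str, str, str]]:
    """Sketch the records discussed above: a CNAME for www pointing at the
    Azure site, and an A record for the bare (apex) domain."""
    return [
        ("www", "CNAME", f"{site_name}.azurewebsites.net"),
        ("@", "A", apex_ip),
    ]

# 'mysite' and 203.0.113.10 are illustrative placeholders only.
for record in dns_records("mysite", "203.0.113.10"):
    print(record)
```

The A record matters because an apex domain cannot carry a CNAME, which is exactly the WWW-less registration issue the post’s comments discuss.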

Once the DNS entries had propagated I had my existing site up and running under a custom domain running within a shared Azure instance, all with very little effort.

Renaming your Team Foundation Service Account

If you haven’t checked out TFS Service from Microsoft then I thoroughly recommend trying it out. It is basically a TFS instance in the cloud; at the moment it’s free for everyone to use, and there’s an official free plan for small projects of up to 5 users. I run a small TFS instance on my WHS but I’m planning on switching completely to the TFS Service offering to avoid having to manage a TFS installation locally.


One thing that caught me out was the naming of my TFS Service account. I didn’t give it much thought initially, but once I found how good the service was I began to wish I’d named it something more appropriate. Well, as the TFS team are releasing service updates at such a rapid cadence, I didn’t have to wait long for a rename feature. The process to rename your service account is detailed in Brian Harry’s blog here.

Even if you are not enthusiastic about a cloud based centralised ALM tool I recommend checking out Brian Harry’s blog anyway for informative posts on how his team is managing three week sprints and continuous delivery within Microsoft. What this team is delivering for their customers is impressive.

It’s All About Culture (Enterprise IT Beware)

This interesting recent post by PEG highlights an organisation’s culture as, in reality, the only differentiating factor it has. In his view assets, IP, cost competitiveness, brand and even people can be copied or acquired by your competition, but it is your company culture that will lead to success or failure. I agree with his assertion on the importance of culture, and would add that whilst culture has always been critical, the rapidly changing new world is bringing with it new challenges to competitiveness, making culture even more important for competitive advantage. I must clarify here that I see a huge gulf between the ‘official’ culture of an organisation that is documented and presented by senior management and the real culture that is living and breathing on the shop floor (and they rarely match).

This excellent highscalability.com post by Todd Hoff highlights the way that the rules framing IT (and business) are changing and how start-ups are becoming a beacon for investigating this new world. The key elements of this new world are flexibility, adaptability and innovation. These traits thrive in start-ups, where the ‘culture’ encourages them. Many of these new industry shakers lack the assets, brand and IP of their bigger rivals but excel at using their ‘culture’ to outmanoeuvre them and drive innovation in their industry.

Some enterprises get this and are trying hard to foster a more innovative and customer-focused culture. The emergence of agile development practices can help to focus the team on the true business value of features and aid flexibility in the use of resources. SAP has recently designed its new office environment to fully promote agile development practices (see this post), and whilst this on its own can’t change corporate culture, it can remove some of the blockers to a more agile culture emerging. Of course there are many other enterprises that still rely on military-inspired hierarchical structures and attempt to enforce a desired culture. This InfoQ article by Craig Smith summarizes recent articles covering how mainstream management are missing the benefits of an agile approach within their organisations.

“…The management world remains generally in denial about the discoveries of Agile. You can scan the pages of Harvard Business Review and find scarcely even an oblique reference to the solution that Agile offers to one of the fundamental management problems of our times.”

The benefits of agile practices are well documented, so if this fundamental approach is still not connecting with mainstream managers, how long will it be before they grasp the bigger paradigm shift that is occurring underneath them? This shift is being forged by start-ups and enabled via cloud computing.

Let’s look at what is happening in the start-up space. Todd Hoff’s post again highlights the use of small, dedicated, autonomous teams with the power (and responsibility) to make rapid changes to production systems, these teams being responsible for the entirety of their systems from design to build, test, deployment and even monitoring. How does this work? Well, it can only work effectively via a shared culture of innovation, ownership and excellence. Facebook/Google staff have stated that their biggest motivator is the recognition of their good work by their peers (see here and here). This is ‘Motivation 3.0’ in action, with the intrinsic rewards of the job at hand being the main motivation to succeed. Compare this with the traditional and still prevalent command-and-control approach used in a lot of enterprises, with tightly controlled processes, working habits and resources. Splitting the responsibility for each stage of the development lifecycle between different teams (usually separated by reporting lines, internal funding requests, remote locations etc.) and then expecting a coherent solution to prevail is not going to work in this new world.

We are now starting to see the emergence of the cloud as a major force in driving the democratising of computing power and enabling the emergence of empowered teams. Todd’s post covers in detail how the cloud is making the once impossible possible and I recommend you take the time to read it. Only time will tell if the cloud’s impact on Enterprise IT will act as a catalyst to driving the emergence of more agile competitive corporate cultures in enterprises in the future.

The Enterprise & Open Web Developer Divide

In this interesting Forrester post about embracing the open web Jeffrey Hammond highlights the presence of two different developer communities. In his words:

"…there are two different developer communities out there that I deal with. In the past, I’ve referred to these groups as the "inside the firewall crowd" and the "outside the firewall crowd." The inquiries I have with the first group are fairly conventional — they segment as .NET or Java development shops, they use app servers and RDBMSes, and they worry about security and governance. Inquiries with the second group are very different — these developers are multilingual, hold very few alliances to vendors, tend to be younger, and embrace open source and open communities as a way to get almost everything done. The first group thinks web services are done with SOAP; the second does them with REST and JSON. The first group thinks MVC, the second thinks "pipes and filters" and eventing."

Following the tech industry it is clear to me that this division is tangible, and in fact I would suggest the gap is currently increasing. I recently started to revisit my open web development skills after it occurred to me how large this divide was beginning to get and how important these skills will be in the future. Whilst the Enterprise developer often traditionally focuses deeply on a handful of technologies (too often from one vendor), the Open Web developer is constantly learning new languages and choosing between best-of-breed open source frameworks to get the job done. The new Open Web developer has evolved from a different age and with different perspectives, in many ways leaving behind the rules and constraints of the Enterprise developer building typical Line Of Business (LOB) applications. I’m not suggesting that Enterprise developers don’t understand these technologies; I assume many do, but they’re unlikely to be living and breathing them. This is not just about web development technologies and techniques, but more about mind-sets, architectural styles and patterns. Perhaps it can be viewed historically as similar to the evolution from mainframes to distributed computing, and this is just the next evolution. This movement complements the emergence of cloud computing, and one can assume that the social, dynamic LOB applications of tomorrow will rely heavily on the skills and technologies of the Open Web community. To quote Jeffrey again:

"In the next few years, their world is headed straight to an IT shop near you."

The proliferation of devices, cloud computing and a new breed of ‘surfing since birth’ young blood entering the industry combined with the shift towards this new world from big players like Microsoft (e.g. using JavaScript to build Windows 8 apps) mean that Enterprise IT will have to converge with the Open Web approach in order to meet future consumer needs. Only the integration of these worlds will enable Enterprises to integrate their existing application landscapes with the new web based consumption model.

John R. Rymer’s Forrester post on the subject provides his view on the differences between these communities and his accompanying post details the technologies you need to focus on now (HTML5, CSS3, JavaScript, REST). Whilst it can be tricky to follow this sort of fast moving decentralized movement, the good news is that now is a great time to get into these technologies with the growth of the umbrella HTML5 movement raising awareness within the industry and bringing some standards to advanced web design. Keep an eye on what the big web frameworks are offering, and track the innovations at companies like Google and Twitter. I recommend you read these Forrester articles and think about how this affects your architecture, IT organization and career.

For some quality content on these technologies check out these links:  ‘Mozilla Developer Network’, ‘Move The Web Forward’ and ‘HTML5 Rocks’.

Ray Ozzie’s Dawn of a New Day

I would recommend everyone interested in technology read Ray Ozzie’s (Chief Software Architect of Microsoft) memo – "Dawn of a New Day". It’s a fascinating insight into the vision of a key player in the industry and a call to arms for Microsoft and its partners. What interests me most about this vision is that it is a conceivable one, and one that I share. This vision of "appliance-like connected devices" being the norm and consuming "Cloud Based Continuous Services" is easy to visualise, as this day is dawning now around us. Smart phones, tablets, connected TVs etc. are set to become the principal means of interacting with our online world.

"Complexity kills"

Whenever I’m called upon to help out family and friends with their PCs it often strikes me how inappropriate these machines are for the needs of the basic user. The power and complexity of the PC is its great strength, but it also makes these machines too difficult to manage and secure. Huge numbers of basic PC users now, in reality, only use their browser and don’t install software applications anymore. These people are also now enjoying the simplicity provided by smartphone OSs such as Android and iOS. In fact many of these users are able to fulfil their needs via app stores whilst their PCs gradually gather dust. The future vision where devices rule makes total sense. Whilst Apple is proving the master of the device market, Microsoft has the ‘Windows’ advantage. The failure of Linux netbooks to maintain market share shows that, given similar pricing models, consumers will stick with the familiarity and safe option of Windows, and this is an opportunity for Microsoft. They could capitalise on this with a lean “appliance-like” version of Windows in the future.

"Complexity sucks the life out of users, developers and IT." – I have seen numerous projects needlessly suffer in delivery due to overly complex designs, sometimes arising from overly complex requirements. Because we can create software to be configurable and feature rich we feel we have to, but of course every additional feature brings additional overhead. This overhead may be felt by the end user, or perhaps just by the developers and testers trying to implement or test the features.

"Cloud-based continuous services"

Ray’s vision of cloud services being continuous is key for the connected future. Consumers need to be able to depend on the cloud always being available and willing to serve them. As these services grow in importance they will be expected to grow in number and complexity. This is a real challenge for industry engineers and we really need to learn the lessons of the hugely scalable consumer web sites such as Facebook and Google. I look forward to seeing what technologies are produced to aid the development of these services and which scalability patterns move towards the mainstream.

It’s an exciting future for our industry and one that I look forward to playing my part in.

Private Clouds Gaining Momentum

Well, it’s been an interesting few weeks for cloud computing, mostly in the “private cloud” space. Microsoft have announced their Windows Azure Appliance, enabling you to buy a Windows Azure cloud solution in a box (well, actually many boxes, as it comprises hundreds of servers), and the OpenStack cloud offering continues to grow in strength, with Rackspace releasing its cloud storage offering under the Apache 2.0 license with the OpenStack project.

OpenStack is an initiative to provide open source cloud computing and contains many elements from various organisations (Citrix, Dell etc.), but the core offerings are Rackspace’s storage solution and the cloud compute technology behind NASA’s Nebula cloud platform. To quote their web site:

“The goal of OpenStack is to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware. OpenStack Compute is software for automatically creating and managing large groups of virtual private servers. OpenStack Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data.”

It is exciting to see OpenStack grow as more vendors open source their offerings and integrate them into the OpenStack initiative. It provides an opportunity to run your own open source private cloud that will eventually enable you to consume best-of-breed offerings from various vendors, based on the proliferation of common standards.

Meanwhile Microsoft’s Azure Appliance is described as …

…a turnkey cloud platform that customers can deploy in their own datacentre, across hundreds to thousands of servers. The Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. This hardware will be delivered by a variety of partners.

Whilst this is initially going to appeal to service providers wanting to offer Azure based cloud computing to their customers, it is also another important shift towards private clouds.

These are both examples in my eyes of the industry stepping closer to private clouds becoming a key presence in the enterprise and this will doubtless lead to the integration of public and private clouds. It shows the progression from hype around what cloud might offer, to organisations gaining real tangible benefits from the scalable and flexible cloud computing platforms that are at home inside or outside of the private data centre. These flexible platforms provide real opportunities for enterprises to deploy, run, monitor and scale their applications on elastic commodity infrastructure regardless of whether this infrastructure is housed internally or externally.

The debate on whether ‘private clouds’ are true cloud computing can continue, and whilst it is true that they don’t offer the ‘no capital upfront’ expenditure and pay-as-you-go model, I personally don’t think that excludes them from the cloud computing definition. For enterprises and organisations that are intent on running their own data centres in the future there will still be the drive for efficiencies as there is now, perhaps more so in order to compete with competitors utilising public cloud offerings. Data centre owners will want to reduce the costs of managing this infrastructure, and will need it to be scalable and fault tolerant. These are the same core objectives of the cloud providers. It makes sense for private clouds to evolve based on the standards, tools and products used by the cloud providers. The ability to easily deploy enterprise applications onto an elastic infrastructure and manage them in a single autonomous way is surely the vision of many a CTO. Sure, the elasticity of the infrastructure is restricted by the physical hardware on site, but the ability to shut down and re-provision an existing application instance based on current load can drive massive cost benefits, as it maximises the efficiency of each node. The emergence of standards also provides the option to extend your cloud seamlessly out to the public cloud, utilising excess capacity from public cloud vendors.

The Windows Azure ‘Appliance’ is actually hundreds of servers and there is no denying the fact that cloud computing is currently solely for the big boys who can afford to purchase hundreds or thousands of servers, but it won’t always be that way. Just as with previous computing paradigms the early adopters will pave the way but as standards evolve and more open source offerings such as OpenStack become available more and more opportunities will evolve for smaller more fragmented private and public clouds to flourish. For those enterprises that don’t want to solely use the cloud offerings and need to maintain a small selection of private servers the future may see private clouds consisting of only 5 to 10 servers that connect to the public cloud platforms for extra capacity or for hosted services. The ability to manage those servers as one collective platform offers efficiency benefits capable of driving down the cost of computing.

Whatever the future brings, I think that there is a place for private clouds. If public cloud offerings prove to be successful and grow in importance to the industry then private clouds will no doubt grow too, to complement and integrate with those public offerings. Alternatively, if the public cloud fails to deliver then I would expect the technologies involved to still make their way into the private data centre as companies like Microsoft move to capitalise on their assets by integrating them into their enterprise product offerings. Either way, as long as the emergence of standards continues, as does the need for some enterprises to manage their systems on site, the future of private cloud computing platforms seems bright. Only time will tell.

Windows Azure Experimentation Is Currently Too Expensive

I’m a fan of Windows Azure and have enjoyed using it during its CTP phase. Once the CTP was open for registration, like many I jumped at the chance to play with this new paradigm in software development. During this CTP phase I have written some small private web applications that do little more than experiment with the Azure platform. These have provided me with valuable experience and an insight into building a ‘real world’ application on Azure. I have also used this knowledge to demonstrate Azure to my colleagues and to promote the platform within my enterprise. All this has been possible because the CTP version is completely free to use; however, this period of experimentation will sadly soon come to an end.

As Windows Azure moves from a free-to-use CTP to a commercial product it is right that users have to pay for the privilege of using the platform, but it seems that many developers are going to have a hard choice to make in the new year. Do you forget about developing on Azure, or do you fork out $86/£55 a month for the privilege of experimentation? Those with an MSDN Premium subscription will have some more time to enjoy it for free, but in 8 months the same decision will be required.

Windows Azure pricing details can be found here, but if we assume that the transaction and storage costs are minimal for a developer’s experimental site and just take the basic cost of running one instance, it is $0.12 per hour (sounds cheap), but once you consider that there are 24 hours in a day, 7 days a week, the cost for a month is around $86 (£53). That’s not a small amount for the average developer to find. Whilst I am pleased by the free hours provided for MSDN subscribers, this is a limited offer and it’s really just delaying the problem for those developers. That is, unless Microsoft can come up with a basic, cheaper proposition similar to the shared web hosting model. If a developer wants to experiment with web technologies, for example, they can host a web site (for public or private use) with a third-party web hosting company. These hosting companies provide a selection of options based on your requirements. Whilst premium dedicated server hosting is available, developers can get their fix from cheap and cheerful shared server hosting, which will provide most of the features on a smaller scale for around $10 (£6) per month. I realise that there is more to Azure than hosting a web site, but the point is that you can only really experience a product when you are frequently interacting with it to build something real, and therefore it has to be accessible.
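The arithmetic behind that figure is simple enough to sketch:

```python
HOURS_PER_MONTH = 24 * 30  # roughly 720 hours in a month

def monthly_cost(rate_per_hour: float, instances: int = 1) -> float:
    """An always-on instance is billed for every hour, busy or not."""
    return round(rate_per_hour * HOURS_PER_MONTH * instances, 2)

print(monthly_cost(0.12))  # -> 86.4, the ~$86 a month quoted above
```

The killer detail is that a hobby site is billed for every idle hour just the same as a busy one, which is exactly why a shared, lower-cost tier would change the picture.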

Now I’m not saying Azure is uncompetitive compared to its rivals (it actually competes favourably) or that you don’t get your money’s worth. For a new business starting up with some expected revenue, Azure provides huge advantages, and the ability to scale up and down is ideal. It’s getting the developer community interested and informed that is the problem. Microsoft needs developers to buy in to this seismic shift in computing, and by making the barrier to entry so high it is making it difficult to spread the love for this excellent product. I believe that it is in Microsoft’s interest to provide some way for grass-roots developers to buy into this product and gain exposure to it.

I hope that in the new year we will see a new low-cost (even advertisement-funded) offering for Azure aimed at getting developers tuned in and coding on this great platform without making a large financial commitment. I’m not alone in hoping for this; check out the requested feature list for Azure (at the time of writing, the most popular request by far is exactly this: a low-cost option).

Windows Azure : An Introduction

At last year’s PDC, Microsoft released the details of its new venture into the next IT paradigm that is arguably set to change the way that applications are developed, hosted, managed and funded – Cloud Computing. It is easy to dismiss Cloud Computing as a fad, or simply as a move back towards the mainframe days of a central processing model, but regardless of these debates there is no doubt that Microsoft, Amazon and Google are pouring large amounts of funding into developing Cloud Computing platforms. I’m not going to debate the subject of Cloud Computing here, although I will state that personally I feel it will impact all that we do in IT in the future, perhaps not in its current guise; this latest move from Microsoft can be seen as one more step on that journey.

What is Windows Azure?

Well, it’s not an image of Windows Server hosted somewhere on the internet for you to remote desktop into and install what you like on. To quote Microsoft it is (in marketing speak) a “Platform for writing highly scalable and available applications”.

It’s not currently possible nor advisable to just convert your current application to run on Azure, instead Azure provides a platform on which you can build a new application that is highly scalable and available. Azure runs in Microsoft Data Centres (currently in the US but planned to be located throughout the world) and your application runs within individual instances of virtual machines on that Azure fabric.

The pricing policy is also going to be based on usage, which allows you to start small (with a few compute instances) and then increase the number of instances (and therefore computing power) as your application grows and needs to scale for an increasing number of users. Imagine you’re writing the new “Facebook”. You could buy a handful of expensive servers, then buy more if/when the application’s user base takes off, and keep buying until you’ve got a whole DataCenter of servers (all consuming masses of power) and a team of IT Administrators running them. Then your user base levels out, possibly dropping back to a more stable level and leaving you with excess capacity you’ve already paid for. Worse still, if your application never takes off, that initial investment in the first few servers will leave you seriously out of pocket. In contrast, cloud services like Windows Azure are paid for by usage. The cost per month is related to your current storage usage and your compute instance usage. If you need more resources to scale out your application, you just pay more, which allows you to adjust your costs based on demand and removes the need for large upfront capital expenditure.
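To make the contrast concrete, here is a toy sketch of usage-based billing. The rates below are invented purely for illustration; real Azure pricing had not been announced at the time of writing:

```python
# Hypothetical usage-based billing sketch. Both rates are made-up
# illustration values, NOT real Azure prices.
COMPUTE_RATE_PER_HOUR = 0.12   # per instance-hour (assumed)
STORAGE_RATE_PER_GB = 0.15     # per GB per month (assumed)

def monthly_cost(instance_count, hours, storage_gb):
    """Cost scales linearly with usage: no upfront capital outlay."""
    return (instance_count * hours * COMPUTE_RATE_PER_HOUR
            + storage_gb * STORAGE_RATE_PER_GB)

# Start small, then scale out only when demand grows:
small = monthly_cost(2, 720, 10)    # 2 instances for a month, 10 GB stored
large = monthly_cost(20, 720, 500)  # 20 instances after the user base grows
```

The point is simply that cost tracks demand in both directions; with purchased hardware the `large` outlay is committed before you know whether demand will materialise.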

These cost benefits are ideal for Web 2.0 start-ups, but they can also benefit large Enterprises. The ability to develop an application within a low-cost framework that also manages hosting and usage monitoring allows any development team to try out new ideas and move dynamically with the business. Cloud Computing could be a tool that enables an Enterprise to keep up with fast-moving business opportunities at a low initial outlay and a low Total Cost of Ownership. An alternative model is where a platform like Windows Azure is deployed locally in the Enterprise DataCenter. The Enterprise would then benefit from an efficient processing model for its data centre by building all new applications to run on that platform. This would provide most of the benefits of Cloud Computing but with fewer issues around security, as data would not be leaving the Enterprise. Microsoft have so far only unofficially acknowledged this model and are not promoting it as an option with Windows Azure, although it will be interesting to see if they promote this idea in the future.

Azure Services Platform:

This is the stack that makes up Microsoft’s current Cloud Computing offering:

[Diagram: the Azure Services Platform stack]

As you can see, there are several offerings that sit on top of Windows Azure, so let’s quickly look at these first, without going into too much detail:

Microsoft .NET Services: Offers distributed infrastructure services to support both cloud-based and local based applications. This offering includes:

– Access Control: Provides a claims-based implementation of identity federation and transformation in the cloud.

– Service Bus: Allows you to expose your services (in the cloud or on premises) on the internet via a URI, without having to open up incoming ports inside your firewall.

– Workflow: Runs Windows Workflow based workflows in the Cloud.

Microsoft SQL Services: This provides “SQL like” data services in the cloud based on SQL Server. This is effectively a premium storage service over the standard one provided by Azure Storage Services.

Live Services: There is a wealth of data locked within Microsoft Live applications (e.g. Live Mail) that is difficult to interact with. Live Services allows your applications to interact with this data. Building on Live Mesh it also enables synchronizing this data across a user’s numerous devices.

Windows Azure:

This is the base environment where your application will sit. It is not an Operating System, but that is a useful way to imagine it. In the same way that an OS provides an abstraction over the system’s hardware and APIs for communicating with it, Windows Azure is an OS in the cloud: it sits on the virtual hardware and provides an environment (a fabric) for running your applications.

The deployment and management of your application instances is transparent to the developer, but it’s useful to understand how Azure works under the covers. On deploying your application to the Cloud, it is added to a Virtual Hard Disk which is then added to a Virtual Machine instance running on a Windows 2008 (Server Core) host server in a Microsoft Data Centre. Interestingly, a multi-cast message is sent to all available hosts, allowing multiple instances to be installed concurrently. The virtual machine running your application instance will share its host machine with other applications, and your instance may move around different host machines as required for availability and server maintenance. Microsoft’s deployment strategy takes into account both Fault and Update Domains, ensuring that your instances are not all deployed behind a single point of failure (e.g. the same power source). For the current CTP release the specification of the VM is: 64-bit Windows Server 2008, 1.5–1.7 GHz CPU, 1.7 GB RAM. It is expected that the commercial release will allow a choice of specifications. It’s worth noting that each Azure instance currently sees only one CPU, so multi-threading within your code should be reserved for non-CPU-intensive tasks.

Your application instances can perform one of two roles, Web or Worker:

A “Web Role” runs within IIS 7 and therefore effectively runs as an ASP.net web application. This means that most types of application that can run under IIS can run in a Web Role, for example ASP.net websites and WCF Service Applications. A Web Role accepts inbound connections over HTTP and is used where connections from the outside world are required.

A “Worker Role” is similar to a Windows Service except that it runs in the Cloud. It cannot accept inbound communications, but it can make outbound communications. It is a .Net Class Library with a Start() method which is run at start-up, and it is up to your code to keep itself alive (using sleeps and loops). Communication between roles/instances is via ‘Queues’ (more on these below). These instances are ideal for background processing of data, allowing a faster response from your Web Roles if they off-load the intensive work onto Worker Roles.
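The keep-yourself-alive pattern described above can be sketched as follows. Python is used purely for illustration (a real Worker Role is a .NET class library built with the Azure SDK), and the in-memory queue stands in for an Azure Queue:

```python
import time
from queue import Queue, Empty

# Stand-in for an Azure Queue (assumption for illustration only).
work_queue = Queue()

def start(max_iterations=None):
    """Worker Role style entry point: loop, poll for work, sleep when idle.

    max_iterations is only here so the sketch terminates; a real worker
    loops forever.
    """
    iterations = 0
    results = []
    while max_iterations is None or iterations < max_iterations:
        try:
            message = work_queue.get(timeout=0.01)  # poll the queue for work
            results.append(message.upper())         # "process" the message
        except Empty:
            time.sleep(0.01)                        # idle: sleep, then poll again
        iterations += 1
    return results

work_queue.put("hello")
processed = start(max_iterations=3)  # processes "hello", then idles
```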

It is expected that more roles will emerge with the commercial release of the Azure platform.

Windows Azure Storage Services:

Windows Azure currently provides four forms of storage: ‘Local’, ‘Queues’, ‘Tables’ and ‘Blobs’. It is important to note that SQL Data Services is not part of the Azure Storage Services but a separate add-on service that provides a more SQL-like data framework. Interestingly, all these storage services are independent and fully accessible over HTTP(S) (via a RESTful interface) from both within and outside of the cloud. This means that your local Windows client application could store its data in the cloud even though the application itself is not hosted there. Alternatively, you could save the data from your Cloud application in Azure Storage but access it from your local on-premise application. All data writes to the storage services (excluding Local storage) are triplicated across multiple servers for redundancy.

Local Storage:

This is not part of Windows Azure Storage Services but should be included in the storage discussion for completeness and to avoid confusion. Each Azure instance runs on a Virtual Hard Disk (as previously discussed), which provides around 250GB of local transient disk space for temporary storage. As this data is transient and local to that one instance of your application, you can’t use this space for true data persistence, but it is useful where you need to temporarily store data to disk during processing.
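The scratch-space pattern is the familiar temp-directory idiom, sketched here in Python for illustration (an Azure instance would obtain its local storage path from the SDK rather than from `tempfile`):

```python
import os
import tempfile

# Illustration of transient scratch storage: write intermediate data to a
# temp directory, use it during processing, and treat it as disposable.
scratch_dir = tempfile.mkdtemp()
scratch_file = os.path.join(scratch_dir, "intermediate.dat")

with open(scratch_file, "wb") as f:
    f.write(b"partial results")   # temporary working data, not persistence

with open(scratch_file, "rb") as f:
    data = f.read()               # read back within the same instance only
```

Anything written this way must be reproducible, since the instance (and its disk) can be moved or recycled at any time.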

Queues:

Queues primarily allow communication between instances, letting Worker Roles be passed work from Web Roles. For example, your Web Role may accept incoming data which it persists to a queue for a Worker Role to pick up; the Worker Role constantly polls the queue for work to process. The queues are FIFO-based, and as they are persisted to disk (and triplicated) they are very durable. This gives system designers a powerful feature for building transaction-style durability into Azure applications where it is not supported at the data layer. Read messages remain in the queue but are marked as hidden, preventing them from being picked up by another instance; it is up to the application to explicitly delete a message once it has been actioned. If it is not deleted, it becomes visible again on the queue after a specific period (less than a minute). This means that if the application fails after reading a message, it can pick up that same message once it is running again, because the message was never deleted. Designing queues into your application therefore allows you to build durability into the architecture.
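The read-hide-delete semantics described above can be sketched with a minimal in-memory simulation (illustrative only; the real service is accessed over REST, and its timeout value is fixed by the platform rather than chosen by you):

```python
import time

class VisibilityQueue:
    """Toy simulation of Azure Queue semantics: reading a message hides it
    rather than removing it; the reader must delete it explicitly, or it
    reappears once the visibility timeout expires."""

    def __init__(self, visibility_timeout=30.0):
        self.timeout = visibility_timeout
        self._messages = []   # each entry: [id, body, hidden_until]
        self._next_id = 0

    def put(self, body):
        self._messages.append([self._next_id, body, 0.0])
        self._next_id += 1

    def get(self, now=None):
        now = time.time() if now is None else now
        for msg in self._messages:
            if msg[2] <= now:                 # visible to readers?
                msg[2] = now + self.timeout   # hide until the timeout expires
                return msg[0], msg[1]
        return None

    def delete(self, msg_id):
        self._messages = [m for m in self._messages if m[0] != msg_id]

q = VisibilityQueue(visibility_timeout=30.0)
q.put("job-1")
msg_id, body = q.get(now=0.0)      # read: message becomes hidden
assert q.get(now=1.0) is None      # other readers cannot see it
reappeared = q.get(now=31.0)       # reader crashed without deleting: it reappears
q.delete(msg_id)                   # normal path: explicit delete removes it
```

This is exactly why the pattern is durable: a crash between `get` and `delete` loses nothing, it just delays the work by one timeout period.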

BLOB Storage:

Blob storage provides a simple method of storing and retrieving BLOBs (Binary Large Objects). This is particularly useful for media content but can also be useful for persisting serialised objects. Blob storage works as a hierarchy, in a similar way to a file system: you define an ‘Account’, which contains ‘Containers’, which hold the Blobs. Blobs can also be held as ‘Blocks’, which allows for the handling of large Blobs. This relationship is shown in the diagram below:

[Diagram: the Account / Container / Blob storage hierarchy]

Remember that the Storage Services are separate from your Azure application and can be accessed independently. The hierarchical relationship described above is key to the RESTful URL used to retrieve this data. The URL looks like this:

http://<Account>.blob.core.windows.net/<Container>/<BlobName>

This provides a user-friendly URL that is easy to navigate and allows the designer to use the hierarchy to keep the data structures within the application as simple as possible.
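The URL scheme above can be illustrated with a trivial helper; the account, container and blob names here are made up:

```python
def blob_url(account, container, blob_name):
    """Build the RESTful address for a blob from its storage hierarchy:
    account -> container -> blob."""
    return "http://{0}.blob.core.windows.net/{1}/{2}".format(
        account, container, blob_name)

url = blob_url("myaccount", "photos", "holiday.jpg")
```

Because the address is derived purely from the hierarchy, any HTTP client can fetch the blob directly, with no Azure-specific library involved.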

Tables:

When we think of storage we tend to think of Relational Databases and table-based schemas. The Table storage service provides a mechanism to store data in hierarchical tables, but these are NOT relational tables. This seems to be a sticking point for many people who can’t see the benefit of a table structure that isn’t relational. RDBMS systems have been around for so long now that the relational model is taken for granted as the best for all situations. The truth, of course, is that it depends on what sort of system you are trying to build. The view Microsoft have taken is that Azure is a “platform for highly scalable and available applications” where the RDBMS model doesn’t always fit. Instead of a centralised, normalised data structure that minimises disk space and duplication and provides complex query services, why not use the power of distribution and the cheap cost of disk storage to provide a fast, scalable and reliable data management system that effectively duplicates data?

Table Storage does not provide referential integrity, joins, group-by, transactions or complex queries. If you determine that you really need a relational model for your application then you will need to consider the SQL Data Services offering (mentioned briefly at the start of this article) and pay the premium. Table Storage does, however, provide cheap, scalable and durable data management with no fixed data schema. The idea is that you de-normalise your data and store it as required by the application, using multiple inserts and only simple queries. The data is not held as physical tables but merely as ‘entities’ with properties (like fields or columns). Each entity has a partition key and a row key which together provide uniqueness. Currently only the row key is indexed, so the data should be partitioned for scalability. The CTP version requires some creative use of Row Keys and Partitions to produce the desired effect, but it can truly scale.
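The partition-key/row-key model can be sketched as a dictionary keyed on the pair. This is a toy in-memory model for illustration only (the real service is schema-less storage queried over REST, not a Python class), and all entity names below are invented:

```python
class TableStore:
    """Toy model of Table Storage: entities are property bags, uniquely
    identified by (PartitionKey, RowKey); no joins, no fixed schema."""

    def __init__(self):
        self._entities = {}

    def insert(self, partition_key, row_key, **properties):
        key = (partition_key, row_key)
        if key in self._entities:
            raise KeyError("entity already exists: %r" % (key,))
        self._entities[key] = properties   # schema-less: any properties go in

    def get(self, partition_key, row_key):
        return self._entities[(partition_key, row_key)]

t = TableStore()
# De-normalised by design: duplicate the customer's name on every order
# rather than joining to a customers table at query time.
t.insert("customer-42", "order-001", item="book", total=9.99, name="Ann")
t.insert("customer-42", "order-002", item="pen", total=1.50, name="Ann")
order = t.get("customer-42", "order-001")
```

Note how the two entities need not share the same properties at all; uniqueness and lookup come entirely from the key pair.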

Development Lifecycle/Tools:

Azure applications for the CTP need to be written in managed code (.Net), although the commercial version is expected to support non-managed (native) code. After installing the Windows Azure SDK you are provided with locally installed mock versions of the Windows Azure fabric (to run your instances in) and the Azure Storage Services. These allow you to run and debug your cloud application on your developer machine without an Azure account or internet access. Once you have completed your application you ‘publish’ it using Visual Studio. This runs it through CSPack.exe, which gathers the assemblies and related config and compresses them into a package. The developer then logs into the online “Azure Service Developer Portal” and uploads the package to a staging area in the cloud. This staging area can be publicly accessed but is separate from your live application instances, which allows you to test the application privately in the cloud environment and then promote it to live once testing is complete. The promotion process ensures that there is no downtime of your cloud application during the switch to the new version.

[Screenshot: the Azure Service Developer Portal]

The Portal provides key information on your Windows Azure accounts and allows you to extract the logs for your applications and to view detailed reports on various metrics such as Network Usage, Storage, Virtual Machine hours etc.

Developing For Windows Azure:

In order to develop an Azure application you must install the Windows Azure SDK and the Windows Azure Tools For Visual Studio. One point to note is that the SDK utilises IIS7 and therefore requires Windows Vista on the developer machine. You also need SQL Server Express 2005 for the Storage services to utilise. If you want to actually host your finished application in the Cloud then you need to request a token from the Microsoft Azure web site. These are free for the CTP version, but there is a waiting list, so register early.

You can find the new Azure project templates in the Visual Studio ‘New Project’ dialog, which allows you to create a basic Azure-configured application as a starting point. The result is a Solution with some Azure-specific items and an ASP.net Project for the Web Role or a Class Library Project for the Worker Role (depending on which options you picked).

The Azure API provides you with access to the RoleManager class, through which you can utilise Logging and Configuration utility classes. As you cannot debug your application once it’s in the cloud, it is important to add instrumentation to monitor the progress of your application and to report exceptions. This log output can be viewed in the local Development Fabric during development, but once in the cloud it is automatically written to the storage services, from where you can download it. Logging is mostly a matter of calling RoleManager.WriteToLog().

Two key files in the solution are the Service Configuration files, which define your Azure application and the services it consumes. These files inform the Azure fabric how to handle your application, for example how many instances of a Web Role should be deployed and which Storage Accounts to use. By changing the instance count from 1 to 5 you suddenly have 5 instances of your application running, with the scalability that this provides. Whilst you can still use web.config for configuration values, these should be limited to values that don’t need to change at runtime, as changing web.config requires you to re-package and re-deploy your application. For dynamic runtime configuration, use the ServiceConfiguration files.
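As a rough indication, a ServiceConfiguration file looks something like the sketch below. The service, role and setting names are made up, and the exact element names and schema namespace are defined by the SDK version you install, so treat this purely as indicative:

```xml
<?xml version="1.0"?>
<!-- Illustrative sketch only: names and schema details will vary by SDK. -->
<ServiceConfiguration serviceName="MyCloudService">
  <Role name="WebRole">
    <!-- Change this count to scale out: 1 -> 5 instances. -->
    <Instances count="5" />
    <ConfigurationSettings>
      <Setting name="AccountName" value="myaccount" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

The key point is that the instance count lives here, outside the application package, so scaling out does not require a re-deploy.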

For consuming the Storage Services from your Cloud application it is recommended that you use the ‘Storage Client’ project provided in the Samples section of the SDK. This project provides an abstraction over the REST API. The abstraction is recommended not only for a fast start on a new project, but also because the underlying API is expected to change as the platform matures, and the ‘Storage Client’ project will shield you from those changes. There’s no point learning something that is going to change.

This is just a very quick overview; for more information I recommend you download the SDK and then run through the hands-on labs in the Azure Services Training Kit.

Conclusion:

Windows Azure is not Microsoft’s definitive answer to what a Cloud Computing platform should look like: it is a very early CTP release of their future platform, and merely a step on the way to the next computing paradigm, whatever that may eventually look like. It is a big step, though, as the functionality provided is enough to get a very capable system up and running and hosted entirely in the cloud. It does this by building on the development tools we already know (Visual Studio and managed code), but it also requires, in some areas, a shift in thinking away from the more traditional approaches to software design we’ve been using for years.

References:

Microsoft Windows Azure Site:
http://www.microsoft.com/azure/default.mspx

Windows Azure SDK:
http://www.microsoft.com/downloads/details.aspx?familyid=B44C10E8-425C-417F-AF10-3D2839A5A362&displaylang=en

Windows Azure Services Training Kit:
http://www.microsoft.com/downloads/details.aspx?FamilyID=413e88f8-5966-4a83-b309-53b7b77edf78&displaylang=en

Windows Azure Tools For Visual Studio:
http://www.microsoft.com/downloads/details.aspx?familyid=59E8FC0C-C399-4AB7-8A93-882D8E74B67A&displaylang=en

Azure Services Platform Developer Center:
http://msdn.microsoft.com/en-us/azure/default.aspx

Deploying a Service on Windows Azure:
http://msdn.microsoft.com/en-us/library/dd203057.aspx