Setting a Custom Domain Name on an Azure Web Site

I recently decided to add a custom domain name to a free Azure web site that I use for development purposes. As the FREE Azure web site tier doesn't support custom domains (a shame, but hard to complain as it's FREE) I needed to upgrade the site to 'Shared' mode. This is easily done via the Scale settings in the Azure portal.

Firstly, however, I needed to move my existing Azure web site under a different subscription to the one I used to set it up. The problem is that you cannot move sites between subscriptions yet (please fix this, Microsoft). To get around this I needed to create a new web site under the correct subscription and then publish my web site code to it. Luckily this is easy to do as it's just a basic web site, but I can imagine that this could be painful if you have a bunch of storage accounts or a database to re-create.

Using the Azure portal, creating a new site is a simple process: click +NEW at the bottom of the portal for the menu shown below:

[Screenshot: the +NEW menu in the Azure portal]

Once created, all I needed to do was download a publish profile (see this tutorial link for how to publish to Azure) for the new site for Visual Studio to use. Once downloaded, I opened my VS2012 solution, brought up the Publish dialog, pointed it to the new publish profile file and clicked Publish. In just a few seconds I had a new Azure web site up and running with my existing MVC web application. This was very smooth, with no change to config or code required. The sheer simplicity of this impressed me as I was short on time.

Next I needed to allocate my custom domain, which as previously mentioned is not available for FREE web sites, so I needed to upgrade to SHARED mode. From the Azure portal go to the web site's configuration > Scale > click SHARED (remember this mode incurs a cost).


Once upgraded I could then immediately select DOMAINS and set up my CNAME and A record references; for more information see this useful link (configuring a custom domain name for a Windows Azure web site). It's worth reading the comments on the post too, as they cover issues with registering the domain without the WWW subdomain.
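To make the DNS setup concrete, here is a minimal sketch of the two records involved, written in a generic zone-file style. The domain, site name and IP address are purely hypothetical – use the values shown on the DOMAINS page of your own portal:

www.example.com.     CNAME    mysite.azurewebsites.net.
example.com.         A        203.0.113.10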

Once the DNS entries had propagated I had my existing site up and running under a custom domain running within a shared Azure instance, all with very little effort.

Renaming your Team Foundation Service Account

If you haven't checked out TFS Service from Microsoft then I thoroughly recommend trying it out. It is basically a TFS instance in the cloud, and at the moment it's free for everyone to use, with an official free plan for small projects of up to 5 users. I run a small TFS instance on my WHS but I'm planning on switching completely to the TFS Service offering to avoid having to manage a TFS installation locally.


One thing that caught me out was the issue of naming your TFS Service Account. I didn’t give much thought to it initially but then once I found how good the service was I began to wish I’d named it something more appropriate. Well, as the TFS team are releasing service updates at such a rapid cadence I didn’t have to wait long for a rename feature. The process to rename your service account is detailed in Brian Harry’s blog here.

Even if you are not enthusiastic about a cloud based centralised ALM tool I recommend checking out Brian Harry’s blog anyway for informative posts on how his team is managing three week sprints and continuous delivery within Microsoft. What this team is delivering for their customers is impressive.

Useful Web Based UML Drawing Tools

A basic sequence diagram can be a very powerful tool to explain the interactions in a system, but drawing one is often too time consuming to bother with for throwaway use. I find that many people sketch them on rough paper to help explain their argument, but few ever bother to build them in soft form unless it's for a formal document. There are a lot of powerful, feature-rich UML tools, but recently I found this: http://www.websequencediagrams.com.

It lets you build sequence diagrams like the one below in seconds by typing the object interactions in a shorthand form, such as:

title Authentication Sequence
Alice->Bob: Authentication Request
Bob-->Alice: Authentication Response
Bob-->Jeff: Pass Request
Jeff-->Bob: Return Response

…which draws this in real time in the browser: 
[Image: the rendered sequence diagram]

And you can choose the style and colouring too. There's also functionality to save diagrams and import saved diagram text. Check out the API page too for tips on embedding the drawing engine into your own web pages so you can edit your diagrams in place, as well as plugins for Confluence, Trac and xWiki. There are also example implementations for Ruby, Java and Python.
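As a rough illustration of the kind of integration the API enables, the sketch below posts the shorthand text to the site and saves the rendered PNG. The endpoint, the message/style/apiVersion parameter names and the 'img' field in the response are recalled from the API page rather than verified, so treat them as assumptions and check the API documentation before relying on them:

import requests

# The same shorthand notation shown above
diagram = """title Authentication Sequence
Alice->Bob: Authentication Request
Bob-->Alice: Authentication Response"""

# Field names below are assumptions based on the public API page
response = requests.post("http://www.websequencediagrams.com/",
                         data={"message": diagram,
                               "style": "default",
                               "apiVersion": "1"})
image_path = response.json()["img"]  # assumed to be a relative URL to the PNG

png = requests.get("http://www.websequencediagrams.com/" + image_path)
with open("sequence.png", "wb") as f:
    f.write(png.content)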

A similar online tool is http://yuml.me with which you can draw Class, Activity and Use Case Diagrams. Here is an example of a Use Case diagram definition:

[Customer]-(Make Cup of Tea)
(Make Cup of Tea)<(Add Milk)
(Make Cup of Tea)>(Add Tea Bag)

…which makes this:

[Image: the rendered use case diagram]

yUML.me also supports API integration with a whole host of things (Gmail, Android, .Net, PowerShell, Ruby and more).
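Class diagrams use a similar bracketed shorthand. A small example from memory of the yUML samples (worth double-checking against the site's own sample page, as the exact syntax may have changed):

[Customer|name;email|placeOrder()]
[Customer]1-0..*>[Order]
[Order]^[PriorityOrder]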

Now there’s no reason to not use a quick UML diagram to explain what you mean!

The Enterprise & Open Web Developer Divide

In this interesting Forrester post about embracing the open web Jeffrey Hammond highlights the presence of two different developer communities. In his words:

"…there are two different developer communities out there that I deal with. In the past, I’ve referred to these groups as the "inside the firewall crowd" and the "outside the firewall crowd." The inquiries I have with the first group are fairly conventional — they segment as .NET or Java development shops, they use app servers and RDBMSes, and they worry about security and governance. Inquiries with the second group are very different — these developers are multilingual, hold very few alliances to vendors, tend to be younger, and embrace open source and open communities as a way to get almost everything done. The first group thinks web services are done with SOAP; the second does them with REST and JSON. The first group thinks MVC, the second thinks "pipes and filters" and eventing."

Following the tech industry it is clear to me that this division is tangible, and in fact I would suggest the gap is currently increasing. I recently started to revisit my open web development skills after it occurred to me how large this divide was becoming and how key these skills will be in the future. Whilst the Enterprise developer often traditionally focuses deeply on a handful of technologies (too often from one vendor), the Open Web developer is constantly learning new languages and choosing between best-of-breed open source frameworks to get the job done. The new Open Web developer has evolved from a different age and with different perspectives, in many ways leaving behind the rules and constraints of the Enterprise developer building typical Line Of Business (LOB) applications. I'm not suggesting that Enterprise developers don't understand these technologies already, I assume many do, but they're unlikely to be living and breathing them. This is not just about web development technologies and techniques, but more about mind-sets, architectural styles and patterns. Perhaps it can be viewed historically as similar to the evolution from mainframes to distributed computing, and this is just the next evolution. This movement complements the emergence of cloud computing, and one can assume that the social, dynamic LOB applications of tomorrow will rely heavily on the skills and technologies of the Open Web community. To quote Jeffrey again:

"In the next few years, their world is headed straight to an IT shop near you."

The proliferation of devices, cloud computing and a new breed of 'surfing since birth' young blood entering the industry, combined with the shift towards this new world from big players like Microsoft (e.g. using JavaScript to build Windows 8 apps), mean that Enterprise IT will have to converge with the Open Web approach in order to meet future consumer needs. Only the integration of these worlds will enable Enterprises to integrate their existing application landscapes with the new web-based consumption model.

John R. Rymer's Forrester post on the subject provides his view on the differences between these communities, and his accompanying post details the technologies you need to focus on now (HTML5, CSS3, JavaScript, REST). Whilst it can be tricky to follow such a fast-moving, decentralised movement, the good news is that now is a great time to get into these technologies, with the growth of the umbrella HTML5 movement raising awareness within the industry and bringing some standards to advanced web design. Keep an eye on what the big web frameworks are offering, and track the innovations at companies like Google and Twitter. I recommend you read these Forrester articles and think about how this affects your architecture, IT organisation and career.

For some quality content on these technologies check out these links:  ‘Mozilla Developer Network’, ‘Move The Web Forward’ and ‘HTML5 Rocks’.

The Future Of The IT Department

I have been witness to rapid, often painful, change within my own internal IT division over the last few years and have observed the on-going developments in the industry. It is clear that IT departments have changed dramatically in a short amount of time and the pace is not relenting. This has led me to try to picture what IT will look like within large institutions in the future. It is becoming more and more apparent that the structure of our internal IT organisations is very often based on the traditional legacy models that served enterprises well in the past: big IT investments and centralised systems best managed and maintained by a rigid organisational structure. The IT department and the business units are today usually far more disconnected than many CIOs would care to admit. IT used to be something that was done by the IT department based on fairly static business processes. However we're now in a different world, where IT is seen increasingly as just a commodity and business processes need to be able to react quickly to changing economic conditions. No longer is the IT department responsible only for big monolithic systems (e.g. payroll); IT is now embedded in every business process, so in some sense every department is an IT department. Surely if the IT organisation doesn't aid the business then it will eventually be pushed aside and replaced.

The Journey From Past to Present

This excellent post by PEG covers this subject well. PEG paints the picture of the traditional IT organisation as it was in many enterprises and then slices it up to represent the current model once outsourcing/off-shoring has been factored in; the left-hand diagram shows the more traditional split, and the right the emerging norm:

[Diagrams from PEG: The IT department we have today is not the IT department we'll need tomorrow – factoring in the effort required to manage out-sourced projects]

It surprises me how many people consider their jobs as not being under threat from outsourcing because their role sits above the bottom tier on this sort of diagram, but as you can see it is inevitable that the line between permanent staff and outsource partner staff will continue to rise to the point represented in the triangle on the right, with a good cross section of IT roles being fulfilled by partner organisations. This represents where many large enterprises are at present, whereby some "doing" roles are maintained in-house but the management and planning layers are also supplemented by outsource/offshore partners. The bulge in the middle represents the extra permanent resources required to cover the additional overhead of managing partner resources. Taking a bank as the textbook example of a large enterprise with a significant-scale IT organisation, this research into European banks' activities provides some insight into the strategy driving these changes. Unsurprisingly cost reduction is key, but it's not the only factor…

“Survey participants cited cost reduction as the primary reason to outsource IT functions, followed by cost variability (for example, the flexibility to respond to peak demand without ramping up internal resources) and access to know-how or skilled personnel. The main benefits of outsourcing were access to know-how or skilled personnel and a guaranteed level of service. (The cost benefits associated with outsourcing often fell short of expectations.) The biggest disadvantages of outsourcing were high switching costs and limited control over critical elements of the IT environment. On the whole, however, the survey shows that banks have embraced outsourcing. Only 3 percent of the banks surveyed were planning to decrease their outsourcing activities. The case for offshoring was slightly different. Although banks used offshoring primarily for the same reason they used outsourcing—to reduce costs—the main benefit of offshoring was less stringent foreign labour laws. The biggest disadvantages of offshoring were opposition among domestic personnel, large overhead, and loss of control.”

Both partner strategy models are therefore seen as suffering from elements of lost control over assets or deliverables, and as somewhat adding to management overheads, but they provide some agility through a mechanism to ramp resources up or down as required.

PEG extends his model to show that in the future there will be an increased reliance on SaaS and automation tools and therefore a chunk of the IT organisation structure will be replaced by these as well as outsourcing/offshoring roles.

[Diagram from PEG: The IT department we have today is not the IT department we'll need tomorrow – a skills/roles triangle for the new normal]

Within the current model, management layers have often become too complex and unwieldy. With the IT organisation being a business entity itself within the enterprise, and with 65% of IT spend just being used to maintain current service, business functions and IT often clash over priorities and the allocation of funding. In many instances this results in the business going outside of the IT organisation to secure services, or growing its own 'black ops' internal capability just to get things done. This again challenges the traditional IT organisational model where IT keeps a tight control.

Changing Objectives

Tighter financial conditions, increasingly competitive environments and a desire to maximise returns are leading to a pay-per-use model and greater use of partners and outsourcing. Technology advances are making this transition possible (e.g. Cloud Computing, SaaS). Future IT departments will increasingly utilise these external services, resulting in them adopting a very different structure. Whilst the traditional IT organisation has been geared to building and maintaining large complex systems and is staffed with technical people, the rapidly emerging model is one where IT skills are outsourced to numerous vendors and IT staff become the negotiators and orchestrators of these relationships and contracts. Instead of managing systems changes internally, the IT organisation is increasingly just the middleman between the business and the outsource/offshore partners. The role becomes one of managing projects more than technically implementing them. Reports can be found of in-house IT departments cutting 90% of headcount in a rapid shift to offshoring/outsourcing, with the remaining staff focusing on planning and relationship management tasks. This Boston Consulting Group paper suggests there is an essential move from "doer" to "orchestrator", with the IT organisation "doing fewer of the traditional 'run the business' activities", instead leaving them to external providers and doing more coordination of (one or many) providers' activities to meet the design. This "network of external providers and integrators" needs monitoring and tuning, and the structure of the IT organisation will need to centre around these activities.

A quote from Reinventing The IT Organisation by Antoine Gourevitch, Stuart Scantlebury & Wolfgang Thiel…

“Unless CIOs take swift action, the IT organisation will be at risk of being reduced to a thin layer between the business and the specialist outsourcing firms.”

The outcome will presumably be either a slim organisation staffed with Change Managers and Project Managers responsible for liaising with the partners to satisfy business requirements, or alternatively these changes could prove the catalyst required to move to true business-driven IT, where IT skills are integrated with the business units to enable them to react rapidly to changing business needs. Larry Dignan in his post welcomes the idea of breaking up the traditional IT organisation, seeing it as an anachronism. He classes CIOs as often "out of their league", "process jockeys" who would "rather be scouting new technologies" than innovating. I would agree that this appears to be the case in many large organisations where IT, some would argue, has frustratingly become detached from the goal of driving business value through technology, losing itself in bureaucratic processes. These organisations can seem a long way from delivering core bottom-line business value. PEG discusses the detachment of Enterprise Architecture and the business, together with a description of little 'a' and big 'A' architects, here, and it's well worth a read. Even where IT organisations do deliver real value it's often to timescales that seem painfully long to the business customer but painfully short to the IT guy wrapped up in bureaucratic red tape. Perhaps this isn't IT's fault as such but more the arcane structure of the IT organisation as we have come to accept it.

One way suggested for IT organisations to remain relevant and address future challenges is for the business and IT to move closer together than ever. This has been talked about for many years but with the demise of the monolithic IT organisation the next few years could see this model mature. Perhaps decentralised pockets of business IT shops closely aligned to the business units will be the norm, introducing new challenges around how to control these pockets.

This shift towards IT/business integration could be very rewarding for an enterprise as in reality modern business processes are often tightly intertwined with the LOB applications in use and so anything that can be done to ensure that those LOB applications support the business processes instead of restricting the pace of business change will be welcomed. Dreischmeier & Thiel suggest new ways of working may be required as IT organisations are forced to adjust their operating model to become faster, more agile and to embrace rapid-development approaches. The business can’t afford to be held back by a slow and unwieldy IT organisation.

One concept I particularly like is that of “introducing Product or Solution Managers” to address the “lack of end to end ownership within IT Orgs”. This person would “own the IT product/solution across all technical layers”. The role should improve TCO and aid business and IT priority alignment. Dreischmeier & Thiel also see the CIO as a key player in ensuring that the IT organisation is “Proactively Engaging in Business Transformation Activities”, and argue that the IT organisation is in fact very well positioned to be a key player in this transformation as it is aware of the end-to-end business processes (in theory). They suggest:

“Creating, together with the business, a new-business-model team that seeks out and addresses the changes in economics of the relevant industry as it changes through increased competition and environmental forces”. 

The growth of agile development practices has a part to play here too. Having innovative IT teams that 'fail fast and often' and use lean agile techniques to maximise business value could replace traditional models. Smaller, focused development teams under the direct control of the business units, using Agile practices and supported by a central infrastructure function (probably outsourced), could prove a very effective way of actually building what the business really needs. The evolution of Cloud Computing technologies provides real opportunities to make these teams very capable. A business-unit-based developer could 'mash up' cloud services together with core on-premise web services to produce a powerful line-of-business application that is then deployed to PaaS cloud-based infrastructure. Forrester analyst Alex Cullan sells the benefits of this model with the term "Empowered BT (Business Technology)", where IT's role is to empower the business to utilise the technology that it needs in order to remain competitive. The traditional arguments against this approach, such as the expected system proliferation and business technology decisions being driven by hype, are dismissed as not actually as bad as we in IT would believe. He argues successfully that some proliferation is acceptable if it empowers the business, but there would have to be trust in business leaders to choose the right path for this to work. Is that trust there at this moment in time? Well, not according to this MIT & Boston Consulting Group survey, which shows that current CIOs believe business leaders are not positioned to lead IT-enabled business transformation. Only 33% of CIOs consider their company's senior execs effective at driving business value with IT, and 40% consider them effective at prioritising IT investments. However, perhaps this reflects the differing priorities of traditional IT organisations and the business units, with IT emphasising its traditional maintenance role ("keeping the lights on") alongside application development/innovation, rather than a real distrust. The paper does however highlight the benefits that can be achieved when the IT organisation avoids the simple "middle man" role and takes the lead in driving business change (such as lower maintenance costs, faster realisation of business benefits from new systems, and higher employee satisfaction). Perhaps the future of the IT organisation is that of a business in its own right, an internal consulting firm offering assistance in business process design, innovation and development management.

Procter & Gamble run their IT organisation as a business within the enterprise, running alongside other business services (e.g. accounting). Their services are branded and marketed to the enterprise and billed on a usage basis, with business units empowered to choose to consume these services or go elsewhere. The emphasis is on running this as a viable, competitive internal business that is in tune with its customers' (in this case the internal business units') needs. They have brand managers responsible for "the innovation, pricing and commercialization of the services", ensuring that the total end-to-end offerings can match those of 3rd party providers. Underpinning this though is a collection of external partner relationships that still need to be managed, and so in essence this is still heading towards becoming an integrator, orchestrating these partner services into a clear, cohesive, branded, and hopefully relevant, service. The key here though is the added value provided by this internal IT business service that crucially understands the business and offers competitive services that are completely relevant to it. This is supported by the BCG research, which found that where IT organisations really drove business change they often delivered their IT services as shared services and placed more emphasis on relevant prices and alternative service levels. They tended to centralise IT, with lower levels of recorded "shadow" IT being instigated by the business, which could perhaps suggest that these business units felt they were getting sufficient value from their shared IT services, even though it was under central control.

Future Skills

All these changes have massive implications on the skills required within the IT organisation of the future. In the current model maintaining a relevant skilled workforce can be tricky with many key staff feeling demotivated by the outsourcing/offshoring partner model and the subsequent removal of technical roles from their organisation. The loss of junior IT roles to partner resources destroys any future progression opportunities and shows that this model is unsustainable moving forward. Engaging technical people will be increasingly difficult in the current model but perhaps a move to more business aligned IT can help skilled staff remain technical if they wish and also benefit the business through enhanced IT innovation and passion for their roles, instead of forcing good techies to oversee offshore/outsource relationships.

It seems certain now that IT staff of the near future will be expected to have an enhanced level of business acumen and market knowledge to fulfil their roles. Will this come at the expense of excellent technical skills? Maybe! Perhaps the technical skills will be embedded within the offshore/outsource partners, and the relevant 'technical' skills required in the IT organisation will be those around technical process design and system analysis. Knowledge of the business will perhaps be more important than any technical skill (for the majority of roles), and therefore it makes more sense to recruit IT staff from within the business units themselves. This is evident in a number of studies with CIOs, such as this BCG study:

“In general, CIOs told us that Internal IT staff roles are shifting away from application development and towards process analysis and engineering, business relationship management, project management and architecture design and implementation.”

Within the previously mentioned Procter & Gamble organisation the same theme emerges, as the skills required reflect the role of IT within the organisation:

“..traditional IT is just 30% of what we do. If traditional IT is all a person masters, he or she will never be a leader here. The rest is about business knowledge. Those who embrace that approach will certainly increase their value…” 

This view was supported by the previously mentioned study into European Banking, but it also went further, pointing out that technical skills were being neglected …

“…many banks appear to be underestimating the value of technical tools and skills, which are critical to developing high-impact applications, maintaining an efficient infrastructure, and managing outsourcing partners.”

So where does this leave you and me? Well, I expect the number of deeply technical IT professionals will decline in Western countries, but this decline will be dwarfed by the increase in semi-professional developers: people working in the business but using end-user computing tools to develop systems that are meant to be rapid, easy and throwaway. Where more complex solutions are sought, outsource partners will happily fill that gap. Escaping the large enterprises and fleeing to the small and medium enterprises will not be sustainable longer term either, as the partner model will eventually win there too. It is entirely possible that the partner model will lose some of its lustre (it's already happening in places) and there may be some swing back to in-house technical teams. If that happens then the IT community needs to be ready to promote a new 'agile' alternative that understands and drives true business benefits.

This evolution of the IT organisation is natural in such an immature industry, but one thing is definite: the future is different and we need to adapt. Whichever direction the future takes for you, spend some effort in the meantime trying to understand your business customers' needs better and keep innovating for them!

Ray Ozzie’s Dawn of a New Day

I would recommend that everyone interested in technology read Ray Ozzie's (Chief Software Architect of Microsoft) memo, "Dawn of a New Day". It's a fascinating insight into the vision of a key player in the industry and a call to arms for Microsoft and its partners. What interests me the most about this vision is that it is a conceivable one, and one that I share. This vision of "appliance-like connected devices" being the norm and consuming "Cloud Based Continuous Services" is easy to visualise, as this day is dawning now around us. Smart phones, tablets, connected TVs etc. are set to become the principal means of interacting with our online world.

"Complexity kills"

Whenever I'm called upon to help out family and friends with their PCs it often strikes me how inappropriate these machines are for the needs of the basic user. The power and complexity of the PC is its great strength, but it also makes these machines too difficult to manage and secure. Huge numbers of basic PC users now in reality only use their browser and don't install software applications anymore. These people are also now enjoying the simplicity provided by smart phone OSs such as Android and iOS. In fact many of these users are able to fulfil their needs via app stores etc. whilst their PCs gradually gather dust. The future vision where devices rule makes total sense. Whilst Apple is proving the master in the device market, Microsoft have the 'Windows' advantage. The failure of Linux netbooks to maintain market share shows that, given similar pricing models, consumers will stick with the familiarity and safe option of Windows, and this is an opportunity for Microsoft. They could capitalise on this with a lean, "appliance like" version of Windows in the future.

"Complexity sucks the life out of users, developers and IT. " – I have seen numerous projects needlessly suffer in delivery due to overly complex designs, sometimes from overly complex requirements. Because we can create software to be configurable and feature rich we feel we have to, but of course every additional feature brings additional overhead. This overhead my be felt by the end user or perhaps just the developer and testers trying to implement or test the features.

"Cloud-based continuous services"

Ray’s vision of cloud services being continuous is key for the connected future. Consumers need to be able to depend on the cloud always being available and willing to serve them. As these services grow in importance they will be expected to grow in number and complexity. This is a real challenge for industry engineers and we really need to learn the lessons of the hugely scalable consumer web sites such as Facebook and Google. I look forward to seeing what technologies are produced to aid the development of these services and which scalability patterns move towards the mainstream.

It’s an exciting future for our industry and one that I look forward to playing my part in.

The Future of Windows Home Server

Microsoft’s recent announcement that the key Drive Extender feature is to be removed from the new version of Windows Home Server codenamed ‘Vail’ has resulted in much dismay within the community. Many commentators, including the vocal WHS user community itself, have started to question the future of this product. In this post I give my take on where I see WHS in the medium term and consider how it can fit alongside the “new dawn” of a Cloud Computing era.

How big is the Drive Extender issue?

Firstly, what’s all this about Drive Extender (DE)? Well DE is a really neat feature of WHS that pools all the hard drives in the system into one logical data drive. This means that you can throw in a mixed selection of hard drives of any type (USB, SATA etc) or capacity and the system enables you to see them as one. It also provides fault tolerance through data duplication which protects your data from drive failure. It is one of the major features of Windows Home Server (WHS). I would argue one of three, with the others being the client backups and remote access. Sure the product does much more than just that but it’s fair to say that all of WHS’s features are available in other products in some shape or form and the combination of these three features into one customisable platform made WHS stand out for me.

Microsoft’s announcement to remove DE from the next version of WHS code named Vail immediately removes a major reason to buy into the new WHS version and this has been evidenced in the recent twitter comments on the subject where a lot of people have stated their intention to not use ‘Vail’. Of course some of this is just anger at the fact that the feature has been removed (and the way in which it was announced) but still the fact remains that the product is a weaker proposition than it was before.

Personally I see this decision in both a negative and a positive light. Firstly, I see this as a major blow to the uniqueness of the product and feel that it will suffer without this USP (Unique Selling Point). It's also important to remember that this is positioned as a product for the average PC user, and DE made extending the storage capacity easy. The user doesn't need to buy matching disks or configure RAID; they just pop in a new disk and it gets added to the pool. Without DE, adding extra storage will presumably be a more complex task. In reality, though, how many "average" PC users would feel happy upgrading the hard drive in their WHS anyway? Whilst enthusiasts relish the chance to pop open the case, many casual users would actually see their OEM-produced WHS as just an appliance, and one probably already stuffed with several 2 or 3 TB drives providing a good chunk of storage capacity right out of the box. They would not consider any upgrades to it other than replacing it when it gets full. In addition, whilst the shared drive pool concept makes adding storage easy, the ability to add additional storage as additional drives will still be there in the product, as it is in any Windows OS. I don't see this as a huge blocker to WHS adoption.

Folder duplication utilises the DE feature to ensure that data is duplicated onto different physical drives within the logical storage pool. This is in effect 'RAID like', except that the data is duplicated over time and not immediately (although there is no way of retrieving previous versions of files). This provides an easy form of fault tolerance that, whilst being fairly easy to replicate yourself using other means, will probably never be as easy as ticking a check box. This is again more of an issue for the "average" guy than the PC enthusiast who is at home configuring RAID, although a simple file copy add-in or batch job is my preferred solution. I already run daily automated RoboCopy jobs to copy 'snapshots' of my data drives to another drive, providing both fault tolerance and versioned snapshots that I can restore if required. I have had to dive into my snapshots on several occasions to restore a previous version of a file that had accidentally been deleted or modified. I prefer this solution over RAID, as a disk write to a drive in RAID is duplicated immediately even if it's not what you wanted.
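For anyone wanting to replicate this kind of snapshot job, a minimal sketch of the sort of scheduled command I mean is below; the drive letters, folder names and switch values are illustrative rather than a copy of my actual job:

REM Copy the Data share into a dated snapshot folder (paths are examples only)
robocopy D:\Shares\Data E:\Snapshots\2011-06-01\Data /E /R:2 /W:5 /LOG+:E:\Snapshots\robocopy.log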

So, what's the positive? Well, let's consider why Microsoft are removing it. They have said that it causes conflicts with applications installed on the Small Business Server sister OS, code named 'Aurora'. These software applications don't play nicely with having a logical drive pool. I, like many other WHS enthusiasts, have over time installed numerous applications onto my WHS (e.g. Microsoft Team Foundation Server) and I always do so with caution due to DE. I am careful to ensure that nothing I install utilises the DATA drive, and I often refrain from installing software that I think might conflict. With DE removed this worry is taken care of, which is definitely a positive for me.

Does WHS fit in the Cloud Computing Landscape?

If we look to the future and assume that the Cloud Computing paradigm is here to stay, the bigger question arises of what role WHS would play. I admit to being a Cloud advocate and I do share Ray Ozzie's view of a "New Dawn" where devices (not PCs) connect to continuous services hosted on the internet. In this vision the majority of people only use devices to connect to the internet (smart phones, tablets, TVs etc) and they are continuously connected to the web, where their data is stored, analysed, processed and shared. The concept of having a local home server is almost alien as your storage will all be in the cloud. Backups won't be required as data will be automatically synched, and devices won't need to be imaged for restoration as they will only be simple devices with sophisticated browsers. Sure, PCs will remain for advanced users, but not for the user majority. This vision of the future is not that revolutionary; it's already happening, so fast in fact that the next version of WHS after Vail will need to be positioned within this connected world. People may cry that users will always want their data close by and local, but that's not true: over time they won't even think about it, as evidenced by early cloud services like Hotmail, Exchange Online etc.

This vision of the future relies heavily on a fast internet connection and related infrastructure, which is slowly being rolled out across the developed world, but this weakness perhaps provides an opportunity for the WHSs of the future. The ability to sync to your local "private cloud" and use that as the hub for your home is probably a requirement of the future, and a 'server' device could fill this space. Unfortunately so could other home-based devices, such as the Xboxes, Google TVs and Media Centers of the future, and the single home device is the 'holy grail' of consumer electronics. The battle for the position as sole 'provider' and gateway to the continuous services of the future will be intense, and whilst the current WHS offerings (V1 and Vail) are too weak to survive the battle, maybe, just maybe, their future off-spring will fit that gap perfectly.

Summary:

WHS has, unfortunately, always been a niche product, which is a real shame as it is one of the best products ever to have come out of Redmond and one that deserves more credit. Microsoft have never promoted it and seem instead to be happy to use it as an experiment for newer technologies (like DE). This is obviously a dark period for the WHS product, but the community's reaction to the DE news and the growing popularity of the platform mean that I believe it will survive in the short term.

If I were Microsoft I would look to extract the key features of WHS (i.e. client backup and remote access services) and convert them into add-on applications for Windows. With DE gone there is little point in having a 'Home' SKU of Windows Server. Sell Windows Server 2008 Foundation to OEMs with these WHS feature applications installed for them to put on their consumer devices. This would enable these features to be supported on Windows client OSs in the future too, when it was profitable to do so. I would be happy to run a fully fledged, supported version of Windows Server that comfortably ran all server-based software but on which I could also install Client Backup and Remote Access Services if I required them.

Will I upgrade to Vail? Good question. Currently I'm undecided. I will review it against other products when the time comes (Amahi on Linux, Aurora, Windows Server 2008) but one thing is for sure – the removal of DE will not affect my decision, but the strength of Microsoft's commitment to the product will.

Private Clouds Gaining Momentum

Well, it's been an interesting few weeks for cloud computing, mostly in the "private cloud" space. Microsoft have announced their Windows Azure Appliance, enabling you to buy a Windows Azure cloud solution in a box (well, actually many boxes, as it comprises hundreds of servers), and the OpenStack cloud offering continues to grow in strength, with Rackspace releasing its cloud storage offering under the Apache 2.0 license as part of the OpenStack project.

OpenStack is an initiative to provide open source cloud computing and contains elements from various organisations (Citrix, Dell etc), but the core offerings are Rackspace's storage solution and the cloud compute technology behind NASA's Nebula cloud platform. To quote their web site…

“The goal of OpenStack is to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware. OpenStack Compute is software for automatically creating and managing large groups of virtual private servers. OpenStack Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data.”

It is exciting to see OpenStack grow as more vendors open-source their offerings and integrate them into the OpenStack initiative. It provides an opportunity to run your own open source private cloud that will eventually enable you to consume best-of-breed offerings from various vendors, based on the proliferation of common standards.

Meanwhile Microsoft’s Azure Appliance is described as …

…a turnkey cloud platform that customers can deploy in their own datacentre, across hundreds to thousands of servers. The Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. This hardware will be delivered by a variety of partners.

Whilst this is initially going to appeal to service providers wanting to offer Azure based cloud computing to their customers, it is also another important shift towards private clouds.

These are both examples in my eyes of the industry stepping closer to private clouds becoming a key presence in the enterprise and this will doubtless lead to the integration of public and private clouds. It shows the progression from hype around what cloud might offer, to organisations gaining real tangible benefits from the scalable and flexible cloud computing platforms that are at home inside or outside of the private data centre. These flexible platforms provide real opportunities for enterprises to deploy, run, monitor and scale their applications on elastic commodity infrastructure regardless of whether this infrastructure is housed internally or externally.

The debate on whether 'private clouds' are true cloud computing can continue, and whilst it is true that they don't offer the 'no upfront capital expenditure' and pay-as-you-go model, I personally don't think that excludes them from the cloud computing definition. For enterprises and organisations that are intent on running their own data centres in the future there will still be the drive for efficiencies, as there is now, perhaps more so in order to compete with competitors utilising public cloud offerings. Data centre owners will want to reduce the costs of managing this infrastructure, and will need it to be scalable and fault tolerant. These are the same core objectives of the cloud providers. It makes sense for private clouds to evolve based on the standards, tools and products used by the cloud providers. The ability to easily deploy enterprise applications onto an elastic infrastructure and manage them in a single autonomous way is surely the vision of many a CTO. Sure, the elasticity of the infrastructure is restricted by the physical hardware on site, but the ability to shut down and re-provision an existing application instance based on current load can drive massive cost benefits as it maximises the efficiency of each node. The emergence of standards also provides the option to extend your cloud seamlessly out to the public cloud, utilising excess capacity from public cloud vendors.

The Windows Azure ‘Appliance’ is actually hundreds of servers and there is no denying the fact that cloud computing is currently solely for the big boys who can afford to purchase hundreds or thousands of servers, but it won’t always be that way. Just as with previous computing paradigms the early adopters will pave the way but as standards evolve and more open source offerings such as OpenStack become available more and more opportunities will evolve for smaller more fragmented private and public clouds to flourish. For those enterprises that don’t want to solely use the cloud offerings and need to maintain a small selection of private servers the future may see private clouds consisting of only 5 to 10 servers that connect to the public cloud platforms for extra capacity or for hosted services. The ability to manage those servers as one collective platform offers efficiency benefits capable of driving down the cost of computing.

Whatever the future brings I think that there is a place for private clouds. If public cloud offerings prove to be successful and grow in importance to the industry then private clouds will no doubt grow too, to complement and integrate with those public offerings. Alternatively, if the public cloud fails to deliver then I would expect the technologies involved to still make their way into the private data centre, as companies like Microsoft move to capitalise on their assets by integrating them into their enterprise product offerings. Either way, as long as the emergence of standards continues, as does the need for some enterprises to manage their systems on site, the future of private cloud computing platforms seems bright. Only time will tell.

Windows Azure Experimentation Is Currently Too Expensive

I'm a fan of Windows Azure and have enjoyed using it during its CTP phase. Once the CTP was open for registration, like many I jumped at the chance to play with this new paradigm in software development. During this CTP phase I have written some small private web applications that really do nothing more than experiment with the Azure platform. These have provided me with valuable experience and an insight into building a 'real world' application on Azure. I have also used this knowledge to demonstrate Azure to my colleagues and to promote the platform within my enterprise. All this has been possible because the CTP version is completely free to use; however, this period of experimentation will sadly soon come to an end.

As Windows Azure moves from a free-to-use CTP to a commercial product it is right that users have to pay for the privilege of using the platform, but it seems that many developers are going to have a hard choice to make in the new year. Do you forget about developing on Azure, or do you fork out $86/£55 a month for the privilege of experimentation? Those with an MSDN Premium subscription will have some more time to enjoy it for free, but in 8 months the same decision will be required.

Windows Azure pricing details can be found here, but if we assume that the transaction and storage costs are minimal for a developer's experimental site and just take the basic cost of running one instance, it is $0.12 per hour (sounds cheap); however, once we consider that there are 24 hours in a day, 7 days a week, the cost for a month is around $86 (£53). That's not a small amount for the average developer to find. Whilst I am pleased by the free hours provided for MSDN subscribers, this is a limited offer and it's really just delaying the problem for those developers. That is unless Microsoft can come up with a basic, cheaper proposition similar to the shared web hosting model. If a developer wants to experiment with web technologies, for example, they can host a web site (for public or private use) with a 3rd party web-hosting company. These hosting companies provide a selection of options based on your requirements. Whilst premium dedicated server hosting is available, developers can get their fix from the cheap and cheerful shared server hosting, which provides most of the features on a smaller scale for around $10 (£6) per month. I realise that there is more to Azure than hosting a web site, but the point is that you can only really experience a product when you are frequently interacting with it to build something real, and therefore it has to be accessible.
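To spell the arithmetic out: $0.12 per hour × 24 hours × ~30 days ≈ $86 for a single instance left running for a whole month.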

Now I'm not saying Azure is uncompetitive compared to its rivals (it actually competes favourably) or that you don't get your money's worth. For a new business starting up with some expected revenue, Azure provides huge advantages and the ability to scale up and down is ideal. It's getting the developer community interested and informed that is the problem. Microsoft needs developers to buy in to this seismic shift in computing, and by making the barrier to entry so high it is making it difficult to spread the love for this excellent product. I believe that it is in Microsoft's interest to provide some way for grass-roots developers to buy into this product and gain exposure to it.

I hope that in the new year we will see a new low-cost (even advertisement-funded) offering for Azure aimed at getting developers tuned in and coding on this great platform without making a large financial commitment. I'm not alone in hoping for this; check out the requested feature list for Azure (the most popular request by far at the time of writing is just this: a low cost option).

Windows Azure : An Introduction

At last year's PDC Microsoft released the details of its new venture into the next IT paradigm that is arguably set to change the way that applications are developed, hosted, managed and funded – Cloud Computing. It is easy to dismiss Cloud Computing as a fad, or simply as a move back towards the mainframe days of a central processing model, but regardless of these debates there is no doubt that Microsoft, Amazon and Google are pouring large amounts of funding into developing Cloud Computing platforms. I'm not going to debate the subject of Cloud Computing here, although I will state that personally I feel it will impact all that we do in IT in the future, perhaps not in its current guise, but this latest move from Microsoft can be seen as one more step on that journey.

What is Windows Azure?

Well it’s not an image of Windows Server hosted somewhere on the internet for you to remote desktop into and install what you like on it. To quote Microsoft it is (in Marketing speak) a “Platform for writing highly scalable and available applications”.

It’s not currently possible nor advisable to just convert your current application to run on Azure, instead Azure provides a platform on which you can build a new application that is highly scalable and available. Azure runs in Microsoft Data Centres (currently in the US but planned to be located throughout the world) and your application runs within individual instances of virtual machines on that Azure fabric.

The pricing policy is also going to be usage-based, which allows you to start small (with a few compute instances) and then increase the number of instances (and therefore computing power) as your application grows and needs to be scaled for an increasing number of users. Imagine you're writing the new "Facebook". You could buy a handful of expensive servers and then buy more if/when the application's user base takes off. Then you need to buy more and more until you've got a whole data centre of servers (all consuming masses of power) and a team of IT administrators running them. Then your user base levels out, and possibly drops down to a more stable level, leaving you with excess capacity you've already paid for. Worse still, if your application never takes off then that initial investment in the first few servers will leave you seriously out of pocket. In contrast, cloud services like Windows Azure are paid for by usage. The cost per month will be related to your current storage usage and your compute instance usage. If you need more resources to scale out your application then you just pay more, which allows you to adjust your costs based on demand and removes the need for large upfront capital expenditure.

These cost benefits are ideal for Web 2.0 start-ups, but they can also benefit large enterprises. The ability to develop an application within a low-cost framework that also handles hosting and usage monitoring allows any development team to try out new ideas and move dynamically with the business. Cloud Computing could be a tool to enable an enterprise to keep up with fast-moving business opportunities at a low initial outlay and a low Total Cost of Ownership. An alternative model is one where a platform like Windows Azure is deployed locally in the enterprise data centre. The enterprise would then benefit from an efficient processing model for its data centre, forcing all new applications to be built to run on that platform. This would provide most of the benefits of Cloud Computing but with fewer issues around security, as data would not be leaving the enterprise. Microsoft have so far only unofficially acknowledged this model and are not promoting it as an option with Windows Azure, although it will be interesting to see if they do promote this idea in the future.

Azure Services Platform:

This is the stack that makes up Microsoft’s current Cloud Computing offering:

[Diagram: the Azure Services Platform stack]

As you can see there are several offerings that sit on top of Windows Azure, so let's quickly look at these first, although we'll not go into detail on them:

Microsoft .NET Services: Offers distributed infrastructure services to support both cloud-based and locally based applications. This offering includes:

– Access Control: Provides a claims-based implementation of identity federation and transformation in the cloud.

– Service Bus: Allows you to expose your services (in the cloud or on premises) on the internet via a URI, without having to open up incoming ports inside your firewall.

– Workflow: Running Windows Workflow based workflows in the Cloud.

Microsoft SQL Services: This provides “SQL like” data services in the cloud based on SQL Server. This is effectively a premium storage service over the standard one provided by Azure Storage Services.

Live Services: There is a wealth of data locked within Microsoft Live applications (e.g. Live Mail) that is difficult to interact with. Live Services allows your applications to interact with this data. Building on Live Mesh it also enables synchronizing this data across a user’s numerous devices.

Windows Azure:

This is the base environment where your application will sit. It is not an operating system, but that is a useful way to imagine it. In the same way that an OS provides an abstraction over the system's hardware and provides APIs to communicate with it, Windows Azure is an OS in the cloud: it sits on the virtual hardware and provides an environment (a fabric) for running your applications.

The deployment and management of your application instances is transparent to the developer, but it's useful to understand how Azure works under the covers. On deploying your application to the cloud it is added to a virtual hard disk, which is then attached to a virtual machine instance running on a Windows Server 2008 (Server Core) host in a Microsoft data centre. Interestingly, a multicast message is sent to all available hosts, allowing multiple instances to be installed concurrently. The virtual machine running your application instance will share its host machine with other applications. Your instance may move between host machines as required to maintain availability and to allow server maintenance. Microsoft's deployment strategy takes into account both Fault and Update Domains, ensuring that your instances are not all deployed behind a single point of failure (e.g. on a single power feed). For the current CTP release the specification of each VM is: 64-bit Windows Server 2008, 1.5–1.7 GHz CPU, 1.7 GB RAM. It is expected that the commercial release will allow for a choice of specifications. It's worth noting that each Azure instance currently only sees one CPU, so multi-threading should be used within your code for non-CPU-intensive tasks.

Your application instances can perform one of two roles, Web or Worker:

A “Web Role” runs within IIS 7 and therefore effectively runs as an ASP.NET web application. This means that most types of application that can be run under IIS can be run in a Web Role, for example ASP.NET web sites and WCF service applications. A Web Role allows inbound connections over HTTP and is used where inbound connections from the outside world are required.

A “Worker Role” is similar to a Windows Service, except that it runs in the cloud. It cannot accept inbound communications, but it can make outbound communications. It is a .NET class library that has a Start() method which is run at start-up, and it's up to your code to keep itself alive (using sleeps and loops). Communication between roles/instances is via 'Queues' (more on these below). These instances are ideal for background processing of data, which allows a faster response from your Web Roles if they off-load the intensive work onto the Worker Roles.

It is expected that more roles will emerge with the commercial release of the Azure platform.

Windows Azure Storage Services:

Windows Azure currently provides four forms of storage: ‘Local’, ‘Queues’, ‘Tables’ and ‘Blobs’. It is important to note that SQL Data Services is not part of the Azure Storage Services but a separate add-on service that provides a more SQL-like data framework. Interestingly, all of these storage services are independent and fully accessible over HTTP(S) (via a RESTful interface) from both within and outside of the cloud. This means that your local Windows client application could store its data in the cloud even though the application is not hosted in the cloud. Alternatively, you could save the data from your cloud application in Azure Storage but then access it from your local on-premises application. All data writes to the storage services (this doesn’t include Local storage) are triplicated across multiple servers for redundancy.

Local Storage:

This is not part of Windows Azure Storage Services but should be included in the storage discussion for completeness and to avoid confusion. Each Azure instance runs on a Virtual Hard Disk (as previously discussed) and this provides around 250GB of local transient disk space for temporary storage. As this data is transient and local only to that one instance of your application you can’t use this space for true data persistence, but it is useful where you need to temporarily store data to disk during your processing.

Queues:

Queues primarily allow communication between instances, and in particular allow Worker Roles to be passed work from Web Roles. For example, your Web Role may accept incoming data which it then persists to a queue for a Worker Role to pick up; the Worker Role constantly polls the queue for work to process. Queues are FIFO based, and because they are persisted to disk (and triplicated) they are very durable. This gives system designers a powerful feature that can be used to provide transaction-style durability in Azure applications where it is not supported at the data layer.

A read message remains in the queue but is marked as hidden, preventing it from being picked up by another instance; it is up to the application to explicitly delete the message once it has been actioned. If it is not deleted then it becomes visible again on the queue after a set period (less than a minute). This means that once the data is on the queue the application can fail, and once it is running again it can pick up the same message and continue, because the message was never deleted. Designing queues into your application therefore allows you to build durability into the architecture. A rough sketch of this poll/process/delete pattern follows below.
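
The following is a minimal sketch of that pattern, written as it might appear inside a Worker Role's Start() method. It assumes the queue wrapper types (StorageAccountInfo, QueueStorage, Message) from the SDK's sample ‘Storage Client’ project, so the names may differ between releases, and Process() is a hypothetical method standing in for your own logic.

    // Queue types below come from the SDK's sample 'Storage Client' project;
    // Thread lives in System.Threading.
    var account = StorageAccountInfo.GetDefaultQueueStorageAccountFromConfiguration();
    var queue = QueueStorage.Create(account).GetQueue("workitems");

    while (true)
    {
        Message msg = queue.GetMessage();   // message becomes hidden, not removed
        if (msg != null)
        {
            Process(msg.ContentAsString()); // hypothetical: your own processing logic
            queue.DeleteMessage(msg);       // only now is the message really gone
        }
        else
        {
            Thread.Sleep(5000);             // queue empty, back off before polling again
        }
    }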

BLOB Storage:

Blob storage provides a simple method of storing and retrieving BLOBs (Binary Large Objects). This is particularly useful for media content but can also be useful for persisting serialised objects. Blob storage is organised as a hierarchy, in a similar way to a file system: you define an ‘account’, which contains ‘Containers’, which hold the Blobs. Blobs can in turn be split into ‘Blocks’, which allows large Blobs to be handled. This relationship is shown in the diagram below:

BlobStorage

Remember that the Storage Services are separate from your Azure application and can be accessed independently. The hierarchical relationship described above is key to the RESTful URL used to retrieve this data. The URL looks like this:

http://<Account>.blob.core.windows.net/<Container>/<BlobName>

This provides a user-friendly URL that is easy to navigate, and allows the designer to use the hierarchy to their advantage to keep the data structures within the application as simple as possible.
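
As a quick illustration, if a container has been made publicly readable then a blob can be fetched with nothing more than an HTTP GET using the standard .Net classes; no Azure-specific client library is required. The account, container and blob names below are made up for illustration.

    using System.IO;
    using System.Net;

    // Download a publicly readable blob straight over its RESTful URL.
    using (var client = new WebClient())
    {
        byte[] data = client.DownloadData(
            "http://myaccount.blob.core.windows.net/images/logo.png");
        File.WriteAllBytes("logo.png", data);
    }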

Tables:

When we think of storage we tend to think of relational databases and table-based schemas. The Table storage service provides a mechanism to store data in hierarchical tables, but these are NOT relational tables. This seems to be a sticking point for many people who can’t see the benefit of a table structure that isn’t relational. RDBMS systems have been around for so long that the relational model is taken for granted as the best fit for all situations. The truth, of course, is that it depends on what sort of system you are trying to build. The view Microsoft have taken is that Azure is a “platform for highly scalable and available applications” where the RDBMS model doesn’t always fit. Instead of a centralised, normalised data structure that minimises disk space and duplication and provides complex query services, why not use the power of distribution and the cheap cost of disk storage to provide a fast, scalable and reliable data store that effectively duplicates data?

Table Storage does not provide referential integrity, joins, group-by, transactions or complex queries. If you determine that you really do need a relational model for your application then you will need to consider the SQL Data Services offering (mentioned briefly at the start of this article) and pay the premium. Table Storage, however, does provide cheap, scalable and durable data management with no fixed data schema. The idea is that you de-normalise your data and store it as the application needs it, using multiple inserts and only simple queries. The data is not held as physical tables but merely as ‘entities’ with properties (like fields or columns). Each entity has a partition key and a row key which together provide uniqueness. Currently only the row key is indexed, so the data should be partitioned for scalability. The CTP version requires some creative use of row keys and partitions to produce the desired effect, but it can truly scale. The sketch below shows what a de-normalised entity might look like.
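
For instance, an entity might look something like the class below. The class and property names (other than PartitionKey and RowKey) are hypothetical, and depending on the SDK release you may need to derive from a base entity class in the sample ‘Storage Client’ project rather than declare a plain class.

    using System;

    // A de-normalised entity for Table Storage. There is no fixed schema; the only
    // requirements are PartitionKey and RowKey, which together must be unique.
    public class CustomerOrder
    {
        public string PartitionKey { get; set; } // e.g. the customer id, grouping related entities
        public string RowKey { get; set; }       // e.g. the order id, unique within the partition
        public DateTime OrderDate { get; set; }
        public string ProductName { get; set; }  // duplicated here rather than joined to a Product table
        public double Total { get; set; }
    }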

Development Lifecycle/Tools:

Azure applications for the CTP need to be written in managed code (.Net), although the commercial version is expected to support non-managed code. After installing the Windows Azure SDK you get locally installed mock versions of the Windows Azure fabric (to run your instances in) and of the Azure Storage Services. These allow you to run and debug your cloud application on your development machine without an Azure account or internet access. Once you have completed your application you ‘publish’ it from Visual Studio. This runs it through CSPack.exe, which gathers the assemblies and related config and compresses them into a package. The developer then logs into the online “Azure Service Developer Portal” and uploads the package to a staging area in the cloud. This staging area can be publicly accessed but is separate from your live application instances. This allows you to test the application privately in the cloud environment and then promote it to live once testing is complete. The promotion process ensures that there is no downtime of your cloud application during the switch to the new version.

Portal1

The Portal provides key information on your Windows Azure accounts and allows you to extract the logs for your applications and to view detailed reports on various metrics such as Network Usage, Storage, Virtual Machine hours etc.

Developing For Windows Azure:

In order to develop an Azure application you must install the Windows Azure SDK and the Windows Azure Tools For Visual Studio. One point to note is that the SDK utilises IIS7 and therefore requires Windows Vista on the developer machine. You also need SQL Server 2005 Express, which the local Storage services use. If you want to actually host your finished application in the cloud then you need to request a token from the Microsoft Azure web site. These are free for the CTP version, but there is a waiting list so register early.

You can find the new Azure project templates in the Visual Studio ‘New Project’ dialog, which allow you to create a basic Azure-configured application as a starting point. The result is a solution containing some Azure-specific items plus an ASP.net project for the Web Role or a Class Library project for the Worker Role (depending on which options you picked).

The Azure API provides you with access to the RoleManager class, through which you can use the Logging and Configuration utility classes. As you cannot debug your application once it's in the cloud, it is important to add instrumentation to monitor the progress of your application and to report exceptions. This log output can be viewed in the local Development Fabric during development, but once in the cloud it is automatically written to the storage services, from where you can download it. Logging is mostly a matter of calling RoleManager.WriteToLog(), as in the example below.
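
For example (the log names and the exact signature here are as I recall from the CTP SDK, so treat this as a sketch rather than a definitive reference):

    using Microsoft.ServiceHosting.ServiceRuntime;

    // Write entries to the Azure log; the first argument is the name of the
    // log to write to (e.g. "Information", "Error").
    RoleManager.WriteToLog("Information", "Order queue processed successfully");

    // Assuming this line sits inside a catch (Exception ex) block:
    RoleManager.WriteToLog("Error", "Failed to contact payment service: " + ex.Message);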

Two key files in the solution are the service definition and service configuration files (ServiceDefinition.csdef and ServiceConfiguration.cscfg), which together define your Azure application and the services it consumes. They inform the Azure fabric how to handle your application, for example how many instances of the Web Role should be deployed and which Storage Accounts to use. By changing the instance count from 1 to 5 you suddenly have 5 instances of your application running, with all the scalability that provides (see the sketch below). Whilst you can still use web.config for configuration values, these should be limited to values that don’t need to change at runtime, as changes to web.config require you to re-package and re-deploy your application. For dynamic, runtime configuration use the service configuration file instead.
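
A rough sketch of the configuration file is shown below; the XML namespace and exact element names are from memory of the CTP SDK and may differ slightly in your version, but the important part is the instance count.

    <?xml version="1.0"?>
    <ServiceConfiguration serviceName="MyCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="WebRole">
        <!-- Change the count from 1 to 5 and the fabric runs five instances -->
        <Instances count="5" />
        <ConfigurationSettings>
          <!-- Runtime settings that can change without re-deploying the package -->
          <Setting name="AccountName" value="myaccount" />
        </ConfigurationSettings>
      </Role>
    </ServiceConfiguration>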

For consuming the Storage Services from your cloud application it is recommended that you use the ‘Storage Client’ project provided in the Samples section of the SDK. This project provides an abstraction over the REST API. Using it gets you started quickly on a new project, and since the REST API is expected to change as the platform matures, the ‘Storage Client’ project will shield you from those changes. There’s no point learning something that is going to change.

This is just a very quick overview; for more information I recommend you download the SDK and then run through the hands-on labs in the Azure Services Training Kit.

Conclusion:

Windows Azure is not Microsoft's definitive answer to what a Cloud Computing platform should look like. It is a very early CTP release of their future platform and merely a step on the way to the next computing paradigm, whatever that may eventually look like. It is a big step though, as the functionality provided is enough to get a very capable system up and running and hosted entirely in the cloud. It does this by building on the development tools we already know (Visual Studio and managed code), but in some areas it also requires a shift in thinking away from the more traditional approaches to software design that we have been using for years.

References:

Microsoft Windows Azure Site:
http://www.microsoft.com/azure/default.mspx

Windows Azure SDK:
http://www.microsoft.com/downloads/details.aspx?familyid=B44C10E8-425C-417F-AF10-3D2839A5A362&displaylang=en

Windows Azure Services Training Kit:
http://www.microsoft.com/downloads/details.aspx?FamilyID=413e88f8-5966-4a83-b309-53b7b77edf78&displaylang=en

Windows Azure Tools For Visual Studio:
http://www.microsoft.com/downloads/details.aspx?familyid=59E8FC0C-C399-4AB7-8A93-882D8E74B67A&displaylang=en

Azure Services Platform Developer Center:
http://msdn.microsoft.com/en-us/azure/default.aspx

Deploying a Service on Windows Azure:
http://msdn.microsoft.com/en-us/library/dd203057.aspx