Private Clouds Gaining Momentum

Well, it’s been an interesting few weeks for cloud computing, mostly in the “private cloud” space. Microsoft has announced its Windows Azure Appliance, enabling you to buy a Windows Azure cloud solution in a box (well, actually many boxes, as it comprises hundreds of servers), and the OpenStack cloud offering continues to grow in strength, with Rackspace releasing its cloud storage offering under the Apache 2.0 licence as part of the OpenStack project.

OpenStack is an initiative to provide open source cloud computing and contains many elements from various organisations (Citrix, Dell, etc.), but the core offerings are Rackspace’s storage solution and the cloud compute technology behind NASA’s Nebula Cloud platform. To quote their web site…

“The goal of OpenStack is to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware. OpenStack Compute is software for automatically creating and managing large groups of virtual private servers. OpenStack Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data.”

It is exciting to see OpenStack grow as more vendors open source their offerings and integrate them into the OpenStack initiative. It provides an opportunity to run your own open source private cloud, one that should eventually enable you to consume best-of-breed offerings from various vendors as common standards proliferate.

Meanwhile Microsoft’s Azure Appliance is described as …

…a turnkey cloud platform that customers can deploy in their own datacentre, across hundreds to thousands of servers. The Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. This hardware will be delivered by a variety of partners.

Whilst this is initially going to appeal to service providers wanting to offer Azure based cloud computing to their customers, it is also another important shift towards private clouds.

Both of these, in my eyes, are examples of the industry stepping closer to private clouds becoming a key presence in the enterprise, and this will doubtless lead to the integration of public and private clouds. It shows the progression from hype around what cloud might offer to organisations gaining real, tangible benefits from scalable and flexible cloud computing platforms that are equally at home inside or outside the private data centre. These flexible platforms provide real opportunities for enterprises to deploy, run, monitor and scale their applications on elastic commodity infrastructure, regardless of whether that infrastructure is housed internally or externally.

The debate on whether ‘private clouds’ are true cloud computing can continue, and whilst it is true that they don’t offer the no-upfront-capital expenditure and pay-as-you-go model, I personally don’t think that excludes them from the cloud computing definition. For enterprises and organisations intent on running their own data centres in the future there will still be the drive for efficiencies that exists now, perhaps more so, to compete with competitors utilising public cloud offerings. Data centre owners will want to reduce the costs of managing this infrastructure, and will need it to be scalable and fault tolerant. These are the same core objectives of the cloud providers, so it makes sense for private clouds to evolve based on the standards, tools and products used by the cloud providers. The ability to easily deploy enterprise applications onto an elastic infrastructure and manage them in a single autonomous way is surely the vision for many a CTO. Sure, the elasticity of the infrastructure is restricted by the physical hardware on site, but the ability to shut down and re-provision an existing application instance based on current load can drive massive cost benefits as it maximises the efficiency of each node. The emergence of standards also provides the option to extend your cloud seamlessly out to the public cloud, utilising excess capacity from public cloud vendors.

The Windows Azure ‘Appliance’ is actually hundreds of servers, and there is no denying that cloud computing is currently solely for the big boys who can afford to purchase hundreds or thousands of servers, but it won’t always be that way. Just as with previous computing paradigms, the early adopters will pave the way, but as standards evolve and more open source offerings such as OpenStack become available, more and more opportunities will arise for smaller, more fragmented private and public clouds to flourish. For those enterprises that don’t want to rely solely on cloud offerings and need to maintain a small selection of private servers, the future may see private clouds consisting of only 5 to 10 servers that connect to the public cloud platforms for extra capacity or for hosted services. The ability to manage those servers as one collective platform offers efficiency benefits capable of driving down the cost of computing.

Whatever the future brings, I think there is a place for private clouds. If public cloud offerings prove successful and grow in importance to the industry, then private clouds will no doubt grow too, to complement and integrate with those public offerings. Alternatively, if the public cloud fails to deliver, then I would expect the technologies involved to still make their way into the private data centre as companies like Microsoft move to capitalise on their assets by integrating them into their enterprise product offerings. Either way, as long as standards continue to emerge and some enterprises still need to manage their systems on site, the future of private cloud computing platforms seems bright. Only time will tell.

Thin Client vs Thick Client Architecture

Earlier in the year I did a presentation on the pros and cons of thin vs thick client architecture, purely from the perspective of recommending an approach for a new UI. This is a long-running debate and one that can become very political, as most people have a preference. Usually developers/architects prefer either to go with the technologies they feel comfortable with, or to try the latest fashionable technology. I’ve noted down my comments for what they’re worth. Please note that this is not a debate about hardware technologies, which is a separate (although related) issue, but purely a debate about client application design.

One way to think of a ‘thin client’ is to imagine it as unintelligent. Little data processing is done on the client; instead, processing tasks are delegated to the supporting server. The client’s primary responsibility is merely to display data and collect input from the user for posting back to the server. Some processing can still occur on the client to validate the data being accepted, an example being using JavaScript to validate a web form before it is submitted. When we think of thin clients we traditionally think of browser based applications, as this is the most common form, but an application doesn’t have to be browser based to be considered thin. Extending this idea further, some browser based applications are actually quite thick, with a heavy reliance on plug-ins and frameworks. Some contain so much client side scripting that they are arguably thick clients that happen to live inside a browser.
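To make that concrete, here is a minimal sketch of client-side validation. The form fields and rules are purely illustrative, and a real thin client would still re-validate everything on the server.

```javascript
// A sketch of "thin" client-side validation: the browser only checks
// that the input looks sane before posting it back; the server remains
// responsible for the real processing. Field names are illustrative.
function validateOrderForm(fields) {
  var errors = [];
  if (!fields.customerName || fields.customerName.trim() === "") {
    errors.push("Customer name is required");
  }
  if (!/^\d+$/.test(String(fields.quantity)) || Number(fields.quantity) < 1) {
    errors.push("Quantity must be a positive whole number");
  }
  return errors; // an empty array means the form may be submitted
}

// In a real page this would be wired to the form's onsubmit handler,
// cancelling the postback whenever any errors are returned.
```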

Non-thin clients go by various names, but we’ll use ‘thick clients’. In this model a high degree of data processing is done on the client. Of course the model can vary from no server involvement to a fairly large amount of server processing, but the idea is that the client application is more “intelligent” and capable of processing more data locally. Servers are still traditionally used for data queries and persistence. Traditionally we see thick clients as regular Windows applications installed on the local machine.

The key separation between the thin and thick styles is in where the data processing occurs. On a thin client it occurs on the central server; on a thick client a high degree of processing occurs locally. The differences between thick and thin clients are reviewed below. These are generalised differences, though, as each system is unique and may not exhibit all the characteristics of its peers.

· Audience Reach: It’s generally easier to reach larger numbers of users, and users in more disparate locations, via a thin client architecture. Certainly a web based thin client enforces a lower barrier to adoption, given that the minimum requirement is to point the browser at a given URL. Conversely, distributing your thick client software to users’ PCs can be problematic and complex, even in corporate environments, although technologies such as ‘ClickOnce’ can take away some of this deployment pain.

· Central Management: The thin client’s server (e.g. the web server for web clients) plays a key part in implementing the business logic for the application, and this can be centrally located and managed. Changes can easily be rolled out to all users by changing the server side code, making deployments and updates simpler. Thick clients, however, don’t have this reliance on the server for basic navigation and processing logic. This means a thick client lacks the benefits of a central server but gains from the reduced infrastructure costs associated with maintaining a server estate.

· Versioning: There is generally only one version of a thin client application, as it is served by your central server, ensuring all users share one version. For thick clients it is important to maintain compatibility with previous versions, and ideally to make them self updating. This requirement to support previous versions can limit your evolution of new functionality. However, the sheer number of pages and elements that make up a thin browser application can also make version control (within the enterprise) more problematic.

· Client Environment: Browser applications are largely (but not completely) less reliant on the client’s local environment. Put (very) simply, as long as the user’s PC has a browser installed then the application can run happily. Whilst browser and OS versions still cause issues, these are minimal compared to a thick client installation, where dependencies and prerequisites can cause grief. .Net’s XCOPY deployment model has made huge steps to overcome ‘DLL Hell’ but it still needs careful planning by thick client developers.

· Thin device support (PDA, Mobile): A thin client can arguably reach more devices due to its minimal requirements. Many devices now have browsers, although screen sizes and other differences still add complexity.

· Automated UI test tools: Testing a browser client can be automated using tools that record and replay HTTP requests/responses (particularly useful for performance testing). Testing a thick client UI through automation is harder, although not impossible; Microsoft’s UI Automation Framework allows developers to interact with a Windows application and perform scripted testing. It is possible to test both thick and thin client UIs using standard automated unit testing tools (NUnit, Team System etc.) by abstracting the UI code from the business logic and underlying data using the recognised Model View Controller (MVC) or Model View Presenter (MVP) patterns.

· Dynamic UI building: It is easier to dynamically build a UI in a browser based thin client, which can be a powerful feature and one used by many flexible client applications.

· UI Richness: This is again a debatable point, but thin clients are usually less rich in functionality. Thick clients embrace the full Windows Forms functionality and maturity, and this user experience can be hard to replicate on a thin client. Application responsiveness is dependent on network traffic and the traditional HTTP model, and functional richness can be restricted by the confines of HTML and HTTP. The requirement to ‘postback’ to the server slows the user experience. Newer methods of posting data back to the server (e.g. AJAX) are leading to more responsive web applications, but it is debatable whether the extra effort required to implement these techniques is a worthwhile investment where a thick client is an option.

· Network bandwidth: The transmission of data to and from the thin client causes higher network bandwidth usage compared to thicker clients. Constant network availability is also required to run the thin client application: as the client is unable to function on its own without its server, there is no scope for offline working. There are always exceptions of course, and Google Gears is making offline browser applications possible.

· Client side CPU: Thick clients can make use of the powerful PC hardware sitting on a user’s desk, which can make for a more efficient processing model and can help reduce infrastructure costs; shifting processing to the client takes the load off your enterprise’s server infrastructure.

· Developer Skills: Developing a thin (browser based) client often forces development teams to adopt multiple technologies and mindsets, and many developers can get overloaded trying to keep up to date with numerous technology streams. Whilst, for an averagely competent developer, using multiple technologies is feasible, mastering them all is difficult. Many technology vendors attempt to bridge this gap in the pursuit of productivity, but this itself causes problems when the intricacies of web based development are neglected (e.g. managing state). With a thick client approach there is a single mindset and development approach.

· Plug-ins & 3rd Party APIs: It is not as easy to use plug-ins and third party APIs in a browser client as in a thick Windows Forms application.
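The MVC/MVP separation mentioned under automated testing above can be sketched simply: the presenter holds the UI logic and talks to the view through a narrow interface, so a unit test can substitute a fake view and service without any real UI present. All the names here are illustrative.

```javascript
// Presenter: holds UI logic, knows nothing about a concrete UI toolkit.
function CustomerPresenter(view, customerService) {
  this.view = view;
  this.service = customerService;
}

CustomerPresenter.prototype.load = function (id) {
  var customer = this.service.getCustomer(id);
  if (customer) {
    this.view.showName(customer.name);
  } else {
    this.view.showError("Customer not found");
  }
};

// In a test, both collaborators are simple fakes:
var shown = [];
var fakeView = {
  showName: function (n) { shown.push(n); },
  showError: function (e) { shown.push("ERR:" + e); }
};
var fakeService = {
  getCustomer: function (id) {
    return id === 1 ? { name: "Acme Ltd" } : null;
  }
};
new CustomerPresenter(fakeView, fakeService).load(1); // shown: ["Acme Ltd"]
```

The same presenter could sit behind a Windows Forms view or a browser view; only the thin view implementations differ.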

‘Smart Client’ – A Compromise?

A thinner type of thick client is the ‘Smart Client’. This offers the advantages of a thick client application combined with some of the benefits of a thin client, so it goes some way to bridging the gap. Definitions of what a smart client is vary, but there is a set of properties that smart clients share that make them ‘smart’: use of both local and remote resources (supporting offline processing) and easy, self updating, intelligent deployment. In the Microsoft space a ‘Smart Client’ is essentially a Windows application that runs locally on the user’s machine. It uses local resources and peripherals whilst offering the traditional rich Windows UI, but displays the properties mentioned above, i.e. an easy deployment model and connectivity to services on the local network or the Internet.

By using a centralised installation/update mechanism via ‘ClickOnce’, .Net applications can be deployed and updated with (relative) ease, thanks largely to .Net’s XCOPY deployment model. Local machine resources can be utilised to perform offline processing, and the absence of a thin client’s complete reliance on the network means that smart clients can go offline. The Microsoft Sync Framework, together with the excellent SQL Server Compact 3.5 database, makes it possible to build powerful applications that can cope with being ‘occasionally connected’. The capacity of smart clients to utilise network services makes them ideal clients within a SOA landscape. Thick client server applications that traditionally processed locally and persisted data in a remote database are slowly giving way to smart clients that consume web service assets within the SOA enterprise to process business logic and persist their data.
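The ‘occasionally connected’ idea can be sketched in a few lines: the client writes to a local outbound queue and flushes it to the remote service whenever a connection is available. This is only an illustrative in-memory version; a real smart client would persist the queue locally (e.g. in a database such as SQL Server Compact) and the collaborators here are invented for the example.

```javascript
// Illustrative in-memory outbound queue for an occasionally connected
// client. remoteSend and isOnline are supplied by the host application
// (e.g. a web service proxy and a connectivity check).
function OutboundQueue(remoteSend, isOnline) {
  this.pending = [];            // records awaiting transmission
  this.remoteSend = remoteSend;
  this.isOnline = isOnline;
}

// Save locally, then push to the server if we happen to be online.
OutboundQueue.prototype.save = function (record) {
  this.pending.push(record);
  this.flush();
};

// Drain the queue while connectivity lasts; leftovers wait for the
// next flush (e.g. triggered when the network comes back).
OutboundQueue.prototype.flush = function () {
  while (this.isOnline() && this.pending.length > 0) {
    this.remoteSend(this.pending.shift());
  }
};
```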

Merging the models: The Future is Fatter Thin Clients?

So where is all this heading? Will the future be ‘thick’, ‘thin’ or ‘smart’? Newer technology released year on year is constantly bridging the gap, making thin clients smarter and thick clients leaner. Technologies like AJAX and Silverlight are increasing the richness of browser based applications, removing the clunkiness of thin clients. The .Net Framework Client Profile provides a more streamlined version of .Net for clients to download, potentially making smart client deployments slicker. Advances in web technologies are making it easier to consume remote services/data from the browser, allowing the browser to host smart clients that interact with enterprise services or those in the ‘cloud’. With the growth of cloud computing and the abundance of rich services available for consumption, the future could consist of pure mash-up clients. Imagine business users simply mashing together the feeds and services they need, creating their own client applications that work the way they want them to, much as they already pull the data they care about into personalised web pages (iGoogle or MyYahoo).

It seems the future won’t be thin or thick; instead the middle ground will rule and the argument will become pointless. The importance of the Internet makes browser based applications appear here to stay, but the availability of a rich network infrastructure to deploy smart clients and provide connectivity to a service enabled world will make smart clients a powerful choice (especially for corporate LOB applications). The key point, though, is that there are increasingly few differences between the choices. Thin clients are getting richer and more powerful, whilst thick clients are becoming more flexible and dynamic. This may lead to a point in time when the decision as to which technology to use for a client application is based on the system’s requirements and not on architects being hung up on the political thick/thin debate (imagine that!).

Design is the Key

The success of any system is down to the attention paid to the design process. Thick clients are often credited with being more usable, but really UI usability is more a product of good UI design than of the ‘thickness’ of the application. There are as many unusable thin client applications as thick ones. Engaging your users and valuing their feedback is vital to ensuring that the UI is productive.

The factors that should influence your decision on client architecture are: the number and location of users, the processing requirements, the tangible importance of a rich UI, the frequency of updates, whether offline capability is required and, not least, your current architectural landscape.

As for your code, it’s vital with either model to separate concerns. Keep the UI and any business logic apart and communicate only through clear interfaces. A good UI should be seen as a thin wrapper that is capable of being replaced without significant changes to the system. Consider using the following:

· Model View Controller/Presenter (for all models) patterns.

· Microsoft User Interface Process Application Block.

· The Separated Interface Pattern

· Microsoft Smart Client Software Factory for building composite Smart Clients

A user interface should only deal with collection and display of data serviced by business/service layers, and these layers may exist locally (thick client), remotely (thin client) or both (smart client). It is important to clearly define and govern what processing your application is to perform and where it is to perform it. Enforce message based communication between layers and fully utilise the SOA landscape if you have one.
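As a small illustration of that separation (all the names here are invented for the example), the UI function below depends only on a narrow service contract; whether the implementation runs in-process on a thick client or calls a remote web service is invisible to it:

```javascript
// A local, in-process implementation of an illustrative quote service.
// A remote implementation would expose the same getQuote(symbol) shape
// but call a web service; the UI cannot tell the difference.
function createLocalQuoteService() {
  var prices = { ACME: 101.5 }; // illustrative data
  return {
    getQuote: function (symbol) {
      return {
        symbol: symbol,
        price: prices.hasOwnProperty(symbol) ? prices[symbol] : null
      };
    }
  };
}

// The UI layer: pure collection and display of data, no business logic.
function renderQuote(service, symbol) {
  var q = service.getQuote(symbol);
  return q.price === null ? symbol + ": unavailable" : symbol + ": " + q.price;
}
```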


It seems that the importance of this debate is dying year on year as the appeal of the middle ground gets stronger, forcing decision makers to base their arguments on a comparison of current and future requirements against the characteristics of the potential technology choices, which must be a good thing.