Is Your Gym Like Your Dev Team?

I recently managed to drag myself out of the office and into the gym, but unfortunately my mind was still on the office and on what makes a dev team tick. In between sets I watched my fellow gym-goers and noticed similarities with my experiences of IT development teams. Parallels between your development team and the local gym can be made both in terms of personas and in the approaches to training:

Exercise Machines vs. Free Weights:

When the Nautilus training machines (variants of which now fill every gym) appeared in the early 1970s they sparked a revolution in exercise training and professional gyms. In contrast to free weights (barbells, dumbbells etc.), exercise machines provide a convenient and safe way of training. They don’t require a spotter and they guide the user through the ‘correct’ range of motion to avoid injury, while also enabling techniques such as forced reps. Like software frameworks, these machines are built by expert engineers around solid (but also opinionated) ideas. Both let newcomers get started easily and safely, and both share the same trade-offs. Machines/frameworks can lack fluidity and shelter the user from needing to understand the underlying principles at work. If a machine is out of service the gym user may not appreciate that they can achieve the same results via other methods; remove the abstraction that the framework provides to the software developer and it may expose their lack of underlying skills (e.g. an ASP.NET Web Forms developer not appreciating HTTP). Of course the ‘correct’ way of doing something is always debatable and may not suit your needs for every project. Interestingly some pure bodybuilders refuse to use machines for snobbish reasons even when they would prove useful, whilst the majority of other gym users only use machines. The same can be said for developers and frameworks. An experienced all-rounder will happily use machines/frameworks where they aid productivity but will also utilise free weights/alternative methods to achieve specific requirements.

Metrics:

Ask any professional athlete or bodybuilder how many calories they consume or the weights/reps/sets from their last gym session and they’ll tell you in detail. This is because they know the value of recording metrics and how to use them to track progress. The same principles can be applied to software development teams. What’s your current burn-down rate? What’s the average code churn figure for a nightly build? How many hours’ effort really went into building that MVC view compared to the estimate? A productive team that is continuously improving will be using these metrics to drive progress.

Agility:

Whilst solid athletes measure and plan, they are also agile in their training, because they have to be. They adapt to changing training environments and to the subtle messages from their bodies, avoiding injury and maintaining productivity by focusing on the end goal. You wouldn’t expect them to stick rigidly to a plan defined months before despite changing circumstances (e.g. injuries, soreness); things change, and so the journey towards the goal must be managed with flexibility.

Gym Buddies:

The benefits of having a gym buddy are clearly documented in the fitness world, and for obvious reasons (shared motivation towards goals etc.), yet these benefits are too often overlooked in the development team. Pair Programming is one technique that springs to mind and is a step in the right direction, but it is just as important to foster a shared vision within the team and to promote discussion and peer learning. A performing team is usually greater than the sum of its parts because people’s performance feeds off the ideas and motivation of their peers.

The Miracle Widget:

For those who don’t want the sweat and pain there’s always the miracle widget that will yield amazing results with little effort. Whether it’s a new machine, wonder drug, electronic shock training, sofa gyms, or SOA, Cloud Computing and BPM, they need to be viewed with some apprehension. That’s not to say they aren’t the next big thing, but rather that they are not silver bullets and are best used within a cohesive, thought-out strategy.

Doping:

Taking steroids can rapidly improve an athlete’s performance, but that improvement comes at the cost of unwanted side effects. The end goal may be achieved more rapidly, but at the cost of internal physical or mental damage. This is a form of extreme technical debt: taking a shortcut here and there may be acceptable to ship the product, but reliance on those shortcuts can build up, making it harder and harder to pay back that debt.

 

Below I’ve noted some general stereotypical personas from the Gym and how they mirror development team personas. Do you recognise these roles in your gym/dev team?  Warning: These are fun generalisations so don’t get upset or take it too seriously!

 

The Bodybuilder:

This guy has one goal in mind: to get ‘big’. All his exercises are anaerobic, aimed at building muscle and developing his physique. He doesn’t do aerobic training as it detracts energy from his primary goal. He shows a strong ‘engineer-like’ expertise in one discipline; he probably has excellent in-depth knowledge of that area and is very focused on learning more about it. He can be slightly intimidating to approach but is generally happy to share his knowledge and experience and enjoys being able to show off his skills. This persona fits well with many traditional experienced software developers, who are experts in their chosen areas of discipline and increasingly seek to learn more about that technology area, often ignoring the benefits of others. They are dedicated and seen as experts in their field, but outside their field they struggle, and sometimes the imbalance with other disciplines has a negative effect.

The Endless Runner:

Similar to ‘the Bodybuilder’ above, but in a different discipline. These guys want to run faster/longer and focus on aerobic exercises and building endurance. Again a solid, expert software engineer, but this guy is not in it for the showy technology; he is more interested in building the plumbing infrastructure required to keep systems operational.

The All-Rounder:

He is not the biggest or fastest guy in the gym, but he is the typical all-rounder. He probably has experience of working in the various disciplines above (maybe mastering both) but prefers breadth over depth. The All-Rounder is able to speak everyone’s language and can compete admirably with anyone, but has to defer to the overwhelming expertise of the guys listed above. Often this guy is a bridge between the different disciplines and chats in the corner with both. He gains the benefits that variation and breadth of knowledge provide, but is often at risk of not keeping up with the pace of change in either. His nearest IT persona is the architect, due to his all-round skills and his comfort liaising with all the required disciplines. He is happy to share his experiences when asked or when he sees someone really struggling, but is often less opinionated about one approach or another as he sees all sides of the technology argument.

One Routine Guy:

A consistent gym attendee, but he does the same routine for years. We all know developers like this. They lack real ambition for the vocation and hence don’t build up a true understanding of the changing world around them. They are happy to use what they know and feel works, but the lack of willingness to learn new things puts them at risk of hitting a progress wall, finding themselves obsolete and eventually quitting.

Bored Stiff Guy:

They have decided to go to the gym but have no real desire to do the workout. He runs through the motions, moving from task to task with little effort or intensity. We have all no doubt worked with developers who are going through the motions without any passion for the art of software development. Similar to ‘One Routine Guy’, they know what they need to know and lack any enthusiasm to learn new skills. I often refer to these as ‘Part Time Programmers’, as they see the job as 9 to 5 and the thought of picking up a new skill without a company-sponsored training course is alien to them.

The Biceps Only Guy:

Only focuses his energy on what can be shown off. He is just playing with the flashy stuff without building a strong foundation to balance it. For this guy ‘Gloss is Boss’. Some developers are happy playing with new technologies and building hundreds of "Hello World" apps, yet rarely innovate for the team as they fail to see the bigger picture.

The Poor Form Guy:

Energetic and enthusiastic about training with heavy weights, but inadvertently uses dangerously bad form in his exercises. Sometimes developers/architects can become so absorbed in delivering big solutions that they fail to assess their actions. They design complicated solutions using patterns they often don’t understand, regardless of the project risks and the potential long-term problems around stability/maintainability etc. Like ‘Poor Form Guy’ this is often a case of poor teaching or poor controls. These guys need a coach or community to check their form.

The Personal Training Guru:

An expert in his field who takes on many disciples and guides people towards their goals. A sort of more senior, experienced all-rounder who is now dedicated to helping others. He has the respect of his community and his advice is highly valued. These are the gurus of the tech world: experienced consultants/authors (e.g. Martin Fowler).

The Impatient For Results Guy:

He wants results and fast! He’s usually the customer for the ‘Miracle Widget’ (see above). Happy to take the easy option and cut corners on quality if he can. No doubt we’ve all worked with some bad development/project managers like this.

The Newbie:

He’s new to the gym and very intimidated. He’s still finding his feet with the machines and the social etiquette. Just like new developers, these are the lifeblood of the community as they bring enthusiasm and new ideas, but they need to be guided. They need assistance to get up the steep learning curve and to be shown the right way to behave. If we make it hard for them to add value quickly then we risk them giving up and going elsewhere, or at least becoming a Bored Stiff Guy.

The Non-Conformist:

This guy is in the corner of the gym doing his own thing. He’s probably using the equipment in a unique way, or using a lesser-known training technique. He is innovative and might be capable of producing amazing results using non-standard approaches. He can be found in your development team too, bashing out productivity tools and reviewing the latest open-source offerings. Regardless of his personal success he provides a fresh approach and generates new ways of working. He needs keeping in check though, to ensure that his solutions are viable longer term.

The Non-Committed Local Gym Supervisor:

Whilst many gym supervisors are like the Personal Trainers, some can be over-focused on numbers (subscriptions, machine usage rules) rather than real results. Once the new recruit is brought in they are given the user guide and then left to it, with poor form being ignored as long as basic safety rules are adhered to. Dev teams can suffer the same lack of evangelism of techniques and ideas, or of the facilitation of a real community, and this can lead them to fail. A lot of the success or failure of a team results from the performance of the development manager / technical lead and their willingness to support the team and keep them productive.

Summary:

Of course the conclusion is that I should have been working out instead of ‘people watching’ but the fact remains that there are parallels that can be drawn between our work communities and many other walks of life. This opens up the ability for us to view situations from different perspectives which can then help us to improve our understanding.

The Future Of The IT Department

I have witnessed rapid, often painful, change within my own internal IT division over the last few years and observed the on-going developments in the industry. It is clear that IT departments have changed dramatically in a short amount of time and the pace is not relenting. This has led me to try to picture what IT will look like within large institutions in the future. It is becoming more and more apparent that the structure of our internal IT organisations is very often based on the traditional legacy models that served enterprises well in the past. Big IT investments and centralised systems are best managed and maintained by a rigid organisational structure. The IT department and the business units are today usually far more disconnected than many CIOs would care to admit. IT used to be something that was done by the IT department based on fairly static business processes. However we’re now in a different world, where IT is increasingly seen as just a commodity and business processes need to be able to react quickly to changing economic conditions. No longer is the IT department responsible only for big monolithic systems (e.g. payroll); IT is now embedded in every business process, so in some sense every department is an IT department. Surely if the IT organisation doesn’t aid the business then it will eventually be pushed aside and replaced.

The Journey From Past to Present

This excellent post by PEG covers this subject well. PEG paints the picture of the traditional IT organisation as it was in many enterprises and then slices it up to represent the current model once outsourcing/off-shoring has been considered. The left-hand diagram shows the more traditional split, and the right shows the emerging norm:

Factoring in the effort required to manage out-sourced projects


Diagrams from PEG: The IT department we have today is not the IT department we’ll need tomorrow

It surprises me how many people consider their jobs safe from outsourcing because their role sits above the bottom tier on this sort of diagram, but as you can see it is inevitable that the line between permanent staff and outsource partner staff will continue to rise to the point represented in the triangle on the right, with a good cross-section of IT roles being fulfilled by partner organisations. This represents where many large enterprises are at present, whereby some “doing” roles are maintained in-house but the management and planning layers are also supplemented by outsource/offshore partners. The bulge in the middle represents the extra permanent resources required to cover the additional overhead of managing partner resources. Taking a bank as the textbook example of a large enterprise with a significant-scale IT organisation, this research into European banks’ activities provides some insight into the strategy driving these changes. Unsurprisingly cost reduction is key, but it’s not the only factor…

“Survey participants cited cost reduction as the primary reason to outsource IT functions, followed by cost variability (for example, the flexibility to respond to peak demand without ramping up internal resources) and access to know-how or skilled personnel. The main benefits of outsourcing were access to know-how or skilled personnel and a guaranteed level of service. (The cost benefits associated with outsourcing often fell short of expectations.) The biggest disadvantages of outsourcing were high switching costs and limited control over critical elements of the IT environment. On the whole, however, the survey shows that banks have embraced outsourcing. Only 3 percent of the banks surveyed were planning to decrease their outsourcing activities. The case for offshoring was slightly different. Although banks used offshoring primarily for the same reason they used outsourcing—to reduce costs—the main benefit of offshoring was less stringent foreign labour laws. The biggest disadvantages of offshoring were opposition among domestic personnel, large overhead, and loss of control.”

Both partner strategy models are therefore seen as suffering from some loss of control over assets or deliverables and as adding somewhat to management overheads, but they provide a degree of agility through a mechanism to ramp resources up or down as required.

PEG extends his model to show that in the future there will be an increased reliance on SaaS and automation tools and therefore a chunk of the IT organisation structure will be replaced by these as well as outsourcing/offshoring roles.

A skills/roles triangle for the new normal

Diagram from PEG: The IT department we have today is not the IT department we’ll need tomorrow

Within the current model, management layers have often become too complex and unwieldy. With the IT organisation being a business entity in its own right within the enterprise, and with 65% of IT spend just being used to maintain current service, business functions and IT often clash over priorities and the allocation of funding. In many instances this results in the business going outside of the IT org to secure services, or growing its own ‘black ops’ internal capability just to get things done. This again challenges the traditional IT organisational model where IT keeps tight control.

Changing Objectives

Tighter financial conditions, increasingly competitive environments and a desire to maximise returns are leading to a model of pay-per-use and greater use of partners and outsourcing. Technology advances are making this transition possible (e.g. Cloud Computing, SaaS). Future IT departments will increasingly utilise these external services and, as a result, will adopt a very different structure. Whilst the traditional IT organisation has been geared to building and maintaining large complex systems and is staffed with technical people, the rapidly emerging model is one where IT skills are outsourced to numerous vendors and IT staff become the negotiators and orchestrators of these relationships and contracts. Instead of managing system changes internally, the IT organisation is increasingly just the middleman between the business and the outsource/offshore partners. The role becomes one of managing projects more than technically implementing them. Reports can be found of in-house IT departments cutting 90% of headcount in a rapid shift to offshoring/outsourcing, with the remaining staff focusing on planning and relationship management tasks. This Boston Consulting Group paper suggests there is an essential move from “doer” to “orchestrator”, with the IT organisation “doing fewer of the traditional ‘run the business’ activities”, instead leaving them to external providers and doing more coordinating of (one or many) providers’ activities to meet the design. This “network of external providers and integrators” needs monitoring and tuning, and the structure of the IT organisation will need to centre around these activities.

A quote from Reinventing The IT Organisation by Antoine Gourevitch, Stuart Scantlebury & Wolfgang Thiel…

“Unless CIOs take swift action, the IT organisation will be at risk of being reduced to a thin layer between the business and the specialist outsourcing firms.”

The outcome will presumably be either a slim organisation staffed with change managers and project managers responsible for liaising with the partners to satisfy business requirements, or alternatively these changes could prove the catalyst required to move to true business-driven IT, where IT skills are integrated with the business units to enable them to react rapidly to changing business needs. Larry Dignan in his post welcomes the idea of breaking up the traditional IT organisation, seeing it as an anachronism. He classes CIOs as often “out of their league”, “process jockeys” who would “rather be scouting new technologies” than innovating. I would agree that this appears to be the case in many large organisations where IT, some would argue, has frustratingly become detached from the goal of driving business value through technology, losing itself in bureaucratic processes. These organisations can seem a long way from delivering core bottom-line business value. PEG discusses the detachment of Enterprise Architecture from the business, together with a description of little ‘a’ and big ‘A’ architects, here, and it’s well worth a read. Even where IT organisations do deliver real value, it’s often to timescales that seem painfully long to the business customer but painfully short to the IT guy wrapped up in bureaucratic red tape. Perhaps this isn’t IT’s fault as such, but more the arcane structure of the IT organisation that we have come to accept.

One way suggested for IT organisations to remain relevant and address future challenges is for the business and IT to move closer together than ever. This has been talked about for many years but with the demise of the monolithic IT organisation the next few years could see this model mature. Perhaps decentralised pockets of business IT shops closely aligned to the business units will be the norm, introducing new challenges around how to control these pockets.

This shift towards IT/business integration could be very rewarding for an enterprise: in reality modern business processes are often tightly intertwined with the LOB applications in use, so anything that can be done to ensure that those LOB applications support the business processes, instead of restricting the pace of business change, will be welcomed. Dreischmeier & Thiel suggest new ways of working may be required as IT organisations are forced to adjust their operating model to become faster, more agile and to embrace rapid-development approaches. The business can’t afford to be held back by a slow and unwieldy IT organisation.

One concept I particularly like is that of “introducing Product or Solution Managers” to address the “lack of end to end ownership within IT Orgs”. This person would “own the IT product/solution across all technical layers”. The role should improve TCO and aid business and IT priority alignment. Dreischmeier & Thiel also see the CIO as a key player in ensuring that the IT organisation is “Proactively Engaging in Business Transformation Activities”, and argue that the IT organisation is very well positioned to be a key player in this transformation as it is aware of the end-to-end business processes (in theory). They suggest:

“Creating, together with the business, a new-business-model team that seeks out and addresses the changes in economics of the relevant industry as it changes through increased competition and environmental forces”. 

The growth of agile development practices has a part to play here too. Having innovative IT teams that ‘fail fast and often’ and use lean agile techniques to maximise business value could replace traditional models. Smaller, focused development teams under the direct control of the business units, using agile practices and supported by a central infrastructure function (probably outsourced), could prove a very effective way of actually building what the business really needs. The evolution of Cloud Computing technologies provides real opportunities to make these teams very capable. A business-unit-based developer could ‘mash up’ cloud services together with core on-premise web services to produce a powerful line-of-business application that is then deployed to PaaS cloud-based infrastructure.

Forrester analyst Alex Cullan sells the benefits of this model with the term “Empowered BT (Business Technology)”, where IT’s role is to empower the business to utilise the technology that it needs in order to remain competitive. The traditional arguments against this approach, such as the expected system proliferation and business technology decisions being driven by hype, are dismissed as not actually as bad as we in IT would believe. He argues convincingly that some proliferation is acceptable if it empowers the business, but there would have to be trust in business leaders to choose the right path for this to work. Is that trust there at this moment in time? Well, not according to this MIT & Boston Consulting Group survey, which shows that current CIOs believe business leaders are not positioned to lead IT-enabled business transformation. Only 33% of CIOs consider their company’s senior execs effective at driving business value with IT, and 40% consider them effective at prioritising IT investments. However, perhaps this reflects the differing priorities of traditional IT organisations and the business units, with IT asserting its traditional maintenance role (“keeping the lights on”) and its role in application development/innovation, more than a real distrust. The paper does however highlight the benefits that can be achieved when the IT organisation avoids the simple “middle man” role and takes the lead in driving business change (such as lower maintenance costs, faster realisation of business benefits from new systems, and higher employee satisfaction). Perhaps the future of the IT organisation is that of a business in its own right: an internal consulting firm offering assistance in business process design, innovation and development management.

Procter & Gamble run their IT organisation as a business within the enterprise, running alongside other business services (e.g. accounting). Their services are branded and marketed to the enterprise and billed on a usage basis, with business units empowered to choose to consume these services or go elsewhere. The emphasis is on running this as a viable, competitive internal business that is in tune with the needs of its customers (in this case the internal business units). They have brand managers responsible for “the innovation, pricing and commercialization of the services”, ensuring that the total end-to-end offerings can match those of third parties. Underpinning this, though, is a collection of external partner relationships that still need to be managed, so in essence this is still heading towards becoming an integrator, orchestrating these partner services into a clear, cohesive, branded and hopefully relevant service. The key here though is the added value provided by this internal IT business service that crucially understands the business and offers competitive services that are completely relevant to it. This is supported by the BCG research, which found that where IT organisations really drove business change they often delivered their IT services as shared services and placed more emphasis on relevant prices and alternative service levels. They tended to centralise IT, with lower levels of recorded “shadow” IT being instigated by the business, which could perhaps suggest that these business units felt they were getting sufficient value from their shared IT services, even though they were under central control.

Future Skills

All these changes have massive implications for the skills required within the IT organisation of the future. In the current model, maintaining a relevantly skilled workforce can be tricky, with many key staff feeling demotivated by the outsourcing/offshoring partner model and the subsequent removal of technical roles from their organisation. The loss of junior IT roles to partner resources destroys future progression opportunities and shows that this model is unsustainable moving forward. Engaging technical people will be increasingly difficult in the current model, but perhaps a move to more business-aligned IT can help skilled staff remain technical if they wish, and also benefit the business through enhanced IT innovation and passion for their roles, instead of forcing good techies to oversee offshore/outsource relationships.

It seems essential now that IT staff of the near future will be expected to have an enhanced level of business acumen and market knowledge to fulfil their roles. Will this come at the expense of excellent technical skills? Maybe! Perhaps the technical skills will be embedded within the offshore/outsource partners and the relevant ‘technical’ skills required in the IT organisation will be those around technical process design and systems analysis. Knowledge of the business will perhaps be more important than any technical skill (for the majority of roles), and therefore it makes more sense to recruit IT staff from within the business units themselves. This is evident in a number of studies with CIOs, such as this BCG study:

“In general, CIOs told us that Internal IT staff roles are shifting away from application development and towards process analysis and engineering, business relationship management, project management and architecture design and implementation.”

Within the previously mentioned Procter & Gamble organisation the same theme emerges, as the skills reflect the role of IT within the organisation:

“..traditional IT is just 30% of what we do. If traditional IT is all a person masters, he or she will never be a leader here. The rest is about business knowledge. Those who embrace that approach will certainly increase their value…” 

This view was supported by the previously mentioned study into European Banking, but it also went further, pointing out that technical skills were being neglected …

“…many banks appear to be underestimating the value of technical tools and skills, which are critical to developing high-impact applications, maintaining an efficient infrastructure, and managing outsourcing partners.”

So where does this leave you and me? Well, I expect the number of deeply technical IT professionals will decline in Western countries, but this decline will be dwarfed by the increase in semi-professional developers working in the business and using end-user computing tools to develop systems that are meant to be rapid, easy and throwaway. Where more complex solutions are sought, outsource partners will happily fill that gap. Escaping the large enterprises and fleeing to the small and medium enterprises will not be sustainable longer term either, as the partner model will eventually win there too. It is entirely possible that the partner model will lose some of its lustre (it’s already happening in places) and there may be some swing back to in-house technical teams. If that happens then the IT community needs to be ready to promote a new ‘agile’ alternative that understands and drives true business benefits.

This evolution of the IT organisation is natural in an industry as immature as this, but one thing is definite: the future is different and we need to adapt. Whichever direction the future takes for you, spend some effort in the meantime trying to understand your business customers’ needs better and keep innovating for them!

Ray Ozzie’s Dawn of a New Day

I would recommend that everyone interested in technology read Ray Ozzie’s (Chief Software Architect of Microsoft) memo, "Dawn of a New Day". It’s a fascinating insight into the vision of a key player in the industry and a call to arms for Microsoft and its partners. What interests me the most about this vision is that it is a conceivable one, and one that I share. This vision of "appliance-like connected devices" being the norm and consuming "Cloud Based Continuous Services" is easy to visualise, as this day is dawning around us now. Smartphones, tablets, connected TVs etc. are set to become the principal means of interacting with our online world.

"Complexity kills"

Whenever I’m called upon to help out family and friends with their PCs it often strikes me how inappropriate these machines are for the needs of the basic user. The power and complexity of the PC is its great strength, but it also makes these machines too difficult to manage and secure. Huge numbers of basic PC users now in reality only use their browser and don’t install software applications any more. These people are also now enjoying the simplicity provided by smartphone OSs such as Android and iOS. In fact many of these users are able to fulfil their needs via app stores whilst their PCs gradually gather dust. The future vision where devices rule makes total sense. Whilst Apple is proving the master of the device market, Microsoft have the ‘Windows’ advantage. The failure of Linux netbooks to maintain market share shows that, given similar pricing models, consumers will stick with the familiarity and safety of Windows, and this is an opportunity for Microsoft. They could capitalise on it with a lean, appliance-like version of Windows in the future.

"Complexity sucks the life out of users, developers and IT. " – I have seen numerous projects needlessly suffer in delivery due to overly complex designs, sometimes from overly complex requirements. Because we can create software to be configurable and feature rich we feel we have to, but of course every additional feature brings additional overhead. This overhead my be felt by the end user or perhaps just the developer and testers trying to implement or test the features.

"Cloud-based continuous services"

Ray’s vision of cloud services being continuous is key for the connected future. Consumers need to be able to depend on the cloud always being available and willing to serve them. As these services grow in importance they will be expected to grow in number and complexity. This is a real challenge for industry engineers and we really need to learn the lessons of the hugely scalable consumer web sites such as Facebook and Google. I look forward to seeing what technologies are produced to aid the development of these services and which scalability patterns move towards the mainstream.

It’s an exciting future for our industry and one that I look forward to playing my part in.

The Future of Windows Home Server

Microsoft’s recent announcement that the key Drive Extender feature is to be removed from the new version of Windows Home Server codenamed ‘Vail’ has resulted in much dismay within the community. Many commentators, including the vocal WHS user community itself, have started to question the future of this product. In this post I give my take on where I see WHS in the medium term and consider how it can fit alongside the “new dawn” of a Cloud Computing era.

How big is the Drive Extender issue?

Firstly, what’s all this about Drive Extender (DE)? Well DE is a really neat feature of WHS that pools all the hard drives in the system into one logical data drive. This means that you can throw in a mixed selection of hard drives of any type (USB, SATA etc) or capacity and the system enables you to see them as one. It also provides fault tolerance through data duplication which protects your data from drive failure. It is one of the major features of Windows Home Server (WHS). I would argue one of three, with the others being the client backups and remote access. Sure the product does much more than just that but it’s fair to say that all of WHS’s features are available in other products in some shape or form and the combination of these three features into one customisable platform made WHS stand out for me.

Microsoft’s announcement that DE will be removed from the next version of WHS, code-named Vail, immediately removes a major reason to buy into the new version, and this has been evidenced in the recent Twitter comments on the subject, where a lot of people have stated their intention not to use ‘Vail’. Of course some of this is just anger at the fact that the feature has been removed (and the way in which it was announced), but the fact remains that the product is a weaker proposition than it was before.

Personally I see this decision in both a negative and a positive light. Firstly I see it as a major blow to the uniqueness of the product and feel that it will suffer without this USP (Unique Selling Point). It’s also important to remember that this is positioned as a product for the average PC user, and DE made extending the storage capacity easy. The user doesn’t need to buy matching disks or configure RAID; they just pop in a new disk and it gets added to the pool. Without DE, adding extra storage will presumably be a more complex task. In reality, though, how many “average” PC users would feel happy upgrading the hard drive on their WHS anyway? Whilst enthusiasts relish the chance to pop open the case, many casual users would see their OEM-produced WHS as just an appliance, and one probably already stuffed with several 2 or 3 TB drives providing a good chunk of storage capacity right out of the box. They would not consider any upgrades to it other than replacing it when it gets full. In addition, whilst the shared drive pool concept makes adding storage easy, the ability to add extra storage as additional drives will still be there in the product, as it is in any Windows OS. I don’t see this as a huge blocker to WHS adoption.

Folder duplication utilises the DE feature to ensure that data is duplicated onto different physical drives within the logical storage pool. This is in effect ‘RAID like’, except that the data is duplicated over time and not immediately (although there is no way of retrieving previous versions of files). This provides an easy form of fault tolerance that, whilst being fairly easy to replicate yourself using other means, will probably never be as easy as ticking a check box. This is again more of an issue for the “average” guy than the PC enthusiast who is at home configuring RAID, although a simple file-copy add-in or batch job is my preferred solution. I already run daily automated RoboCopy jobs to copy ‘snapshots’ of my data drives to another drive, providing both fault tolerance and versioned snapshots that I can restore if required. I have had to dive into my snapshots on several occasions to restore a previous version of a file that had accidentally been deleted or modified. I prefer this solution over RAID, as a disk write to a RAID array is duplicated immediately even if it’s not what you wanted.

So, what’s the positive? Well, let’s consider why Microsoft are removing it. They have said that it causes conflicts with applications installed on the Small Business Server sister OS, code-named ‘Aurora’. These software applications don’t play nicely with having a logical drive pool. I, like many other WHS enthusiasts, have over time installed numerous applications onto my WHS (e.g. Microsoft Team Foundation Server) and I always do so with caution because of DE. I am careful to ensure that nothing I install utilises the DATA drive, and I often refrain from installing software that I think might conflict. With DE removed this worry is taken care of, which is definitely a positive for me.

Does WHS fit in the Cloud Computing Landscape?

If we look to the future and assume that the Cloud Computing paradigm is here to stay, the bigger question arises of what role WHS would play. I admit to being a cloud advocate and I do share Ray Ozzie’s view of a “New Dawn” where devices (not PCs) connect to continuous services hosted on the internet. In this vision the majority of people only use devices to connect to the internet (smartphones, tablets, TVs etc.) and they are continuously connected to the web, where their data is stored, analysed, processed and shared. The concept of having a local home server is almost alien, as your storage will all be in the cloud. Backups won’t be required as data will be automatically synched, and devices won’t need to be imaged for restoration as they will only be simple devices with sophisticated browsers. Sure, PCs will remain for advanced users, but not for the user majority. This vision of the future is not that revolutionary; it’s already happening, so fast in fact that the next version of WHS after Vail will need to be positioned within this connected world. People may cry that users will always want their data close by and local, but that’s not true: over time they won’t even think about it, as evidenced by early cloud services like Hotmail, Exchange Online etc.

This vision of the future relies heavily on fast internet connections and related infrastructure, which is slowly being rolled out across the developed world, but this weakness perhaps provides an opportunity for the WHSs of the future. The ability to synch to your local “private cloud” and use that as the hub for your home is probably a requirement of the future, and a ‘server’ device could fill this space. Unfortunately so could other home-based devices, such as the Xboxes, Google TVs and Media Centers of the future, and the single home device is the ‘holy grail’ of consumer electronics. The battle for the position of sole ‘provider’ and gateway to the continuous services of the future will be intense, and whilst the current WHS offerings (V1 and Vail) are too weak to survive that battle, maybe, just maybe, their future offspring will fit that gap perfectly.

Summary:

WHS has unfortunately always been a niche product, which is a real shame as it is one of the best products ever to have come out of Redmond and one that deserves more credit. Microsoft have never promoted it and seem instead to be happy to use it as an experiment for newer technologies (like DE). This is obviously a dark period for the WHS product, but the community’s reaction to the DE news and the growing popularity of the platform mean that I believe it will survive in the short term.

If I were Microsoft I would look to extract the key features of WHS (i.e. the client backup and remote access services) and convert them into add-on applications for Windows. With DE gone there is little point in having a ‘Home’ SKU of Windows Server. Sell Windows 2008 Foundation to OEMs with these WHS feature applications installed for them to put on their consumer devices. This would also enable these features to be supported on Windows client OSs in the future, when it became profitable to do so. I would be happy to run a fully fledged, supported version of Windows Server that comfortably ran all server-based software but to which I could also install the client backup and remote access services if I required them.

Will I upgrade to Vail? Good question. Currently I’m undecided. I will review it against other products when the time comes (Amahi on Linux, Aurora, Windows Server 2008), but one thing is for sure: the removal of DE will not affect my decision, but the strength of Microsoft’s commitment to the product will.

Private Clouds Gaining Momentum

Well, it’s been an interesting few weeks for cloud computing, mostly in the “private cloud” space. Microsoft have announced their Windows Azure Appliance, enabling you to buy a Windows Azure cloud solution in a box (well, actually many boxes, as it comprises hundreds of servers), and the OpenStack cloud offering continues to grow in strength, with Rackspace releasing its cloud storage offering under the Apache 2.0 licence as part of the OpenStack project.

OpenStack is an initiative to provide open source cloud computing and contains many elements from various organisations (Citrix, Dell etc.), but the core offerings are Rackspace’s storage solution and the cloud compute technology behind NASA’s Nebula cloud platform. To quote their web site…

“The goal of OpenStack is to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware. OpenStack Compute is software for automatically creating and managing large groups of virtual private servers. OpenStack Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data.”

It is exciting to see OpenStack grow as more vendors open-source their offerings and integrate them into the OpenStack initiative. It provides an opportunity to run your own open source private cloud that will eventually enable you to consume best-of-breed offerings from various vendors, based on the proliferation of common standards.

Meanwhile Microsoft’s Azure Appliance is described as …

…a turnkey cloud platform that customers can deploy in their own datacentre, across hundreds to thousands of servers. The Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. This hardware will be delivered by a variety of partners.

Whilst this is initially going to appeal to service providers wanting to offer Azure based cloud computing to their customers, it is also another important shift towards private clouds.

These are both examples in my eyes of the industry stepping closer to private clouds becoming a key presence in the enterprise and this will doubtless lead to the integration of public and private clouds. It shows the progression from hype around what cloud might offer, to organisations gaining real tangible benefits from the scalable and flexible cloud computing platforms that are at home inside or outside of the private data centre. These flexible platforms provide real opportunities for enterprises to deploy, run, monitor and scale their applications on elastic commodity infrastructure regardless of whether this infrastructure is housed internally or externally.

The debate on whether ‘private clouds’ are true cloud computing can continue, and whilst it is true that they don’t offer the ‘no upfront capital expenditure’ and pay-as-you-go model, I personally don’t think that excludes them from the cloud computing definition. For enterprises and organisations that are intent on running their own data centres in the future there will still be the drive for efficiencies that there is now, perhaps more so, to compete with competitors utilising public cloud offerings. Data centre owners will want to reduce the costs of managing this infrastructure, and will need it to be scalable and fault tolerant. These are the same core objectives of the cloud providers. It makes sense for private clouds to evolve based on the standards, tools and products used by the cloud providers. The ability to easily deploy enterprise applications onto an elastic infrastructure and manage them in a single autonomous way is surely the vision of many a CTO. Sure, the elasticity of the infrastructure is restricted by the physical hardware on site, but the ability to shut down and re-provision an existing application instance based on current load can drive massive cost benefits as it maximises the efficiency of each node. The emergence of standards also provides the option to extend your cloud seamlessly out to the public cloud, utilising excess capacity from public cloud vendors.

The Windows Azure ‘Appliance’ is actually hundreds of servers, and there is no denying that cloud computing is currently solely for the big boys who can afford to purchase hundreds or thousands of servers, but it won’t always be that way. Just as with previous computing paradigms, the early adopters will pave the way, but as standards evolve and more open source offerings such as OpenStack become available, more and more opportunities will emerge for smaller, more fragmented private and public clouds to flourish. For those enterprises that don’t want to rely solely on cloud offerings and need to maintain a small selection of private servers, the future may see private clouds consisting of only 5 to 10 servers that connect to the public cloud platforms for extra capacity or for hosted services. The ability to manage those servers as one collective platform offers efficiency benefits capable of driving down the cost of computing.

Whatever the future brings, I think there is a place for private clouds. If public cloud offerings prove successful and grow in importance to the industry, then private clouds will no doubt grow too, to complement and integrate with those public offerings. Alternatively, if the public cloud fails to deliver, then I would expect the technologies involved to still make their way into the private data centre as companies like Microsoft move to capitalise on their assets by integrating them into their enterprise product offerings. Either way, as long as the emergence of standards continues, as does the need for some enterprises to manage their systems on site, the future of private cloud computing platforms seems bright. Only time will tell.

Windows Azure : An Introduction

At last year’s PDC Microsoft released the details of its new venture into the next IT paradigm that is arguably set to change the way that applications are developed, hosted, managed and funded: Cloud Computing. It is easy to dismiss Cloud Computing as a fad, or simply as a move back towards the mainframe days of a central processing model, but regardless of these debates there is no doubt that Microsoft, Amazon and Google are pouring large amounts of funding into developing Cloud Computing platforms. I’m not going to debate the subject of Cloud Computing here, although I will state that personally I feel it will impact all that we do in IT in the future, perhaps not in its current guise, but this latest move from Microsoft can be seen as one more step on that journey.

What is Windows Azure?

Well it’s not an image of Windows Server hosted somewhere on the internet for you to remote desktop into and install what you like on it. To quote Microsoft it is (in Marketing speak) a “Platform for writing highly scalable and available applications”.

It’s not currently possible, nor advisable, to just convert your current application to run on Azure; instead Azure provides a platform on which you can build a new application that is highly scalable and available. Azure runs in Microsoft data centres (currently in the US, but planned to be located throughout the world) and your application runs within individual instances of virtual machines on the Azure fabric.

The pricing policy is also going to be based on usage which allows you to start small (with a few computing instances) and then increase the number of instances (and therefore computing power) as your application grows and needs to be scaled for the increasing number of users. Imagine you’re writing the new “Facebook”. You could buy a handful of expensive servers and then buy more if/when the applications user base takes off. Then you need to buy more and more until you’ve got a whole DataCenter of servers (all consuming masses of power) and a team of IT Administrators running them. Then your user base levels out, and possibly drops down to a more stable level leaving you with excess capacity you’ve already paid for. Worse still if your application never takes off then that initial investment in the first few servers will leave you seriously out of pocket. In contrast cloud services like Windows Azure are paid by usage. The cost per month will be related to your current storage usage and your compute instance usage. If you need more resources to scale out your application then you just pay more, which allows you to adjust your costs based on demand and removes the need for large upfront capital expenditure.
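To make the usage-based arithmetic concrete, here is a rough sketch comparing a small pay-as-you-go deployment with buying the equivalent servers upfront. The rates and figures are entirely hypothetical (Azure pricing had not been finalised at the time of the CTP); the point is simply that cost scales with usage rather than with an initial purchase.

```csharp
using System;

// Back-of-an-envelope comparison of usage-based billing vs buying servers.
// All rates and figures are hypothetical, purely for illustration.
class CostSketch
{
    static void Main()
    {
        const double ratePerInstanceHour = 0.12; // hypothetical compute rate
        const double ratePerGbMonth = 0.15;      // hypothetical storage rate

        int instances = 2;             // start small...
        double storageGb = 50;
        double hoursInMonth = 24 * 30;

        double monthlyCloudCost =
            (instances * hoursInMonth * ratePerInstanceHour) +
            (storageGb * ratePerGbMonth);

        // Hypothetical upfront alternative: buy and run your own servers.
        double costPerServer = 3000;
        double upfrontHardwareCost = instances * costPerServer;

        Console.WriteLine("Cloud, per month: {0:C}", monthlyCloudCost);
        Console.WriteLine("Own hardware, upfront: {0:C}", upfrontHardwareCost);
        // If demand doubles, double 'instances' next month; if the
        // application flops, the monthly bill simply stops.
    }
}
```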

These cost benefits are ideal for Web 2.0 start-ups, but they can also benefit large enterprises. The ability to develop an application within a low-cost framework that also manages hosting and usage monitoring allows any development team to try out new ideas and move dynamically with the business. Cloud Computing could be a tool that enables an enterprise to keep up with fast-moving business opportunities at a low initial outlay and a low total cost of ownership. An alternative model is one where a platform like Windows Azure is deployed locally in the enterprise data centre. The enterprise would then benefit from an efficient processing model for its data centre, forcing all new applications to be built to run on that platform. This would provide most of the benefits of Cloud Computing but with fewer issues around security, as data would not be leaving the enterprise. Microsoft have so far only unofficially acknowledged this model and are not promoting it as an option with Windows Azure, although it will be interesting to see if they do promote this idea in the future.

Azure Services Platform:

This is the stack that makes up Microsoft’s current Cloud Computing offering:

Diagram: the Azure Services Platform stack

As you can see there are several offerings that sit on top of Windows Azure, so let’s quickly look at these first, although we won’t go into detail on them:

Microsoft .NET Services: Offers distributed infrastructure services to support both cloud-based and locally based applications. This offering includes:

– Access Control: Provides a claims-based implementation of identity federation and transformation in the cloud.

– Service Bus: Allows you to expose your services (in the cloud or on premises) on the internet via a URI, without having to open up incoming ports inside your firewall.

– Workflow: Running Windows Workflow based workflows in the Cloud.

Microsoft SQL Services: This provides “SQL like” data services in the cloud based on SQL Server. This is effectively a premium storage service over the standard one provided by Azure Storage Services.

Live Services: There is a wealth of data locked within Microsoft Live applications (e.g. Live Mail) that is difficult to interact with. Live Services allows your applications to interact with this data. Building on Live Mesh it also enables synchronizing this data across a user’s numerous devices.

Windows Azure:

This is the base environment in which your application will sit. It is not an operating system, but that is a useful way to imagine it: in the same way that an OS provides an abstraction over the system’s hardware and provides APIs to communicate with it, Windows Azure is like an OS in the cloud. It sits on the virtual hardware and provides an environment (a fabric) for running your applications.

The deployment and management of your application instances is transparent to the developer, but it’s useful to understand how Azure works under the covers. On deploying your application to the cloud it is added to a virtual hard disk, which is then added to a virtual machine instance running on a Windows Server 2008 (Server Core) host in a Microsoft data centre. Interestingly, a multicast message is sent to all available hosts, allowing multiple instances to be installed concurrently. The virtual machine running your application instance will share its host machine with other applications, and your instance may move around different host machines as required to maintain availability and allow server maintenance. Microsoft’s deployment strategy takes into account both Fault and Update Domains, ensuring that your instances are not all deployed on a single point of failure (e.g. a single power supply). For the current CTP release the specification of the VM is: 64-bit Windows Server 2008, 1.5-1.7 GHz CPU, 1.7 GB RAM. It is expected that the commercial release will allow for a choice of specifications. It’s worth noting that each Azure instance currently only sees one CPU, so multi-threading should be used within your code for non-CPU-intensive tasks.

Your application instances can perform one of two roles, Web or Worker:

A “Web Role” runs within IIS 7 and therefore effectively runs as an ASP.NET web application. This means that most types of application that can run under IIS can run in a Web Role, for example ASP.NET web sites and WCF service applications. A Web Role allows inbound connections over HTTP and is used where inbound connections from the outside world are required.

A “Worker Role” is similar to a Windows Service, except that it runs in the cloud. It cannot accept inbound communications, but it can make outbound communications. It is a .NET class library with a Start() method which is run at start-up, and it’s up to your code to keep itself alive (using sleeps and loops). Communication between roles/instances is via ‘Queues’ (more on these below). These instances are ideal for background processing of data, allowing a faster response from your Web Roles if they off-load the intensive work onto the Worker Roles.
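As a rough illustration of that keep-alive pattern, here is a minimal Worker Role sketch. It assumes the CTP-era SDK, where the worker derives from RoleEntryPoint in the Microsoft.ServiceHosting.ServiceRuntime namespace; the exact namespace and base-class members changed across SDK releases, so treat the details as indicative rather than definitive.

```csharp
using System.Threading;
// CTP-era SDK namespace; an assumption that may differ in later releases.
using Microsoft.ServiceHosting.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Called once at start-up; the loop keeps the role instance alive.
    public override void Start()
    {
        while (true)
        {
            // Do the background work here, e.g. pull a message off a queue
            // and process it (see the Queues section below).
            DoBackgroundWork();

            // Sleep so the instance doesn't spin at 100% CPU when idle.
            Thread.Sleep(10000);
        }
    }

    private void DoBackgroundWork()
    {
        // Placeholder for the application's own processing.
    }
}
```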

It is expected that more roles will emerge with the commercial release of the Azure platform.

Windows Azure Storage Services:

Windows Azure currently provides four forms of storage: ‘Local’, ‘Queues’, ‘Tables’ and ‘Blobs’. It is important to note that SQL Data Services is a separate service that is not part of the Azure storage services, but an additional add-on service that provides a more SQL-like data framework. Interestingly, all these storage services are independent and fully accessible over HTTP(S) (via a RESTful interface) from both within and outside of the cloud. This means that your local Windows client application could store its data in the cloud even though the application is not hosted in the cloud. Alternatively you could save the data from your cloud application in Azure storage but then access it from your local on-premises application. All data writes to the storage services (this doesn’t include Local storage) are triplicated across multiple servers for redundancy.

Local Storage:

This is not part of the Windows Azure Storage Services but is included here for completeness and to avoid confusion. Each Azure instance runs on a Virtual Hard Disk (as previously discussed), which provides around 250GB of local, transient disk space. As this data is transient and local to that one instance of your application, you can’t use this space for true data persistence, but it is useful where you need to write temporary data to disk during processing.

Queues:

Queues primarily allow communication between instances, and in particular allow Worker Roles to be passed work from Web Roles. For example, your Web Role may accept incoming data which it persists to a queue for a Worker Role to pick up; the Worker Role constantly polls the queue for work to process. Queues are FIFO and, because they are persisted to disk (and triplicated), they are very durable. This gives system designers a powerful way to build transaction-style durability into Azure applications that is not supported at the data layer. When a message is read it remains in the queue but is marked as hidden, preventing it from being picked up by another instance; it is up to the application to explicitly delete the message once it has been actioned. If it is not deleted it becomes visible again after a set period (less than a minute). This means that if the application fails after reading a message, the undeleted message reappears on the queue and can be picked up and processed once the application is running again. Designing queues into your application therefore builds durability into the architecture; the poll-process-delete pattern is sketched below.
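The sketch below shows the poll-process-delete pattern. The IQueueClient interface and QueueMessage class are hypothetical stand-ins for whichever queue API you actually use (for example the SDK's StorageClient sample or raw REST calls); only the pattern itself is the point.

using System;
using System.Threading;

// Hypothetical queue abstraction - stands in for the real queue API.
public interface IQueueClient
{
    QueueMessage GetMessage();          // returns null if the queue is empty
    void DeleteMessage(QueueMessage m); // removes the message permanently
}

public class QueueMessage
{
    public string Id { get; set; }
    public string Body { get; set; }
}

public static class WorkerLoop
{
    // A read only hides the message; if we crash before DeleteMessage
    // the message becomes visible again and another instance can
    // process it - this is where the durability comes from.
    public static void Run(IQueueClient queue)
    {
        while (true)
        {
            QueueMessage msg = queue.GetMessage();
            if (msg == null)
            {
                Thread.Sleep(1000); // nothing to do, back off briefly
                continue;
            }

            Process(msg);             // do the work first...
            queue.DeleteMessage(msg); // ...then delete the message
        }
    }

    private static void Process(QueueMessage msg)
    {
        Console.WriteLine("Processing: " + msg.Body);
    }
}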

BLOB Storage:

Blob storage provides a simple method of storing and retrieving BLOBs (Binary Large Objects). This is particularly useful for media content but can also be useful for persisting serialised objects. Blob storage works as a hierarchy, in a similar way to a file system: you define an ‘Account’, which contains ‘Containers’, which hold the Blobs. Blobs can also be split into ‘Blocks’, which allows large Blobs to be handled. This relationship is shown in the diagram below.

[Diagram: Blob storage hierarchy - Account > Container > Blob > Blocks]

Remember that the Storage Services are separate from your Azure application and can be accessed independently. The hierarchical relationship described above is key to the RESTful URL used to retrieve this data. The URL looks like this:

http://<Account>.blob.core.windows.net/<Container>/<BlobName>

This provides a very user-friendly URL that is easy to navigate and allows the designer to use the hierarchy to their advantage, keeping the data structures within the application as simple as possible. A sketch of fetching a blob over this REST interface is shown below.
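As a sketch of how directly accessible this is, the following console snippet downloads a blob with a plain HTTP GET using the standard .NET HttpWebRequest class. The account, container and blob names are placeholders, and it assumes a publicly readable container; authenticated requests additionally need a signed Authorization header, which is omitted here.

using System;
using System.IO;
using System.Net;

class BlobDownload
{
    static void Main()
    {
        // Placeholder account/container/blob names - swap in your own.
        string url = "http://myaccount.blob.core.windows.net/photos/holiday.jpg";

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";

        using (WebResponse response = request.GetResponse())
        using (Stream blob = response.GetResponseStream())
        using (FileStream file = File.Create("holiday.jpg"))
        {
            // Copy the blob to a local file in small chunks.
            byte[] buffer = new byte[8192];
            int read;
            while ((read = blob.Read(buffer, 0, buffer.Length)) > 0)
            {
                file.Write(buffer, 0, read);
            }
        }

        Console.WriteLine("Blob downloaded.");
    }
}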

Tables:

When we think of storage we tend to think of relational databases and table-based schemas. The Table storage service provides a mechanism to store data in hierarchical tables, but these are NOT relational tables. This seems to be a sticking point for many people who can’t see the benefit of a table structure that isn’t relational. RDBMS systems have been around for so long that the relational model is taken for granted as the best fit for every situation. The truth, of course, is that it depends on what sort of system you are trying to build. The view Microsoft have taken is that Azure is a “platform for highly scalable and available applications” where the RDBMS model doesn’t always fit. Instead of a centralised, normalised data structure that minimises disk space and duplication and provides complex query services, why not use the power of distribution and the cheap cost of disk storage to provide a fast, scalable and reliable data store that happily duplicates data.

Table Storage does not provide referential integrity, joins, GROUP BY, transactions or complex queries. If you determine that you really do need a relational model for your application then you will need to consider the SQL Data Services offering (mentioned briefly above) and pay the premium. Table Storage, however, provides cheap, scalable and durable data management with no fixed schema. The idea is that you de-normalise your data and store it in the shape the application needs, using multiple inserts and only simple queries. The data is not held as physical tables but merely as ‘entities’ with properties (like fields or columns). Each entity has a partition key and a row key, which together provide uniqueness. Currently only the row key is indexed, so the data should be partitioned for scalability. The CTP version requires some creative use of row keys and partitions to produce the desired effect, but it can truly scale. A sketch of a de-normalised entity follows.
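To make the de-normalisation idea concrete, here is a sketch of an entity for order lines. The class is a plain, illustrative POCO rather than a base class from any specific SDK; only the PartitionKey/RowKey convention and the duplication of data are the point.

using System;

// A de-normalised 'entity' for Table Storage. There is no fixed schema:
// each entity is just a bag of properties plus the two keys.
public class OrderLineEntity
{
    // PartitionKey groups related entities together and is the unit of
    // scale; here all lines for one customer share a partition.
    public string PartitionKey { get; set; }   // e.g. "customer-42"

    // RowKey must be unique within the partition; combining order and
    // line numbers keeps the lines of an order sorted together.
    public string RowKey { get; set; }         // e.g. "order-1001-line-03"

    // De-normalised data: customer and product details are duplicated
    // onto each row instead of being joined in from other tables.
    public string CustomerName { get; set; }
    public string ProductName { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
    public DateTime OrderDate { get; set; }
}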

Development Lifecycle/Tools:

Azure applications for the CTP need to be written in managed code (.NET), although the commercial version is expected to also support unmanaged code. Installing the Windows Azure SDK gives you locally installed mock versions of the Windows Azure fabric (to run your instances in) and of the Azure Storage Services. These allow you to run and debug your cloud application on your developer machine without an Azure account or internet access. Once you have completed your application you ‘publish’ it from Visual Studio. This runs it through CSPack.exe, which gathers the assemblies and related configuration and compresses them into a package. You then log into the online “Azure Services Developer Portal” and upload the package to a staging area in the cloud. This staging area can be publicly accessed but is separate from your live application instances, allowing you to test the application privately in the cloud environment and then promote it to live once testing is complete. The promotion process ensures that there is no downtime of your cloud application during the switch to the new version.

[Screenshot: Azure Services Developer Portal]

The Portal provides key information on your Windows Azure accounts and allows you to extract the logs for your applications and to view detailed reports on various metrics such as Network Usage, Storage, Virtual Machine hours etc.

Developing For Windows Azure:

In order to develop an Azure application you must install the Windows Azure SDK and the Windows Azure Tools for Visual Studio. One point to note is that the SDK utilises IIS 7 and therefore requires Windows Vista on the developer machine. You also need SQL Server 2005 Express for the local mock Storage Services to use. If you want to actually host your finished application in the cloud then you need to request a token from the Microsoft Azure web site. These are free for the CTP but there is a waiting list, so register early.

You will find new Azure project templates in the Visual Studio ‘New Project’ dialog, which allow you to create a basic Azure-configured application as a starting point. The result is a solution containing some Azure-specific items plus an ASP.NET project for the Web Role and/or a class library project for the Worker Role (depending on the options you picked).

The Azure API provides you with access to the RoleManager class, through which you can use the logging and configuration utility classes. As you cannot attach a debugger to your application once it is in the cloud, it is important to add instrumentation to monitor its progress and to report exceptions. This log output can be viewed in the local Development Fabric during development, but once in the cloud it is automatically written to the Storage Services, from where you can download it. Logging is mostly a matter of calling RoleManager.WriteToLog(), as sketched below.
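For illustration, here is how that instrumentation might look. I'm assuming the CTP signature RoleManager.WriteToLog(logName, message) from the Microsoft.ServiceHosting.ServiceRuntime namespace, and the 'Information'/'Error' log names; both are recollections of the CTP SDK and may change in later releases.

using System;
using Microsoft.ServiceHosting.ServiceRuntime; // CTP SDK namespace (assumed)

public static class OrderProcessor
{
    public static void Process(string orderId)
    {
        // Trace progress - you can't attach a debugger in the cloud.
        RoleManager.WriteToLog("Information", "Processing order " + orderId);

        try
        {
            // ... do the actual work here ...
        }
        catch (Exception ex)
        {
            // Report exceptions so they appear in the downloadable logs.
            RoleManager.WriteToLog("Error", "Order " + orderId + " failed: " + ex);
            throw;
        }
    }
}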

Two key files in the solution are the Service Configuration files, which define your Azure application and the services it consumes. These files effectively describe your application to the Azure fabric and tell it how to handle it, for example how many instances of your Web Role should be deployed and which storage accounts to use. By changing the instance value from 1 to 5 you suddenly have 5 instances of your application running, with all the scalability that provides. Whilst you can still use web.config for configuration values, these should be limited to values that don’t need to change at runtime, as changing web.config requires you to re-package and re-deploy your application. For configuration that can change dynamically at runtime, use the ServiceConfiguration file instead, as sketched below.
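As a sketch, reading one of those runtime settings from code might look like this. I'm assuming the CTP helper RoleManager.GetConfigurationSetting(name); the setting name "OrdersQueueName" is a made-up example that would be defined in the ServiceConfiguration file.

using Microsoft.ServiceHosting.ServiceRuntime; // CTP SDK namespace (assumed)

public static class AppSettings
{
    public static string OrdersQueueName()
    {
        // Reads a setting defined in the ServiceConfiguration file;
        // unlike web.config this can be changed without re-packaging
        // and re-deploying the application.
        return RoleManager.GetConfigurationSetting("OrdersQueueName");
    }
}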

For consuming the Storage Services from your cloud application it is recommended that you use the ‘StorageClient’ project provided in the Samples section of the SDK. This project provides an abstraction over the REST API. Using it not only gets a new project up and running quickly but also shields you from change, as the underlying API is expected to evolve as the platform matures. There’s no point learning something that is going to change.

This is just a very quick overview; for more information I recommend you download the SDK and then run through the hands-on labs in the Azure Services Training Kit.

Conclusion:

Windows Azure is not Microsoft’s definitive answer to what a cloud computing platform should look like; it is a very early CTP release of their future platform, and merely a step on the way to the next computing paradigm, whatever that may eventually look like. It is a big step though, as the functionality provided is enough to get a very capable system up and running and hosted entirely in the cloud. It does this by building on the development tools we already know (Visual Studio and managed code), but in some areas it also requires a shift in thinking away from the more traditional approaches to software design we’ve been using for years.

References:

Microsoft Windows Azure Site:
http://www.microsoft.com/azure/default.mspx

Windows Azure SDK:
http://www.microsoft.com/downloads/details.aspx?familyid=B44C10E8-425C-417F-AF10-3D2839A5A362&displaylang=en

Windows Azure Services Training Kit:
http://www.microsoft.com/downloads/details.aspx?FamilyID=413e88f8-5966-4a83-b309-53b7b77edf78&displaylang=en

Windows Azure Tools For Visual Studio:
http://www.microsoft.com/downloads/details.aspx?familyid=59E8FC0C-C399-4AB7-8A93-882D8E74B67A&displaylang=en

Azure Services Platform Developer Center:
http://msdn.microsoft.com/en-us/azure/default.aspx

Deploying a Service on Windows Azure:
http://msdn.microsoft.com/en-us/library/dd203057.aspx

Thin Client vs Thick Client Architecture

Earlier in the year I did a presentation on the pros and cons of thin vs thick client architecture, purely from the perspective of recommending an approach for a new UI. This is a long-running debate and one that can become very political, as most people have a preference: usually developers/architects either go with the technologies they feel comfortable with, or want to try the latest fashionable technology. I’ve noted down my comments for what they’re worth. Please note that this is not a debate about hardware technologies, which is a separate (although related) issue, but purely a debate about client application design.

One way to think of a ‘thin client’ is to imagine it as unintelligent. Little data processing is done on the client; instead, processing tasks are delegated to the supporting server. The client’s primary responsibility is merely to display data and collect input from the user for posting back to the server. Some processing can still occur on the client for validation, an example being using JavaScript to validate a web form before it is submitted. When we think of thin clients we traditionally think of browser-based applications, as this is the most common form, but a client doesn’t have to be browser based to be considered thin. Extending this idea further, some browser-based applications are actually quite thick, with a heavy reliance on plug-ins and frameworks. Some contain so much client-side scripting that they are arguably thick clients that happen to live inside a browser.

Non-thin clients go by various names, but we’ll use ‘thick clients’. In this model a high degree of data processing is done on the client. The model can vary from no server involvement to a fairly large amount of server processing, but the idea is that the client application is more “intelligent” and capable of processing more data locally. Servers are still typically used for data queries and persistence. Traditionally we see thick clients as regular Windows applications installed on the local machine.

The key separation between the thin and thick styles is where the data processing occurs: on a thin client it is on the central server, whereas on a thick client a high degree of processing occurs locally. The differences between thick and thin clients are reviewed below. These are generalised differences, though, as each system is unique and may not exhibit all the characteristics of its peers.

· Audience Reach: It’s generally easier to reach larger numbers of users, and users in disparate locations, via a thin client architecture. Certainly a web-based thin client has a lower barrier to adoption, the minimum requirement being to point a browser at a given URL. Conversely, distributing your thick client software to users’ PCs can be problematic and complex, even in corporate environments, although technologies such as ‘ClickOnce‘ can take away some of this deployment pain.

· Central Management: The thin client’s server (e.g. the web server for web clients) plays a key part in implementing the business logic for the application, and this can be centrally located and managed. Changes can easily be rolled out to all users by changing the server-side code, making deployments and updates simpler. Thick clients don’t have this reliance on the server for basic navigation and processing logic, which means they lack the benefits of a central server but gain from reduced infrastructure costs associated with maintaining a server estate.

· Versioning: There is generally only one version of a thin client application, as it is served from your central server, ensuring all users share the same version. For thick clients it is important to maintain compatibility with previous versions, and ideally make them self-updating; this requirement to support previous versions can limit how you evolve new functionality. However, the sheer number of pages and elements that make up a thin browser application can also make version control (within the enterprise) more problematic.

· Client Environment: Browser applications are largely (but not completely) independent of the client’s local environment. Put (very) simply, as long as the user’s PC has a browser installed then the application can run happily. Whilst browser and OS versions still cause issues, these are minimal compared to a thick client installation, where dependencies and prerequisites can cause grief. .NET’s XCOPY deployment model has made huge steps towards overcoming ‘DLL Hell‘, but it still needs careful planning by thick client developers.

· Thin device support (PDA, mobile): A thin client can arguably reach more devices due to its minimal requirements. Many devices now have browsers, although screen sizes and other differences still add complexity.

· Automated UI test tools: Testing a browser client can be automated using tools that record and replay HTTP requests/responses (particularly useful for performance testing). Testing a thick client UI through automation is harder, although not impossible; Microsoft’s UI Automation framework allows developers to interact with a Windows application and perform scripted testing. It is possible to test both thick and thin client UIs using standard automated unit testing tools (NUnit, Team System etc.) by abstracting the UI code from the business logic and underlying data using the recognised Model View Controller (MVC) or Model View Presenter (MVP) patterns.

· Dynamic UI building: It is easier to dynamically build a UI in a browser-based thin client, which can be a powerful feature and one used by many flexible client applications.

· UI Richness: This is again a debatable statement, but thin clients are usually less rich in functionality. Thick clients embrace the full Windows Forms functionality and maturity, and this user experience can be hard to replicate on a thin client. A thin client’s responsiveness depends on network traffic and the traditional HTTP request/response model, and richness can be restricted by the confines of HTML and HTTP; the requirement to ‘postback’ to the server slows the user experience. Newer methods of posting data back to the server (e.g. AJAX) are leading to more responsive web applications, but it is debatable whether the extra effort required to implement these techniques is a worthwhile investment where a thick client is an option.

· Network bandwidth: The transmission of data to and from the thin client causes higher network bandwidth usage compared to thicker clients. Constant network availability is also required to run a thin client application: as the client is unable to function without its server, there is little scope for offline working. There are always exceptions, of course, and Google Gears is making offline browser applications possible.

· Client-side CPU: Thick clients can make use of the powerful PC hardware sitting on a user’s desk, which can make for a more efficient processing model and can help reduce infrastructure costs. Shifting processing to the client takes the load off your enterprise’s server infrastructure.

· Developer Skills: Developing a thin (browser-based) client often forces development teams to adopt multiple technologies and mindsets, and many developers get overloaded having to keep up to date with numerous technology streams. Whilst using multiple technologies is feasible for an averagely competent developer, mastering them all is difficult. Many technology vendors attempt to bridge this gap in the pursuit of productivity, but that itself causes problems when the intricacies of web-based development are neglected (e.g. managing state). With a thick client approach there is a single mindset and development approach.

· Plug-ins & 3rd Party APIs: It is not as easy to use plug-ins and third-party APIs in a browser client as in a thick Windows Forms application.

‘Smart Client’ – A Compromise?

A thinner type of thick client is the ‘Smart Client’. This offers the advantages of a thick client application combined with some of the benefits of a thin client, so it goes some way to bridging the gap. Definitions of what a smart client is vary, but there is a set of properties that smart clients share that make them ‘smart’: use of both local and remote resources (supporting offline processing) and an easy, self-updating, intelligent deployment model. In the Microsoft space a ‘Smart Client’ is essentially a Windows application that runs locally on the user’s machine. It uses local resources and peripherals and offers the traditional rich Windows UI, but displays the properties mentioned above, i.e. an easy deployment model and connectivity to services on the local network or the Internet.

By using a centralised installation/update mechanism via ‘ClickOnce’, .NET applications can be deployed and updated with (relative) ease, thanks largely to .NET’s XCOPY deployment model. Local machine resources can be utilised to perform offline processing, and the absence of a thin client’s complete reliance on the network means that smart clients can go offline. The Microsoft Sync Framework, together with the excellent SQL Server Compact 3.5 database, makes it possible to build powerful applications that can cope with being ‘occasionally connected’. The capacity of smart clients to utilise network services makes them ideal clients within an SOA landscape. Thick client-server applications that traditionally processed locally and persisted data in a remote database are slowly giving way to smart clients that consume web service assets within the SOA enterprise to process business logic and persist their data.

Merging the models: The Future is Fatter Thin Clients?

So where is all this heading? Will the future be ‘thick’ or ‘thin’ or ‘smart’? Newer technology released year on year is constantly bridging the gap, making thin clients smarter and thick clients leaner. Technologies like ASP.NET AJAX and Silverlight are increasing the richness of browser-based applications, removing the clunkiness of thin clients. The .NET Framework Client Profile provides a more streamlined version of .NET for clients to download, potentially making smart client deployments slicker. Advances in web technologies are making it easier to consume remote services/data from the browser, allowing the browser to host smart clients that interact with enterprise services or those in the ‘cloud’. With the growth of cloud computing and the abundance of rich services available for consumption, the future could consist of pure mash-up clients. Imagine business users simply mashing together the feeds and services they need and creating their own client applications that work the way they want them to, just as they already pull the data they care about onto personalised web pages (iGoogle or My Yahoo!).

It seems the future won’t be thin or thick; instead the middle ground will rule and the argument will become pointless. The importance of the Internet means browser-based applications appear here to stay, but the availability of a rich network infrastructure to deploy smart clients and provide connectivity to a service-enabled world will make smart clients a powerful choice (especially for corporate LOB applications). The key point, though, is that there are increasingly fewer differences between the choices. Thin clients are getting richer and more powerful, whilst thick clients are becoming more flexible and dynamic. This may lead to a point where the decision on which technology to use for a client application is based on the system’s requirements and not on architects being hung up on the political thick/thin debate (imagine that!).

Design is the Key:

The success of any system is down to the attention paid to the design process. Thick clients are often credited with being more usable, but UI usability really has more to do with good UI design than with the ‘thickness’ of the application; there are as many unusable thin client applications as thick ones. Engaging your users and valuing their feedback is vital to ensuring that the UI is productive.

The factors that influence your decision on client architecture should focus on: the number and location of users, the processing requirements, the tangible importance of a rich UI, the frequency of updates, whether offline capability is required, and, not forgetting, your current architectural landscape.

As for your code, it’s vital with either model to separate concerns. Keep the UI and any business logic apart and communicate only through clear interfaces. A good UI should be seen as a thin wrapper that is capable of being replaced without significant changes to the system. Consider using the following:

· Model View Controller/Presenter (for all models) patterns.

· Microsoft User Interface Process Application Block.

· The Separated Interface Pattern

· Microsoft Smart Client Software Factory for building composite Smart Clients

A user interface should only deal with the collection and display of data serviced by business/service layers, and these layers may exist locally (thick client), remotely (thin client) or both (smart client). It is important to clearly define and govern what processing your application performs and where it performs it. Enforce message-based communication between layers and fully utilise the SOA landscape if you have one. A small sketch of the Model View Presenter idea follows.
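To make the ‘thin wrapper’ idea concrete, here is a minimal Model View Presenter sketch. The interface and class names are purely illustrative (not taken from any specific framework); the point is that the view exposes only data and events, so a Windows Forms, ASP.NET or Silverlight implementation could be swapped in without touching the presenter or the business/service layer.

using System;

// The view contract: display data, raise user input as events.
public interface ICustomerView
{
    string CustomerName { get; set; }
    string Status { set; }
    event EventHandler SaveRequested;
}

// The business/service layer the presenter talks to; it could live
// locally (thick client), remotely (thin client) or both (smart client).
public interface ICustomerService
{
    void Save(string customerName);
}

// The presenter wires view events to the service and pushes results back,
// so the UI itself stays a thin, replaceable wrapper.
public class CustomerPresenter
{
    private readonly ICustomerView view;
    private readonly ICustomerService service;

    public CustomerPresenter(ICustomerView view, ICustomerService service)
    {
        this.view = view;
        this.service = service;
        this.view.SaveRequested += OnSaveRequested;
    }

    private void OnSaveRequested(object sender, EventArgs e)
    {
        service.Save(view.CustomerName);
        view.Status = "Saved";
    }
}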

Summary:

It seems the importance of this debate is fading year on year as the appeal of the middle ground grows stronger, forcing decision makers to base their arguments on a comparison of current and future requirements against the characteristics of the potential technology choices, which must be a good thing.