The End Of TechNet Downloads Raises The Barrier To Entry For MS Techies

Microsoft recently, and unfortunately, announced the demise of the TechNet Subscription. Whilst I appreciate that TechNet download abuse must contribute towards the availability of pirated products, I still think this is a short-sighted move by Microsoft. The MSDN subscription will continue (for now), and anyone making money from piracy will be able to cover the extra cost of an MSDN subscription. Few individuals, however, can afford an MSDN subscription to feed their enthusiasm for Microsoft products, nor would they want to with attractive alternatives available from other vendors.

My concern is that the barrier to entry for being a Microsoft Technology IT Pro or Developer has just been raised significantly. In my 2009 post on Microsoft making it too expensive for developers to experiment with Azure, I outlined how critical it is to make your products available to both current and future developers. Microsoft responded over the following years by offering free Azure websites, reducing prices and improving MSDN benefits. This lowered the barrier to entry to Azure for developers, but Microsoft has now raised it for IT Pros and the enthusiast market.

According to Microsoft, evaluation versions of operating systems will still be available for download. I think that 90-180 day trials are very valuable, but historically they have only been available for the latest products. That's great if you want to try out Windows Server 2012, but not if you need to experiment with Windows Server 2008, which is a major flaw in this approach. Short trial periods, such as those for client operating systems, are also a real frustration. Virtual Labs are excellent for targeted training on specific features but are no replacement for the real-world experience of running a real instance.

But surely it's all running in the cloud now anyway? Perhaps in the future the idea of running servers locally will seem a strange concept, but we are some way yet from that being the norm. The enterprise IT Pros and Developers of today, and more importantly of the near future, will need to be skilled in running servers locally for some time to come. Running virtual servers in the cloud might be an option for some and may be the future, but it is currently expensive, and techies will not be exposed to the server maintenance activities that are abstracted away by cloud providers.

There is a large home server enthusiast community that has relied on TechNet to evaluate and run Windows Server products. This is a vibrant, active community, one that happily shares detailed technical knowledge with the wider world and feeds the Microsoft technology communities. With the death of Windows Home Server, and now TechNet, these enthusiasts will start to look for alternatives, and by comparison there are plenty of non-Windows choices in this space (Linux/BSD).

The cost of a TechNet subscription has dropped to a bargain price over the last few years, perhaps too low. Microsoft could instead have gradually increased the price to make it less attractive to those looking to avoid buying retail versions, while keeping it as a mechanism for enthusiastic techies to access Microsoft operating systems.

In summary, I think Microsoft have needlessly raised the barrier to entry for experimenting with and learning Microsoft technologies, making alternative platforms more attractive. In the long run this move will surely push enthusiasts and young upcoming techies into the arms of Linux/BSD.

The Growth Of Business IT

In my popular post on "The Future of the IT Department" I covered how IT is changing rapidly in enterprises and touched on how business-aligned IT teams are going to become more relevant. Some of these agile 'business focused development and delivery teams' will be official IT-sponsored initiatives, whilst others will be somewhat rogue, business-division-sponsored teams working without the IT department in response to the expensive, often poor-quality service provided by the IT division.

The rapid pace of marketplace innovation and the lack of flexibility in many enterprise IT organisations, fuelled by the consumerization of IT and the growth of cloud computing, is leading to a boom in DIY business application development. Gartner predicts that…

“Rank-and-file employees will build at least a quarter of all new business applications by 2014, up from less than 5% in 2007.” [Gartner]

For many years the business has had the power to harness Excel macros and VBA to enhance end-user productivity, but this is now being extended by friendly new end-user tools: easy mobile app development, the ability to host new websites in the cloud in a few clicks, and a whole SaaS model ready to replace your in-house IT infrastructure overnight.

The business benefits of this boom are clear to see. The ability of end-users and business IT teams to manipulate data and process flows to meet the shifting demands of the market is attractive. Customer demands can, in theory, be more easily met by those closest to the customer building applications quickly and with their day-to-day use clearly in mind. As the market changes the user can adjust their homebrew application to fit, or throw it away and start a new one. Instead of a business analyst working closely with a developer to create an application, she can reduce the communication overhead by just building it herself. Even if the application is only used as a proof of concept, this is a very efficient way to find out what works and what doesn't. In this BusinessWeek article the CEO of NetApp explains the benefits seen from encouraging employees to build their own tools, such as cost savings and customer satisfaction.

It's not all peachy though; there are obvious pitfalls to this approach. The IT organisation may be slow and expensive, but it often has genuine reasons for being that way. Interoperability, support, security, regulatory concerns, supplier contracts and economies of scale are all topics the IT organisation has to consider, and so too must the business if it's going to promote this DIY application approach.

Business-run IT teams can do very productive work and react quickly to change, but in my experience problems arise when they have to rely on the IT department to implement their changes, and that's where tension occurs. Teams outside the IT structure can find it hard to understand the constraints of the IT department. I find developers in business-sponsored teams have a real desire to be productive for customers, but lack some of the rigour that is prevalent in IT-based teams (particularly around maintainability and change control). The IT department can seem a blocker to the team's agility when it is unable to meet the timescales expected by the business teams. Some effort needs to be made on both sides to understand the constraints the other is under and to work together. Critically, I feel the IT department needs to realise that this trend will continue and that the IT organisation is at risk of becoming irrelevant (other than keeping the lights on and maintaining legacy systems). Perhaps this is the natural evolution of the consumerisation of technology, but I do think IT organisations have a very relevant role to play in this shift. By sponsoring agile, business-centric development teams, the IT organisation of the future can better support the business, and IT professionals are ideally positioned to populate these teams and support the growth in DIY applications whilst adding some beneficial structure.

Estimates: A Necessary Evil

Despite being an age-old problem in the IT industry (and presumably in other industries too), it still concerns me how much we have to rely on estimates to manage resources on complex multi-million pound projects. We call them estimates to hide the truth that they are at best educated guesses, and at worst complete fantasy. In the same way that fortune tellers use clues to determine a punter's history and status (their clothes, watch, absence of a wedding ring, etc.), we as estimators naturally seek out clues to the nature of a potential project. Have we dealt with this business function before? Is their strategy clear? Will we get clear requirements in time? We then use these clues to plan out the project in our heads and load our estimates accordingly, but it's hard to avoid Hofstadter's Law, which states that:

“It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
     — Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid
     

We are asked daily to gaze into our crystal balls and come up with an accurate prediction of how long something will take, based on very little in the way of requirements or even context. How far out are these estimates likely to be in this situation? Well, the boffins at NASA can help here with their Cone of Uncertainty.

The Cone of Uncertainty is excellent at visually displaying how uncertainty evolves over a project. Based on research by NASA, it shows that at the start of a project estimates can be out by as much as 4x. Whilst this reduces as the work progresses and more becomes known about the project, it is often at the very early stage that estimates are collected and used as the basis for a business case or for acquiring resources, despite the fact that at this point they are known to be significantly inaccurate.
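To make the cone concrete, here is a minimal sketch (in Python) of what those widening and narrowing ranges mean for a single point estimate. The phase names and multipliers are illustrative values in the spirit of the cone described above, not figures taken from the NASA research itself; substitute whatever ranges your own data supports.

```python
# A minimal sketch of applying cone-of-uncertainty ranges to a point estimate.
# The multipliers below are illustrative, not taken from the NASA research.

CONE = [
    # (project phase, low multiplier, high multiplier)
    ("Initial concept",       0.25, 4.00),
    ("Approved definition",   0.50, 2.00),
    ("Requirements complete", 0.67, 1.50),
    ("Design complete",       0.80, 1.25),
    ("Code complete",         0.90, 1.10),
]

def estimate_range(point_estimate_days: float, phase: str) -> tuple[float, float]:
    """Return the (low, high) range implied by the cone for a given phase."""
    for name, low, high in CONE:
        if name == phase:
            return point_estimate_days * low, point_estimate_days * high
    raise ValueError(f"Unknown phase: {phase}")

if __name__ == "__main__":
    for phase, _, _ in CONE:
        low, high = estimate_range(100, phase)
        print(f"{phase:<22} a 100-day estimate could really mean {low:.0f}-{high:.0f} days")
```

Running it for a 100-day estimate shows just how wide the honest answer is at the concept stage, which is exactly when the business case is usually written.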

Estimating is hard and by its nature inaccurate, but that is not surprising considering the human factors we have to deal with. These are excellently outlined in this post; they include our strong desire to please and "The Student Syndrome" (whereby we tend to put off until later what we could do now). The post compares overestimation and underestimation, highlighting that the effects of underestimating are far worse than those of overestimating, and concludes…

"Never intentionally underestimate. The penalty for underestimation is more severe than the penalty for overestimation. Address concerns about overestimation through control, tracking and *mentoring* but not by bias."

So underestimating is bad. It's a shame, then, that we have the "Planning Fallacy", based on research by Daniel Kahneman and Amos Tversky, which highlights a natural…

"tendency for people and organizations to underestimate how long they will need to complete a task, even when they have experience of similar tasks over-running."

There are many explanations of the results of this research but interestingly it showed that it …

"only affects predictions about one’s own tasks; when uninvolved observers predict task completion times, they show a pessimistic bias, overestimating the time taken."

…which has implications for the estimating process and conflicts with the sensible view of many (including Joel on Software) that the estimate must be made by the person doing the work. It makes sense to ask the person doing the work how long it will take, and it certainly enables them to raise issues such as a lack of experience with a technology, but this research highlights that they may well still underestimate it.

In many corporate cultures it is no doubt much safer to overestimate work than to underestimate it. The consequence over time, however, can be an organisation where large development estimates become the norm and nothing but mandatory project work is authorised. This not only stifles innovation but also makes alternative options more attractive to the business, such as fulfilling IT resource requirements externally via third parties (e.g. outsourcing/offshoring).

The pace of technological change also works against our estimating skills. The industry itself is still very young and is rapidly changing around us. This shifting landscape makes it very difficult to find best practice and make it repeatable. As technologies change, so does the developer's uncertainty when estimating. For example, a developer who has coded C++ for 5 years was probably starting to make good estimates for the systems on which he worked, but if he moves to .NET his estimating accuracy is set back a few years – not due to the technology, but simply to his unfamiliarity with it. It's the same for architects, system admins and network professionals too. As an industry we are continuously seeking out the next holy grail, the next magic bullet, and yet we are not taking the time to train our new starters or to develop solid standards and certifications that would let us mature as an industry. This is a challenge that many professions have had to face up to and overcome (e.g. the early medical profession, structural architects, surveyors, accountants), but that's a post for another day.

OK, OK, so estimates are evil; can we just do without them? Well, one organisation apparently seems to manage. According to some reports…

"Google isn’t foolish enough or presumptuous enough to claim to know how long stuff should take."

…and therefore avoids date-driven development, with projects instead working at optimum productivity without a target date in mind. That's not to say that everything doesn't need to be done as fast as possible; it just means they don't estimate what "as fast as possible" means at project start. By encouraging a highly productive and creative culture, and avoiding publicly announced launch dates, Google is able to build amazing things quickly, in the time it takes to build them, unbound by an arbitrary project deadline based on someone's 'estimate'. It seems to work for them. Whether or not this is true in reality, it makes for an interesting thought. The necessity for estimates comes from the way projects are run and how organisations are structured; they do little to aid engineers in the process that is software development.

So why do we cling to estimates? Well, unless your organisation is prepared to radically change its culture, they are without doubt a necessary evil: imperfect, but a mandatory element of IT projects. The key, therefore, is to improve the accuracy of our estimating process one estimate at a time, whilst reminding our colleagues that they are only estimates and by their nature they are wrong.

Agile estimating methods include techniques like Planning Poker, which simplify the estimating process down to degrees of complexity, and whilst they can be very successful they still rely on producing an estimate of sorts, even if stories are only classified by magnitude of effort. Just this week I was chatting to a PM on a large agile project who was frustrated by a talented development team's inability to hit their deadlines, purely as a result of their poor estimating.
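For readers who haven't used it, the mechanics of Planning Poker are simple enough to sketch in a few lines of Python. This is purely illustrative (the card deck and the "spread" rule below are my assumptions, not a prescribed standard): everyone votes with a card, a wide spread forces a discussion and a re-vote, and only a converged vote becomes the estimate.

```python
# A minimal sketch of the Planning Poker idea: everyone picks a card from a
# Fibonacci-like scale, and a wide spread signals the story needs discussion
# before an estimate is accepted. Purely illustrative.

DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]  # typical planning-poker cards

def needs_discussion(votes: list[int], max_spread: int = 2) -> bool:
    """Flag a story when votes span more than `max_spread` adjacent cards."""
    positions = [DECK.index(v) for v in votes]
    return (max(positions) - min(positions)) > max_spread

def agreed_estimate(votes: list[int]) -> int | None:
    """Accept the highest vote once the spread is narrow enough, else None."""
    return None if needs_discussion(votes) else max(votes)

if __name__ == "__main__":
    print(agreed_estimate([3, 5, 5, 8]))    # -> 8 (close enough to accept)
    print(agreed_estimate([2, 5, 20, 40]))  # -> None (talk about it first)
```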

There are many suggested ways to improve the process and help with estimate accuracy, and I'm not going to cover them all here, but regardless of the techniques you use, don't waste the valuable resource that is your previous estimates. Historic estimates, when combined with real metrics of how long the work actually took, are invaluable for improving your future estimates.

Consistency is important, enabling a project to be compared with previously completed ones. By using a template or check-sheet you ensure that all factors are considered and recorded, and you avoid the mistakes that come from forgetting to include items. Having an itemised estimate in a consistent format lets estimates be compared easily as a quick sanity test (e.g. "Why is project X so much more than project Y?"). It also allows you to capture and evolve the process over time as you add new items to your template or checklist. Metrics, such as those provided by good ALM tools (e.g. Team Foundation Server or Rational Team Concert), are useful for many things, but especially for feeding back into the estimating process. By knowing how long something actually took to build, you can more accurately predict how long it will take to build something similar.
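As a hedged illustration of that feedback loop, the sketch below derives a simple correction factor from past estimate-versus-actual pairs and applies it to a new raw estimate. The sample data and the mean-ratio calculation are assumptions for the sake of the example; in practice the actuals would come from your ALM tooling.

```python
# A minimal sketch of feeding historic estimates and actuals back into new
# estimates. The data and the simple mean-ratio correction are illustrative.

history = [
    # (task description, estimated days, actual days)
    ("Build reporting screen", 5, 8),
    ("Payment gateway integration", 10, 13),
    ("Data migration scripts", 8, 9),
]

def correction_factor(completed) -> float:
    """Average of actual/estimate ratios across completed work."""
    ratios = [actual / estimate for _, estimate, actual in completed]
    return sum(ratios) / len(ratios)

def calibrated_estimate(raw_estimate_days: float) -> float:
    """Scale a raw estimate by how far off similar past estimates were."""
    return raw_estimate_days * correction_factor(history)

if __name__ == "__main__":
    print(f"Correction factor: {correction_factor(history):.2f}")
    print(f"A raw 12-day estimate becomes {calibrated_estimate(12):.1f} days")
```

Even something this crude makes the conversation with the project manager more honest than a bare gut-feel number.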

In summary then, estimates are by their nature wrong and, whilst a necessary evil of modern organisations, notoriously difficult for us mere humans to get right. Hopefully this post has made you think about estimates a little more and reminded you to treat them with the care they deserve in your future projects.

It’s All About Culture (Enterprise IT Beware)

This interesting post by PEG recently highlighted an organisation's culture as, in reality, the only differentiating factor it has. In his view assets, IP, cost competitiveness, brand and even people can be copied or acquired by your competition, but it is your company culture that will lead to success or failure. I agree with his assertion on the importance of culture, and would add that whilst culture has always been critical, the rapidly changing new world brings new challenges to competitiveness, making culture even more important for competitive advantage. I must clarify here that I see a huge gulf between the 'official' culture of an organisation that is documented and presented by senior management and the real culture living and breathing on the shop floor (and they rarely match).

This excellent highscalability.com post by Todd Hoff highlights how the rules framing IT (and business) are changing, and how start-ups have become a beacon for investigating this new world. In this new world the key elements are flexibility, adaptability and innovation. These traits thrive in start-ups, where the culture encourages them. Many of these new industry shakers lack the assets, brand and IP of their rivals but excel at using their culture to outmanoeuvre bigger competitors and drive innovation in their industry.

Some enterprises get this and are trying hard to foster a more innovative and customer focused culture. The emergence of agile development practices can help to focus the team on the true business value of features and aid flexibility in the use of resources. SAP has recently designed its new office environment to fully promote Agile development practices (see this post), and whilst this on its own can’t change corporate culture, it can remove some of the blockers to a more agile culture emerging. Of course there are many other enterprises that still rely on military inspired hierarchical structures and attempt to enforce a desired culture. This InfoQ article by Craig Smith summarizes recent articles covering how mainstream management are missing the benefits of an agile approach within their organisations. 

"…The management world remains generally in denial about the discoveries of Agile. You can scan the pages of Harvard Business Review and find scarcely even an oblique reference to the solution that Agile offers to one of the fundamental management problems of our times.”

The benefits of Agile practices are well documented, so if this fundamental approach is still not connecting with mainstream managers, how long will it be before they grasp the bigger paradigm shift occurring underneath them? This shift is being forged by start-ups and enabled by cloud computing.

Let's look at what is happening in the start-up space. Todd Hoff's post again highlights the use of small, dedicated, autonomous teams with the power (and responsibility) to make rapid changes to production systems, these teams being responsible for the entirety of their systems from design through build, test and deployment to monitoring. How does this work? It can only work effectively via a shared culture of innovation, ownership and excellence. Facebook and Google staff have stated that their biggest motivator is recognition of their good work by their peers (see here and here). This is 'Motivation 3.0' in action, with the intrinsic rewards of the job at hand being the main motivation to succeed. Compare this with the traditional, and still prevalent, command-and-control approach used in a lot of enterprises, with tightly controlled processes, working habits and resources. Splitting the responsibility for each stage of the development lifecycle between different teams (usually separated by reporting lines, internal funding requests, remote locations, etc.) and then expecting a coherent solution to emerge is not going to work in this new world.

We are now starting to see the emergence of the cloud as a major force in democratising computing power and enabling empowered teams. Todd's post covers in detail how the cloud is making the once impossible possible, and I recommend you take the time to read it. Only time will tell whether the cloud's impact on enterprise IT will act as a catalyst for more agile, competitive corporate cultures in the enterprises of the future.

The Enterprise & Open Web Developer Divide

In this interesting Forrester post about embracing the open web Jeffrey Hammond highlights the presence of two different developer communities. In his words:

"…there are two different developer communities out there that I deal with. In the past, I’ve referred to these groups as the "inside the firewall crowd" and the "outside the firewall crowd." The inquiries I have with the first group are fairly conventional — they segment as .NET or Java development shops, they use app servers and RDBMSes, and they worry about security and governance. Inquiries with the second group are very different — these developers are multilingual, hold very few alliances to vendors, tend to be younger, and embrace open source and open communities as a way to get almost everything done. The first group thinks web services are done with SOAP; the second does them with REST and JSON. The first group thinks MVC, the second thinks "pipes and filters" and eventing."

Following the tech industry, it is clear to me that this division is tangible, and I would suggest the gap is currently widening. I recently started to revisit my open web development skills after it occurred to me how large this divide was becoming and how important these skills will be in the future. Whilst the enterprise developer traditionally focuses deeply on a handful of technologies (too often from one vendor), the open web developer is constantly learning new languages and choosing between best-of-breed open source frameworks to get the job done. The new open web developer has evolved from a different age and with different perspectives, in many ways leaving behind the rules and constraints of the enterprise developer building typical line-of-business (LOB) applications. I'm not suggesting that enterprise developers don't understand these technologies; I assume many do, but they're unlikely to be living and breathing them. This is not just about web development technologies and techniques, but more about mind-sets, architectural styles and patterns. Perhaps it can be viewed as similar to the historical evolution from mainframes to distributed computing, with this simply being the next step. This movement complements the emergence of cloud computing, and one can assume that the social, dynamic LOB applications of tomorrow will rely heavily on the skills and technologies of the open web community. To quote Jeffrey again:

"In the next few years, their world is headed straight to an IT shop near you."

The proliferation of devices, cloud computing and a new breed of 'surfing since birth' young blood entering the industry, combined with the shift towards this new world from big players like Microsoft (e.g. using JavaScript to build Windows 8 apps), mean that enterprise IT will have to converge with the open web approach in order to meet future consumer needs. Only the integration of these worlds will enable enterprises to connect their existing application landscapes with the new web-based consumption model.

John R. Rymer's Forrester post on the subject provides his view of the differences between these communities, and his accompanying post details the technologies to focus on now (HTML5, CSS3, JavaScript, REST). Whilst it can be tricky to follow this sort of fast-moving, decentralised movement, the good news is that now is a great time to get into these technologies, with the umbrella HTML5 movement raising awareness within the industry and bringing some standards to advanced web design. Keep an eye on what the big web frameworks are offering, and track the innovations at companies like Google and Twitter. I recommend you read these Forrester articles and think about how this affects your architecture, IT organisation and career.
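If you have lived mostly in the SOAP-and-WSDL world, the REST-and-JSON style the open web crowd favours can be illustrated in a few lines. The example below is a hypothetical sketch (the endpoint URL is a placeholder, not a real API): a resource is addressed by a URL, fetched with a plain HTTP GET, and the payload is just a JSON document.

```python
# A tiny illustration of the REST-and-JSON style, using only the standard
# library. The endpoint URL is a placeholder, not a real service.

import json
from urllib.request import urlopen, Request

def get_customer(customer_id: int) -> dict:
    """Fetch a resource by URL and parse the JSON body: no SOAP envelope,
    no WSDL, just an HTTP verb and a plain JSON document."""
    url = f"https://api.example.com/customers/{customer_id}"  # hypothetical endpoint
    request = Request(url, headers={"Accept": "application/json"})
    with urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    customer = get_customer(42)
    print(customer.get("name"))
```

No code generation, no envelope, no service proxy, and that is precisely why this style travels so well across languages and devices.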

For some quality content on these technologies check out these links:  ‘Mozilla Developer Network’, ‘Move The Web Forward’ and ‘HTML5 Rocks’.

Enterprise IT Project Insanity

A study published in the Harvard Business Review has again shown that many IT projects continue to come in late and over budget. In addition, it shows a higher than expected number of large-scale failures. These failures are massively over budget (200% in this study) and over deadline (70% overruns), and the study cites examples where this has contributed to the collapse of the company, or at best cut its profit forecasts, leaving it at the mercy of the City. It's interesting reading, with the study showing that in total 27% of the 1,471 projects overran in some way.

Now you could argue that these are mostly major IT transformation projects and that the risks associated with them are therefore bound to be significant (although if a project is seen as transformational, that perhaps highlights an absence of a culture of continuous improvement at those organisations). Regardless, this is still damning evidence of the industry's ability to implement IT projects. Comparisons with other industries have been made on numerous occasions, and IT tends to come out worst for achieving project success (although it's not alone, with many large defence sector projects, for example, suffering too). There are many reasons for IT projects to fail and I'm not going to cover them all here, but instead ask why we are still repeating the same mistakes in many organisations. It has been 36 years since Frederick P. Brooks wrote the seminal book "The Mythical Man Month", yet many of the key themes within his essays remain problems today. Walk around many traditional enterprise IT departments today and you need to avoid the Tar Pit on your way to joining the Death March on the "Tower of Babel" project. For those of you not familiar with the book, the Wikipedia article on it summarises it well. Some concepts will seem obvious and some pitfalls basic, but don't forget this was written in a different age. The key points made in the book are the importance of progress tracking, tooling and communication, and the iconic mythical man month itself (whereby assigning more programmers to a project running behind schedule will make it even later, because of the time required for the new programmers to learn about the project, as well as the increased communication overhead).
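A large part of why the mythical man month still bites is simple arithmetic: pairwise communication paths grow roughly as n(n-1)/2, so coordination overhead climbs far faster than headcount. A tiny sketch makes the point:

```python
# A minimal illustration of why adding people to a late project can slow it
# down: the number of pairwise communication paths grows as n*(n-1)/2, so the
# coordination overhead climbs far faster than the headcount.

def communication_paths(team_size: int) -> int:
    """Pairwise communication channels in a team of the given size."""
    return team_size * (team_size - 1) // 2

if __name__ == "__main__":
    for size in (3, 5, 10, 20):
        print(f"{size:>2} people -> {communication_paths(size):>3} communication paths")
```

Doubling a ten-person team more than quadruples the number of conversations that have to happen for everyone to stay aligned.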

So what has changed in 36 years? In some ways a lot: the emergence of Lean/Agile practices and Motivation 3.0 (check out Daniel Pink's work on intrinsic motivation) have dragged the thought leaders of the industry in the right direction (usually forced via a groundswell movement), but crucially the impact of these innovations varies by company and corporate culture. In many traditional enterprises little has changed. We continue to march on with projects destined to fail using old-world approaches, planning in 'mythical man months' based on 'lies' (sorry, 'estimates'). We still see people being thrown at a problem to resolve it, despite this being a known anti-pattern. Many of the new approaches aimed at addressing these problems (e.g. lean/agile) thrive in smarter, leaner enterprises and technology companies, but have struggled to get traction in the traditional red-brick enterprises despite the impressive results of agile and iterative approaches. Even those enterprises that have adopted them have struggled to instil the philosophy behind Agile, instead just dogmatically implementing a specific Agile methodology. The shift is happening, but what's stopping faster evolution in these enterprises?

Many of these issues could be classed as cultural, organisational or management issues, so can we blame senior management? That is way too simplistic: all stakeholders in an IT project have a part to play in its success or failure. More likely, the management tools, processes, procedures and attitudes that run a modern enterprise don't naturally fit managing software development. In some ways this is not surprising, as these enterprises are busy managing their core competencies (finance, manufacturing, construction, logistics, etc.), with software development being done internally without the focus it perhaps needs. Take the world of banking as an example. A bank might spend millions on its IT and perhaps even consider some of its IT systems a competitive advantage; however, it is a bank, not a software house, and any activity within the organisation must fit the internal processes, whether that fit is a natural one or not. In my post about the "Future of the IT Department" I covered how these IT departments have become large and unwieldy beasts. Fitting software development into a non-IT enterprise can be difficult and rigid. How many times have you had to bend software development reporting to fit a model it doesn't naturally suit? We have all had to dumb down technical analysis of issues, risks and progress into bite-size chunks of simplistic bullet points that fit neatly into a PowerPoint slide but convey little in the way of accuracy, and no doubt you then get dragged into needless conference calls as a result of someone misunderstanding the technical situation.

That's not to say that those we report to are not intelligent (often far from it), but they often lack the skills required to manage technical detail. So what about the IT experts who progress into management; they understand the problem, right? Well, yes, they did, but it doesn't take long to adjust to the culture of the organisation, because it is a necessity for them to do so. They cannot do their job without following the processes that run the rest of the business, and they need information, milestones and gateways to track progress. The industry is getting wise to this at the development team level, as new tools emerge in the agile space that track the progress and velocity of a development team in easy-to-consume visual ways (e.g. TFS v.next). Whilst these are not intended to be consumed at exec level, I expect that more accurate reporting at the local level will result in better reporting flowing up the organisational structure. The pace of change in the industry is also to blame here, for the managers of today were the IT guys of yesterday, when different architecture landscapes were around and mainframes were king. Where there is an inability for management and IT professionals to communicate and share a culture, the gulf is filled with consultants and IT sales reps, which can compound the problem. The use of offshoring and outsourcing can also make things worse, as the barriers to communication become formal third-party engagements. We need to find common ground to bridge this cultural divide and help speed up the evolution of change.

It's not unusual for these large enterprises to use waterfall methodologies and tightly control IT department resources so that IT can be managed neatly within the organisation's financial and resource planning models. This will only become more ingrained over the next few years as the economic climate dictates tighter financial controls and risk monitoring, and it does little to aid the progress of IT evolution unless risks are taken to counteract the cost squeeze with more agility. The study mentioned above also highlighted that the projects that do succeed often use Agile approaches and focus on customer value, which makes sense. IT projects are one-off bespoke creations, yet they are managed as though they were identical widgets produced on a production line. Enterprises are organised into a post-war hierarchical command-and-control structure that actually works to avoid communication between teams, yet we still apply that model to the IT organisation in many enterprises. If we look at the likes of Google and Facebook, their developers are given more autonomy (and responsibility) for the software they build and its implementation, test, adoption and evolution. They are encouraged to deploy often and to innovate, with trust placed upon them in recognition of the fact that software development is a skill and one developer is not as interchangeable with another as is often assumed (explained brilliantly in John Miano's post on The Myth of the Interchangeable Programmer). In comparison, the enterprise developer more often than not sits in India creating code from detailed specifications (based on wrong assumptions at a point in time), separated by offshore coordinators, managers, onshore coordinators and a few thousand miles of undersea cable from the business they serve, stifling their ability to add value. When the quality of output into test drops, the next project is assigned more testing time and resource to catch defects, but at the expense of design and development time; this leads to rushed design and build and again lower quality, so again more testing is required. And then, in response, there is a whole bureaucratic process of change management policies to (rightly) protect service. I'm not suggesting that change management processes and IT policies are not important (or indeed vital), but there is a balance between innovation and risk aversion that must be struck for the benefit of the company and for enterprise IT to evolve.

As the downturn in the economy bites, cost controls will be tightened and IT departments trimmed, more often than not replaced by offshore labour, but this is an opportunity to rethink the approach and evolve the IT organisation into something leaner, backed by more autonomy and lean processes. Agile and iterative approaches have been proven to reduce costs and improve quality in comparison to traditional methods. Enterprises need to fully embrace Agile and encourage innovation within smaller, business-aligned IT teams, relax some of the processes, abandon the often rigid enforcement of "Waterfall For All" and instead enable teams to choose the methodology that best fits.

IT within enterprises is evolving slowly, but perhaps a revolution is required to speed things up. Are you ready to revolt?

Embedding Pro-Active Tasks In Your Dev Team

We have made huge advances in recent years in the tools available to the development team, including more proactive and investigative tools (profiling, code analysis, performance analysis, debugging, etc.). However, demanding project timelines mean we have increasingly little time to investigate, trial and use these tools. Compounding the problem, the first things to get abandoned on a tight project are unfortunately the proactive development tasks that lead to a better quality product but don't necessarily help get 'something' out the door. Obviously we need to try to embed these approaches into the development lifecycle despite their upfront costs. At first glance this seems a difficult challenge, but then consider automated unit testing. The proactive task of developing tests alongside your code was a hard pill for many to swallow initially (and still is in some organisations), but as an industry we embedded the belief that the effort was worthwhile, and the result was better quality, more tested and rigorous code. The same approach needs to be considered for other proactive tasks. Here's a simple guide for getting something proactive adopted by your dev team:

  1. Firstly the task needs to be qualified: what is it, what benefit does it provide, when is it best used, and what are the costs of not doing it?
  2. Evangelise to the wider team. Hold demos of the approach to build awareness. Try to focus on one approach first and build a buzz around it, so that it becomes fixed in people's daily routine and it seems odd not to take the approach. Don't forget to include the wider development stakeholders (Business Analysts, Architects, Project Managers) too, as they may be impacted for better or worse. Use the concept of 'Technical Debt' to help justify the long-term impact of decisions affecting system quality.
  3. Automate, automate, automate! How can you make it easier and quicker to get the initiative embedded in your development process? Can it be incorporated into your automated Continuous Integration solution, for example? (A small sketch of this idea follows the list.)
  4. The effort involved will need to be quantified so that it can be factored into development task estimates early enough for projects to be planned with these initiatives included; it is much harder to find the time (and project manager commitment) for unplanned tasks.
  5. Pilot the approach on small projects to be able to refine the approach and prove the benefits.
  6. Include the approach in your ‘done lists’.
  7. Vocalise and Visualise. Document the results/benefits of the approach and shout loudly when it avoids a production incident or missed deadline.
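As a hedged illustration of step 3, the sketch below shows the kind of tiny quality gate a Continuous Integration build could run after a static analysis step: if the warning count exceeds an agreed ceiling, the build fails and the proactive task stays visible. The warnings file name, its format and the threshold are all assumptions for illustration, not a recommendation of any particular tool.

```python
# A hedged sketch of the "automate, automate, automate" step: a tiny quality
# gate a Continuous Integration build could run after static analysis. The
# warnings file format and the threshold are assumptions for illustration.

import json
import sys
from pathlib import Path

MAX_WARNINGS = 25  # agreed ceiling; ratchet it down over time

def count_warnings(report_path: str) -> int:
    """Count entries in a (hypothetical) JSON list of analysis warnings."""
    report = json.loads(Path(report_path).read_text())
    return len(report)

if __name__ == "__main__":
    total = count_warnings("analysis-warnings.json")  # hypothetical report file
    print(f"Static analysis warnings: {total} (ceiling {MAX_WARNINGS})")
    if total > MAX_WARNINGS:
        sys.exit(1)  # a non-zero exit fails the CI build, keeping the debt visible
```

Because the check runs on every build, nobody has to remember to do it, which is exactly the point.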

I would say that the most important step is the second one – evangelising to your colleagues. It's hard to stop a groundswell of enthusiasm for a new approach, and you'll progress much faster with peer support.

The Future Of The IT Department

I have been witness to rapid, often painful, change within my own internal IT division over the last few years, and have observed the ongoing developments in the industry. It is clear that IT departments have changed dramatically in a short space of time, and the pace is not relenting. This has led me to try to picture what IT will look like within large institutions in the future. It is becoming more and more apparent that the structure of our internal IT organisations is very often based on the traditional legacy models that served enterprises well in the past: big IT investments and centralised systems are best managed and maintained by a rigid organisational structure. Today the IT department and the business units are usually far more disconnected than many CIOs would care to admit. IT used to be something done by the IT department based on fairly static business processes. We are now in a different world, however, where IT is increasingly seen as just a commodity and business processes need to be able to react quickly to changing economic conditions. No longer is the IT department responsible only for big monolithic systems (e.g. payroll); IT is now embedded in every business process, so in some sense every department is an IT department. Surely if the IT organisation doesn't aid the business, it will eventually be pushed aside and replaced.

The Journey From Past to Present

This excellent post by PEG covers the subject well. PEG paints a picture of the traditional IT organisation as it was in many enterprises, then slices it up to represent the current model once outsourcing/off-shoring is factored in. The left-hand diagram shows the more traditional split, and the right the emerging norm:

Factoring in the effort required to manage out-sourced projects


Diagrams from PEG: The IT department we have today is not the IT department we’ll need tomorrow

It surprises me how many people consider their jobs safe from outsourcing because their role sits above the bottom tier of this sort of diagram, but as you can see it is inevitable that the line between permanent staff and outsource partner staff will continue to rise to the point represented in the triangle on the right, with a good cross-section of IT roles being fulfilled by partner organisations. This represents where many large enterprises are at present, whereby some "doing" roles are maintained in-house but the management and planning layers are also supplemented by outsource/offshore partners. The bulge in the middle represents the extra permanent resources required to cover the additional overhead of managing partner resources. Taking a bank as the textbook example of a large enterprise with a significant-scale IT organisation, this research into European banks' activities provides some insight into the strategy driving these changes. Unsurprisingly, cost reduction is key, but it's not the only factor…

“Survey participants cited cost reduction as the primary reason to outsource IT functions, followed by cost variability (for example, the flexibility to respond to peak demand without ramping up internal resources) and access to know-how or skilled personnel. The main benefits of outsourcing were access to know-how or skilled personnel and a guaranteed level of service. (The cost benefits associated with outsourcing often fell short of expectations.) The biggest disadvantages of outsourcing were high switching costs and limited control over critical elements of the IT environment. On the whole, however, the survey shows that banks have embraced outsourcing. Only 3 percent of the banks surveyed were planning to decrease their outsourcing activities. The case for offshoring was slightly different. Although banks used offshoring primarily for the same reason they used outsourcing—to reduce costs—the main benefit of offshoring was less stringent foreign labour laws. The biggest disadvantages of offshoring were opposition among domestic personnel, large overhead, and loss of control.”

Both partner strategy models are therefore seen as suffering from a loss of control over assets or deliverables and as adding somewhat to management overheads, but as providing some agility through a mechanism to ramp resources up or down as required.

PEG extends his model to show that in the future there will be an increased reliance on SaaS and automation tools and therefore a chunk of the IT organisation structure will be replaced by these as well as outsourcing/offshoring roles.

A skills/roles triangle for the new normal

Diagram from PEG: The IT department we have today is not the IT department we’ll need tomorrow

Within the current model, management layers have often become too complex and unwieldy. With the IT organisation being a business entity in itself within the enterprise, and with 65% of IT spend being used just to maintain current service, business functions and IT often clash over priorities and the allocation of funding, in many instances resulting in the business going outside the IT organisation to secure services, or growing its own 'black ops' internal capability just to get things done. This again challenges the traditional model in which IT keeps tight control.

Changing Objectives

Tighter financial conditions, increasingly competitive environments and a desire to maximise returns are leading to a pay-per-use model and greater use of partners and outsourcing. Technology advances are making this transition possible (e.g. cloud computing, SaaS). Future IT departments will increasingly utilise these external services, and will adopt a very different structure as a result. Whilst the traditional IT organisation has been geared to building and maintaining large complex systems and is staffed with technical people, the rapidly emerging model is one where IT skills are outsourced to numerous vendors and IT staff become the negotiators and orchestrators of those relationships and contracts. Instead of managing system changes internally, the IT organisation is increasingly just the middleman between the business and the outsource/offshore partners; the role becomes one of managing projects more than technically implementing them. There are reports of in-house IT departments cutting 90% of headcount through a rapid shift to offshore/outsourcing, with the remaining staff focusing on planning and relationship management. This Boston Consulting Group paper suggests there is an essential move from "doer" to "orchestrator", with the IT organisation "doing fewer of the traditional 'run the business' activities", instead leaving them to external providers and doing more coordination of (one or many) providers' activities to meet the design. This "network of external providers and integrators" needs monitoring and tuning, and the structure of the IT organisation will need to centre around these activities.

A quote from Reinventing The IT Organisation by Antoine Gourevitch, Stuart Scantlebury & Wolfgang Thiel…

“Unless CIOs take swift action, the IT organisation will be at risk of being reduced to a thin layer between the business and the specialist outsourcing firms.”

The outcome will presumably be either a slim organisation staffed with change managers and project managers responsible for liaising with partners to satisfy business requirements, or, alternatively, these changes could prove the catalyst required to move to true business-driven IT, where IT skills are integrated with the business units to enable them to react rapidly to changing business needs. Larry Dignan, in his post, welcomes the idea of breaking up the traditional IT organisation, seeing it as an anachronism. He classes CIOs as often "out of their league", "process jockeys" who would "rather be scouting new technologies" than innovating. I would agree that this appears to be the case in many large organisations where IT, some would argue, has frustratingly become detached from the goal of driving business value through technology, losing itself in bureaucratic processes. These organisations can seem a long way from delivering core bottom-line business value. PEG discusses the detachment of Enterprise Architecture from the business, together with a description of little 'a' and big 'A' architects, here, and it's well worth a read. Even where IT organisations do deliver real value, it's often to timescales that seem painfully long to the business customer but painfully short to the IT guy wrapped up in bureaucratic red tape. Perhaps this isn't IT's fault as such, but more that of the arcane structure of the IT organisation that we have come to accept.

One way suggested for IT organisations to remain relevant and address future challenges is for the business and IT to move closer together than ever. This has been talked about for many years, but with the demise of the monolithic IT organisation the next few years could see this model mature. Perhaps decentralised pockets of business IT shops, closely aligned to the business units, will become the norm, introducing new challenges around how to control those pockets.

This shift towards IT/business integration could be very rewarding for an enterprise; in reality modern business processes are often tightly intertwined with the LOB applications in use, so anything that ensures those LOB applications support the business processes, instead of restricting the pace of business change, will be welcomed. Dreischmeier & Thiel suggest new ways of working may be required as IT organisations are forced to adjust their operating model to become faster and more agile, and to embrace rapid-development approaches. The business can't afford to be held back by a slow and unwieldy IT organisation.

One concept I particularly like is that of "introducing Product or Solution Managers" to address the "lack of end to end ownership within IT Orgs". This person would "own the IT product/solution across all technical layers", a role that should improve TCO and aid the alignment of business and IT priorities. Dreischmeier & Thiel also see the CIO as a key player in ensuring that the IT organisation is "Proactively Engaging in Business Transformation Activities", and argue that the IT organisation is in fact very well positioned to be a key player in this transformation, as it is aware of the end-to-end business processes (in theory). They suggest:

“Creating, together with the business, a new-business-model team that seeks out and addresses the changes in economics of the relevant industry as it changes through increased competition and environmental forces”. 

The growth of agile development practices has a part to play here too. Innovative IT teams that 'fail fast and often' and use lean, agile techniques to maximise business value could replace traditional models. Smaller, focused development teams under the direct control of the business units, using Agile practices and supported by a central infrastructure function (probably outsourced), could prove a very effective way of building what the business really needs. The evolution of cloud computing provides real opportunities to make these teams very capable: a business-unit-based developer could 'mash up' cloud services together with core on-premise web services to produce a powerful line-of-business application that is then deployed to PaaS cloud infrastructure. Forrester analyst Alex Cullan sells the benefits of this model with the term "Empowered BT (Business Technology)", where IT's role is to empower the business to utilise the technology it needs in order to remain competitive. The traditional arguments against this approach, such as the expected system proliferation and business technology decisions being driven by hype, are dismissed as not as bad as we in IT would believe. He argues convincingly that some proliferation is acceptable if it empowers the business, but there would have to be trust in business leaders to choose the right path for this to work. Is that trust there at this moment in time? Not according to this MIT & Boston Consulting Group survey, which shows that current CIOs believe business leaders are not positioned to lead IT-enabled business transformation. Only 33% of CIOs consider their company's senior execs effective at driving business value with IT, and 40% consider them effective at prioritising IT investments. Perhaps this reflects the differing priorities of traditional IT organisations and the business units, with IT emphasising its traditional maintenance role ("keeping the lights on") over application development and innovation, rather than a real distrust. The paper does, however, highlight the benefits that can be achieved when the IT organisation avoids the simple "middle man" role and takes the lead in driving business change (such as lower maintenance costs, faster realisation of business benefits from new systems, and higher employee satisfaction). Perhaps the future of the IT organisation is that of a business in its own right: an internal consulting firm offering assistance in business process design, innovation and development management.
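As a very rough sketch of that 'mashup' idea, the snippet below joins data from a hypothetical cloud CRM API with a hypothetical on-premise orders service to produce a simple line-of-business view. Both endpoints, the field names and the join are invented for illustration only.

```python
# A hedged sketch of the 'mashup' idea: combine data from a (hypothetical)
# cloud service with a (hypothetical) on-premise web service to build a
# simple line-of-business view. Both URLs are placeholders, not real endpoints.

import json
from urllib.request import urlopen

CLOUD_CRM_URL = "https://crm.example-cloud.com/api/accounts"        # hypothetical SaaS API
ON_PREMISE_ORDERS_URL = "http://intranet.example.local/orders/api"  # hypothetical internal service

def fetch_json(url: str) -> list[dict]:
    """Fetch and parse a JSON document from either service."""
    with urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def accounts_with_open_orders() -> list[dict]:
    """Join cloud CRM accounts against on-premise orders by account id."""
    accounts = {a["id"]: a for a in fetch_json(CLOUD_CRM_URL)}
    open_orders = [o for o in fetch_json(ON_PREMISE_ORDERS_URL) if o.get("status") == "open"]
    return [{"account": accounts[o["account_id"]]["name"], "order": o["id"]}
            for o in open_orders if o["account_id"] in accounts]

if __name__ == "__main__":
    for row in accounts_with_open_orders():
        print(row)
```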

Procter & Gamble run their IT organisation as a business within the enterprise, alongside other business services (e.g. accounting). Their services are branded, marketed to the enterprise and billed on a usage basis, with business units empowered to consume these services or go elsewhere. The emphasis is on running a viable, competitive internal business that is in tune with its customers' (in this case the internal business units') needs. They have brand managers responsible for "the innovation, pricing and commercialization of the services", ensuring that the total end-to-end offerings can match those of third parties. Underpinning this, though, is a collection of external partner relationships that still need to be managed, so in essence this is still heading towards becoming an integrator, orchestrating these partner services into a clear, cohesive, branded and hopefully relevant service. The key here, though, is the added value provided by an internal IT business service that crucially understands the business and offers competitive services that are completely relevant to it. This is supported by the BCG research, which found that where IT organisations really drove business change they often delivered their IT services as shared services and placed more emphasis on relevant pricing and alternative service levels. They tended to centralise IT, with lower levels of recorded "shadow" IT being instigated by the business, which could suggest that these business units felt they were getting sufficient value from their shared IT services even though they were under central control.

Future Skills

All these changes have massive implications for the skills required within the IT organisation of the future. In the current model, maintaining a relevant, skilled workforce can be tricky, with many key staff feeling demotivated by the outsourcing/offshoring partner model and the subsequent removal of technical roles from their organisation. The loss of junior IT roles to partner resources destroys future progression opportunities and shows that this model is unsustainable in the long term. Engaging technical people will be increasingly difficult in the current model, but perhaps a move to more business-aligned IT can help skilled staff remain technical if they wish, and also benefit the business through enhanced IT innovation and passion for their roles, instead of forcing good techies to oversee offshore/outsource relationships.

It now seems certain that IT staff of the near future will be expected to have an enhanced level of business acumen and market knowledge to fulfil their roles. Will this come at the expense of excellent technical skills? Maybe! Perhaps the deep technical skills will sit within the offshore/outsource partners, and the relevant 'technical' skills required in the IT organisation will be those around technical process design and systems analysis. Knowledge of the business will perhaps be more important than any technical skill (for the majority of roles), and therefore it makes more sense to recruit IT staff from within the business units themselves. This is evident in a number of studies with CIOs, such as this BCG study:

“In general, CIOs told us that Internal IT staff roles are shifting away from application development and towards process analysis and engineering, business relationship management, project management and architecture design and implementation.”

Within the previously mentioned Procter & Gamble organisation the same theme emerges, as the skills reflect the role of IT within the organisation:

“..traditional IT is just 30% of what we do. If traditional IT is all a person masters, he or she will never be a leader here. The rest is about business knowledge. Those who embrace that approach will certainly increase their value…” 

This view was supported by the previously mentioned study into European banking, but it also went further, pointing out that technical skills are being neglected…

“…many banks appear to be underestimating the value of technical tools and skills, which are critical to developing high-impact applications, maintaining an efficient infrastructure, and managing outsourcing partners.”

So where does this leave you and me? Well, I expect the number of deeply technical IT professionals will decline in Western countries, but this decline will be dwarfed by the increase in semi-professional developers working in the business and using end-user computing tools to develop systems that are meant to be rapid, easy and throwaway. Where more complex solutions are sought, outsource partners will happily fill the gap. Escaping the large enterprises and fleeing to small and medium enterprises will not be sustainable in the long term either, as the partner model will eventually win there too. It is entirely possible that the partner model will lose some of its lustre (it's already happening in places) and there may be some swing back to in-house technical teams. If that happens, the IT community needs to be ready to promote a new 'agile' alternative that understands and drives true business benefits.

This evolution of the IT organisation is natural in an industry as immature as ours, but one thing is certain: the future is different and we need to adapt. Whichever direction the future takes for you, spend some effort in the meantime trying to understand your business customers’ needs better, and keep innovating for them!

Ray Ozzie’s Dawn of a New Day

I would recommend that everyone interested in technology read Ray Ozzie’s (Chief Software Architect of Microsoft) memo – "Dawn of a New Day". It’s a fascinating insight into the vision of a key player in the industry and a call to arms for Microsoft and its partners. What interests me most is that it is a conceivable vision, and one that I share. This vision of "appliance-like connected devices" being the norm and consuming "Cloud Based Continuous Services" is easy to visualise, as this day is dawning around us now. Smartphones, tablets, connected TVs and the like are set to become the principal means of interacting with our online world.

"Complexity kills"

Whenever I’m called upon to help out family and friends with their PCs it often strikes me how inappropriate these machines are for the needs of the basic user. The power and complexity of the PC is its great strength, but it also makes these machines too difficult for many to manage and secure. Huge numbers of basic PC users now, in reality, only use their browser and don’t install software applications any more. These people are also now enjoying the simplicity provided by smartphone OSs such as Android and iOS. In fact many of these users are able to fulfil their needs via app stores and the like whilst their PCs gradually gather dust. The future vision where devices rule makes total sense. Whilst Apple is proving the master of the device market, Microsoft have the ‘Windows’ advantage. The failure of Linux netbooks to maintain market share shows that, given similar pricing models, consumers will stick with the familiar and safe option of Windows, and this is an opportunity for Microsoft. They could capitalise on it with a lean, “appliance-like” version of Windows in the future.

"Complexity sucks the life out of users, developers and IT. " – I have seen numerous projects needlessly suffer in delivery due to overly complex designs, sometimes from overly complex requirements. Because we can create software to be configurable and feature rich we feel we have to, but of course every additional feature brings additional overhead. This overhead my be felt by the end user or perhaps just the developer and testers trying to implement or test the features.

"Cloud-based continuous services"

Ray’s vision of cloud services being continuous is key for the connected future. Consumers need to be able to depend on the cloud always being available and willing to serve them. As these services grow in importance they will be expected to grow in number and complexity. This is a real challenge for industry engineers and we really need to learn the lessons of the hugely scalable consumer web sites such as Facebook and Google. I look forward to seeing what technologies are produced to aid the development of these services and which scalability patterns move towards the mainstream.

It’s an exciting future for our industry and one that I look forward to playing my part in.

Private Clouds Gaining Momentum

Well, it’s been an interesting few weeks for cloud computing, mostly in the “private cloud” space. Microsoft have announced their Windows Azure Appliance, enabling you to buy a Windows Azure cloud solution in a box (well, actually many boxes, as it comprises hundreds of servers), and the OpenStack cloud offering continues to grow in strength, with Rackspace releasing its cloud storage offering under the Apache 2.0 license as part of the OpenStack project.

OpenStack is an initiative to provide open source cloud computing and contains many elements from various organisations (Citrix, Dell etc.), but the core offerings are Rackspace’s storage solution and the cloud compute technology behind NASA’s Nebula Cloud platform. To quote their web site…

“The goal of OpenStack is to allow any organization to create and offer cloud computing capabilities using open source software running on standard hardware. OpenStack Compute is software for automatically creating and managing large groups of virtual private servers. OpenStack Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data.”

It is exciting to see OpenStack grow as more vendors open-source their offerings and integrate them into the OpenStack initiative. It provides an opportunity to run your own open source private cloud, one that should eventually let you consume best-of-breed offerings from various vendors thanks to the proliferation of common standards.
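To make the compute side of this a little more concrete, here is a minimal sketch of provisioning a virtual server against an OpenStack cloud using the openstacksdk Python library (a client that arrived later than the original Nova/Swift APIs discussed here). The cloud entry, image, flavour and network names are placeholders for whatever your own deployment exposes, not values from any real environment.

# A minimal sketch of creating a server on an OpenStack cloud via openstacksdk.
# The cloud entry, image, flavour and network names below are placeholders.
import openstack

# Credentials are read from a clouds.yaml entry (or OS_* environment variables).
conn = openstack.connect(cloud="my-private-cloud")

# Look up an image, flavour and network by name (illustrative names only).
image = conn.compute.find_image("ubuntu-server")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Create the virtual server and wait until it is active.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status, server.access_ipv4)

The appeal of the common API is that the same handful of calls works whether the cloud behind it is a handful of servers in your own rack or a public provider.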

Meanwhile Microsoft’s Azure Appliance is described as …

…a turnkey cloud platform that customers can deploy in their own datacentre, across hundreds to thousands of servers. The Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. This hardware will be delivered by a variety of partners.

Whilst this will initially appeal to service providers wanting to offer Azure-based cloud computing to their customers, it is also another important shift towards private clouds.

These are both examples, in my eyes, of the industry stepping closer to private clouds becoming a key presence in the enterprise, and this will doubtless lead to the integration of public and private clouds. It shows the progression from hype around what cloud might offer to organisations gaining real, tangible benefits from scalable and flexible cloud computing platforms that are equally at home inside or outside the private data centre. These flexible platforms provide real opportunities for enterprises to deploy, run, monitor and scale their applications on elastic commodity infrastructure, regardless of whether that infrastructure is housed internally or externally.

The debate on whether ‘private clouds’ are true cloud computing can continue, and whilst it is true that they don’t offer the ‘no upfront capital’ expenditure and pay-as-you-go model, I personally don’t think that excludes them from the cloud computing definition. For enterprises and organisations that are intent on running their own data centres in the future there will still be the drive for efficiencies that exists now, perhaps even more so in order to compete with rivals utilising public cloud offerings. Data centre owners will want to reduce the costs of managing this infrastructure, and will need it to be scalable and fault tolerant. These are the same core objectives as those of the cloud providers, so it makes sense for private clouds to evolve based on the standards, tools and products used by those providers.

The ability to easily deploy enterprise applications onto an elastic infrastructure and manage them in a single autonomous way is surely the vision for many a CTO. Sure, the elasticity of the infrastructure is restricted by the physical hardware on site, but the ability to shut down and re-provision an existing application instance based on current load can drive massive cost benefits, as it maximises the efficiency of each node. The emergence of standards also provides the option to extend your cloud seamlessly out to the public cloud, utilising excess capacity from public cloud vendors.
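As a very rough illustration of that efficiency argument, the sketch below shows the shape of a scaling loop a private cloud controller might run. The thresholds and the monitoring/provisioning functions are entirely hypothetical stand-ins for real platform calls, included only to show the decision being made.

# Illustrative only: a toy scaling loop that grows or shrinks a pool of
# application instances based on measured load. The monitoring and
# provisioning functions are hypothetical stand-ins, not a real API.
import random
import time

SCALE_UP_THRESHOLD = 0.75    # add capacity above 75% average utilisation
SCALE_DOWN_THRESHOLD = 0.30  # reclaim capacity below 30% average utilisation
MIN_INSTANCES = 2

def get_average_utilisation(pool):
    # Stand-in for a real monitoring query; here we simply simulate a reading.
    return random.random()

def provision_instance():
    # Stand-in for asking the cloud platform to start a new application instance.
    print("provisioning a new instance")
    return object()

def decommission_instance(instance):
    # Stand-in for shutting an instance down, freeing its node for other workloads.
    print("shutting down an idle instance")

def autoscale(pool):
    load = get_average_utilisation(pool)
    if load > SCALE_UP_THRESHOLD:
        pool.append(provision_instance())
    elif load < SCALE_DOWN_THRESHOLD and len(pool) > MIN_INSTANCES:
        decommission_instance(pool.pop())

if __name__ == "__main__":
    pool = [provision_instance() for _ in range(MIN_INSTANCES)]
    for _ in range(10):   # a few iterations, purely for illustration
        autoscale(pool)
        time.sleep(1)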

The Windows Azure ‘Appliance’ actually comprises hundreds of servers, and there is no denying that cloud computing at this scale is currently only for the big boys who can afford to purchase hundreds or thousands of servers, but it won’t always be that way. Just as with previous computing paradigms, the early adopters will pave the way, but as standards evolve and more open source offerings such as OpenStack become available, more and more opportunities will emerge for smaller, more fragmented private and public clouds to flourish. For those enterprises that don’t want to rely solely on public cloud offerings and need to maintain a small selection of private servers, the future may see private clouds consisting of only 5 to 10 servers that connect to the public cloud platforms for extra capacity or for hosted services. The ability to manage those servers as one collective platform offers efficiency benefits capable of driving down the cost of computing.

Whatever the future brings, I think there is a place for private clouds. If public cloud offerings prove successful and grow in importance to the industry, then private clouds will no doubt grow too, to complement and integrate with those public offerings. Alternatively, if the public cloud fails to deliver, then I would still expect the technologies involved to make their way into the private data centre as companies like Microsoft move to capitalise on their assets by integrating them into their enterprise product offerings. Either way, as long as the emergence of standards continues, and the need for some enterprises to manage their systems on site remains, the future of private cloud computing platforms seems bright. Only time will tell.