Moving teams to Trunk Based Development (an example)

In this post I am going to cover an example case study of introducing Trunk Based Development to an existing enterprise development team building a monolithic web application. I'm not going to cover the details of Trunk Based Development itself, as that's covered in depth elsewhere on the internet (I recommend trunkbaseddevelopment.com). For the purposes of this article I'm referring to all development teams working on a single code branch (master), constantly checking their changes into that single branch. In our model the only reasons for ever creating a new branch were the following:

  1. A developer creates a personal branch off master for the sole purpose of raising a Pull Request into the master branch (for code review purposes).
  2. The release team take a branch off master to create a Release Candidate build. Whilst this is not in the true spirit of Trunk Based Development it is often required for release management purposes. Microsoft use this model for their Azure DevOps tooling development and call it Release Flow.
  3. A team may still create a branch for a technical spike or proof of concept that will be discarded and won't ever be merged into master.
Problem Statement

The development team in this case was an enterprise application development team building a large scale multi-channel application. There were five sprint teams working on the same monolithic application codebase, with each team having its own branch of the code that they worked on independently of the other sprint teams. Teams were responsible for maintaining their own branches and merging in updates from previous releases, but this was often not done correctly, causing defects to emerge and features to be accidentally regressed. What's more, the teams' branches would deviate further and further from each other over time, making the problem worse with each subsequent release.

Following a monthly release cycle, at the end of two fortnightly sprints all teams manually merged their code into a release branch which was then built and sanity tested. The merging of code was usually manual and often done by cherry picking individual changes from memory. Needless to say, this process was error prone and complicated for larger releases. The completion of the merge was the first time that all the code had been running together, and only now were any cross-team impacting changes identified, so many issues were encountered at this point, either as failed CI builds or in the release testing cycle. This merge and test cycle took between three and eight days, taking up to a week out of our delivery cycle and severely impacting velocity. The more issues that were found, the more release testing was added in an attempt to address quality issues, increasing lead times further. It was clear that something needed to be done, and so we decided to take the leap to Trunk Based Development, eliminating waste by removing merges and aligning each team's changes sooner.

Solution

We moved to a single master/trunk branch (in Git) for ALL development, and all sprint teams worked on this single branch. Each sprint team had its own development environment for in-sprint testing of their changes. Every month we would create a cut of the master branch to create the release candidate branch for the route to live, acting as a stable, scope-locked base for testing the upcoming release to production. This model follows Microsoft's Release Flow. Defects found in the release candidate were fixed in master or in the release branch, depending on the sprint team's decision, but each week any changes made in the release branch (i.e. defect fixes) were back-merged into master/trunk. This back-merge activity is owned by one sprint team per release, and ownership is rotated each release so everyone shares the effort and benefits from improvements to the process.

Feature Toggles

The move to Trunk Based Development was not possible without utilising Feature Toggles (or, to be more specific, Release Toggles). We needed to support multiple teams working on the same codebase but building and testing their own changes (often targeting different releases). We needed to protect each team from every other team's untested changes, and so we used toggles to isolate new changes until they were tested and approved. All code changes were wrapped in an IF block, with the old code in the ELSE block, and a toggle check determines which route the code takes:

 if (ToggleManager.isToggleEnabled("feature12345"))
 {
     // new code here
 }
 else
 {
     // old code here, unchanged
 }

As the system was a Java EE application we chose FF4J as the toggle framework. FF4J is a simple but flexible feature flipping framework, and as the toggles are managed in a simple XML file it made it easier to implement our Jenkins solution described later. To be clear, there are many frameworks that could be used, and it's simple to create your own. To make it possible to replace the toggle framework later, and to make it as simple as possible for developers to create toggles, the FF4J functionality was wrapped in a helper class which made adding a toggle a one-line change.
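As a sketch of the shape of that helper: the class and method names below follow the pseudocode above rather than our real implementation, and an in-memory map stands in for FF4J's lookup against the ff4j.xml feature file, so the example is self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical helper wrapping the toggle framework behind one static call.
// In the real system this delegated to FF4J; here an in-memory map stands in
// for the XML-backed feature store so the sketch is runnable on its own.
class ToggleManager {
    private static final Map<String, Boolean> toggles = new ConcurrentHashMap<>();

    // Register or override a toggle state (in FF4J this comes from ff4j.xml).
    public static void setToggle(String id, boolean enabled) {
        toggles.put(id, enabled);
    }

    // The one-line check developers add around new code; unknown toggles
    // default to OFF so untested changes stay dark for everyone.
    public static boolean isToggleEnabled(String id) {
        return toggles.getOrDefault(id, false);
    }
}
```

The key design point is that callers never see the underlying framework, so swapping FF4J out later would be a change in one class.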

We also implemented a Feature Toggle tag library so that JSF tags could be wrapped around content in JSF (Java Server Faces) pages to enable or disable it depending on the FF4J toggle being on or off. A JavaScript wrapper was also developed that allowed toggles to be checked from within our client-side JS code.
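As an illustration, usage of the tag library looked something like the fragment below. The tag and attribute names here are invented for illustration and are not the real library's:

```xml
<!-- Hypothetical JSF fragment: render the new panel only when the toggle is ON -->
<toggle:enabled feature="feature12345">
    <h:panelGroup>
        <!-- new UI content, hidden from other teams until the toggle is flipped -->
    </h:panelGroup>
</toggle:enabled>
```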

A purposeful decision was made to bake the toggle configuration into the built code packages to prevent the toggle definition files and the code drifting apart. This meant that the toggle file is baked into the built JAR/EAR/TAR file for deployment. Once the package is created the toggles are set for that package, and this prevents a code/configuration disconnect from forming, which is the cause of many environmental stability issues. This was sometimes controversial, as teams would want to change a toggle after deployment simply because they forgot to set it correctly, or would hastily want to flip toggles on code in test, which was not the design goal of our Feature Toggles (although that is a valid scenario for toggles, and a separate design was introduced for turning features on/off in Production).

All code changes are wrapped in a toggle, and the toggle is set to OFF in the repository until the change has passed the definition of done within the sprint team. Once the change has been completed ('done' includes testing in sprint) the toggle is turned ON in the master branch, making it ON for all teams immediately (as the codebase is shared). As the toggles for new untested changes are OFF in the code, and all package builds come from the master branch, a new feature cannot be tested on a test environment without first flipping a toggle. So how can it be tested and turned on without impacting other teams?

For this we introduced auto-toggling in our Jenkins jobs. The Jenkins CI jobs that build the code for test include a parameter for the sprint team indicator (to identify the team the built package is for), and this automatically turns on the work-in-progress toggles belonging to that team. This means that when Sprint Team A triggers a build, Jenkins will update the new work-in-progress toggles for Team A from OFF to ON in the package. This package is marked as WIP for Team A and so cannot be used by other teams. Team A can now deploy and test their new functionality, but other teams will still not see their changes. Once Team A are happy with their changes they turn the toggle ON in master for all teams and it's widely available.
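To illustrate the auto-toggle step, here is a hedged sketch of the kind of rewrite the Jenkins job performed on the feature file before packaging. The AutoToggler class and the convention of embedding a team marker in the toggle uid are assumptions for illustration (our real job identified a team's WIP toggles from the build parameter); the XML shape follows FF4J's feature file, which lists `<feature uid="…" enable="…"/>` entries.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringReader;
import java.io.StringWriter;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Hypothetical sketch of the Jenkins auto-toggle step: flip ON every
// feature whose uid carries the building team's marker, leaving all
// other teams' toggles untouched.
class AutoToggler {
    public static String enableTeamToggles(String featureXml, String teamMarker) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(featureXml)));
            NodeList features = doc.getElementsByTagName("feature");
            for (int i = 0; i < features.getLength(); i++) {
                Element feature = (Element) features.item(i);
                // Assumed convention: the team marker is embedded in the uid.
                if (feature.getAttribute("uid").contains(teamMarker)) {
                    feature.setAttribute("enable", "true");
                }
            }
            // Serialise the updated document back to XML for packaging.
            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException("Failed to rewrite feature file", e);
        }
    }
}
```

In the real pipeline the rewritten file was baked into the WIP package, so the flipped toggles never reached master or any other team's build.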

Unfortunately not everything can be toggled easily, and so a decision tree was built for developers to follow to understand the options available. Where the toggle framework was not an option, other approaches could be used. Sometimes a change can be "toggled" in other ways (e.g. a new function with a new name, a new column in the DB for new data, or an environment variable). The point is that the ability and desire to use Feature Toggles is not just about the toggle framework you choose; it's a philosophy and approach that must be adopted by the teams. If after all other options there is still no way to toggle a change, then teams have to communicate the potential breaking change amongst themselves and take appropriate action.

What worked well

So what were the benefits seen after introducing Trunk Based Development in the team? Firstly, the benefit of maintaining fewer code branches was immediately obvious. From day one every team was on the latest codebase, and no merging or cherry picking was required at the end of a sprint. This saved time, reduced merge-related defects and increased the agility of the teams, who no longer had the housekeeping effort of maintaining their own branches. Use of environments became more flexible: in theory every new feature was on every environment behind a toggle, meaning that Team A's new feature could now be tested on Team B's environment should the need arise. Cross-team design issues were spotted earlier, as a clash of changes between teams is seen during development and not later when the code is merged together prior to a release. Teams are now able to share new code more easily, because as soon as a new helper function is coded it can be used by another team immediately. Any improvements to the CI/CD process, code organisation or tech debt can now be embraced by all teams at once, without the delay of filtering through each team's branch.

What didn’t go so well

Of course it's not all perfect, and we have faced some challenges. All teams are now impacted immediately by the quality of the master/trunk branch. Broken builds have been reduced with the introduction of Pull Requests and Jenkins running quality gate builds on check-ins to master, but any failure in the CI/CD build pipeline immediately impacts all the teams, and so these issues need to be resolved ASAP (as they should be in any team). Teams must work together to resolve them, which does bring positives. Which brings me to the next point: communication.

When teams are using Trunk Based Development and sharing one codebase, communication between teams becomes more important. It is critical that there is open dialogue between the teams to ensure that issues are resolved quickly, and that any changes that impact the trunk are communicated quickly and efficiently. Whilst a lack of cross-team comms is exacerbated by Trunk Based Development, communication issues are a death knell to any dev team and should be resolved anyway. If this is an issue for your teams then be aware that Trunk Based Development may not be for you until your teams are collaborating better. That said, introducing Trunk Based Development is a good way to encourage teams to collaborate better for their own benefit.

Feature toggles/flags are a key enabler for Trunk Based Development, allowing you to isolate changes until they are ready, but there is no denying that they add complexity and technical debt. We add a toggle around all changed code and use the Jira ID to name the toggle. Whilst we have a neat toggle solution, there is no getting away from the fact that extra conditions mean extra complexity in your code. Unit tests must consider feature toggles and test with them both on and off. Feature toggles can make code harder to read and increase the cyclomatic complexity of functions, which may result in code quality gates failing; we had to slightly adjust our Sonar quality gate to allow for this.

Whilst toggles do add technical debt to your code, this is temporary debt and an investment to improve overall productivity, and so in my opinion a valid form of technical debt. It's called debt because it's acceptable to borrow on a manageable scale, provided you pay it back over time. Removing toggles is critical to this process, and yet this is one area we have yet to address sufficiently. A process to remove toggles has been introduced, but it's proving harder than expected to motivate teams to remove toggles from previous releases at the same rate as they are being added. To this end we have added custom metrics in SonarQube to track toggle numbers, and we will use this key metric to ensure that the number of toggles stabilises or reduces.
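To illustrate exercising both toggle states in a unit test, here is a minimal sketch. DiscountCalculator, the discount rule and the JIRA-12345 toggle id are all invented for illustration; the pattern is simply to run the assertion for each state of the toggle.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical class with a new change wrapped in a toggle named after
// its Jira ID: new code in the IF branch, old code preserved in the ELSE.
class DiscountCalculator {
    private final Map<String, Boolean> toggles = new HashMap<>();

    public void setToggle(String id, boolean on) {
        toggles.put(id, on);
    }

    public long discountedPrice(long pricePence) {
        if (toggles.getOrDefault("JIRA-12345", false)) {
            return Math.round(pricePence * 0.90); // new code: 10% discount
        } else {
            return pricePence;                    // old code, unchanged
        }
    }
}
```

A unit test then asserts the old behaviour with the toggle OFF and the new behaviour with it ON, so neither path goes untested while the toggle exists.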

The state of the toggles is an additional thing the teams need to manage; however, this is offset by the power to control the release of new code changes to other teams. We have found care should be taken to ensure that feature toggles are not misused for unintended purposes. Be clear on what they are for and when they should and shouldn't be flipped (for us it's in the Definition of Done for a task in sprint). There can be demands to use them as a substitute for proper testing. Be clear on the types of Release/Feature toggles and provide guidance on what they can be used for. There is no doubt that they can be reused at release time to back out unwanted features, but this should be a controlled process and designed in from the start. We already had Release Switches for turning features on and off, but Feature Toggles (in our case) are used purely for isolating changes from other teams until ready. We strive to ensure that the toggle is set ON or OFF during the sprint, before the code is moved through to release testing.

Conclusion

The benefits you derive from moving towards trunk based development will vary depending on your current processes. For a full guide to the general benefits of trunk based development check out this excellent resource trunkbaseddevelopment.com.

In our case the rollout was a success and achieved the desired improvements in cycle time and developer productivity. The process of delivering technical change was simplified in terms of configuration management, and developers became more productive, despite the problems listed above.

There is no doubt that building feature toggles into your design from the start would make some of the technical challenges easier, but we have proved it can be done with an existing brownfield monolith.

Next Steps

Our next steps are to continually improve the toggle framework to reduce the instances where a change can't be toggled on/off, and to make it easier to communicate changes that will impact all teams. A renewed emphasis on removing old toggles from the code is required, to ensure that teams and change approvers accept the "tax" each release of removing redundant toggles.


SonarQube migration issue- Jenkins Using old URL

I recently migrated a SonarQube server from one server to another in order to scale out the service to our dev team. All went well until builds failed because they were looking at both the old and new server URLs for the Sonar results, so I'm writing some notes here to help me (and others) out in the future if I hit this again.

I installed the same version of SonarQube on the new application server that is on the old server. The database was not being moved, just the application server (the server with the Sonar service running).

After installation I ensured that the same custom config settings were made on the new server as had been made on the old server, and ensured that the same plugins were in place. I then stopped the Sonar service on the old server and started the service on the new box.

Once Sonar was confirmed to be up and running, connecting to the database and showing the project dashboards, I updated the Jenkins server configuration to point to the new box. All looked good, so I ran a build, and then got this (log truncated)…

 
INFO: ANALYSIS SUCCESSFUL, you can browse http://OLDSERVER/dashboard?id=123
INFO: EXECUTION SUCCESS
The SonarQube Scanner has finished
Analysis results: http://NEWSERVER/dashboard/index/APP
Post-processing succeeded.
withSonarQubeEnv
waitForQualityGate
java.lang.IllegalStateException:
Fail to request http://OLDSERVER/api/ce/task?id=3423
at org.sonarqube.ws.client.HttpConnector.doCall(HttpConnector.java:212)

Bizarrely, the Jenkins build managed to use both the new Sonar URL and the old one. The upload to the new server was successful, but some of the links in the report point to the old server. Also, the Quality Gate check, which validates that the Sonar Quality Gate passed, tried to read the report on the OLD server and therefore failed, as the report is not there (it's on the new Sonar URL).

After checking Jenkins for any reference to the old Sonar server and restarting the service to clear any caches, I was still getting the error. Eventually I ran a Jenkins build and interactively peeked into the Jenkins workspace on the Jenkins slave, where there is an auto-generated file containing lots of Sonar config settings. This file, SonarQubeAnalysisConfig.xml, is created during the Jenkins build initialisation stage. In the file I found references to the new Sonar URL, but also this property pointing to the old URL:

  sonar.core.serverBaseURL  

This value is set in the SonarQube configuration and is not dynamic, so it will not be updated when you migrate the server or change the server URL/port. To change it, open SonarQube > Administration > Configuration > General and change "Server base URL" to your new URL (e.g. http://yourhost.yourdomain/sonar). The setting says this value is used to create links in emails etc., but in reality it is also used to integrate results.

Visual Studio 2019 Offline Installer

Microsoft have now released Visual Studio 2019 and, like VS2017, there is no offline installer provided by Microsoft, but you can generate one by using the web installer setup program to write all the packages to disk.

To create the offline installer just download the usual web installer exe from the Microsoft download site and then call it from the command line passing in the layout flag and a folder path like this:

vs_community --layout  "C:\Setup\VS2019Offline"

In the example above I'm downloading the Community version, but if it's the Enterprise edition's installer then the setup file you downloaded will be called vs_enterprise.

The packages will all be downloaded and a local setup exe installer created.

If you want to limit the download to English then pass the --lang en-US flag too.

vs_community --layout  "C:\Setup\VS2019Offline" --lang en-US

You can also limit which workloads to download, if you know their names, by listing them after an --add flag.

Enjoy your offline installs.

Useful Git Training Links

Having recently had to compile a list of useful learning resources for a development team migrating to git, I thought I would share them here.

Git is a very powerful and versatile distributed source control system, but it's not the easiest for a newbie to get their head around. The links below are ordered from tutorials giving an overview of git through to more advanced topics.

  1. What is Git – a nice overview article by Atlassian
  2. Learn Enough Git to Be Dangerous tutorial by Michael Hartl
  3. Git the Simple Guide – an excellent, straight to the point guide to git by Roger Dudler (my favourite guide)
  4. Git Tutorial – Another tutorial
  5. Git Cheat Sheet – cheat sheet for git and github commands
  6. The official git site documentation and tutorials
  7. Pro Git ebook – an excellent definitive guide to git in a free ebook format


GitHub External Training Links: 

If you or your team also need to learn GitHub then here are some good training links.

  1. A great hello world example and introduction to GitHub
  2. Git Started With GitHub – free course from udemy
  3. Training videos on YouTube

Also, it's worth remembering that Microsoft offer FREE private git repository hosting via Visual Studio Team Services if you don't want to host all your projects publicly.

 

Building A Learning Culture

I'm keen on fostering a learning culture within teams and was drawn to this article on InfoQ, Creating a Culture of Learning and Innovation by Jeff Plummer, which shows what can be achieved through community learning. In the article Jeff outlines how a learning culture was developed within his organisation using simple yet effective crowd sourcing methods.

I have implemented a community learning approach on a smaller scale using informal Lunch & Learns, where devs give up their lunch break routine to eat together whilst learning something new, with the presenter/teacher being one of the team who has volunteered to share their knowledge on a particular subject. Sometimes the presenter will already have the knowledge they are sharing, but other times they have volunteered to go and learn a subject first and then present it back to the group. Lunch & Learns work even better if you can convince your company to buy the lunch (it's much cheaper per head than most other training options).

It's hard to justify expensive training courses these days, but it's also never been easier to find free or low cost training online. As Jeff points out, innovation often comes from learning subjects not directly relevant to your day job. In my approach to learning with teams I have always tried to mix specific job-relevant subjects with seemingly less relevant ones. For example, a session on Node.js for a team of .Net developers would be hard to justify in monetary terms, but I've no doubt the developers took away important points around new web paradigms, non-blocking threads, web server models, and much more. Developers like to learn something new, and innovation often comes from taking ideas that already exist elsewhere in a different domain and applying them to the current problem.

I agree with Jeff's point that the champions are key to the success of this initiative. It is likely that the first few subjects will be taught by the champion(s), and they will need to promote the process to others. One tip to take some of the load off the champions is to mix in video sessions as well as presenter-based learning sessions; there are a lot of excellent conference session videos, and these can make good Lunch & Learn sessions. Once the momentum builds it becomes the norm for everyone to be involved, and this crucially triggers a general sense of learning and of sharing that learning experience with others.

The Growth Of Business IT

In my popular post on "The Future of the IT Department" I covered how IT is changing rapidly in enterprises, and touched on how business-aligned IT teams are going to become more relevant. Some of these agile "business focused development and delivery teams" will be official IT-sponsored initiatives, whilst others will be somewhat rogue, business-division-sponsored teams working without the IT department as a response to the expensive, often poor quality service provided by the IT division.

The rapid pace of marketplace innovation and the lack of flexibility of many IT organisations within enterprises, fuelled by the consumerization of IT and the growth of cloud computing, is leading to a boom in DIY business application development. Gartner predicts that…

"Rank-and-file employees will build at least a quarter of all new business applications by 2014, up from less than 5% in 2007." [Gartner]

For many years now there has been the power in the business to harness Excel macros and VBA to enhance end-user productivity, but this is now being enhanced by new, friendly end-user tools such as easy mobile app development, the ability to host new websites in the cloud in a few clicks, and a whole SaaS model to replace your in-house IT infrastructure overnight.

The business benefits of this boom are clear to see. The ability of end-users and business IT teams to manipulate data and process flows to meet the shifting demands of the market is attractive. Customer demands can in theory be more easily met by those closest to the customer building applications quickly and with their day-to-day use clearly in mind. As the market changes the user can adjust their homebrew application to fit, or throw it away and start a new one. Instead of a business analyst working closely with a developer to create an application, she can reduce the communication overhead by just building it herself. Even if the application is only to be used as a POC this is a very efficient process for finding out what works and what doesn't. In this article on BusinessWeek the CEO of NetApp explains the benefits seen by encouraging employees to build their own tools, such as cost savings and customer satisfaction. It's not all peachy though; there are obvious pitfalls to this approach. The IT organisation may be slow and expensive, but they often have genuine reasons for being that way. Interoperability, support, security, regulatory concerns, supplier contracts and economies of scale are all topics the IT organisation has to consider, and so too does the business if it's going to promote this DIY application approach.

Business-run IT teams can do very productive work and react quickly to change, but from my experience the problem comes when they have to rely on the IT department to implement their change, and that's where tension can arise. Teams outside the IT structure can find it hard to understand the constraints of the IT department. I find developers in business-sponsored teams have a real desire to be productive for customers, but lack some of the rigour that is prevalent in IT-based teams (particularly around maintainability and change control). The IT department can seem to be a blocker to the teams' agility when it is unable to adhere to the timescales expected by the business teams. I think some effort needs to be made on both sides to understand the constraints the other team is under and to work together. Critically, I feel the IT department needs to realise that this trend will continue, and the IT org is at risk of becoming irrelevant (other than to keep the lights on and maintain legacy systems). Perhaps this is the natural evolution of the consumerisation of technology, but I do think that IT organisations can have a very relevant role to play in this shift. By sponsoring agile, business-centric development teams to support the business better, the IT organisation of the future can have a very relevant role, and IT professionals are ideally positioned to populate these teams and support the growth in DIY applications whilst adding some beneficial structure.

Estimates: A Necessary Evil

Despite being an age-old problem in the IT industry (and presumably in other industries too), it still concerns me how much we have to rely on estimates to manage resources on complex multi-million pound projects. We call them estimates to hide the truth that they are at best educated guesses, or at worst complete fantasy. In the same way that fortune tellers use clues to determine a punter's history and status (e.g. their clothes, watch, absence of a wedding ring etc.), we as estimators will naturally seek out clues as to the nature of a potential project. Have we dealt with this business function before? Is their strategy clear? Will we get clear requirements in time? We then naturally use these clues to plan out the project in our head and load our estimates accordingly, but it's hard to avoid Hofstadter's Law, which states that:

“It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
     — Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid[1]
     

We are asked daily to gaze into our crystal balls and come up with an accurate prediction of how long something will take, based on very few requirements or even little context. How far out are these estimates likely to be in this situation? Well, the boffins at NASA can help here with their Cone of Uncertainty.

The Cone of Uncertainty is excellent at visually displaying the evolution of uncertainty on a project. Based on research by NASA, it shows that at the start of a project estimates could be out by as much as 4x. Whilst this reduces as the work progresses and more is known about the project, it is often at the very early stage that estimates are collected and used as the basis for a business case or to acquire resources. This is despite the fact that it is known at this point that they are significantly inaccurate.

Estimating is hard and by its nature inaccurate, but that is not surprising considering the human nature aspects we have to deal with. These are excellently outlined in this post, and they include our strong desire to please, and "The Student Syndrome" (whereby we tend to put off until later what we could do now). The post compares overestimation and underestimation, highlighting that the effects of underestimating are far worse than those of overestimating, and concludes…

"Never intentionally underestimate. The penalty for underestimation is more severe than the penalty for overestimation. Address concerns about overestimation through control, tracking and *mentoring* but not by bias."

So underestimating is bad. A shame, then, that we have the concept of the "Planning Fallacy", based on research by Daniel Kahneman and Amos Tversky, which highlights a natural…

"tendency for people and organizations to underestimate how long they will need to complete a task, even when they have experience of similar tasks over-running."

There are many explanations of the results of this research but interestingly it showed that it …

"only affects predictions about one’s own tasks; when uninvolved observers predict task completion times, they show a pessimistic bias, overestimating the time taken."

…which has implications for the estimating process, and conflicts with the sensible thoughts of many (including Joel on Software) on this subject, which dictate that the estimate must be made by the person doing the work. It makes sense to ask the person doing the work how long it will take, and it certainly enables them to raise issues such as a lack of experience with a technology, but this research highlights that they may well still underestimate it.

In many corporate cultures it is no doubt much safer to overestimate work than to underestimate it. The consequences of this over time, however, can result in organisations where large development estimates become the norm and nothing but mandatory project work is authorised. This not only stifles innovation but also makes alternative options more attractive to the business, such as fulfilling IT resource requirements externally via 3rd parties (e.g. outsourcing/offshoring).

The pace of technological change also fights against our estimating skills. The industry itself is still very young and rapidly changing around us. This changing landscape makes it very difficult to find best practice and make it repeatable. As technologies change, so does the developer's uncertainty when estimating. For example, a developer coding C++ for 5 years was probably starting to make good estimates for the systems on which he worked, but he might move to .Net and his estimating accuracy is set back a few years – not due to the technology, but just his familiarity with it. It's the same for architects, system admins and network professionals too. As an industry we are continuously seeking out the next holy grail, the next magic bullet, and yet we are not taking the time to train our new starters or to grow valid standards/certifications in order to grow as an industry. This is a challenge that many professions have had to face up to and overcome (e.g. the early medical profession, structural architects, surveyors, accountants etc.), but that's a post for another day.

OK, so estimates are evil – can we just do without them? Well, one organisation apparently seems to manage. According to some reports

"Google isn’t foolish enough or presumptuous enough to claim to know how long stuff should take."

…and therefore avoids date-driven development, with projects instead working at optimum productivity without a target date in mind. That's not to say that everything doesn't need to be done as fast as possible; it just means they don't estimate what "fast as possible" means at project start. By encouraging a highly productive and creative culture and avoiding publicly announcing launch dates, Google is able to build amazing things quickly in the time it takes to build them, unbound by an arbitrary project deadline based on someone's 'estimate'. It seems to work for them. Whether or not this is true in reality, it makes for an interesting thought. The necessity for estimates comes from the way projects are run and how organisations are structured, and they do little to aid engineers in the process that is software development.

So why do we cling to estimates? Well, unless your organisation is prepared to radically change its culture, they remain a necessary evil: imperfect, but a mandatory element of IT projects. The key therefore is to improve the accuracy of our estimating process one estimate at a time, whilst still reminding our colleagues that they are only estimates and by their nature they are wrong.

Agile estimating methods include techniques like Planning Poker which simplify the estimating process down to degrees of complexity, and whilst they can be very successful they still rely on producing an estimate of sorts, even if only classified by magnitude of effort. I was just this week chatting to a PM on a large agile project who was frustrated by the talented development team's inability to hit their deadlines purely as a result of their poor estimating.

There are many suggested ways to improve the process and help with estimate accuracy and I’m not going to cover them all here, but regardless of the techniques that you use don’t waste the valuable resource that is your ‘previous’ estimates. Historic estimates when combined with real metrics of how long that work actually took are invaluable to the process of improving your future estimates.

Consistency is important to enable a project to be compared with previously completed ones. By using a template or check-sheet you ensure that all factors are considered and recorded, and avoid mistakes caused by forgetting to include items. Having an itemised estimate in a consistent format enables estimates to be compared easily as a quick sanity test (e.g. "Why is project X so much more than project Y?"). It also allows you to capture and evolve the process over time, adding new items to your template or checklist as you find them relevant. Metrics, such as those provided by good ALM tools (e.g. Team Foundation Server or Rational Team Concert), are useful for many things but especially for feeding back into the estimating process. By knowing how long something actually took to build, you can more accurately predict how long it will take to build something similar.
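To make the feedback loop concrete, here is a minimal sketch of calibrating new estimates against historical estimate/actual pairs. The function names and the sample data are hypothetical – the idea is simply to measure how far off past estimates were, on average, and scale new ones accordingly:

```python
# Sketch: calibrate new estimates using historical estimate/actual pairs.
# The data and helper names here are illustrative, not from any real tool.

def correction_factor(history):
    """Average ratio of actual effort to estimated effort across past work."""
    ratios = [actual / estimate for estimate, actual in history]
    return sum(ratios) / len(ratios)

def calibrated_estimate(raw_estimate, history):
    """Scale a new raw estimate by the team's historical estimating bias."""
    return raw_estimate * correction_factor(history)

# (estimate_days, actual_days) pulled from past, similar work items
history = [(5, 8), (3, 4), (10, 14), (2, 3)]

print(correction_factor(history))        # how far off we tend to be, on average
print(calibrated_estimate(6, history))   # a 6-day raw estimate, calibrated
```

Real ALM metrics would feed the `history` list; the point is that the calibration only works if estimates and actuals are recorded consistently, which is exactly what the template approach above provides.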

In summary then, estimates are by their nature wrong and, whilst a necessary evil of modern organisations, notoriously difficult for us mere humans to get right. Hopefully this post has made you think about estimates a little more and reminded you to treat them with the care they deserve in your future projects.

Just Do It!

Now I'm not a big fan of New Year's resolutions, and in fact have the same one each year which I religiously stick to: never to make any New Year's resolutions. However, whilst we are all in the spirit of renewed enthusiasm for the year ahead, I'd like to quote the great tag line of the Nike brand:

“Just do it!”

We all speak to people who have a great idea for the next big thing, be it a concept for a new phone app, a great web site or a business idea that could make a fortune. For some it will always be just a vision in their head, some will get started but get distracted or bored, but very few will get their ideas off the ground. Be different – get that idea out of your head and into the real world, because as Woody Allen said…

“80% of success is showing up!”

So if you have an idea, a domain name gathering dust or some half-finished code lying around, then either move on and forget it or do something about it. Even if it fails you'll learn a lot along the way.

If the task seems just too big, then check out this excellent post by Jim Highsmith where he talks about just getting that project started and focusing on delivering value. For more ideas on getting started with just the bare minimum features, check out the Minimum Viable Product (MVP) concept used by many start-ups.

Oh and don’t forget – if you become a millionaire on the back of this post, don’t forget me 🙂

Useful Web Based UML Drawing Tools

A basic sequence diagram can be a very powerful tool for explaining the interactions in a system, but drawing one can often be too time-consuming to bother with for disposable uses. I find that many people sketch them on rough paper to help explain an argument, but fewer ever bother to build them in soft form unless for a formal document. There are a lot of powerful, feature-rich UML tools but recently I found this: http://www.websequencediagrams.com.

It lets you build sequence diagrams like the one below in seconds by typing the object interactions in a short hand form, such as:

title Authentication Sequence
Alice->Bob: Authentication Request
Bob-->Alice: Authentication Response
Bob-->Jeff: Pass Request
Jeff-->Bob: Return Response

…which is rendered as a sequence diagram in real time in the browser.

And you can even choose the style and colouring too. There's also functionality to save diagrams and import saved diagram text. Check out the API page too for tips on embedding the drawing engine into your own web pages, allowing you to edit your diagrams in place, plus plugins for Confluence, Trac and xWiki. There are also example implementations for Ruby, Java and Python.
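As a flavour of how scriptable this is, here is a rough Python sketch of preparing a request for the drawing API. The field names (`message`, `style`, `apiVersion`) follow the site's API page, but treat the exact endpoint and parameters as assumptions to be checked against the current documentation:

```python
import urllib.parse

# Hypothetical endpoint for the websequencediagrams.com drawing API;
# verify against the API page before use.
WSD_ENDPOINT = "http://www.websequencediagrams.com/index.php"

def build_wsd_payload(diagram_text, style="default"):
    """URL-encode the diagram source for a POST to the drawing API."""
    return urllib.parse.urlencode({
        "message": diagram_text,    # the shorthand diagram text
        "style": style,             # visual style, e.g. "default" or "qsd"
        "apiVersion": "1",
    })

diagram = "\n".join([
    "title Authentication Sequence",
    "Alice->Bob: Authentication Request",
    "Bob-->Alice: Authentication Response",
])

payload = build_wsd_payload(diagram, style="qsd")
# POST `payload` to WSD_ENDPOINT (e.g. with urllib.request.urlopen);
# the response should contain a path to the rendered image.
```

This is the kind of thing that makes it trivial to regenerate diagrams as part of a documentation build, rather than hand-editing images.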

A similar online tool is http://yuml.me with which you can draw Class, Activity and Use Case Diagrams. Here is an example of a Use Case diagram definition:

[Customer]-(Make Cup of Tea)
(Make Cup of Tea)<(Add Milk)
(Make Cup of Tea)>(Add Tea Bag)

…which renders the Use Case diagram in the browser.

yUML.me also supports API integration with a whole host of things (Gmail, Android, .Net, PowerShell, Ruby and more).

Now there's no reason not to use a quick UML diagram to explain what you mean!

It’s All About Culture (Enterprise IT Beware)

This interesting post by PEG highlights an organisation's culture as in reality the only differentiating factor it has. In his view assets, IP, cost competitiveness, brand and even people can be copied or acquired by your competition, but it is your company culture that will lead to success or failure. I agree with his assertion on the importance of culture, and would add that whilst culture has always been critical, the rapidly changing new world brings new challenges to competitiveness, making culture even more important for competitive advantage. I must clarify here that I see a huge gulf between the 'official' culture of an organisation that is documented and presented by senior management and the real culture living and breathing on the shop floor (and they rarely match).

This excellent highscalability.com post by Todd Hoff highlights the way that the rules framing IT (and business) are changing, and how start-ups have become a beacon for investigating this new world. When we look at this new world the key elements are flexibility, adaptability and innovation. These traits thrive in start-ups, where the 'culture' encourages them. Many of these new industry shakers lack the assets, brand and IP of their bigger rivals but excel at using their 'culture' to outmanoeuvre them and drive innovation in their industry.

Some enterprises get this and are trying hard to foster a more innovative and customer focused culture. The emergence of agile development practices can help to focus the team on the true business value of features and aid flexibility in the use of resources. SAP has recently designed its new office environment to fully promote Agile development practices (see this post), and whilst this on its own can’t change corporate culture, it can remove some of the blockers to a more agile culture emerging. Of course there are many other enterprises that still rely on military inspired hierarchical structures and attempt to enforce a desired culture. This InfoQ article by Craig Smith summarizes recent articles covering how mainstream management are missing the benefits of an agile approach within their organisations. 

"…The management world remains generally in denial about the discoveries of Agile. You can scan the pages of Harvard Business Review and find scarcely even an oblique reference to the solution that Agile offers to one of the fundamental management problems of our times.”

The benefits of Agile practices are well documented, so if this fundamental approach is still not connecting with mainstream managers, how long will it be before they grasp the bigger paradigm shift occurring underneath them? This shift is being forged by start-ups and enabled via cloud computing.

Let's look at what is happening in the start-up space. Todd Hoff's post again highlights the use of small, dedicated, autonomous teams with the power (and responsibility) to make rapid changes to production systems, these teams being responsible for the entirety of their systems from design to build, test, deployment and even monitoring. How does this work? Well, it can only work effectively via a shared culture of innovation, ownership and excellence. Facebook/Google staff have stated that their biggest motivator is the recognition of their good work by their peers (see here and here). This is 'Motivation 3.0' in action, with the intrinsic rewards of the job at hand being the main motivation to succeed. Compare this with the traditional and still prevalent command-and-control approach used in a lot of enterprises, with tightly controlled processes, working habits and resources. Splitting the responsibility for each stage of the development lifecycle between different teams (usually separated by reporting lines, internal funding requests, remote locations etc.) and then expecting a coherent solution to prevail is not going to work in this new world.

We are now starting to see the emergence of the cloud as a major force in democratising computing power and enabling empowered teams. Todd's post covers in detail how the cloud is making the once impossible possible, and I recommend you take the time to read it. Only time will tell whether the cloud's impact on Enterprise IT will act as a catalyst for more agile, competitive corporate cultures emerging in enterprises in the future.