Building A Learning Culture

I’m keen on fostering a learning culture within teams and was drawn to this InfoQ article, Creating a Culture of Learning and Innovation by Jeff Plummer, which shows what can be achieved through community learning. In it Jeff outlines how a learning culture was developed within his organisation using simple yet effective crowd-sourcing methods.

I have implemented a community learning approach on a smaller scale using informal Lunch & Learns, where devs give up their lunch break routine to eat together whilst learning something new, with the presenter/teacher being one of the team who has volunteered to share their knowledge on a particular subject. Sometimes the presenter already has the knowledge they are sharing, but other times they have volunteered to go and learn a subject first and then present it back to the group. Lunch & Learns work even better if you can convince your company to buy the lunch (it’s much cheaper per head than most other training options).

It’s hard to justify expensive training courses these days, but it’s also never been easier to find free or low-cost training online. As Jeff points out, innovation often comes from learning subjects not directly relevant to your day job. In my approach to learning with teams I have always tried to mix specific job-relevant subjects with seemingly less relevant ones. For example, a session on Node.js for a team of .Net developers would be hard to justify in monetary terms, however I’ve no doubt the developers took away important points around new web paradigms, non-blocking I/O, web server models, and much more. Developers like to learn something new, and innovation often comes from taking ideas that already exist in a different domain and applying them to the current problem.

I agree with Jeff’s point that the champions are key to the success of this initiative. It is likely that the first few subjects will be taught by the champion(s) and they will need to promote the process to others. One tip to take some of the load off the champions is to mix video sessions in with presenter-led learning sessions. There are a lot of excellent conference session videos available and these can make good Lunch & Learn sessions. Once the momentum builds it becomes the norm for everyone to be involved, and this crucially triggers a general sense of learning and of sharing that learning experience with others.

The Growth Of Business IT

In my popular post on “The Future of the IT Department” I covered how IT is changing rapidly in enterprises and touched on how business-aligned IT teams are going to become more relevant. Some of these agile ‘business focused development and delivery teams’ will be official IT-sponsored initiatives, whilst others will be somewhat rogue teams sponsored by business divisions, working without the IT department as a response to the expensive, often poor-quality service provided by the IT division.

The rapid pace of marketplace innovation and the lack of flexibility of many enterprise IT organisations, fuelled by the consumerization of IT and the growth of cloud computing, are leading to a boom in DIY business application development. Gartner predicts that…

“Rank-and-file employees will build at least a quarter of all new business applications by 2014, up from less than 5% in 2007.” [Gartner]

For many years the business has had the power to harness Excel macros and VBA to enhance end-user productivity, but this is now being amplified by friendly new end-user tools: easy mobile app development, the ability to host new websites in the cloud in a few clicks, and a whole SaaS model to replace your in-house IT infrastructure overnight.

The business benefits of this boom are clear to see. The ability of end-users and business IT teams to manipulate data and process flows to meet the shifting demands of the market is attractive. Customer demands can in theory be more easily met by those closest to the customer building applications quickly and with their day-to-day use clearly in mind. As the market changes the user can adjust their homebrew application to fit, or throw it away and start a new one. Instead of a business analyst working closely with a developer to create an application, she can reduce the communication overhead by just building it herself. Even if the application is only used as a POC this is a very efficient way to find out what works and what doesn’t. In this article on BusinessWeek the CEO of NetApp explains the benefits seen by encouraging employees to build their own tools, such as cost savings and customer satisfaction. It’s not all peachy though; there are obvious pitfalls to this approach. The IT organisation may be slow and expensive but it often has genuine reasons for being that way. Interoperability, support, security, regulatory concerns, supplier contracts and economies of scale are all topics the IT organisation has to consider, and so too does the business if it’s going to promote this DIY application approach.

Business-run IT teams can do very productive work and react quickly to change, but in my experience the problem comes when they have to rely on the IT department to implement their change, and that’s where tension can arise. Teams outside the IT structure can find it hard to understand the constraints of the IT department. I find developers in business-sponsored teams have a real desire to be productive for customers but lack some of the rigour that is prevalent in IT-based teams (particularly around maintainability and change control). The IT department can seem to be a blocker to the team’s agility when it is unable to adhere to the timescales expected by the business teams. I think some effort needs to be made on both sides to understand the constraints the other team is under and to work together. Critically, I feel the IT department needs to realise that this trend will continue and that the IT org is at risk of becoming irrelevant (other than to keep the lights on and maintain legacy systems). Perhaps this is the natural evolution of the consumerisation of technology, but I do think that IT organisations have a very relevant role to play in this shift. By sponsoring agile, business-centric development teams the IT organisation of the future can better support the business, and IT professionals are ideally positioned to populate these teams, supporting the growth in DIY applications whilst adding some beneficial structure.

Estimates: A Necessary Evil

Despite being an age-old problem in the IT industry (and presumably in other industries too) it still concerns me how much we have to rely on estimates to manage resources on complex multi-million pound projects. We call them estimates to hide the truth that they are at best educated guesses, or at worst complete fantasy. In the same way that fortune tellers use clues to determine a punter’s history and status (e.g. their clothes, watch, absence of a wedding ring etc.), we as estimators will naturally seek out clues as to the nature of a potential project. Have we dealt with this business function before? Is their strategy clear? Will we get clear requirements in time? We then naturally use these clues to plan out the project in our heads and load our estimates accordingly, but it’s hard to avoid Hofstadter’s Law, which states that:

“It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
     — Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

We are asked daily to gaze into our crystal balls and come up with an accurate prediction of how long something will take based on very few requirements, or even little context. How far out are these estimates likely to be in this situation? Well, the boffins at NASA can help here with their Cone of Uncertainty.

The Cone of Uncertainty is excellent at visually displaying the evolution of uncertainty on a project. Based on research by NASA, it shows that at the start of a project estimates can be out by as much as 4x. Whilst this reduces as the work progresses and more becomes known about the project, it is often at the very early stage that estimates are collected and used as the basis for a business case or to acquire resources. This is despite the fact that it is known at this point that they are significantly inaccurate.
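To make that 4x figure concrete, here is a small sketch of what the cone implies for a single-point estimate. The multipliers below are the commonly quoted range values, used here for illustration rather than as NASA’s exact numbers:

```python
# Illustrative Cone of Uncertainty multipliers, keyed by project phase.
# The 4x/0.25x range at inception is the commonly quoted figure; treat
# these values as illustrative rather than authoritative.
CONE = {
    "initial concept":     (0.25, 4.0),
    "approved definition": (0.5,  2.0),
    "requirements done":   (0.67, 1.5),
    "design done":         (0.8,  1.25),
    "delivery":            (1.0,  1.0),
}

def estimate_range(point_estimate, phase):
    """Return the (low, high) range implied by a single-point estimate."""
    low_mult, high_mult = CONE[phase]
    return (point_estimate * low_mult, point_estimate * high_mult)

# A "6 month" estimate made at project inception really means anywhere
# from 1.5 to 24 months.
low, high = estimate_range(6, "initial concept")
print(f"6 months at inception: {low:.1f} to {high:.1f} months")
```

The uncomfortable part is the last line: the business case is usually signed off against the 6, not the 24.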

Estimating is hard and by its nature inaccurate but that is not surprising considering the human nature aspects we have to deal with. These are excellently outlined in this post and they include our strong desire to please and “The Student Syndrome” (whereby we tend to put off until later what we could do now). The post compares overestimation and underestimation highlighting that the effects of underestimating are far worse than overestimating, and concludes…

"Never intentionally underestimate. The penalty for underestimation is more severe than the penalty for overestimation. Address concerns about overestimation through control, tracking and *mentoring* but not by bias."

So underestimating is bad, shame then that we have the concept of the "Planning Fallacy" based on research by Daniel Kahneman and Amos Tversky which highlights a natural…

"tendency for people and organizations to underestimate how long they will need to complete a task, even when they have experience of similar tasks over-running."

There are many explanations of the results of this research but interestingly it showed that it …

"only affects predictions about one’s own tasks; when uninvolved observers predict task completion times, they show a pessimistic bias, overestimating the time taken."

…which has implications for the estimating process and conflicts with the sensible thoughts of many (including Joel on software) on this subject that dictate that the estimate must be made by the person doing the work. It makes sense to ask the person doing the work how long it will take and it certainly enables them to raise issues such as a lack of experience with a technology but this research highlights that they may well still underestimate it. 

In many corporate cultures it is no doubt much safer to overestimate work than to underestimate it. The consequence of this over time, however, can be organisations where large development estimates become the norm and nothing but mandatory project work is authorised. This not only stifles innovation but also makes alternative options more attractive to the business, such as fulfilling IT resource requirements externally via 3rd parties (e.g. outsourcing/offshoring).

The pace of technological change also fights against our estimating skills. The industry itself is still very young and rapidly changing around us. This changing landscape makes it very difficult to find best practice and make it repeatable. As technologies change so does the developer’s uncertainty when estimating. For example, a developer coding C++ for 5 years was probably starting to make good estimates for the systems on which he worked, but if he moves to .Net his estimating accuracy is set back a few years – not due to the technology but due to his familiarity with it. It’s the same for architects, system admins and network professionals too. As an industry we are continuously seeking out the next holy grail, the next magic bullet, and yet we are not taking the time to train our new starters or to grow valid standards/certifications in order to mature as an industry. This is a challenge that many professions in history have had to face up to and overcome (e.g. the early medical profession, structural architects, surveyors, accountants etc.) but that’s a post for another day.

Ok, ok, so estimates are evil; can we just do without them? Well, one organisation apparently manages to. According to some reports…

"Google isn’t foolish enough or presumptuous enough to claim to know how long stuff should take."

…and therefore avoids date-driven development, with projects instead working at optimum productivity without a target date in mind. That’s not to say that everything doesn’t need to be done as fast as possible; it just means they don’t estimate what "as fast as possible" means at project start. By encouraging a highly productive and creative culture and avoiding publicly announcing launch dates, Google is able to build amazing things quickly in the time it takes to build them, unbound by an arbitrary project deadline based on someone’s ‘estimate’. It seems to work for them. Whether or not this is true in reality, it makes for an interesting thought. The necessity for estimates comes from the way projects are run and how organisations are structured, and they do little to aid engineers in the process that is software development.

So why do we cling to estimates? Well, unless your organisation is prepared to radically change its culture, they are without doubt a necessary evil: whilst not perfect, they are a mandatory element of IT projects. The key therefore is to improve the accuracy of our estimating process one estimate at a time, whilst still reminding our colleagues that they are only estimates and by their nature they are wrong.

Agile estimating methods include techniques like Planning Poker, which simplify the estimating process down to degrees of complexity, and whilst they can be very successful they still rely on producing an estimate of sorts, even if work items are only classified by magnitude of effort. Just this week I was chatting to a PM on a large agile project who was frustrated by the talented development team’s inability to hit their deadlines purely as a result of their poor estimating.
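The mechanics of a Planning Poker round are simple enough to sketch: everyone estimates privately on a restricted scale, and a wide spread triggers discussion rather than being averaged away. The scale and the consensus rule below are my own illustrative assumptions, not a definitive implementation:

```python
# Minimal Planning Poker round: estimates come from a restricted
# Fibonacci-like scale, and a wide spread flags the story for discussion
# rather than being silently averaged away.
SCALE = [1, 2, 3, 5, 8, 13, 21]

def poker_round(votes):
    """Return ('consensus', points) or ('discuss', (min_vote, max_vote))."""
    assert all(v in SCALE for v in votes), "votes must come from the scale"
    lo, hi = min(votes), max(votes)
    # Call it consensus if all votes fall on at most two adjacent values.
    if SCALE.index(hi) - SCALE.index(lo) <= 1:
        return ("consensus", hi)  # take the larger value, to be safe
    return ("discuss", (lo, hi))

print(poker_round([5, 5, 8, 5]))   # adjacent values: consensus on 8
print(poker_round([2, 13, 5, 3]))  # wide spread: the outliers explain first
```

The useful part is not the numbers but the rule that disagreement forces a conversation before a figure is recorded.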

There are many suggested ways to improve the process and help with estimate accuracy, and I’m not going to cover them all here, but regardless of the techniques you use, don’t waste the valuable resource that is your ‘previous’ estimates. Historic estimates, when combined with real metrics of how long the work actually took, are invaluable to the process of improving your future estimates.

Consistency is important to enable a project to be compared with previously completed ones. By using a template or check-sheet you ensure that all factors are considered and recorded, and avoid mistakes caused by forgetting to include items. Having an itemised estimate in a consistent format enables estimates to be compared easily to provide a quick sanity test (e.g. "Why is project X so much more than project Y?"). It also allows you to capture and evolve the process over time as you add new items to your template or checklist as you find them relevant. Metrics, such as those provided by good ALM tools (e.g. Team Foundation Server or Rational Team Concert), are useful for many things but especially for feeding back into the estimating process. By knowing how long something actually took to build you can more accurately predict how long it will take to build something similar.
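Feeding actuals back in can be as simple as tracking the ratio of actual to estimated effort across completed work and applying it as a correction factor to the next raw guess. A minimal sketch of that feedback loop, with invented sample figures:

```python
# Calibrate new estimates against history: the average actual/estimate
# ratio from completed work becomes a correction factor for new guesses.
# The sample history below is invented for illustration.
history = [
    {"task": "login rework",  "estimated": 10, "actual": 16},
    {"task": "report export", "estimated": 8,  "actual": 12},
    {"task": "API migration", "estimated": 20, "actual": 28},
]

def correction_factor(history):
    """Mean overrun ratio across completed work items."""
    ratios = [item["actual"] / item["estimated"] for item in history]
    return sum(ratios) / len(ratios)

def calibrated(raw_estimate, history):
    """Scale a fresh estimate by the team's historic overrun."""
    return raw_estimate * correction_factor(history)

print(f"historic overrun factor: {correction_factor(history):.2f}")
print(f"raw 15-day estimate, calibrated: {calibrated(15, history):.1f} days")
```

Even a crude factor like this beats pretending last year’s overruns never happened, and a consistent template is what makes the historic data comparable in the first place.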

In summary then, estimates are by their nature wrong, and whilst a necessary evil of modern organisations they are notoriously difficult for us mere humans to get right. Hopefully this post has made you think about estimates a little more and reminded you to treat them with the care they deserve in your future projects.

Just Do It !

Now I’m not a big fan of New Year’s Resolutions and in fact have the same one each year, which I religiously stick to: never make any New Year’s Resolutions. However, whilst we are all in the spirit of renewed enthusiasm for the year ahead, I’d like to quote the great tag line of the Nike brand:

“Just do it!”

We all speak to people who have a great idea for the next big thing, be it a concept for a new phone app, a great web site or a business idea that could make a fortune. For some it will always be just a vision in their head, some will get started but get distracted or bored, but very few will get their ideas off the ground. Be different; get that idea out of your head and into the real world, because as Woody Allen said…

“80% of success is showing up!”

So if you have an idea, a domain name gathering dust or some half-finished code lying around, then either move on and forget it or do something about it. Even if it fails you’ll learn a lot along the way.

If the task seems just too big, then check out this excellent post by Jim Highsmith where he talks about just getting that project started and focusing on delivering value. For more ideas on getting started with just the bare minimum features, check out the Minimum Viable Product (MVP) concept used by many start-ups.

Oh, and if you become a millionaire on the back of this post, don’t forget me 🙂

Useful Web Based UML Drawing Tools

A basic sequence diagram can be a very powerful tool to explain the interactions in a system, but drawing one is often too time-consuming to bother with for disposable uses. I find that many people draw them out on rough paper to help explain their argument, but few ever bother to build them in soft form unless for a formal document. There are a lot of powerful, feature-rich UML tools, but recently I found this:

It lets you build sequence diagrams like the one below in seconds by typing the object interactions in a shorthand form, such as:

title Authentication Sequence
Alice->Bob: Authentication Request
Bob-->Alice: Authentication Response
Bob-->Jeff: Pass Request
Jeff-->Bob: Return Response

…which draws this in real time in the browser: 

And you can even choose the style and colouring too. There’s also functionality to save diagrams and import saved diagram text. Check out the API page too for tips on embedding the drawing engine into your own web pages, allowing you to edit your diagrams in place, as well as plugins for Confluence, Trac and xWiki. There are also example implementations for Ruby, Java and Python.
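The shorthand itself is simple enough that you can parse it yourself if you ever need the interactions in another tool. Here is a minimal sketch of a parser for the arrow syntax shown above; it handles only the `title` line and the solid `->` and dashed `-->` arrows, whereas the real tool supports a much richer grammar:

```python
import re

# Minimal parser for the "A->B: message" sequence-diagram shorthand.
# Handles the title line plus solid (->) and dashed (-->) arrows only;
# real tools support a richer grammar than this sketch.
LINE = re.compile(r"^(\w+)\s*(-->|->)\s*(\w+):\s*(.+)$")

def parse(text):
    interactions = []
    for line in text.strip().splitlines():
        if line.startswith("title "):
            continue  # ignore the diagram title in this sketch
        match = LINE.match(line.strip())
        if match:
            src, arrow, dst, msg = match.groups()
            interactions.append({
                "from": src,
                "to": dst,
                "message": msg,
                "dashed": arrow == "-->",
            })
    return interactions

diagram = """
title Authentication Sequence
Alice->Bob: Authentication Request
Bob-->Alice: Authentication Response
"""
for step in parse(diagram):
    print(step)
```

Note the `-->` alternative is listed before `->` in the regex so the dashed arrow isn’t half-matched as a solid one.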

There is a similar online tool with which you can draw Class, Activity and Use Case diagrams. Here is an example of a Use Case diagram definition:

[Customer]-(Make Cup of Tea)
(Make Cup of Tea)<(Add Milk)
(Make Cup of Tea)>(Add Tea Bag)

…which makes this:

It also supports API integration with a whole host of things (Gmail, Android, .Net, PowerShell, Ruby and more).

Now there’s no reason not to use a quick UML diagram to explain what you mean!

It’s All About Culture (Enterprise IT Beware)

This interesting recent post by PEG highlights an organisation’s culture as in reality the only differentiating factor it has. In his view assets, IP, cost competitiveness, brand and even people can be copied or acquired by your competition, but it is your company culture that will lead to success or failure. I agree with his assertion on the importance of culture, and would add that whilst culture has always been critical, the rapidly changing new world is bringing with it new challenges to competitiveness, making culture even more important for competitive advantage. I must clarify here that I see a huge gulf between the ‘official’ culture of an organisation that is documented and presented by senior management and the real culture that is living and breathing on the shop floor (and they rarely match).

This excellent recent post by Todd Hoff highlights the way the rules framing IT (and business) are changing and how start-ups are becoming a beacon for investigating this new world. When we look at this new world the key elements are flexibility, adaptability and innovation. These traits thrive in start-ups where the culture encourages them. Many of these new industry shakers lack the assets, brand and IP of their rivals but excel at using their culture to outmanoeuvre bigger players and drive innovation in their industry.

Some enterprises get this and are trying hard to foster a more innovative and customer-focused culture. The emergence of agile development practices can help to focus teams on the true business value of features and aid flexibility in the use of resources. SAP has recently designed its new office environment to fully promote agile development practices (see this post), and whilst this on its own can’t change corporate culture, it can remove some of the blockers to a more agile culture emerging. Of course there are many other enterprises that still rely on military-inspired hierarchical structures and attempt to enforce a desired culture. This InfoQ article by Craig Smith summarises recent articles covering how mainstream management is missing the benefits of an agile approach within their organisations.

"…The management world remains generally in denial about the discoveries of Agile. You can scan the pages of Harvard Business Review and find scarcely even an oblique reference to the solution that Agile offers to one of the fundamental management problems of our times.”

The benefits of agile practices are well documented, so if this fundamental approach is still not connecting with mainstream managers, how long will it be before they grasp the bigger paradigm shift that is occurring underneath them? This shift is being forged by start-ups and enabled via cloud computing.

Let’s look at what is happening in the start-up space. Todd Hoff’s post again highlights the use of small, dedicated, autonomous teams with the power (and responsibility) to make rapid changes to production systems, these teams being responsible for the entirety of their systems from design to build, test, deployment and even monitoring. How does this work? Well, it can only work effectively via a shared culture of innovation, ownership and excellence. Facebook and Google staff have stated that their biggest motivator is the recognition of their good work by their peers (see here and here). This is ‘Motivation 3.0’ in action, with the intrinsic rewards of the job at hand being the main motivation to succeed. Compare this with the traditional and still prevalent command-and-control approach used in a lot of enterprises, with tightly controlled processes, working habits and resources. Splitting the responsibility for each stage of the development lifecycle between different teams (usually separated by reporting lines, internal funding requests, remote locations etc.) and then expecting a coherent solution to prevail is not going to work in this new world.

We are now starting to see the emergence of the cloud as a major force in democratising computing power and enabling the emergence of empowered teams. Todd’s post covers in detail how the cloud is making the once impossible possible, and I recommend you take the time to read it. Only time will tell whether the cloud’s impact on enterprise IT will act as a catalyst for the emergence of more agile, competitive corporate cultures in enterprises in the future.

Physical vs Virtual Kanban Boards

In previous posts I covered an introduction to Kanban and a review of a trial Kanban project. In this, the third in my series of Kanban posts, I’ll cover physical versus software Kanban boards. For some this is an almost religious debate, and many would never question their preference for one over the other. Whilst the answer to which is best will, of course, always be ‘it depends’, there are some fundamental differences. I generally find there is an expectation within IT that you virtualise everything, because, well, that’s what we do. Whilst that is often the right call, a physical board should not be dismissed as out-dated and should be considered an equal for its significant merits in many areas.

Due to the recent growth of Kanban there are now many software Kanban tools providing varying levels of functionality, but all offering at the very least a virtual multi-user Kanban task-board. For a comprehensive list of these tools check out this AgileScout post. If you don’t like these tools then you could always create your own in MS Access, Excel or even Word if you like. These might be enough to get you started on a Kanban proof-of-concept project, but will of course lack the features built into specialist software such as MI, history tracking, cadence monitoring and reporting. Similarly, physical boards can start out as lines drawn on a whiteboard or string pinned out on a pin-board, then littered with post-it notes. Either way the barrier to entry is low, although the Software as a Service (SaaS) model used by many web-based Kanban board providers means there will be a long-term cost associated with using a 3rd-party tool. An alternative to a SaaS model is to use on-premise Kanban software. For teams using Team Foundation Server you can easily build on your existing infrastructure by adding a Kanban viewer for your TFS work items. Telerik Work Item Manager for TFS has a taskboard view that can be customised, and there are other Kanban-based offerings that integrate with TFS. Microsoft can obviously see the power of this approach considering it’s now built into Team Foundation Server 11, which will have a web-based taskboard view via the Web Access application, complete with workflow rules and drag & drop updates. Check out the screenshot below.

The first real difference between a physical and a virtual board is the sheer presence that a physical board gives. On the project documented in part 2 of this Kanban series, the move from a virtual software taskboard (albeit a very simple one) to a physical board made the whole method much more tangible and “real” to the team. It moved from being another software tracker to update to something different, something big and bold and in the room! You’re not going to miss a physical board on your way to the coffee machine, forcing it to become a talking point in the office. Project stakeholders pay attention to it as it stands out when they’re in the team’s work area. The fact that we have hundreds of existing software-based processes makes the introduction of a physical one slightly alien to the team, which also grabs attention. Size is important here too, as teams can gather around the board for stand-ups, ensuring a shared vision; teams struggle to gather around a PC screen or a print-out. This ‘presence’ can be replicated for a virtual Kanban board by using a large LCD screen, and I would recommend that if you can do it.

Of course the presence of a physical board is only good if your team is co-located. Remote teams are a challenge with Kanban, as they are with any method or methodology, and therefore a virtual board may be the only viable solution. On my recent project I used the on-site coordinator to act as a proxy for the remote team on the board, which can work but is by no means ideal. A virtual tool allows everyone in the team to contribute regardless of their location.

A key aspect of the Kanban method is its simplicity and flexibility. Any board, whether physical or virtual, must be easy to access, read, update and change. Continuous improvement of your processes is key, and changing the board to reflect those changes should not meet any resistance. Complicated physical boards can be difficult to change, as can some of the virtual tools. There’s no clear winner here, but I would keep this in mind when reviewing Kanban software tools or when building a complex physical board.

Virtual boards provide the ability to easily produce MI and reporting information from your workflow, something just not possible with a physical board without manual work that could slow down the process. The ability for a team to track underlying trends and have up-to-date cadence figures is a huge benefit for an experienced Kanban team. I would suggest it has less benefit for a team new to the process, however, as the Kanban method suggests that the board alone should be able to provide the data you need; but of course the method is not prescriptive and is open to adaptation as a team sees fit.
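The kind of MI a virtual board makes trivial is cycle time: record a timestamp each time a card moves column, and the figure falls out of the history. A minimal sketch of that calculation, with invented column names and dates:

```python
from datetime import datetime

# Cycle time from a card's movement history: timestamp each column move
# and measure the span from entering work to reaching "Done". The column
# names and dates below are invented for illustration.
card_history = [
    ("Backlog",     datetime(2012, 3, 1)),
    ("In Progress", datetime(2012, 3, 5)),
    ("Test",        datetime(2012, 3, 9)),
    ("Done",        datetime(2012, 3, 12)),
]

def cycle_time(history, start_column="In Progress", end_column="Done"):
    """Days from entering start_column to entering end_column."""
    moves = dict(history)
    return (moves[end_column] - moves[start_column]).days

print(f"cycle time: {cycle_time(card_history)} days")
```

A virtual board logs these moves for free as people drag cards; with a physical board someone has to write the dates on the post-it notes, which is exactly the manual overhead mentioned above.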

One hurdle that some organisations struggle with is data security and privacy. Whether you class the data on your board as sensitive may influence your decision on the type of board to use. On one hand virtual boards provide the ability to restrict access to verified users, thereby maintaining data security; on the other hand, many Kanban tools are externally hosted and you are therefore trusting your data to a third party. It’s not just security, of course, but also the reliability of the vendor: e.g. I’d hate to be mid-project when my Kanban board disappears because the vendor has ceased trading. A physical board has its own challenges. Anyone in your office can see the board, with no restriction on user roles. Without an automated change history it is also open to malicious changes from disgruntled colleagues. Backing up a virtual board is second nature, but it’s harder to back up and restore a physical board; taking regular photos is the best way I have found to date.

When I consider what would be my ideal board, it would have to be easy, flexible, big and as integrated as possible with our existing systems. My ideal would be a virtual board with all the benefits that brings (especially for remote teams), but it would have to have real presence in the office, such as a large LCD on the wall constantly displaying the work in progress. In addition the screen would be a touch screen, or alternatively there would be a terminal under the display that was always logged in and used solely for team members to update the board, thus removing the barrier to updating it. Of course the team could log on from their own machines and update the board, but I see real benefit in walking up to the board to make changes, especially if required during a stand-up. This replicates the ability to physically move a post-it note on a physical board. This setup should provide the majority of the benefits of both approaches.

In summary, the choice is a personal one for your team and your circumstances, but for teams new to Kanban I wouldn’t underestimate the importance of having something bold that stands out and demands attention from the team, and this is usually easier to achieve with a physical board. Once the process is embedded and the board becomes a key part of the team’s day this becomes less important, and the power of virtual boards can be maximised.

For an excellent list of Kanban tools check out:

A Trial Kanban Project – What Worked. What Didn’t.

Recently I introduced Kanban to an in-flight development project with the objective of trialling the technique with the team to raise productivity by overcoming various communication and process issues. In the first part of this 3-part series I presented an introduction to Kanban. This second part covers the results of introducing it to an in-flight project, and part 3 will give some fuel to the “Physical versus Virtual Kanban boards” debate.

Project Background

This was a service release aiming to investigate and resolve a fixed set of production defects on a newly implemented enterprise-scale system. The team consisted of 6 people onshore, including a Project Manager, Work Packet Manager, Technical Architect and System Analyst, plus 12 offshore developers (with limited experience of the system). This was a Waterfall project with the project plan based on the agreed prioritised defect list, a fixed-resource and time-boxed approach. The number and complex nature of the defect investigations, together with dependency management (with other teams), made the tracking of items within the project difficult and opaque, which exposed the lack of a structured workflow within the team.

Approach Taken

Mid-project I suggested the team adopt a Kanban approach and, after explaining the concept, quickly got their buy-in. A quick virtual task-board was introduced (initially in Microsoft Excel) with columns defined from the current informal process plus added, enforced quality assurance steps. Each work item (defect or change request) was added to the board and tracked, with updates applied daily by the team, as a very minimum during the morning stand-up.

The task-board immediately brought a visual dimension to the project status, making work in progress (WIP) easier to manage, and it also helped to bring focus to the daily stand-ups. After this initial success, to overcome the limitations of the virtual board and to get wider team involvement, the project moved to a physical board, with very positive results.

The Kanban Board

The new board evolved from the previous version based on feedback and was simpler to use. Its physical nature, and the fact that it was always on display for the team to consume, made a huge difference and resulted in full team commitment to the approach. The physical board became the focal point for the daily stand-ups and for ad-hoc progress updates. The offshore team’s updates were applied regularly by the onshore coordinator. Team members used the Kanban queues to action work, move items to the next queue, and monitor/resolve blockages.

What worked well

Implementing Kanban was particularly easy compared to other methods and approaches. As the method logically represents a team to-do list it was easy to explain, and therefore was not seen as a complicated new process, which no doubt aided adoption within the team. It was immediately apparent that this method would complement, not conflict with, existing methodologies and processes. The fact that the board should initially mirror your current processes, combined with the fact that Kanban is a method rather than a methodology, meant that it slotted in easily. Again this helped with team adoption, and it received a very positive reception from the project team.

The team liked that Kanban immediately brought a visual dimension to project status. They could quickly see the items in their queues and where the bottlenecks (if any) were. One glance at the board showed the current status of all items. This enabled the team to provide quick progress updates and set realistic expectations with stakeholders outside the project team. Effort was also saved on wasted deployments into test environments: the team could see at a glance what would be ready to deploy in the next build and set early expectations with the QA team on whether they wanted to take the build or not.

As the flow of work was visualized and monitored it became easier to enforce the process stages explicitly. An item had to pass through the “code review” stage before it could reach the “Ready to Deploy” stage, and the process was completely transparent and clear to everyone involved. This transparency also aided communication: because the team could use the stages as a common ‘vocabulary’, amendments to the process were easily discussed and small incremental process improvements were made, increasing the efficiency of the team. Kanban encourages this constant improvement, and the flexibility of the method ensures that it is not a barrier to change.

From a project management perspective the board provided an excellent starting point for meeting updates and status tracking. With all WIP now visible, controlling that work and the resources involved became easier. It was this visibility that triggered a useful resource allocation investigation, which discovered that resources were under-utilised in one area and enabled their reallocation.

On this project the move from a virtual software task-board (albeit a very simple one) to a physical board made a huge difference and it made the whole method very tangible and “real”. Not just another piece of software to update but something different, something big and bold and in the room!

What didn’t work so well

With any new method a team has to find its feet and test the boundaries of what is productive and what isn’t. The rules of our process and our task-board were not communicated to the whole team as clearly as first thought, resulting in a lack of full team ownership and involvement in the first few days. This was evidenced by some of the team acting as purely ‘read-only’ users until they were encouraged to fully participate and engage with the method. Despite being a simple concept, without an explanation of the columns their meaning was open to each user’s interpretation. These were not problems with the Kanban method but merely communication problems that were quickly rectified.

We also discovered that the Kanban board could be misused. There was an urge to overcomplicate the board by adding more detail to the columns and the cards (such as expected delivery dates and resource names). As our first board was virtual, we found that additional information could easily be added without much control. This was stamped out early on: after discussion within the team, information required only for reporting and tracking was supplied from elsewhere as needed. The board was simplified again, and the subsequent move to a physical board helped eradicate this problem.

We found it was possible for the Kanban board to be used merely to display the current status of work, as opposed to driving work allocation as intended. The board was being updated daily to reflect the day’s progress, but it was not driving the workflow and the team was not using the queues to allocate all their work. This meant it was still adding value as an information radiator but not reaching its full potential. A similar problem was that some WIP items were not added to the board at all but managed separately. This was compounded when stand-ups were driven from an alternative list (a defect tracker tool, meeting notes) instead of the Kanban board. All team members had to be reminded that all WIP had to be on the board, no matter what the work was, as it all had to be made visible. Luckily these issues were overcome by reminding people to use the board as the “one truth” and ensuring that all work was progressed via the board, which in a short time became embedded in the team’s practices.

An area where Kanban (and indeed most Agile methods) suffers is the ‘remote team’ scenario. The assumption that all teams are co-located is a common failing, but one that has to be overcome with practical solutions where possible. Ideally teams should be co-located for maximum efficiency, but the use of telepresence, chat tools and conference calls can help. On this project we managed with an onshore coordinator acting as a proxy between the offshore team and the Kanban board, combined with the usual modern communication methods. Use of a good virtual Kanban board would potentially resolve some of these issues. To be clear, Kanban did not introduce any impediments for remote teams that didn’t already exist within the team and its methodology.

This was just an informal trial of Kanban on an in-flight project, so in the future I would like to extend the trial to other projects, with some adjustments. At project kick-off I would not assume that the whole team understood the columns and the workflow process, and I would ensure full buy-in. More monitoring of ‘cycle time’ would be advisable to gain more insight into the current process. I would also like to trial some of the professional virtual task-board software on the market, together with possible integration with Team Foundation Server Work Items.

In summary the Kanban method worked very well on this project and with its simple/flexible approach I’m sure that it can be useful on a huge variety of projects. Critically the feedback from this project team was extremely positive, so much so that team members actually went on to implement Kanban on their other projects afterwards. It was seen by the team (and stakeholders) as a successful work tracking and communication technique, whilst also providing the opportunity to continuously improve. I highly recommend considering trialling Kanban on your future projects, and if you can’t manage to go that far then at least start to think how you can visualize your work in progress.

An Introduction to Kanban

Recently I introduced Kanban to an in-flight development project with the objective of trialling the technique in the hope of raising productivity by overcoming various communication and process issues. In this three-part series I’ll cover my real-world experiences and the results felt by the project and team. This first part offers an introduction to Kanban, part 2 will cover the results obtained by introducing it to an in-flight project, and part 3 will give some fuel to the “Physical versus Virtual Kanban boards” debate.

Kanban has been used in manufacturing for many years and is rooted in the Kaizen (continuous improvement) philosophy documented in the Toyota Production System. Its focus in the field of software development came with the book by David Anderson. Kanban in the software development sense is …

“… a method for developing products with an emphasis on just-in-time delivery while not overloading the developers. It emphasizes that developers pull work from a queue, and the process, from definition of a task to its delivery to the customer, is displayed for participants to see”. – Wikipedia

Kanban is not a methodology but an enabler of the goal of Continuous Improvement: the Kanban Method encourages small, continuous, incremental and evolutionary workflow changes that stick over time. When teams have a shared understanding of the flow of work they are more likely to build a shared comprehension of a problem and then suggest improvements that can be agreed by team consensus.

Visualization of work has become very popular in agile development practices. XP, for example, has “informative workspaces” that display project progress at a glance, and the use of “information radiators” (graphs or charts on the wall showing the work in progress within the team or project) is gaining ground. This work is too often hidden, so by publicizing the flow of work we can start to understand the real processes in place and how work flows to completion. Once you have that understanding (and can easily share it) you can start to make improvements.

Kanban development revolves around a visual task-board used for managing work in progress: a progress board, but with rules! The columns on the board represent the different states or steps in the workflow, and the cards/post-it notes represent a unit of work (a feature, user story, task, etc.). We move the cards along the board through each process stage (i.e. column) to completion. A look at the board at any time shows which items of work are currently within each stage of the process.

In order to limit the concurrent work in progress within the team we add rules to the Kanban board and set WIP limits on each column; e.g. only 6 items can be in QA at any one time as that is the maximum capacity of that team. No more work can be assigned to the QA queue whilst it is full. This limit ensures that new work is only “pulled” into the QA queue when there is available capacity. This immediately reveals the bottlenecks in your process so that they can be addressed. Once we spot a bottleneck we stop adding work to the bottlenecked queue and collaborate within the team to eliminate the blockage (e.g. move resources off build tasks and onto QA tasks). It’s important to note that we are not artificially introducing a limit: the board is created based on the current workflow, so the limit already exists in reality (e.g. only a single team member can perform part of the process) but may not have been visible to the team.
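To make the pull mechanics concrete, here is a minimal sketch of a board that enforces per-column WIP limits. The column names, limits and class names are my own illustration, not taken from any particular Kanban tool:

```python
# Minimal sketch of a Kanban board with per-column WIP limits.
# All names and limits here are illustrative, not from any real tool.

class WipLimitExceeded(Exception):
    """Raised when a pull would push a column past its WIP limit."""
    pass

class KanbanBoard:
    def __init__(self, columns):
        # columns: dict of {column name: WIP limit (None = unlimited)}
        self.limits = dict(columns)
        self.cards = {name: [] for name in columns}

    def add(self, card, column):
        """Place a card in a column, respecting its WIP limit."""
        limit = self.limits[column]
        if limit is not None and len(self.cards[column]) >= limit:
            raise WipLimitExceeded(f"{column} is at its WIP limit ({limit})")
        self.cards[column].append(card)

    def pull(self, card, from_column, to_column):
        """Pull a card into the next column only if capacity exists there."""
        self.add(card, to_column)            # raises if to_column is full
        self.cards[from_column].remove(card)  # card leaves its old queue

board = KanbanBoard({"To Do": None, "In Progress": 3, "QA": 2, "Done": None})
board.add("Defect-101", "To Do")
board.pull("Defect-101", "To Do", "In Progress")
print(board.cards["In Progress"])  # ['Defect-101']
```

Because `pull` refuses to move a card into a full column, a blocked queue shows up immediately as work piling up in the upstream columns, which is exactly the bottleneck signal described above.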

Don’t underestimate the power of a visual progress board! It’s useful for all project stakeholders to see the latest project status, including senior managers. It might even replace that dreaded status report. The flow of work through each state in the workflow can be monitored, measured and reported (if desired). By actively managing the flow, the continuous, incremental and evolutionary changes to the system can be evaluated for positive or negative effects. Ultimately we want to keep development and throughput running smoothly by tracking down bottlenecks and then adjusting our workflow or our resources to eliminate them. We may be able to relieve repeated bottlenecks by changing the number and types of people in each role and cross-training the team. One thing that can be monitored is the ‘cadence’ or rhythm of the pace of change. How long does it take for an item to be completed once it’s added to the board (i.e. its travel time from left to right)? This cycle time can be calculated as:

Cycle Time = Number of Things in Process / Average Completion Rate
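A quick worked example of the formula above (the card counts and completion rate are invented purely for illustration):

```python
# Cycle time = number of things in process / average completion rate.
# The numbers below are invented for illustration only.

def cycle_time(wip_count, completion_rate_per_day):
    """Average time (in days) for an item to travel across the board."""
    return wip_count / completion_rate_per_day

# 12 cards on the board, team completing 3 cards per day on average:
print(cycle_time(12, 3))  # 4.0 (days from left to right)
```

So halving WIP (or doubling throughput) halves the average time a card spends on the board, which is why limiting WIP, not just working faster, shortens cycle time.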

A critical benefit of Kanban is that it is very simple to implement, use and improve. Knocking up a task-board based on your current process is straightforward, and team members soon pick up the approach, which really aids team engagement. In addition, as it is not a methodology in itself, it can be used within your existing methodology and actually provides a more gradual evolutionary path from Waterfall to Agile. It is also less dependent on cross-functional teams than Scrum. Using Kanban to manage tasks within the Design, Build and Test phases of a Waterfall project can prove very successful regardless of the wider methodologies in use. In addition, its ability to handle irregular, ad-hoc work makes it especially good for managing defect resolutions within a build team or for an Operations/Service team handling incidents.

A 2010 case study of the BBC Worldwide team using Lean methods, including Kanban, recorded a 37% improvement in lead time, a 47% increase in delivery consistency and a 24% drop in reported defects. This was not all due to Kanban, but Kanban appears to have been a key ingredient, together with smaller batch sizes, stand-ups and an empowered team.

I think that Kanban is a very powerful approach that can fit within your existing methodology well (including Waterfall) to drive benefit and importantly Continuous Improvement. For more information on Kanban check out these links:

The Enterprise & Open Web Developer Divide

In this interesting Forrester post about embracing the open web Jeffrey Hammond highlights the presence of two different developer communities. In his words:

"…there are two different developer communities out there that I deal with. In the past, I’ve referred to these groups as the "inside the firewall crowd" and the "outside the firewall crowd." The inquiries I have with the first group are fairly conventional — they segment as .NET or Java development shops, they use app servers and RDBMSes, and they worry about security and governance. Inquiries with the second group are very different — these developers are multilingual, hold very few alliances to vendors, tend to be younger, and embrace open source and open communities as a way to get almost everything done. The first group thinks web services are done with SOAP; the second does them with REST and JSON. The first group thinks MVC, the second thinks "pipes and filters" and eventing."

Following the tech industry, it is clear to me that this division is tangible, and in fact I would suggest the gap is currently increasing. I recently started to revisit my open web development skills after it occurred to me how large this divide was becoming and how key these skills will be in the future. Whilst the Enterprise developer often focuses deeply on a handful of technologies (too often from one vendor), the Open Web developer is constantly learning new languages and choosing between best-of-breed open source frameworks to get the job done. The new Open Web developer has evolved from a different age with different perspectives, in many ways leaving behind the rules and constraints of the Enterprise developer building typical Line Of Business (LOB) applications. I’m not suggesting that Enterprise developers don’t understand these technologies; I assume many do, but they’re unlikely to be living and breathing them. This is not just about web development technologies and techniques, but more about mind-sets, architectural styles and patterns. Perhaps it can be viewed historically as similar to the evolution from mainframes to distributed computing, with this as just the next evolution. This movement complements the emergence of cloud computing, and one can assume that the social, dynamic LOB applications of tomorrow will rely heavily on the skills and technologies of the Open Web community. To quote Jeffrey again:

"In the next few years, their world is headed straight to an IT shop near you."

The proliferation of devices, cloud computing and a new breed of ‘surfing since birth’ young blood entering the industry, combined with the shift towards this new world by big players like Microsoft (e.g. using JavaScript to build Windows 8 apps), mean that Enterprise IT will have to converge with the Open Web approach in order to meet future consumer needs. Only the integration of these worlds will enable Enterprises to integrate their existing application landscapes with the new web-based consumption model.

John R. Rymer’s Forrester post on the subject provides his view on the differences between these communities and his accompanying post details the technologies you need to focus on now (HTML5, CSS3, JavaScript, REST). Whilst it can be tricky to follow this sort of fast moving decentralized movement, the good news is that now is a great time to get into these technologies with the growth of the umbrella HTML5 movement raising awareness within the industry and bringing some standards to advanced web design. Keep an eye on what the big web frameworks are offering, and track the innovations at companies like Google and Twitter. I recommend you read these Forrester articles and think about how this affects your architecture, IT organization and career.

For some quality content on these technologies check out these links:  ‘Mozilla Developer Network’, ‘Move The Web Forward’ and ‘HTML5 Rocks’.