Saturday, September 21, 2013

How management consultants will kill technology innovation

Recently a colleague forwarded me an article published by McKinsey & Company, with the subject line "FW: Improving application development". Ironically, after reading the article, I was left with a stark vision of the future of technology with McKinsey consultants in the driver's seat. Spoiler: it would look something like an assembly line.

The article, titled "Enhancing the efficiency and effectiveness of application development", introduces a pedestrian concept: measure the input side of application development as a means of measuring productivity on the output side, by quantifying use cases with a point system. While there is nothing inherently problematic with the concept in the abstract, reading the article raised huge red flags.

According to the article:

"organizations often don’t have a robust way to gather and organize functional and technical requirements for application-development projects. Instead, they list requirements in what often amounts to little more than a loosely structured laundry list. Organizations may have used this laundry-list approach over a long period of time, and it thus may be deeply entrenched."

The diction suggests that the reason requirements gathering is not robust and organized is the inefficiency of the individuals or processes used to gather them. A major flaw in this line of thinking is that the organization itself often has an insufficient understanding of the root problems it faces and how to solve them. The word "entrenched" suggests irrationality, with the implication that something of a hostile takeover may be required to correct the problem. But what is the problem?

"organizations find it difficult to fully and accurately capture requirements and align them with the needs of their internal or external business clients. Their application-development projects tend to suffer the inefficiencies of shifting priorities, last-minute change requests, and dissatisfied business users. In our experience, these changes often amount to cost overruns of 30 to 100 percent... ...use cases provide a logical and structured way to organize the functional requirements of an application-development project. Each use case is a description of a scenario under which the user of an application interacts with that application. For example, a UC for an online-banking application might describe each of the steps that a bank customer follows to log into her account and check the balance of available funds, as well as the transactions involved when that application calls on a database to pull up the stored information."

According to the authors, the problem is that the requirements aren't logical or structured, and that lack of logic and structure is responsible for project overruns. However, use cases have long been employed for documenting requirements during object-oriented analysis using UML use case diagrams, and that approach is structured. A well-known benefit of use case diagrams is that they are commonly understood by business people without technical backgrounds. Logic and structure are not what is missing. What is missing is convergence on an understanding of the root problem, a solution (or solutions), and an accurate prediction of the time and resources involved in implementing it. Project analyses are commonly logical and structured; overruns come from a changing understanding of the problem(s), or of the resources required to solve them.

Traditional waterfall design promoted large up-front analysis, assuming that understanding the problem and the costs involved was just a matter of sufficiently robust organization and structure. Modern methods start instead from the assumption that it is natural and expected for the solution and the estimates to change as you come to better understand what problem you are trying to address and the details involved in solving it. Initial predictions matter less because they are based on insufficient understanding. Modern methods try to incorporate this learning by delivering concrete, working software iteratively, so that all along you produce something you can potentially use.

From the sidebar, it appears the authors don't understand these methods.

"...where the primary purpose of SPs is to allocate the workload across the team. ...However, because SPs are based solely on gut feel, they are too subjective or too easy to game to compare different development teams or even the performance of a single team over multiple periods.
Use-case points (UCPs) represent a sweet spot between FPs and SPs. UCPs are easier to calculate than FPs, provide a similar level of accuracy and objectivity, and require far less overhead. At the same time, UCPs provide significantly more accuracy and objectivity than SPs, without unduly adding overhead."

Story points aren't an absolute measure because, if you are being honest with yourself, your true ability to define and estimate work is not good enough for one. You are far more likely to be able to identify relative complexities, and ultimately that's good enough to predict a velocity once you have established a baseline on real work with your team. Differences in experience levels, problems, tools, and techniques drastically reduce the efficacy of an estimate taken from another company or team. But if you estimate your entire backlog of identified work (stories) in relative terms, then after measuring the team's velocity you can estimate delivery dates. Just remember that your understanding keeps changing as you learn more about your problem(s) and solution(s).

Trying to optimize productivity when you aren't ensuring that you regularly converge on the right problem(s) is misguided. Applying metrics based on poor assumptions, and using them to steer decisions without a full awareness of their biases and limitations, undermines the whole point of building something: to deliver something valuable. But then, how do you measure value? Maybe we haven't done a good job of that one either.

There are people out there doing better, but not by mashing together old-school business thinking and antiquated software development practices. It is natural to fail to meet our expectations when they are unrealistic. Starting with an appreciation of how poor our initial understanding is, and an ability to fail fast and pivot quickly with learning, allows you to converge on something valuable. It might not be what you thought you needed, but it will be something of value. Repeatably delivering valuable things, not hitting Use Case Point estimates, will propel the world's next wild success story.

Supporting code review with Maven, Git, Jenkins and Atlassian Stash

I have to admit, the first time I used Git it left a bad taste in my mouth. The commands seemed to promote using options in your default workflow, and it felt overly complicated, like a tool designed by developers who love complexity. I chalked it up to the fact that it was designed by Linux kernel hackers. I used the tool only when I had to, such as when contributing to projects hosted on GitHub, like Jenkins. It turned out I was wrong: I was working with a different mental model from the tool's designers and expecting the tool to feel natural to me.

Recently, a colleague of mine sent a couple of videos that explain how Git works. After watching them, the elegance and simplicity of the tool was obvious. Now I am spending the weekend rushing to get a Git repository manager, Atlassian Stash, integrated into our workflow in time to welcome a new team member with pre-integration code reviews, something that Git and Stash make extremely easy.

If you are interested in those videos, I can't recommend them enough, especially for Git skeptics:

Atlassian Stash and Code Review

I wanted to introduce code review into the core of my team's development loop; I have seen teams completely transformed by it. I have had success with patch-based review, but its exceptions, like handling binary files and updating patches that have conflicts, make it hard to recommend to teams with reservations about review being too cumbersome. It still has more benefits than drawbacks, but with negative stakeholders it can be like walking through a minefield. Distributed version control makes these exceptions a non-issue, since a pull request can be integrated as-is, with no patch-up intervention necessary.

When looking for a supporting tool, GitHub Enterprise was a natural choice, but the pricing was expensive, especially for the size of my team. Atlassian Stash, by contrast, had extremely affordable pricing for a team my size. Its code viewer is admittedly more cumbersome than GitHub's, but it is still better than the tools we were using before. More importantly, it handles the pull request workflow with reviewers, and once you configure it, it integrates well with Jenkins and Jira.

Configuring Maven

Stash supports the SSH transport: add your public (.pub) SSH key under your user profile in Stash, and under Cygwin/Linux it's the normal SSH key authentication setup from there. Create an empty Stash repo, clone it, and then set up Maven to use it by adding an SCM section to the project pom:
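A minimal sketch of that SCM section; the host, project, and repository names here are hypothetical placeholders, so substitute the clone URL Stash displays for your repository (7999 is Stash's default SSH port):

```xml
<scm>
  <!-- Fetch URL used by the Maven SCM/release plugins -->
  <connection>scm:git:ssh://git@stash.example.com:7999/myproject/myrepo.git</connection>
  <!-- Push URL for developers; same as the fetch URL here -->
  <developerConnection>scm:git:ssh://git@stash.example.com:7999/myproject/myrepo.git</developerConnection>
</scm>
```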


By default, the Maven Git SCM provider will use this as both the fetch and push URL. You can optionally specify different push/fetch URLs.

Configuring Jenkins

Before you start, you will want to install the following:

  • Jenkins Maven Integration (included with recent Jenkins versions)
  • Jenkins Git Plugin
  • Stash Notifier Plugin

You'll need to configure key auth for Jenkins to clone the repo. You will also want to configure Stash settings for the notifier (Manage Jenkins->Configure System). Once you've done this, to set up a single branch build on 'master':

  • Source Code Management: Git
  • Repository URL: ssh://
  • Name: origin
  • Refspec: +refs/heads/master:refs/remotes/origin/master
  • Branches to build: master
  • Checkout/merge to local branch (Advanced): master (required to use release plugin)
  • Repository browser: stash
  • Repository browser URL:
  • Build triggers: when snapshot dependencies are built, Poll SCM with a long interval
  • Maven release build (if you are using it)
  • Post-build actions: Notify Stash Instance (reports build results back to Stash)
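The fetch and checkout that this job configuration amounts to can be sketched with plain git commands. This runs against a throwaway local repository standing in for Stash (the paths are made up for the demonstration); the final `checkout -B` is the "checkout/merge to local branch" step the release plugin needs, since it commits and pushes from a real branch.

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the Stash-hosted repository, with one commit on master.
git init -q -b master "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "initial commit"

# What the Jenkins Git plugin does for the job above: fetch with the
# configured refspec, then check out a local 'master' branch.
git init -q "$tmp/work"
cd "$tmp/work"
git remote add origin "$tmp/upstream"
git fetch -q origin "+refs/heads/master:refs/remotes/origin/master"
git checkout -q -B master refs/remotes/origin/master
git log --oneline
```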

Pull-request Builds

There are multiple ways to configure this, including writing your own plugins. A simple approach, though, is a second Jenkins job that builds pull-request branches and publishes the results back to Stash. Create a new job and configure the following:

  • Source Code Management: Git
  • Repository URL: ssh://
  • Name: origin
  • Refspec: +refs/pull-requests/*:refs/remotes/origin/pull-requests/*
  • Branches to build: origin/pull-requests/*/from
  • Repository browser: stash
  • Repository browser URL:
  • Build triggers: when snapshot dependencies are built, Poll SCM with a long interval
  • Build: Goals and options = clean verify
  • Post-build actions: Notify Stash Instance (reports build results back to Stash)
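The refspec in this job works because Stash publishes each pull request's source branch head under `refs/pull-requests/<id>/from` on the server. A sketch of that behavior, simulating the server-side ref in a throwaway local repository (on a real server Stash maintains these refs itself; the paths here are made up):

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the Stash repository: one commit, plus a simulated ref
# for pull request #1 pointing at the proposed change.
git init -q -b master "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "proposed change"
git -C "$tmp/upstream" update-ref refs/pull-requests/1/from master

# Fetch with the refspec from the job configuration; every open pull
# request becomes a remote-tracking ref the job can build.
git init -q "$tmp/work"
cd "$tmp/work"
git remote add origin "$tmp/upstream"
git fetch -q origin "+refs/pull-requests/*:refs/remotes/origin/pull-requests/*"
git for-each-ref refs/remotes/origin/pull-requests
```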

Triggered Builds

This will work, but if you want builds to start immediately when changes are pushed, you need to configure a webhook in Stash. In the repository settings, add the 'Stash Post-Receive Webhook to Jenkins'. Configure it with the following settings:

  • Jenkins URL:
  • Git Repo URL: ssh://

The plugin tells your Jenkins server to poll immediately, and a build will start if there are relevant changes to build. You could just trigger a build with a direct webhook to the job, but that would always trigger a build, regardless of whether the changes are for the branch(es) you are building.

How It Works

Using this setup, your master build publishes artifacts with approved changes to your repository. Your team integrates changes by merging submitted pull requests. When pull requests are submitted, Jenkins detects them and updates the pull request with the build status. This allows your reviewers to review the code and see, in advance, whether the pull request builds.