
Implied metrics

Posted on September 14, 2011

In my last post, I talked about the kinds of metrics that I don’t like. But sometimes even good metrics can go wrong.

When I worked for a previous employer, we used “load sheets” on our Agile card wall. These were pieces of paper divided into 10 blocks (one for each day of a two-week iteration), and we would cut out cards for features sized to span as many blocks as the estimate (so a card with a 3-day estimate took up 3 blocks). We used different colored dot stickers to indicate when we had discussed the requirements, discussed the design, completed the development, and finished QA. Each developer on the team had one of these sheets on the wall, so you could easily see who was working on what and where things stood.

Load Sheets

I didn’t come up with the system, but I’m guessing that they designed it for at least two reasons (probably more than that):

  • Anyone can come and see what we’re working on and what the status is of a particular feature
  • We would meet with the project sponsor every other week and they would pick out features for the next two weeks, and the differently-sized cards gave a good visual representation of the cost associated with each feature

This system did an excellent job of both of those things. The problem is that the development team interpreted it in a completely different way.

When the developers looked at the sheets, these are some of the things that they thought:

  • The most important thing is for me to get my features to QA in the estimated time
  • Completing my tasks in the estimated time is important (more important than things like writing tests or helping other developers, because there are no metrics for those)
  • Completing my tasks this iteration is more important than helping others complete their tasks (even if helping someone else would be better for the team). This indirectly discouraged pairing on tasks.

As a result, the following things actually happened (I’m not making this stuff up):

  • One developer segregated himself from the team and focused on completing as many features as he possibly could. He did a good job of staying focused and he did get a lot done, but there was little collaboration or design discussion and virtually no unit tests. He used his productivity metric to ask for a raise (because he was completing twice as much work).
  • Developers would work on features, and as soon as the estimated time passed and they weren’t done, they got openly flustered and complained about the estimates being wrong (which was correct), and focused on getting it done instead of getting it done right (which led to technical debt).
  • Developers would get the work most of the way done and then say it was done, saying, “It’s far enough along that QA can take a look at it and let me know what’s wrong.” (That way they could move on to the next task.)

Whoa, what happened?!? I’m sure this wasn’t what the people who came up with this system intended. But that’s the problem: while the system provided a lot of positive benefits, its implied metrics also indirectly encouraged individualistic behavior and corner-cutting, and discouraged collaboration.

I was leading the team, and I wasn’t going into TFS and spitting out reports to compare developers, and I wasn’t tracking who completed tasks within the estimated time, because none of that mattered to me. But it didn’t matter what I was or wasn’t doing; the data was being tracked, so developers assumed that something was (or could be) done with it. (I’m not using this example to criticize the people who came up with the system; no system is perfect, and many very successful projects were completed using this approach.)

What could we have done differently? We could have figured out what message we wanted our card wall to convey to the developers and structured it that way. For example:

  • I care much more about how much the team accomplishes in an iteration than how much each individual developer gets done, so don’t organize the cards by team member (if you want to write developers’ names on cards, that’s fine; just don’t organize them in a way that makes it easy to compare developers).
  • I care much more about a feature being done and ready to be deployed than I care about when a developer hands it over to QA, so instead of measuring how long it takes developers to complete their first pass at development, I’ll measure how long it takes from the time we start working on a feature to the time that it’s ready to be deployed. We need to break down the traditional silos of development and have BAs, QAs, and developers work together to achieve one goal.
  • I value communication and collaboration, so when I’m talking to the team, I’ll emphasize working together, being willing to help each other out, pairing, and things like that. I don’t want developers to think that they should be smart enough to figure something out when they’re stuck, I want them to ask for help and I want people to solve each other’s problems.
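To make the second point concrete, measuring from the start of work to deploy-ready can be as simple as the sketch below. The feature names and dates here are hypothetical, and the point is that the report stays at the team level, with no per-developer breakdown to compare against:

```python
from datetime import date

# Hypothetical feature records: when work started and when the feature
# was truly done (developed, tested, and ready to deploy).
features = [
    {"name": "Login page",  "started": date(2011, 9, 1), "deploy_ready": date(2011, 9, 6)},
    {"name": "CSV export",  "started": date(2011, 9, 2), "deploy_ready": date(2011, 9, 9)},
    {"name": "Audit trail", "started": date(2011, 9, 5), "deploy_ready": date(2011, 9, 12)},
]

# Cycle time per feature: start of work -> ready to deploy.
cycle_times = [(f["deploy_ready"] - f["started"]).days for f in features]

avg_cycle_time = sum(cycle_times) / len(cycle_times)
print(f"Features completed this iteration: {len(features)}")
print(f"Average cycle time: {avg_cycle_time:.1f} days")
```

The only numbers that come out are team throughput and team cycle time, which is exactly the message the card wall should be sending.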


  1. Great insight. Thanks for sharing

    Justin Kohnen — September 14, 2011 @ 11:49 am

  2. What a great post. I worked quite a bit on projects using this system, but some of your insights never occurred to me. I agree with the implied metrics and can vouch for the results you mention. Also, something not mentioned is that each developer’s card had a different number of open slots depending on the skill or familiarity of the developer, which also caused issues such as comparisons of worth.

    While I think this methodology had its place and was great for the time, it was just a stepping stone on the way to learning even better ways to interact.

    Matt Casto — September 15, 2011 @ 7:53 am

  3. We use load sheets in our company, and we also used to mark completion with dots, but we don’t use dots anymore. Who was your previous employer? Was it Quick Solutions?

    Manisha — September 19, 2011 @ 9:43 pm

