Implied metrics
In my last post, I talked about the kinds of metrics that I don’t like. But sometimes even good metrics can go wrong.
At a previous employer, we used “load sheets” on our Agile card wall. These were pieces of paper divided into 10 blocks (10 days in a 2-week iteration), and we would cut out cards for the features, sized to take up as many blocks as the estimate (so a card with a 3-day estimate would take up 3 blocks). We used different colored dot stickers to indicate when we had discussed the requirements, discussed the design, completed the development, and completed QA. Each developer on the team had one of these sheets on the wall, so you could easily see who was working on what and where things stood.
I didn’t come up with the system, but I’m guessing it was designed for at least two reasons:
- Anyone can come and see what we’re working on and the status of any particular feature
- We met with the project sponsor every other week to pick out features for the next two weeks, and the differently sized cards gave a good visual representation of the cost of each feature
This system did an excellent job of both of those things. The problem is that the development team interpreted it in a completely different way.
When the developers looked at the sheets, these are some of the things that they thought:
- The most important thing is for me to get my features to QA in the estimated time
- Completing my tasks in the estimated time is important (more important than things like writing tests or helping other developers, because there are no metrics for those)
- Completing my tasks this iteration is more important than helping others complete their tasks (even if helping someone else would be better for the team). This indirectly discouraged pairing on tasks.
As a result, the following things actually happened (I’m not making this stuff up):
- One developer segregated himself from the team and focused on completing as many features as he possibly could. He did a good job of staying focused and he did get a lot done, but there was little collaboration or design discussion and virtually no unit tests. He used his productivity metric to ask for a raise (because he was completing twice as much work).
- Developers would work on features, and as soon as the estimated time passed and they weren’t done, they got openly flustered, complained that the estimates were wrong (which was true), and focused on getting it done instead of getting it done right (which led to technical debt).
- Developers would get the work most of the way done and then declare it done, saying, “It’s far enough along that QA can take a look at it and let me know what’s wrong.” (That way they could move on to the next task.)
Whoa, what happened?!? I’m sure this wasn’t what the people who came up with this system intended. But that’s the problem: while the system provided a lot of positive benefits, its implied metrics also indirectly encouraged individualistic behavior and corner-cutting and discouraged collaboration. I was leading the team, and I wasn’t going into TFS and spitting out reports to compare developers, and I wasn’t tracking who completed tasks within the estimated time, because none of that mattered to me. But it didn’t matter what I was or wasn’t doing; the data was being tracked, so developers assumed that something was being done with it (or could be). (I’m not using this example to criticize the people who came up with the system. No system is perfect, and many very successful projects were completed using this approach.)
What could we have done differently? We could have figured out what message we wanted our card wall to convey to the developers and structured it accordingly. For example:
- I care much more about how much the team accomplishes in an iteration than about how much each individual developer gets done, so don’t organize the cards by team member (if you want to write developers’ names on cards, that’s fine; just don’t organize them in a way that makes it easy to compare developers).
- I care much more about a feature being done and ready to deploy than about when a developer hands it over to QA, so instead of measuring how long it takes developers to complete their first pass at development, I’ll measure how long it takes from the time we start working on a feature to the time it’s ready to deploy (there’s a rough sketch of this measurement after this list). We need to break down the traditional silos of development and have BAs, QAs, and developers work together toward one goal.
- I value communication and collaboration, so when I’m talking to the team, I’ll emphasize working together, being willing to help each other out, pairing, and things like that. I don’t want developers to think they should be smart enough to figure everything out on their own when they’re stuck; I want them to ask for help, and I want people to solve each other’s problems.
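To make that cycle-time idea concrete, here’s a minimal sketch of the kind of measurement I mean, written in Python with made-up feature records (the field names and dates are hypothetical, not from any real tracking system). The point is that the number it produces describes the team’s flow, not any individual developer:

```python
from datetime import date

# Hypothetical feature records (names and dates are made up for this sketch):
# when work started on a feature and when it was ready to deploy (QA complete),
# regardless of which developers touched it along the way.
features = [
    {"name": "export report", "started": date(2011, 3, 7), "deploy_ready": date(2011, 3, 11)},
    {"name": "login audit",   "started": date(2011, 3, 7), "deploy_ready": date(2011, 3, 16)},
    {"name": "bulk upload",   "started": date(2011, 3, 9), "deploy_ready": date(2011, 3, 18)},
]

# Cycle time per feature: start to deploy-ready, not start to "dev done."
cycle_times = [(f["deploy_ready"] - f["started"]).days for f in features]

# Report a single team-level number; nothing here compares one developer to another.
print("Average cycle time: %.1f days" % (sum(cycle_times) / len(cycle_times)))
```

The design choice that matters is what isn’t measured: there’s no per-developer field to sort or rank by, so the implied metric is about the team’s flow rather than any one person’s output.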
Great insight. Thanks for sharing
What a great post. I worked quite a bit on projects using this system, but some of your insights never occurred to me. I agree about the implied metrics and can vouch for the results you mention. Also, something not mentioned is that each developer’s sheet had a different number of open slots depending on the developer’s skill or familiarity, which also caused issues such as comparisons of worth.
While I think this methodology had its place and was great for its time, it was just a stepping stone on the way to learning even better ways to interact.
We use load sheets at our company, and we also used to mark completion with dots, but we don’t use dots anymore. Who was your previous employer? Was it Quick Solutions?