This post is part of a series of posts about iteration management. If you want to start from the beginning, go here.
In order to get really good at planning, you need to get really good at data analysis. You need to collect all the data that you can and use it to test your assumptions and accurately plan future work based on past results. The key word in that sentence is accurately – if you come up with a plan that isn’t accurate, it’s not going to be much better than no plan at all.
Here are some ways that data analysis has bailed me out in the past:
- I was working for a consulting company and we were doing a project with two phases. We started on the first phase, and when that wrapped up, we were going to bid on the second phase. We had estimated the work for both phases, but the project manager wanted to know how accurate the estimates were, so he had all of the developers track and record the actual amount of time they spent on each work item. We found that we were underestimating everything by 20%. If we hadn’t realized that we were underestimating, we would’ve underbid the second phase of the project by 20%. If you do the math, that comes out to a lot of money we could’ve lost and a lot of extra hours worked.
- On a recent project, we thought we had enough time to get the work done by the deadline. But when we analyzed how long the remaining work would take, combined with the amount of scope increase we expected based on past releases, we were able to predict, 5 weeks before the release date, that we weren’t going to make it with the current scope and team (and the project was only a 2 month project).
- On the same project, we were actually ahead of schedule at one point, but we could see that the project had hit very few road bumps compared to past iterations, so we were able to explain that while we were ahead of schedule, we still felt the project was at risk.
- When planning for an iteration, I was able to see exactly how much time I had been spending on analysis vs. development, so that I didn’t over-commit to development work when I was really spending more of my time doing analysis.
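The underestimation example above boils down to simple arithmetic: measure the ratio of actual time to estimated time on past work, then apply it as a correction factor to the next estimate. Here’s a minimal sketch in Python, using made-up numbers (none of these figures come from the original project):

```python
# Hypothetical phase-one work items as (estimated_hours, actual_hours) pairs.
phase_one = [(8, 10), (16, 20), (4, 5), (12, 15)]

estimated = sum(e for e, _ in phase_one)  # 40 hours
actual = sum(a for _, a in phase_one)     # 50 hours

# Correction factor: how much longer work actually took than estimated.
correction = actual / estimated  # 1.25

# Apply the factor to the raw estimate before bidding the next phase.
phase_two_raw_estimate = 400  # hours, hypothetical
adjusted_bid = phase_two_raw_estimate * correction
print(adjusted_bid)  # 500.0
```

Without the correction, the bid here would have come in 100 hours short, which is exactly the kind of money and overtime that tracking exercise saved.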
Let me give you an example. Have you ever played the game where you have a jar of jellybeans and you have to guess the number of jellybeans in the jar? Pretty hard to do, right? (I’ve never won that game.) What if someone gave you several other jars of jellybeans of different sizes and told you how many jellybeans were in those jars? It would get a lot easier to estimate the number of jellybeans in the first jar, wouldn’t it? What made it easier? You based your estimate on actual data from similar situations.
We like to say that estimates are always wrong. Why are estimates wrong? Because there are too many variables that we don’t know enough about in order to make a correct prediction of what will happen. What if we could reduce those variables, either by controlling the environment or learning more about an activity using past data? As you bring more things under your control, you can become much more accurate with your estimates and your delivery cadence can become much more predictable.
Also, most teams estimate development work, but teams don’t always estimate analysis and testing work. Why not? Are those disciplines any less important than development? (The answer’s no.) Your BAs and testers should give estimates on tickets just like developers do, and those estimates should be used in the planning process.
Let’s look at some of the variables that we can control.
In order to be able to use past data to help predict the future, you need the team to remain the same for a period of time so that you can begin to see the trend. This means that you try to limit changes to the makeup of the team, and you should not have team members that are only partially dedicated to working on the team. If these things are in flux, it’s going to be a lot harder to use past data to predict future performance.
I turned this point into its own post.
Any planning is easier to do when you have fewer people to plan for. It shouldn’t be any surprise that the most accurate planning I’ve done is for teams of 1 or 2 people. If you reduce the number of variables (and people are always variable), you can be remarkably more accurate.
Many of you probably have a team that is larger than just two people, but you can still divide a large team into smaller sub-teams. Remember, every time you add a person to a team, you make it a little less efficient. You add one more variable to the system, one more person that needs to be kept in the loop.
Expecting the unexpected
What’s harder than planning for the work that you know about? Planning for the work that you don’t know about. This could be unexpected production support issues, scope increase, or bugs that come up during the iteration that increase the amount of work needed to complete a feature. However it happens, it happens all the time, and it often happens in relatively predictable amounts. Occasionally you’ll have that really bad week where everything comes to a halt because of serious production issues. But over time, you’ll begin to see a trend of how much of this kind of stuff happens, and you can plan for it.
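One way to plan for unplanned work (a sketch, not a prescription, and the numbers are invented) is to track how many hours of it showed up in each past iteration and reserve a buffer based on that history. Using the median keeps one really bad week from distorting the buffer:

```python
from statistics import mean, median

# Hours of unplanned work (production issues, scope creep, surprise bugs)
# observed in past iterations -- hypothetical numbers for illustration.
unplanned_history = [12, 8, 15, 40, 10, 9]  # the 40 was a really bad week

buffer_hours = median(unplanned_history)  # 11.0 -- robust to the outlier
average_hours = mean(unplanned_history)   # ~15.7 -- skewed by the bad week

# Commit to planned work only up to (iteration capacity - buffer_hours).
print(buffer_hours)
```

Whether the median or the mean is the right buffer depends on how much the bad weeks hurt you; the point is that the number comes from data rather than a guess.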
The more data you track, the more you will learn. This may seem tedious to your team (for whatever reason, there’s nothing developers hate more than tracking their time), but it’s really important. There’s actually a lot in it for them. For example, if a developer thinks that they spend 80% of their time doing development each iteration but they actually spend 60%, they’re always going to be over-committing. Maybe they consistently underestimate their features (this happens a lot). If they’re being held accountable for getting work done, then they should want to estimate accurately.
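The 80%-versus-60% example works out like this (a sketch with assumed numbers; your iteration length will differ):

```python
# Hypothetical two-week iteration of 80 working hours.
iteration_hours = 80
assumed_dev_fraction = 0.80   # what the developer believes
tracked_dev_fraction = 0.60   # what time tracking actually shows

assumed_capacity = iteration_hours * assumed_dev_fraction  # 64 hours
actual_capacity = iteration_hours * tracked_dev_fraction   # 48 hours

# Every iteration, about 16 hours of committed work won't actually get done.
overcommitment = assumed_capacity - actual_capacity
print(overcommitment)
```

That gap compounds iteration after iteration, which is why a developer who is held accountable for their commitments benefits directly from tracking their own time.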
Knowledge is power, and data analysis can give you the knowledge that you need to become a better planner and estimator. Next time we’ll talk about what kinds of things to track and what kinds of things to not track.
Read the next post in this series, Consistency.