What metrics do you need?

Recently I faced a significant issue on my current project. I’m taking part in a very interesting experiment in France: Antoine Contal (not everyone is called Antoine in France) proposed that a few agilists work through a problem-solving A3 on their own project. By doing this, we hope to present more objective feedback about commonly faced issues on agile projects, and about ways to solve them. So I started mine…

… and realized I had almost no data to objectively understand the issues on my project.

Agile focuses on producing value. It proposes to solve issues iteratively, through reviews for the backlog and retrospectives for the process, and it fosters minimal process. Since I fully agree with this approach, we decided to record as few metrics as possible, and to use simple, concrete tools, i.e. paper and conversation wherever possible. That has two direct consequences:

  • Decisions taken in retrospectives are based on subjective concerns. We have no objective records to help the team arbitrate between issues and solutions.
  • When problems need more analysis than a quick talk, we have very little information to understand them.

I agree that the problems that matter are generally the ones we think about. And I agree that we generally find relevant solutions to these problems by throwing ideas at the wall and sorting them collectively. People and teams are intelligent and positive. But first, this retrospective approach is not always enough; and second, by doing this, we miss the big picture. As Dr. House would say, symptoms considered separately might hide a different cause when taken as a whole.

Though they are both relevant, the retrospective and A3 approaches tend to be contradictory. But they are not incompatible: someone, like the scrum master, can still think more deeply about problems and present his conclusions during retrospectives.

The thing is, to analyze a situation, we need records. The first thing we do to understand a problem in an A3 is to “grasp the situation”: extract anomalies from the project history, and sort them to identify patterns. Later on, we analyze these patterns. Extracting anomalies is far more complicated than it sounds, and on agile projects we generally discover first that we don’t have the right data. One option is to start recording data and wait until there is enough of it to understand the situation, but that wastes a lot of time.

On the other hand, we don’t want to waste our time recording all kinds of data throughout the day, because we still need to produce value, and fight frustration.

So we need the right trade-off: record the right metrics, ideally implicitly, by saving what already matters, while still having the right data when you need it later… And that’s kind of where I’m stuck. I’m quite sure there is no solution that fits every project, as usual in real life. Anyway, here are a few metrics I would suggest recording on every project.

Release Burnup. This lets you see the evolution of your production and of your backlog on the same chart, which is why I suggest a burnup rather than a burndown. Following the evolution of the backlog is one of the things I miss the most today.
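To make that concrete, here is a minimal sketch of a release burnup in Python with matplotlib. All the numbers are invented for illustration. The gap between the two lines is the remaining work, and the slope of the scope line shows the backlog evolution a burndown would hide.

```python
# Release burnup: total scope and cumulative production on the same chart.
# Iteration numbers and story points below are made up for illustration.
import matplotlib.pyplot as plt

iterations = [1, 2, 3, 4, 5, 6]
total_scope = [80, 85, 85, 92, 95, 95]  # backlog size in story points (it moves!)
completed = [10, 22, 30, 41, 55, 68]    # cumulative story points delivered

plt.plot(iterations, total_scope, marker="o", label="Total scope")
plt.plot(iterations, completed, marker="o", label="Completed (cumulative)")
plt.xlabel("Iteration")
plt.ylabel("Story points")
plt.title("Release burnup")
plt.legend()
plt.show()
```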

Velocity Trend. If your estimates are consistent, this is the basic information you need to track your project’s efficiency… even if you only track efficiency against raw production rather than value, but that’s a topic for another post. On a release burnup, you don’t clearly see the differences between what you produced in each iteration. The velocity trend is based on the same data as the release burnup, so why not display it as well…
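Since it derives from the same cumulative series as the burnup, velocity is just the difference between consecutive points. A small sketch, reusing the invented numbers from above:

```python
# Velocity per iteration, derived from the cumulative "completed" series
# of the burnup sketch above (numbers are made up).
completed = [10, 22, 30, 41, 55, 68]

velocity = [completed[0]]
for prev, cur in zip(completed, completed[1:]):
    velocity.append(cur - prev)
print(velocity)  # [10, 12, 8, 11, 14, 13]

# A short moving average smooths the trend without hiding the variation.
window = 3
trend = [sum(velocity[max(0, i + 1 - window):i + 1])
         / len(velocity[max(0, i + 1 - window):i + 1])
         for i in range(len(velocity))]
print([round(t, 1) for t in trend])  # [10.0, 11.0, 10.0, 10.3, 11.0, 12.7]
```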

Test coverage. If you have a functional test plan, record when you run test campaigns, which tests you run, under which conditions, and so on. Whether the tests are manual or automated doesn’t matter. You need to know your capacity to verify what matters to your customer, and how much to trust both the completeness of the features you committed to deliver and non-regression.
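As a hedged example of what recording a campaign could look like, here is a sketch of a one-row-per-campaign CSV log. The field names and file path are my assumptions, not any standard:

```python
# Append one row per test campaign, manual or automated alike, so you can
# later ask "when did we last exercise this area, and how did it go?".
import csv
from datetime import date

LOG_FILE = "test_campaigns.csv"  # hypothetical location

def record_campaign(area, tests_run, tests_passed, conditions):
    """Append a single campaign record to the log."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), area, tests_run, tests_passed, conditions]
        )

record_campaign("invoicing", tests_run=42, tests_passed=40,
                conditions="staging, manual, release candidate 2.3")
```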

Any time-tracking data that is handy for you. Iteration burndowns are useful during the iteration, but tell you very little afterwards. Tracking the number of ideal hours you can produce, however, can be a very useful tool: not only to check your raw productivity, but also to see what you do and don’t track. Be very careful, though, that team members don’t feel like you’re spying on them, or turning this into a competition. Don’t blame John because Ted did better, or because his figures decreased. Figures are very easy to game, and that’s not the goal. You don’t want the team to sprint through stories while piling up technical debt, or to skim over functional coverage. And of course, tracking those figures must be easy, if not transparent. If you have to remind everyone every day to log in to that thing to track their time, there’s probably an issue.
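One way to keep this from feeling like surveillance is to aggregate before anything is displayed: show a team-level trend, never a per-person one. A sketch, with invented entries:

```python
# Aggregate ideal hours at the team level per iteration; individual names
# are dropped on purpose, so the output is a trend, not a leaderboard.
# The entries below are invented for illustration.
from collections import defaultdict

raw_entries = [  # (iteration, member, ideal_hours), as collected day by day
    (1, "dev-a", 4), (1, "dev-b", 5), (1, "dev-a", 3),
    (2, "dev-a", 5), (2, "dev-b", 6), (2, "dev-b", 4),
]

team_hours = defaultdict(int)
for iteration, _member, hours in raw_entries:
    team_hours[iteration] += hours

print(dict(team_hours))  # {1: 12, 2: 15}
```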

All anomalies. As you can see, I’m really stuck. Beyond the metrics above, what you need really depends on the process you adopted. For example, defects that you manage on the fly are not necessarily recorded, though they might already be. Carrying a story over to the next iteration is not necessarily a problem if it almost never happens, or if it happens on a very limited and regular basis. Tracking the evolution of story states during the iteration is hardly a problem if you limit work in progress. And so on…

So what I suggest is that you save, somewhere, in whatever format, anything that is not normal (some call these red cards): defects, stories not finished at the end of the iteration, re-estimates, technical debt, frustration… It might be tedious, but if the data are not stored too formally, it won’t take much time to manage (for example, people can stick red sticky notes on the board during stand-ups, which the scrum master then records and moves somewhere else). When you need these data, you can sort them, and spot unexpected patterns sooner. When things settle down, you will probably identify which metrics really matter for your team. And this will help the team raise more relevant issues during the retrospective.
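Here is a minimal sketch of what such a red-card log could look like once transcribed, together with the first sorting pass. The categories and entries are invented; use whatever labels fit your team:

```python
# Free-form anomaly records: a date, a rough category, a short note.
# Sorting by category is the first "grasp the situation" pass of an A3.
from collections import Counter

red_cards = [
    ("2011-03-02", "defect", "regression in PDF export"),
    ("2011-03-04", "unfinished story", "story 112 carried over"),
    ("2011-03-04", "re-estimate", "story 115 doubled after design talk"),
    ("2011-03-09", "defect", "flaky login scenario"),
    ("2011-03-10", "frustration", "build took 40 minutes again"),
]

counts = Counter(kind for _date, kind, _note in red_cards)
print(counts.most_common())
# [('defect', 2), ('unfinished story', 1), ('re-estimate', 1), ('frustration', 1)]
```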

And this is where I’d like to have a few people following this blog, to talk about this topic and share experiences.

3 comments

  1. Johan Martinsson

    I’m not so sure that tracking all of this will solve your problem, namely having the right metrics to analyze the situation once a problem comes up.

    However, I find it interesting to hear which metrics people use in which situations, so thank you for sharing your experience.

    For my part, I like to measure:
    – bugs, to incite us to focus on quality
    – features misunderstood (for instance, rejected at the demo), because that measures how well we work with other stakeholders
    – in addition to test coverage, test execution SPEED, which I find extremely important.

    • Antoine Alberti

      Thank you Johan. Very interesting comment. I totally agree with test execution speed and bugs.

      And feature misunderstanding is the perfect example to illustrate my point. On this project, we never felt that functional misunderstanding was a problem, so we never recorded any data about it. We did have some stories rejected during reviews, but mostly due to bugs, regressions, poor testing, or partial implementations (caused by poor communication from the developers, not by functional misunderstandings).

      But maybe our feelings were wrong, and there was a deeper cause behind these issues, related to misunderstanding. With the right metric, we might have seen such a cause. For example, we are very liberal regarding the preparation of stories.

      Regarding this point, what kind of metric do you have?

      Oh yeah, and of course, rejected stories would be part of the anomalies I mentioned.

      And like you, I’m very eager to hear other experiences, so that we can hopefully come to some kind of conclusion.

  2. Pingback: Agile metrics – what to measure | agileoffice
