Run experiments

In the previous post, we saw why we couldn’t apply user stories or spikes to a big technical epic we’re working on: developing an efficient engine, alongside the default one, for a subset of what the default one supports. With user stories, we have no control over what we’re doing, and we can’t commit to what we’ll deliver. Ultimately, they don’t let us show progress, because they don’t correspond to what we actually do.

The key here is that we are progressing. We know it. We were able to implement a basic engine and validate that the performance improvement is what we expected from it. We have an overall idea of the big chunks of work needed to make it fully functional. We just can’t express it in by-the-book agile terms. But there is a solution: since experimenting is what we actually do, let’s make iterative experiments real.

I’d like to introduce a new type of backlog item: an experiment iteration. It’s a kind of spike slice, or lean startup applied to technical work (a rough sketch of such an item follows the list):

  • Define an objective that you may or may not achieve.
  • Timebox it. This timebox is the main acceptance criterion. The backlog item is done when the timebox is over, period.
  • Just do it until the timebox is over.
  • Validate non-regression.
  • When it’s over, the key is to review what you learned. This is what you do in the validation/test/acceptance column. What you learned might be:
    • Ideally, a backlog.
    • Worst case, give up and roll back. You must always keep this option open.
    • We added this and that tool or feature and it’s ok.
    • We should add this to the non-regression campaign.
    • We couldn’t support that case.
    • This or that piece of code must be re-designed.
    • That part will take time.
    • We need to clarify what we actually want regarding this.
    • We should check whether this works as expected.
  • Decide where to go next, based on what you learned.
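
To make the shape of such an item more concrete, here is a minimal sketch in Python of what an experiment iteration could record. It is purely illustrative: the class, the field names, and the timebox check are mine, not part of any tool or of the project discussed here.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class ExperimentIteration:
        """One timeboxed experiment: an objective we may or may not reach."""
        objective: str
        timebox: timedelta                  # the main acceptance criterion
        started_on: date
        learnings: list[str] = field(default_factory=list)  # filled in the validation column
        next_step: str = ""                 # decided once the timebox is over

        def is_over(self, today: date) -> bool:
            # The item is done when the timebox is over, whatever was achieved.
            return today >= self.started_on + self.timebox

    # Illustrative usage: a one-week experiment on the new engine.
    item = ExperimentIteration(
        objective="Route read-only queries through the experimental engine",
        timebox=timedelta(days=5),
        started_on=date.today(),
    )
    item.learnings.append("We couldn't support aggregate queries yet.")
    item.next_step = "Write backlog items for aggregate query support."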

Note that long-lived code branches or conditional compilation are not valid ways to avoid the risk of regressions. They only delay the risk, and thus increase its consequences. All experiments should be implemented on the trunk, behind feature toggles if necessary.
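
As an aside, here is a minimal sketch, in Python for illustration, of what “on the trunk behind a toggle” can look like: an experimental engine that only handles part of the work, with the default engine always one switch away. The names (USE_EXPERIMENTAL_ENGINE, DefaultEngine, ExperimentalEngine) are hypothetical, not taken from the actual project.

    import os

    class DefaultEngine:
        def run(self, query: str) -> str:
            return f"default engine handled: {query}"

    class ExperimentalEngine:
        # Only supports the subset the experiment has reached so far.
        def run(self, query: str) -> str:
            if query.startswith("read"):
                return f"experimental engine handled: {query}"
            raise NotImplementedError("not supported by the experiment yet")

    def pick_engine():
        # The toggle lives on the trunk; switching it off restores the default
        # behaviour immediately, which keeps the "give up and roll back" option open.
        if os.environ.get("USE_EXPERIMENTAL_ENGINE") == "1":
            return ExperimentalEngine()
        return DefaultEngine()

    engine = pick_engine()
    print(engine.run("read user 42"))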

The main difference with a spike or a user story is that we focus on learning, and this is transparent to everyone. You won’t be expected to fully implement the objective, because that is not the priority. We also make discoveries about the product and the code base more visible, because there is no limit to what you might declare as learned. It might also save you some time during retrospectives or andon meetings, because you have already reflected on many technical topics.

Iterative experiments should be run when large chunks of software have to be implemented without knowing, in detail, where to go. The work isn’t too big to fail (which really means too big to succeed); it’s too big to know all the details in advance. Experiments should eventually lead to more deterministic work, such as an actual backlog.

What do you think? Have you ever focused on what you learned? Do you have some feedback about it?
