Known or unknown unknowns we know we don’t know or not

In this video, Adrian Howard talks about replacing academic user stories with hypotheses to validate. In short, he proposes asking more questions about what you need to do. You even need to question why, as this role, Bob wants that feature so that blah blah, or whether he really wants or needs it at all. This is a great proposal. He justifies it by the fact that we need to focus more on what we know we don't know: the known unknowns. We tend to treat user stories as a specification. He wants us to ask more questions, even about what the user story asks the team to implement.

The hypothetical user story is not in contradiction with my previous post about iterative experiments. I actually thought about this possibility, without going as deep as Adrian Howard did. But I thought it wasn't enough for our case, where we don't know what we don't know. Anyway, thinking about it, and putting it in parallel with known/unknown knowns/unknowns, helped me understand something about what we do. When we tackle a topic, we go through several phases:

We explore. In this phase, we need to wrap our brains around why we need to do something, what we want to do, and how we should do it. We mainly don't know, not even what we don't know. We need to experiment and learn, about anything: the scope, the user, the stakeholders, the technology, the constraints… As we don't know what we don't know, we can't have a precise goal to reach. As we can't rely on the result, we timebox. As we want to learn, we focus on what we learnt at the end, and reassess where to go from there.

We validate hypotheses. In this phase, we know that we don't know stuff. We need to clarify what we know we don't know. The best way we know to validate a hypothesis is the lean startup way: as cheaply and scientifically as possible.
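To make the "scientific" part of that concrete, here is a minimal sketch of one common lean-startup check: comparing conversion rates between a control group and a variant with a two-proportion z-test. This is my own illustration, not something from the talk; the function name and the A/B-test framing are assumptions, and a real experiment would also need an upfront sample-size calculation.

```python
import math

def validate_hypothesis(control_conv, control_n, variant_conv, variant_n, alpha=0.05):
    """Two-proportion z-test: is the variant's conversion rate
    significantly different from the control's? (Illustrative sketch.)"""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    # Pooled proportion under the null hypothesis (no real difference)
    p = (control_conv + variant_conv) / (control_n + variant_n)
    se = math.sqrt(p * (1 - p) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return {"z": z, "p_value": p_value, "validated": p_value < alpha}

# Hypothetical numbers: 120/1000 conversions on the old flow,
# 160/1000 on the new one we hypothesised would do better.
result = validate_hypothesis(120, 1000, 160, 1000)
```

The point is less the statistics than the posture: state the expected outcome before the experiment, run the cheapest version that produces data, and let the numbers, not the backlog, decide whether the hypothesis survives.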

We know what to do, how to do it, and why. Yes, this might also occur. In this case, a user story works best when there's a user with a story. Acceptance tests are also great. A specification might also do the trick.
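As an illustration of what "acceptance tests" can look like for such a story, here is a sketch in plain Python. The story, the `Cart` class, and its behaviour are all hypothetical, invented just for this example; in practice the tests would drive a real domain object.

```python
# Hypothetical story: "As a returning customer, I want a 10% loyalty
# discount so that I'm rewarded for coming back."

class Cart:
    """Minimal stand-in for the real domain object."""

    def __init__(self, customer_is_returning):
        self.customer_is_returning = customer_is_returning
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        subtotal = sum(self.items)
        # The behaviour the story asks for: 10% off for returning customers
        if self.customer_is_returning:
            return round(subtotal * 0.9, 2)
        return subtotal

def test_returning_customer_gets_loyalty_discount():
    cart = Cart(customer_is_returning=True)
    cart.add(50.0)
    cart.add(50.0)
    assert cart.total() == 90.0

def test_new_customer_pays_full_price():
    cart = Cart(customer_is_returning=False)
    cart.add(100.0)
    assert cart.total() == 100.0
```

When the story is this well understood, the acceptance tests double as the specification: they pin down the expected behaviour before anyone writes the implementation.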

Note that these phases are not sequential. We may start with a hypothesis to validate, or with technical backlog items to install a CI server. Nor do these phases describe the lifecycle of a project. A project is a sequence of releases, epics, and topics, and each of these requires exploration, hypotheses, and more linear implementation phases. Finally, you may need to experiment with what to do while being confident about the technology. In other words, the why, the how, and the what may be tackled in different ways. As a consequence, these phases will be interleaved in a project, from the beginning to the end (I've never seen an end, but I hear it happens).

Now that we know we need these three types of backlog items, we need to know when to apply each one. The problem is that we can't know when we don't know enough: one of these categories is about UNKNOWN unknowns. You could always pick the iterative experiment, but it's not the most efficient option, and not the most predictable at all in terms of planning. So you need to pick the right tool when you start a backlog item. And this is where I will leave you to your gut. I don't have a formula for choosing what a backlog item will mainly be about. After all, if you make the wrong choice, you'll learn from your mistake. That's the beauty of doing small iterative things.

So what do you think? What do you know I don’t know I don’t know, or don’t know I know? Please share your experience.

