
Not a decision

This article is a translation of the original French post on Arolla’s blog.

Risky Business Insurance, IT silo, Friday 9pm, meeting room Bob Ross

— Pluto Krath: We won’t come back to this, we already decided we wouldn’t use a database.
— Nadia: We had no idea we would need to navigate data that doesn’t fit in memory. If we don’t have a database, we will need to re-implement everything that does that from the disk: indexing, relational queries, execution plans, serialization… If we hadn’t told you we were using a database, you wouldn’t even have noticed.
— Pluto Krath: No, we decided we would only use the memory and the disk, and that’s what we’re gonna do. Implement everything you need. And don’t forget I need the minutes for this meeting by Monday morning.

Pluto Krath’s office, 9:45pm

In the last illuminated office, Pluto was sending his most urgent emails before the weekend. He suddenly realized David Hasselhoff was sitting in front of him. He looked at his Sharknado: The 4th Awakens poster, missing one character, then back at David.

— Shut your mouth, blink.
— But how… 
— Don’t change the subject. So you just shot Nadia down. Are you proud of yourself, at least?
— It’s not that simple. We needed to make a decision, and that’s what I did.
— Oh, the dirty word. A decision. Easy with the grown-up vocabulary. What options did you have?
— What do you mean? Oh, well, use a database or re-implement everything from scratch.
— That’s it? Weren’t you the one plastering the walls and quoting Virginia Satir’s rule of three: to have one choice is no choice, to have two choices is a dilemma, and to have three choices offers new possibilities? Where’s the third choice? I can easily think of a dozen.
— OK, I can find more. But what for? We can’t use anything we wouldn’t build ourselves.
— You’re right, let’s tackle your assumptions. What makes you think you should build everything by yourself?
— Because we can’t afford a license for an Oracle database. If we go there, I’ll have to deal with Mickey and his team of DBAs, and he’s really out to get me.
— Well, that’s quite something. Unverified assumptions, not even shared with your team. Starting with your first one, once again: that you only have two options. Aren’t there free databases? Is the relational model even the best option for your context? Can’t you find indexing libraries that would keep you from reinventing some square wheel? Would those solutions force you to deal with Mickey?
— I got it, there were other options, and we didn’t explore all of them. Whatever. We don’t have the time, the decision is made, we’re moving forward.
— Oh yeah, I almost forgot the master of all options: waiting. Why make that decision now?
— That’s the way it works! We move forward with decisions, action plans, concrete stuff. How can you build a house without foundations?
— And you call these foundations. You’re holding meetings! You haven’t invested in or implemented anything. You have nothing to modify. That’s what thinking is good for: as long as it’s ideas, it doesn’t cost a cent to change your mind. “Foundations”… You work at a company that manages risk. Can’t you find someone who can explain the value of an option to you? There is nothing more valuable for your product right now. You should cherish your options, not kill them without reason.
— So you want our specs to be bundles of dead ends. How are we supposed to synchronize with the rest of the company if you don’t freeze anything?
— I don’t want anything. Let me remind you I’m only in your mind. You might even be sleeping. You’d be better off spending time with your family. Instead, you’re wasting your youth arguing about specs. Which is another way of saying you want to freeze reality inside your fantasies. For now you only need to generate options and explore them, and you’re doing the exact opposite. You can be sure the best solution is none of the ones you see today. And it definitely isn’t served by killing the other options today.
— Should we code disposable prototypes? Throw code away?
— There you go, you’re considering options. You’re even thinking about ways to limit the cost and blast radius of experiments. We’re getting somewhere. Of course it’s time to cope with real constraints. Have you ever seen a product developed in meeting rooms? When I see you all trying to look like the pros in stock photos, you look like your kid playing tea party.
— But how will we know which solution to choose?
— And here is the light. Congratulations, Pluto! You’re wondering what all this is good for. You’re on the right track. Some paths will be dead ends, others will spawn new ideas or new problems to solve. However, the solution you adopt will probably be none of the ones you’re considering today. It will be a mix of what you know now and the knowledge you’ll gather along the way. You already made the main progress you needed, by wondering why. Once you understand the problem, you’ll be able to define conditions of success or failure for your experiments. Did you feel the click? You might have aches between your ears tomorrow morning. It happens when you use new muscles.
— Fine, we’ll talk about that again.

Pluto was so impatient to send his email to Nadia that he didn’t even notice his poster had all its characters back, and his neighbor was gone.

Random comments

Let’s show empathy. If people like Pluto fantasize about Capital Decisions, it’s probably because they’re not equipped for uncertainty. We never learned models that help us be comfortable with uncertainty. We always had to know. And a good way to predict the future is to force it, even in arbitrary ways. Thus the need for decisions. Decisions allow us to take further decisions. Therefore, the best way out of that spiral would be to help everyone get a little familiar with less deterministic models, like cynefin.

Many teams suggest maintaining a decision log. Because of a bad case of laziness, I never had the occasion to practice this, but I think it can really help organizations, by offering an objective knowledge base to work around cognitive biases. You need to find a common format for your decisions, for example by stating the problem to solve, the options, assumptions, reasons for the choice, expected results, anticipated risks, etc. A decision log allows evaluating past decisions, and improving decision making. The internet is full of templates and articles. Go and discover the topic, like I would discover it if I started implementing it.
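As an illustration only, here is what a minimal entry could look like, sketched as a Python dictionary. The fields and their values are hypothetical, not a standard format:

    # A minimal, hypothetical decision-log entry. Field names are illustrative;
    # pick whatever common format works for your organization.
    decision = {
        "date": "2019-03-01",
        "problem": "navigate data that doesn't fit in memory",
        "options": ["free database", "indexing library", "build from scratch", "wait"],
        "assumptions": ["an Oracle license is unaffordable"],
        "choice": "evaluate an indexing library, timeboxed to two weeks",
        "expected_results": "queries on the full dataset without re-implementing a database",
        "anticipated_risks": ["the library is unmaintained", "dealing with Mickey"],
        "review_date": "2019-04-01",
    }

Such an entry makes the decision auditable later: you can compare expected results with what actually happened, without relying on anyone’s memory.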

Judging a decision is independent of its results. You can make a great decision that leads to disastrous results, or a stupid one that leads to great results. Every decision is made in an evolving context.

There are several tactics for taking decisions, like:

  • HiPPO (Highest Paid Person’s Opinion), where the boss decides. You’d better have a boss who is smart all day long, and who has all the information.
  • Consensus, where decisions are not taken until everyone agrees. It requires more time and energy, and decisions tend to be more, well, consensual.
  • Consent, where you only need nobody to be against the decision. In this mode, you’d better limit the blast radius of your decisions.
  • Anarchy, where everyone does what they want. To keep some coherence, you need the system to be clear enough to align everyone.
  • These methods can be applied by representative minorities. Representatives should really be representative, and information must flow well and massively in all directions.
  • I almost forgot voting. Voting engages all participants, and frustrates the losers. By merging all the reasons to vote for or against into a single tally, a pure vote hides those reasons. You can also manipulate elections by choosing the voting method.

I prefer consent by default. Other people have other preferences. Anyway, just be aware that your preference, or your organization’s, is not the only option. And of course, it depends. To dig deeper, have a look at David Marquet’s delegation scale, Management 3.0 by Jurgen Appelo, or holacracy.

Take a step back and look at your decisions, you won’t regret it. What decisions did you make recently? What decisions were made for you? What were the options? Could they have been postponed? Do that exercise, and you’ll discover possibilities you weren’t aware of.

My cheapest estimate

This article is a translation from the original French version available at Arolla’s blog.

Predictions are hard, especially about the future. Still, everybody wants some. In a coherent world, we would only need to predict the release of a very few features. However, as we are often forced into estimating everything, the quicker the better. Life is too short to waste your time on divination.

At a previous job, we could make predictions in an efficient and effective way. It took 2 or 3 hours to predict several months of releases for each team. And the results were pretty good, compared to the organizations I had seen before. In other organizations, predictions were only derived from cost estimates. In this organization, we relied as much as we could on what we had done in the past.

Context

Let’s start with a disclaimer: I don’t have the numbers anymore, so more-or-lesses will have to do. Now for the context.

We were 3 or 4 teams, releasing 2 or 3 off-the-shelf products, more or less related to each other.

Dependencies, between teams or internal to teams, were extremely limited. We’ll come back to that later.

Backlog items were small. By that I mean they were done in 2 to 3 days tops. Usually, that’s the first issue I need to work on; in that organization, it was already anchored in people’s minds.

Every delivered item was attached to a bigger item. Even bugs (I still don’t see the difference between bugs and the rest; I only see things to do, gaps towards an ideal). Using jira, we called the big items epics. It took time for the teams to realize epics needed to have an end in order to be useful. But in this article, let’s assume epics were finished within the release.

An epic was made up of 10 to 50 smaller items. We’ll call these smaller items tickets. Jira or not, I refuse to call them user stories: user stories are stories, about users, period.

At the end of a release, we could quickly check what the team had foreseen at its beginning.

Recap. At the end of a release, we knew:

  • What epics the team had prophesied.
  • What epics were done.
  • What tickets were done in each epic.

From there, we could announce the content of the next release:

  • Estimate each epic to do.
  • Compute each team’s capacity, in tickets.
  • Compute the proportion of that capacity we can use for planned and unplanned items.
  • Thus, predict the release content.

Epic estimates

Since we knew how many tickets past epics were made of, we sorted them into big buckets. The following buckets were enough: 1, 3, 10, 30, 100. In my career, I quickly realized the Fibonacci sequence was too precise. For example, 1, 2, and 3 could be merged into a common 2. And when you predict the future, you don’t want to let people think you know what you’re doing. In the word estimate, there is estimate; never forget that.

Then, we took future epics, from highest to lowest priority, and put each one in the corresponding bucket, by comparing them to the done ones.

We avoided at all costs estimating the number of tickets it would take to finish an epic. That is the mistake you don’t want to make: predicting the future.

We agreed by consensus. A simple rule to settle conflicts: if we thought an epic didn’t fit a given bucket, it went to a bigger one.
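That rounding rule is mechanical enough to write down. A minimal sketch in Python, purely illustrative:

    BUCKETS = [1, 3, 10, 30, 100]  # epic sizes, in tickets

    def settle(candidate_bucket, fits):
        """Conflict rule: if the team doubts an epic fits its candidate
        bucket, it goes to the next bigger one."""
        if fits:
            return candidate_bucket
        i = BUCKETS.index(candidate_bucket)
        return BUCKETS[min(i + 1, len(BUCKETS) - 1)]

    settle(10, fits=True)   # -> 10
    settle(10, fits=False)  # -> 30: when in doubt, round up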

This exercise took approximately 1h, with the whole team.

Some people told me I should talk about t-shirt sizing, when they saw the 1/3/10/30/100 scale. A few comments about that:

  • T-shirt sizing doesn’t have an arithmetic. You can’t add t-shirt sizes.
  • That being said, it’s also its main quality. And choosing buckets for epics while ignoring how many tickets will compose them is very close to t-shirt sizing. We briefly tried that, but participants were a lot faster with numeric cues. Culture: it depends on the team.
  • When I compare kids playing in kindergarten with the seriousness of estimation meetings, I realize we could also use animals: fly, dog, elephant, diplodocus. The main advantage of this scale is that diplodocuses don’t exist, and we would always prefer that big backlog items didn’t exist either.

Team capacity

That’s pretty simple. If you did 200 tickets in the last release, you can predict your team to do 200 in the next one.

You could scale the numbers proportionally when release durations vary. But be wary of such computations: releases always have more or less exploratory periods, and those don’t spread linearly.

Applying linear computations to the number of people, their availability, or holidays, is even more dangerous. I prefer considering a team as a whole. Waiting times explain durations far more than a team’s capacity to parallelize tasks does.

Actually, I think that the function (working time -> team capacity) is neither computable nor predictable. It is not increasing, not even continuous, and certainly not linear. Don’t evaluate team capacity from working time; it’s simpler that way.

This exercise took about 15 minutes for the scrum master and the PO.

The unexpected

By comparing what had been planned and what had been done, we got an unplanned rate. Between the start and the delivery of a release, we change our minds, reprioritize, discover functional holes, technical debt, and so on. Well, there is no reason for that to change. Therefore, we considered that unplanned rates would remain identical between releases.

We only considered planned and unplanned epics. We didn’t need to categorize tickets of an epic.

Our unplanned rates were approximately 65 to 75%. That is to say, 65 to 75% of what we did in a release had not been foreseen at the beginning of the release. That’s the way it is. Just take reality as it is; don’t try to distort it. And don’t do hope-based planning either, firmly affirming you won’t change your mind next time.

Taking new information into account is good news. If you have little unplanned work, don’t take that as good news without digging into it. There is a great chance that someone is burying their head in the sand or, like the three monkeys, shutting every door and yelling lalalala, prostrate in the corner of a meeting room.

This exercise took approximately 1 hour for the PO, depending on the difficulty of archeological excavations.

Prediction

We had a team capacity and an unplanned rate. From these, we knew how many tickets we could plan. For example, if we had done 400 tickets in the previous release, including 100 planned tickets, then we could plan 100 tickets for the next release. Having estimated epics in numbers of tickets, we knew which epics we could announce for the next release.
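A minimal sketch of the whole computation, in Python, with the numbers above and hypothetical epic names:

    # Previous release: 400 tickets done, of which 100 had been planned.
    capacity = 400                             # next release's capacity, in tickets
    planned_rate = 100 / 400                   # 25% planned, 75% unplanned
    plannable = int(capacity * planned_rate)   # 100 tickets we may announce

    # Epics to do, highest priority first, with their bucket estimates.
    epics = [("export", 30), ("search", 100), ("audit", 10)]

    announced = []
    for name, size in epics:
        if size > plannable:
            break  # don't announce lower-priority epics past the budget
        plannable -= size
        announced.append(name)

    print(announced)  # ['export']: 'search' doesn't fit in the remaining 70

Whether you stop at the first epic that doesn’t fit or keep scanning lower-priority ones is a choice; stopping keeps the announced order honest.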

Turtles all the way down

What we did for a release could be adapted to other levels of granularity. We used a variation of this method for 2-week iterations, replacing epics with tickets and counting tickets instead.

The only thing you need to know is that the unplanned rate must be evaluated independently for each level of granularity. Knowing you have 75% unplanned work at the release level won’t help you evaluate uncertainty at the iteration level, and vice versa. It can be more, it can be less.

Explanation

Why did this method work?

First, we found the right level of granularity. As in all nested systems, there are one or more stable (i.e. predictable) levels. In our case, the stable levels were epics for the release, and tickets for the iteration. We were lucky to have a system with stable levels, and to identify them on the first try.

Then, as already mentioned, dependencies were very few. Workload thus explained most of the delivery times. That is rarely the case. In general, items spend most of their time in queues, and estimating workload is useless. Measure lead times instead, and start from there.

Finally, the law of large numbers and having many small tickets per epic helped smooth out disparities. At the epic scale, and considering the error inherent in divination, we could consider all tickets as identical.

Conclusion

It’s up to you to find the stable elements of your system. This method was the result of several iterations. Stable elements will help you predict the future. By measuring these stable elements, and projecting them into the future, you have a higher chance of coming up with realistic predictions. Avoid guessing what’s going to happen, at all costs. Too many biases will pollute your calculations.

I didn’t invent anything; it is more or less the #NoEstimates approach. I thought of proposing a few links on the approach here, but the internet already took care of that. It’s up to you now. Happy exploring.

Frugal conclusion

Just in case. These words, and all their cousins, predict disasters.

We add features, code, processes, just in case. I prefer adaptability to new requirements, rather than anticipating every detail.

We create laws that paralyze whole sectors, to protect ourselves from limited harm caused by one black sheep. I prefer detecting problems, rather than preventing their hypothetical causes.

We create rigid processes to guarantee what they produce. That may be useful in cynefin’s simple or complicated domains, not in the complex one. I prefer allowing good surprises, rather than avoiding bad ones.

We monitor proxy indicators because it’s easy. Sir Tony Hoare, the creator of null, who named it his billion-dollar mistake, added it because it was easy to do so. I prefer adding something because the need and the solution were validated, rather than because it’s easy.

What a relief when you remove some process, a 300-line method, or one third of the backlog. You can instantly feel your cognitive load decreasing, the weight of blood diminishing in your brain. Afterwards, you can’t even remember why they were there in the first place. So let’s anticipate that relief. Verify that what you add is useful. Remove what is not. Slim it down.


Frugal process

After the backlog and the code, let’s go on with the process. By process, in this article, I mean the procedures, the methodology, the method, the practices, not a program doing stuff in a computer.

A process tells us what to do in a given context. It states pre-conditions for applicability, post-conditions of success, variations, attention points. A process always exists, be it implicit or explicit. A process has multiple nested levels of granularity: to produce, to understand production, to update the process itself, to manage conflicts, to communicate…

Company culture is everything that makes people’s reactions in a given context mandatory or forbidden, encouraged or discouraged. We can thus consider a company culture as a process. Being behaviors, we can list them and modify them (more or less easily, if you want the change to stick).

The process is huge. We don’t want to make it more complex by adding arbitrary clauses to it. We need to factorize it, make it as small as possible. If the process becomes too complex, we need an additional process to interpret it. We don’t need that.

A process can accelerate things by:

  • Guiding people, so that they don’t need to search for the right way to go.
  • Sparing people from wondering about the same things again and again.
  • Helping people do things right the first time.

A process can also ruin your life when:

  • It complicates more than necessary what needs to be done.
  • It prevents you from doing what needs to be done, unless you work around it.

An important quality of a process is its adaptability. Situations vary, and evolve, so a process must remain plastic. We’re back to the same considerations as with code: simple things are easier to adapt.

  • When the process is a 200-page document, nobody will get back to it to adapt it to the new context.
  • When the process requires 200 people to synchronize in order to update it, nobody will make the effort.
  • When the process is implicit, nobody can discuss it.

In addition, the process has to be more or less prescriptive, depending on the context. The cynefin model, the most useful one I know, helps us understand this, and choose a strategy to handle a situation depending on its nature:

  • In the obvious domain, where you can easily predict consequences from the context, define and follow a checklist.
  • In the complicated domain, you can predict consequences given some analysis. Ask experts to tell you what to do.
  • In the complex domain, the system has too many, too dynamic relations to be predictable, and they change when you touch them. State hypotheses, and validate them through experimentation. This domain is the most frequent in software. In the complex domain, you need a more abstract process, one that gives you clues to design experiments, gather feedback, and verify you didn’t overlook some perspective.
  • In the chaotic domain, it’s fire. Get out of there as quickly as you can.
  • To which we add the cliff, from obvious to chaotic, where you violently lose your illusions. You thought your situation was comfortable, and the competition shows you a very different reality. Falling into chaos is hard, and you’d better get out before you find yourself on kodak’s board.
  • And the disorder, where you don’t know where you stand. It could deserve a whole series of its own.

The process must take all these domains into account, in a relevant way. Follow ikea instructions in the complex domain, and you will make too many useless errors. Experiment with methods to assemble ikea furniture, and you’ll waste more than the traditional weekend of family chaos.

Note that lean proposes a very useful tool to handle this: the standard. The standard is the best way we know today to do something. It is the support for continuous improvement, because it documents our current knowledge, and constantly evolves from there. We get back to it as soon as we observe a gap with the objective (i.e. often), in order to understand what can be improved. It documents pre- and post-conditions, variations, attention points. A standard can apply to anything, provided its level of abstraction corresponds to the task at hand.

In short

  • Take the time to understand cynefin. My current level of understanding took me a few minutes of epiphany, and several years of deeper study. Every second was worth it.
  • Adapt your procedures to the context.
  • Make your procedures explicit.
  • Don’t add useless procedures.

Finish it

Frugal code

I feel the most like a good dev when I delete stuff. But attachment to code is the most painful challenge to overcome when I try to help my colleagues adopt an engineering culture, i.e. an experimentation culture. For reasons I don’t understand, devs cling to their code, every line of it, from the first minute of its existence (at least for reasons I don’t understand *anymore*, as I probably had the same attachment to code in a life I don’t remember). And the more devs write code, the more they are attached to it.

Quality

Organizations need to adapt to needs. Societies evolve. People grow up, their needs change. Our comprehension of these needs, and of our work environment, gets more relevant. We cannot freeze the world during the months or years product development requires.

Our job is to modify code. We thus need code that is plastic enough to adapt to these changes. In a problem-solving activity like dev, this is what I call quality: the capacity to keep a satisfying pace over a desired period. To be agile, this capacity is not negotiable. As devs, it is our duty to always maintain this capacity, without asking anyone’s permission.

Design

We find code plasticity in patterns, modularity, high cohesion and low coupling, and so on. But you need to be aware that each time you choose a pattern, you pick a compromise: you make one axis stiffer to loosen another one.

Take the DRY principle, for example (Don’t Repeat Yourself). It is universally accepted as a base principle; I hardly ever hear it questioned. Still, it participates in creating dependencies, a source of evil. This is a compromise to understand. In addition, two pieces of code that look alike are not the same code. Think semantics before factoring code. Martin Fowler popularized the rule of three, suggesting you should write a piece of code at least three times before extracting the common part. Neal Ford teaches us that “The more reusable something is, the less usable it is”. I recently discovered the acronym WET, Write Everything Twice. Let’s not remain stuck on DRY, and think in terms of compromises.
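A tiny illustration, with a hypothetical domain, of look-alike code that is not the same code:

    # These two rules look identical today, but they answer different questions
    # and will change for different reasons. Factoring them into one function
    # would couple pricing to shipping for no semantic reason.
    def price_with_vat(price):
        return price * 1.2   # fiscal rule: changes when the VAT rate changes

    def shipping_with_margin(cost):
        return cost * 1.2    # commercial rule: changes when margins are renegotiated

The day the VAT rate moves, the “duplication” disappears on its own; a shared helper would have had to be split back apart.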

Design patterns are also compromises to understand. They favor plasticity along one dimension while sacrificing another. For example (provocation intended), inheritance makes it easy to multiply implementations of a given class of behavior. But it makes modifying that class of behavior harder.

Even if you don’t see which axis you’re making less plastic, you need to be aware that a pattern is an indirection. Indirections get in the way of seeing details, while making high-level understanding easier. With indirections, you scan principles better, and details less. This is a compromise; it has good and bad sides. You need to be aware of it.

Frugality

To make code more adaptable, my first move is to not write it. The easiest code to modify is no code. I always try to add as little code as I can. To add indirections only if they bring value. To delete code when it brings less value than complexity.

Note that we have tools to help us limit the quantity and complexity of code:

  • With TDD, you only write the code needed to pass one test, no more (see the tiny example after this list).
  • With DDD, you split the problem in bounded contexts. Each of these contexts is thus smaller. As you don’t put code in common between bounded contexts, you limit the number of dependencies, and the overall complexity.
  • With BDD, you assess the problem’s boundaries better. You avoid useless hypotheses.
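As an illustration of the TDD point, here is a minimal red-green step, with hypothetical names:

    # Red: the failing test comes first.
    def test_empty_cart_total_is_zero():
        assert cart_total([]) == 0

    # Green: just enough code to pass this one test, and nothing more.
    # No persistence, no currency, no discount engine "just in case".
    # The next test (a non-empty cart) will force a real implementation.
    def cart_total(items):
        return 0

    test_empty_cart_total_is_zero()  # passes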

What a delight to refactor no code. We call it greenfield, and it makes the eyes of every nerd shine. This is an extreme. It gives an idea of what you get when tending towards that ideal: serenity, happiness, smiles.

If you want to approach that ideal, limit the quantity of code to modify:

  • Don’t write code “just in case”.
  • Don’t add patterns before code complexity requires it.
  • Don’t anticipate too much code flexibility. Wait to know which axis needs freedom of movement.
  • To support all this, learn refactoring and emergent design.

And now let’s jump to process

Frugal production

Let’s think about this enterprise philosophy hit song:

  1. I want to satisfy the needs of users, buyers, managers, stakeholders.
  2. Therefore I want features.
  3. Therefore I want to optimize the production of features.
  4. In other words I want to maximize the production of features.

Imagine that big bank. It wants to solve problems for its advisors, or even its customers. With some help from specialists at every step, the bank formalized requirements in a backlog, organized RFIs and RFPs, selected a software package, started a project, set up constraints to make sure everything would be delivered on time and on budget, and assembled and disbanded every corresponding team.

At every step, the bank relied on the results of previous investments, which had been validated, and were thus considered right. If everything done before is right, we only have one thing to measure when developing: the speed of production according to specifications. It may be user stories, man-days, lines of code, green tests in a campaign, whatever.

We maximize the production of features because we don’t want to come back to what was already validated, and because we don’t know how to do all the steps at the same time. In other words, because it is easy. As a consequence, we measure the accumulation of things on top of accumulated things. This logic is the cause of many a failed project.

Jeff Patton teaches us that the goal is to optimize outcome (i.e. changing behavior) to improve impact (i.e. consequences for our organization), while minimizing output (i.e. production).

How to do that concretely? As said earlier, we maximize output because it is easy. Therefore the solutions will be hard. This is good news: it is an opportunity to gain an advantage over your competitors.

Validation

We must move gently. Take the time to verify that what we throw out the door is useful. Consolidate the foundations before adding new floors.

As Jim Benson says, “Software being ‘Done’ is like lawn being ‘Mowed’”. Software gets interesting when it gets into users’ hands. It is finished when it is decommissioned. Make this official, by adding a validation/understanding/learning/feedback step at the end of the value stream.

A feature to release is not code to produce. It is an outcome hypothesis. John Cutler insists on talking about bets. Once the code is released, we need to evaluate its consequences, and decide where to go from there:

  • It’s perfect, we can stop iterating.
  • We should try modifying this or that.
  • We need more info.
  • Let’s deactivate or remove it.
  • And so on.

By the way, if you doubt, as you should, the usefulness of features, you should limit the number of experiments you run in parallel. An experiment takes time to reveal its secrets. This delay is actually an interesting topic to think about. Among other things, it helps us understand why our so-called experiments are not scientific. Cynefin rather talks about probes.

Pull system

Limit your WIP (Work In Progress, i.e. the number of things being worked on) in all steps of the stream, including studying/prioritizing. By doing so, you will avoid preparing items when the next step is not ready to take them. Your backlog will thus remain under a reasonable limit.

  • The first step of the stream is a prioritized list of problems to solve, ideas, wishes, unverified assumptions. You study these topics by priority, when you have the capacity to take them. That is to say, when it’s useful to think about them.
  • Then you can think further about them: ask “what for”, split them, make the goal and constraints explicit, share understanding…
  • Then development if needed, test, deployment, and so on.
  • And then, validate the need, gather feedback to iterate.

So, step by step, features are pulled toward the last step: impacting the world (a minimal sketch of the mechanics follows).
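Here is what those pull mechanics could look like, sketched with illustrative step names and limits:

    # Each step has a WIP limit; an item moves only when the next step
    # has free capacity, so work is pulled from the end of the stream.
    WIP_LIMITS = {"study": 2, "develop": 3, "validate": 2}  # illustrative

    def can_take(board, step):
        return len(board[step]) < WIP_LIMITS[step]

    def pull(board, from_step, to_step):
        if board[from_step] and can_take(board, to_step):
            board[to_step].append(board[from_step].pop(0))

    board = {"study": ["idea A", "idea B"], "develop": [], "validate": []}
    pull(board, "study", "develop")  # develop pulls an item: it has capacity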

Of course, it only works if items are small. How small? If you’re beginning, it’s never small enough. If you have more experience with flow, and you know why an item can be too small, then improve your process to decrease the transaction cost, and then make items smaller.

Wrapping up

Let’s fix the logic described above. Instead of:

  • Users have needs.
  • So we must produce features.
  • So we must optimize feature production.
  • I.e. we must maximize feature production.

Let’s try:

  • Users have needs.
  • We might satisfy those needs with features.
  • So we must optimize feature production and impact.
  • I.e. we must minimize feature production while maximizing impact.

When you use a product, you are delighted when you can see right away the feature you need. It’s a nice surprise to use the feature fluidly, the way you expected. It’s a change from those products crawling with menus of sub-menus proposing every potential option, and endless forms supporting every possibility, just in case.

Offer the product you love using:

  • Verify the usefulness of features.
  • Do less things in parallel, and finish them.
  • Focus on users.

It’s never too late to do things properly. It’s always time to validate the hypotheses induced by upfront investments, however huge they are.

Now let’s go with code.

Enough is enough

This series of articles is translated from French articles on my employer’s blog, Arolla. The French articles may or may not be released at the time you’re reading this.

We consume too much. We eat, throw away, heat, send e-mails, spend, earn, too much. We need to learn how to do more with less.

We have limitless backlogs. We look for ways to produce more faster. Production is our main indicator. As a company or a country measuring its income growth to invest more to grow more, we measure our production to release more to have more features to earn more. Simple isn’t it?

We’re missing two parameters here:

  1. Complexity doesn’t grow linearly with size. It tends to grow in a chaotic and explosive way, sometimes independently of growth, and surely in an unpredictable way. The worst news is that complexity doesn’t have a maximum. It is not capped by your capacity, anyway.
  2. You can’t predict how the system you’re creating will evolve. It is a kid, living its growth in a chaotic way. You can’t predict consequences of the evolution of your system, as soon as it gets a little bit complex.

I can only see one way of keeping that under control: move slowly, carefully, checking how the system evolves while you touch it. In other words, evolve frugally.

Because frugality deserves loads of ink, and I’m paid by the article, I’ll try exploring this topic in 4 steps:

  1. Let’s start with backlog.
  2. What about code?
  3. Did you forget the process?
  4. Let’s get back to serious work.

When should I…

Most agile posts I see are about finding the right trade-off on this question:

When should I gather requirements/specify/test/merge to main line/document/integrate/deploy/communicate progress to customers/<insert any phase gate you’re used to and concerned about when you think about iterative/continuous software development>?

The answer is: always!

If it hurts, do it often. If there is risk, do it often.

I won’t go through every possible question, you’ll find them in every consultant’s blog. There are two short answers to all of them:

  • It depends: from your situation to the ideal one, there is a gap. You must deal with it and find the right trade-off.
  • Do everything more often. Every time you delay something, like working on a branch, you don’t decrease risk: by delaying risk, you increase risk.

These answers come together. It’s an and, not a xor.

The ideal situation is: every character you change in your code works, brings value, is crystal clear to developers and users, and instantly available to your stakeholders. This is ideal. But we can be ambitious: it is the goal. Everything we do must tend towards this goal.

Run experiments

In the previous post, we saw why we couldn’t apply user stories or spikes to a big technical epic we’re working on: developing an efficient engine, in addition to the default one, for a subset of what the default one supports. With user stories, we have no control over what we’re doing, and we can’t commit to what we’ll deliver. Ultimately, they don’t let us show progress, because they don’t correspond to what we do.

The key here is that we are progressing. We know it. We could implement a basic engine, and validate that the performance improvement is what we expected from it. We have an overall idea of the big chunks of work that will need to be done to make it functional. We just can’t express it in by-the-book agile. But there is a solution: as this is what we actually do, let’s make iterative experiments official.

I’d like to introduce a new type of backlog item: an experiment iteration. It’s a kind of spike slice, or lean startup applied to technical stuff (a sketch in code follows the list):

  • Define an objective, that you may or may not achieve.
  • Timebox it. This timebox is the main acceptance criterion. The backlog item is over when the timebox is over, period.
  • Just do it until the timebox is over.
  • Validate non-regression.
  • When it’s over, the key is that you check what you learned. This is what you do in the validation/test/acceptance column. What you learned might be:
    • Ideally, a backlog.
    • Worst case, give up and roll back. You must always keep this option open.
    • We added this and that tool or feature and it’s ok.
    • We should add this to the non-regression campaign.
    • We couldn’t support that case.
    • This or that piece of code must be re-designed.
    • That part will take time.
    • We need to check what we want to get regarding this.
    • You should check if this works as you expect.
  • Decide where to go next, based on what you learned.
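Here is what such an item could look like as a data structure, a minimal sketch whose field names are mine, not a standard:

    from dataclasses import dataclass, field

    @dataclass
    class ExperimentIteration:
        objective: str                 # what we aim at; reaching it is optional
        timebox_days: int              # the main acceptance criterion
        learnings: list = field(default_factory=list)   # filled when it's over
        next_steps: list = field(default_factory=list)  # backlog, rollback, ...

    probe = ExperimentIteration(
        objective="basic engine on a data subset, measure the speed-up",
        timebox_days=5,
    )
    probe.learnings.append("indexing is the bottleneck, not parsing")
    probe.next_steps.append("re-design the index builder")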

Note that code branching or conditional compilation are not valid ways to avoid regression risk. They are only ways to delay risk, and thus to increase consequences. All experiments should be implemented on the trunk, behind feature toggles if necessary.
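A minimal sketch of such a toggle, with hypothetical engine names; the toggle only controls exposure, while the experimental code lives on the trunk:

    import os

    def default_engine(data):
        return sorted(data)   # stand-in for the existing behavior

    def fast_engine(data):
        return sorted(data)   # the experiment; same contract, for now

    def compute(data):
        # Flip the environment variable to expose the experiment.
        if os.environ.get("USE_FAST_ENGINE") == "1":
            return fast_engine(data)
        return default_engine(data)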

The main difference with a spike or a user story is that we focus on learning. It is transparent to everyone. You will not be expected to implement the objective, because it is not the priority. We also make discoveries about the product and the code base more transparent, because there is no limit to what you might declare as learned. It might also save you some time during retrospectives or andon meetings, because you have already introspected on many technical topics.

Iterative experiments should be run when large chunks of software have to be implemented without knowing where to go in detail. The chunk is not too big to fail, i.e. too big to succeed, but it is too big to know all the details in advance. Experiments should lead to more deterministic work, like an actual backlog.

What do you think? Have you ever focused on what you learned? Do you have some feedback about it?

My stories are too small

We want backlog items that are as small as possible. It protects us from the unknown. It prevents us from having to re-work too much code if an item is rejected, and limits the effect of wrong estimates. It allows us to be predictable.

At the same time, backlog items must have a meaning. We want to be able to release something valuable. Done-done also means useful. If it’s not useful, how can you adapt your backlog to actual needs? What’s the value of your high-level estimates? If something considered done is in fact half-done, how can you release it? Splitting stories also means you can stop at the end of the first part.

I’ve never found the right recipe. And I just bumped into this situation again. The need is something like “I want to use an object of type A in screen B”. To use this object, I must first select it, so I need a selection screen. Then I need a select button that actually updates screen B with the selected object. The thing is… anything more than the selection screen doesn’t fit in the iteration.

Note: we are in an iterative mode, but the issue would have been the same in a flow. The story is too big, and we couldn’t split it into meaningful parts.

Only implementing the selection screen doesn’t make any sense. If we had to release at the end of the iteration, the user would have a selection screen with a disabled select button that says “to be continued”.

We chose the lesser evil: split the story and deliver something that is not usable or useful at the end of the iteration, but deliver something anyway. We prefer feedback to nothing, partial waste to everything sitting in stock. But still, I’m not satisfied, as always in this situation. And we end up with this ugly story: “I want to use an object of type A in screen B, part 1”. So…

Feedback is warmly welcome. Have you encountered this situation? What do you suggest?