
Frugal production

Let’s think about this enterprise philosophy hit song:

  1. I want to satisfy the needs of users, buyers, managers, stakeholders.
  2. Therefore I want features.
  3. Therefore I want to optimize the production of features.
  4. In other words I want to maximize the production of features.

Imagine a big bank. It wants to solve problems for its advisors, or even its customers. With some help from specialists at every step, the bank formalized requirements in a backlog, organized RFIs and RFPs, selected a software package, started a project, set up constraints to make sure everything was delivered on time and on budget, and assembled and disbanded every corresponding team.

At every step, the bank relied on the results of previous investments, results which were validated and thus considered right. If everything done before is right, we only have one thing to measure when developing: the speed of production according to specifications. It may be user stories, man-days, lines of code, green tests in a campaign, whatever.

We maximize the production of features because we don’t want to come back to what was already validated, and because we don’t know how to do all the steps at the same time. In other words, because it is easy. As a consequence, we measure the accumulation of things on top of accumulated things. This logic is the cause of many a failed project.

Jeff Patton teaches us that the goal is to optimize outcome (i.e. changing behavior) to improve impact (i.e. consequences for our organization), while minimizing output (i.e. production).

How do we do that concretely? As said earlier, we maximize output because it is easy. Therefore solutions will be hard. This is good news: it is an opportunity to gain an advantage over your competitors.


We must move calmly. Take the time to verify that what we throw out the door is useful. Consolidate foundations before adding new floors.

As Jim Benson says, “Software being ‘Done’ is like lawn being ‘Mowed’”. Software gets interesting when it gets into users’ hands. It is finished when it is decommissioned. Make this official by adding a validation/understanding/learning/feedback step at the end of the value stream.

A feature to release is not code to produce. It is an outcome hypothesis. John Cutler insists on talking about bets. Once the code is released, we need to evaluate its consequences and decide where to go from there:

  • It’s perfect; we can stop iterating.
  • We should try modifying this or that.
  • We need more info.
  • Let’s deactivate or remove it.
  • And so on.

By the way, if you doubt, as you should, a feature’s usefulness, you should limit the number of experiments you run in parallel. An experiment takes time to reveal its secrets. This delay is actually an interesting topic to think about. Among other things, it helps explain why our so-called experiments are not scientific. Cynefin rather talks about probes.

Pull system

Limit your WIP (Work In Progress, i.e. the number of things being worked on) in all steps of the stream, including studying/prioritizing. By doing so, you will avoid preparing items when the next step is not ready to take them. Your backlog will thus remain under a reasonable limit.

  • The first step of the stream is a prioritized list of problems to solve, ideas, wishes, unverified assumptions. You study these topics by priority, when you have the capacity to take them. That is to say, when it’s useful to think about them.
  • Then you can think further about them: ask “what for”, split them, make the goal and constraints explicit, share understanding…
  • Then come development if needed, testing, deployment, and so on.
  • And then, validate the need, gather feedback to iterate.

So, step by step, you pull features from the last step: impacting the world.

Of course, it only works if items are small. How small? If you’re beginning, they’re never small enough. Once you have more experience with flow and you know why an item is too small, improve your process to decrease the transaction cost, and then make items smaller.
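The pull mechanics above can be sketched in a few lines of code. This is a minimal illustration, not a real tool; the step names and WIP limits are made up:

```python
# Minimal sketch of a pull system with WIP limits (step names are hypothetical).
# An item only moves into a step when that step has free capacity,
# so work is pulled downstream rather than pushed.

from collections import OrderedDict

class Board:
    def __init__(self, wip_limits):
        # Steps in stream order, each holding a list of items.
        self.steps = OrderedDict((name, []) for name in wip_limits)
        self.limits = dict(wip_limits)

    def can_pull(self, step):
        return len(self.steps[step]) < self.limits[step]

    def add(self, step, item):
        if not self.can_pull(step):
            raise ValueError(f"WIP limit reached in '{step}'")
        self.steps[step].append(item)

    def pull(self, from_step, to_step, item):
        """The downstream step pulls an item only if it has capacity."""
        if not self.can_pull(to_step):
            return False  # next step is busy: don't prepare more work
        self.steps[from_step].remove(item)
        self.steps[to_step].append(item)
        return True

board = Board({"study": 2, "dev": 2, "test": 1})
board.add("study", "story A")
board.add("study", "story B")
board.pull("study", "dev", "story A")  # dev has capacity, so the pull succeeds
```

The point of the sketch is the `can_pull` check: upstream steps stop preparing items as soon as the next step is full, which is exactly what keeps the backlog under a reasonable limit.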

Wrapping up

Let’s fix the logic described above. Instead of:

  • Users have needs.
  • So we must produce features.
  • So we must optimize feature production.
  • I.e. we must maximize feature production.

Let’s try:

  • Users have needs.
  • We might satisfy those needs with features.
  • So we must optimize feature production and impact.
  • I.e. we must minimize feature production while maximizing impact.

When you use a product, you are delighted when you can see right away the feature you need. It’s a nice surprise to use that feature fluidly, the way you expected. It’s a change compared to those products crawling under menus of sub-menus that propose every potential option, with endless forms to support every possibility, just in case.

Propose the product you love using:

  • Verify features’ usefulness.
  • Do fewer things in parallel, and finish them.
  • Focus on users.

It’s never too late to do things properly. It’s always time to validate the hypotheses induced by upfront investments, however huge they are.

Now let’s go with code.


Enough is enough

This series of articles is translated from French articles on my employer’s blog, Arolla. The French articles may or may not be released at the time you’re reading this.

We consume too much. We eat, throw away, heat, send e-mails, spend, earn, too much. We need to learn how to do more with less.

We have limitless backlogs. We look for ways to produce more, faster. Production is our main indicator. Like a company or a country measuring its income growth to invest more to grow more, we measure our production to release more, to have more features, to earn more. Simple, isn’t it?

We miss two parameters here:

  1. Complexity doesn’t grow linearly with size. It tends to grow in a chaotic and explosive way, sometimes independently of growth, and surely in an unpredictable way. The worst news is, complexity doesn’t have a maximum. It is not capped by your capacity, anyway.
  2. You can’t predict how the system you’re creating will evolve. It is a kid, living its growth in a chaotic way. You can’t predict consequences of the evolution of your system, as soon as it gets a little bit complex.

I can only see one way of keeping that under control: move slowly, carefully, checking how the system evolves while you touch it. In other words, evolve frugally.

Because frugality deserves loads of ink, and I’m paid by the article, I’ll try exploring this topic in 4 steps:

  1. Let’s start with backlog.
  2. What about code?
  3. Did you forget the process?
  4. Let’s get back to serious work.

When should I…

Most agile posts I see are about finding the right trade-off on this question:

When should I gather requirements/specify/test/merge to main line/document/integrate/deploy/communicate progress to customers/<insert any phase gate you’re used to and concerned about when you think about iterative/continuous software development>?

The answer is: always!

If it hurts, do it often. If there is risk, do it often.

I won’t go through every possible question, you’ll find them in every consultant’s blog. There are two short answers to all of them:

  • It depends: from your situation to the ideal one, there is a gap. You must deal with it and find the right trade-off.
  • Do everything more often. Every time you delay something, like working on a branch, you don’t decrease risk: by delaying risk, you increase risk.

These answers come together. It’s an and, not a xor.

The ideal situation is: every character you change in your code works, brings value, is crystal clear to developers and users, and instantly available to your stakeholders. This is ideal. But we can be ambitious: it is the goal. Everything we do must tend towards this goal.

Run experiments

In the previous post, we saw why we couldn’t apply user stories or spikes to a big technical epic we’re working on: developing an efficient engine, in addition to the default one, for a subset of what the default one supports. With user stories, we have no control over what we’re doing, and we can’t commit to what we’ll deliver. Ultimately, they don’t let us show progress, because they don’t correspond to what we do.

The key here is that we are progressing. We know it. We could implement a basic engine and validate that the performance improvement is what we could expect from it. We have an overall idea of the big chunks of work that will need to be done to make it functional. We just can’t make it real according to agile by the book. But there is a solution: as this is what we do, let’s make iterative experiments come true.

I’d like to introduce a new type of backlog item: an experiment iteration. It’s a kind of spike slice, or lean startup applied to technical stuff:

  • Define an objective, that you may or may not achieve.
  • Timebox it. This timebox is the main acceptance criterion. The backlog item is over when the timebox is over, period.
  • Just do it until the timebox is over.
  • Validate non-regression.
  • When it’s over, the key is that you check what you learned. This is what you do in the validation/test/acceptance column. What you learned might be:
    • Ideally, a backlog.
    • Worst case, give up and roll back. You must always keep this option open.
    • We added this and that tool or feature and it’s ok.
    • We should add this to the non-regression campaign.
    • We couldn’t support that case.
    • This or that piece of code must be re-designed.
    • That part will take time.
    • We need to check what we want to get regarding this.
    • We should check whether this works as we expect.
  • Decide where to go next, based on what you learned.
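An experiment iteration could be sketched as a small data structure, to make the timebox and the learnings explicit. All field names here are hypothetical illustrations, not a prescribed format:

```python
# Sketch of an "experiment iteration" backlog item (all names hypothetical).
# The timebox is the acceptance criterion; the learnings are the real output.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ExperimentIteration:
    objective: str       # may or may not be achieved: it is not the priority
    timebox_days: int    # hard limit: the item is over when the timebox is over
    started: date = field(default_factory=date.today)
    learnings: list = field(default_factory=list)  # the actual deliverable

    @property
    def deadline(self):
        return self.started + timedelta(days=self.timebox_days)

    def is_over(self, today=None):
        return (today or date.today()) >= self.deadline

    def record(self, learning):
        self.learnings.append(learning)

exp = ExperimentIteration("Prototype the fast engine", timebox_days=5,
                          started=date(2024, 1, 8))
exp.record("Basic engine meets the expected performance gain")
exp.record("Case X is not supported yet")
```

Whatever ends up in `learnings`, even “we should give up”, counts as a valid result: checking it is what happens in the validation column.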

Note that code branching and conditional compilation are not valid ways to avoid risks of regressions. They are only ways to delay risk, and thus to increase consequences. All experiments should be implemented on the trunk, using feature toggles if necessary.

The main difference with a spike or a user story is that we focus on learning. It is transparent to everyone. You won’t be expected to achieve the objective at all costs, because it is not the priority. We also make discoveries about the product and the code base more transparent, because there is no limit to what you might declare as learned. It might also save you some time during retrospectives or andon meetings, because you have already introspected on many technical topics.

Iterative experiments should be run when large chunks of software have to be implemented without knowing where to go in detail. The work is not too big to fail (i.e. too big to succeed), but too big to know all the details in advance. Experiments should lead to more deterministic work, like an actual backlog.

What do you think? Have you ever focused on what you learned? Do you have some feedback about it?

My stories are too small

We want to have backlog items that are as small as possible. It protects us from the unknown. It prevents us from having to re-work too much code if rejected, and limits the effect of wrong estimates. It allows us to be predictable.

At the same time, backlog items must have a meaning. We want to be able to release something valuable. Done-done also means useful. If it’s not, how can you adapt your backlog to actual needs? What’s the value of your high-level estimates? If something considered done is in fact half-done, how can you release it? Splitting stories also means being able to stop at the end of the first part.

I’ve never found the right recipe. And I just bumped into this situation again. The need is something like “I want to use an object of type A in screen B”. To use this object, I must first select it, so I need a selection screen. Then I need a select button that actually updates screen B with the selected object. The thing is… just implementing the selection screen doesn’t fit in the iteration.

Note: we are in an iterative mode, but the issue would have been the same with a flow. The story is too big, and we couldn’t split it meaningfully.

Only implementing the selection screen doesn’t make any sense. If we had to release at the end of the iteration, the user would have a selection screen with a disabled select button that says “to be continued”.

We chose the lesser evil: split it and deliver something that is not usable or useful at the end of the iteration, but deliver something anyway. We prefer feedback to nothing, partial waste to everything in stock. But still, I’m not satisfied, as always in this situation. And finally we have this ugly story: “I want to use an object of type A in screen B, part 1”. So…

Feedback is warmly welcome. Have you encountered this situation? What do you suggest?

Kanban experiments from the trenches

This article is gonna be too long again, so no introduction, right on… oh yeah, hi!


Our team is made up of 3+ software engineers, 1 requirements analyst (or proxy PO), half a team leader, in Paris, and 2 QA engineers in India… I know, please don’t insist. Until this project, all agile projects in our company were managed with a purely iterative, Scrum-like approach. With 5 people in Paris and 2 in India, we couldn’t apply these principles anymore. It was the perfect reason for me to try lean formally. I was already very interested in lean; all I needed was a good excuse. And it allowed me to confirm a few problems I had with agile.

Once upon a time, we mapped our value stream, i.e. our development process, onto a kanban board. As we are a distributed team, we use an electronic tool. Since we already use VersionOne in our company, which somehow enables kanban, VersionOne is our reference kanban board.


The steps of our stream are:

  • (None): The prioritized backlog.
  • Study: The goal of this step is to make sure everyone agrees on what needs to be done. We study and estimate the story, split it if it is too big, split it into tasks, and define tests. I’ll come back to estimates, as they are not supposed to be part of kanban. We defined this step to make sure everyone is involved in the story’s definition and agreement.
  • Ready for dev: A queue.
  • Dev: We do the thing right, and we verify tests will pass.
  • Ready for test: Another queue.
  • Test: We verify the right thing was done. Stories can be blocked here, go back to Ready for Dev, or we can open tickets (issues, defects, whatever). I won’t get into detail here, because we are still experimenting. But I have a point of view, and we can talk further about this step and its issues if you’re interested.
  • Validation: A strange step, but an interesting one. Everyone loves reviews, i.e. demonstrations of what we did, so we formally kept them in the stream by adding a step that is a mix of a queue (items waiting for review) and something that we do (the review). But as we consider the review an atomic event compared to the cycle time, a queue is OK.
  • Accepted: Done done.

Definitions of Done

For every step that is not a queue, we have a Definition of Done, or a Definition of Ready for the next step.

Study done, or ready for dev

  • The story is estimated, and is no more than 8 story points.
  • Acceptance criteria are defined, understood, and agreed upon by everyone.
  • Tasks are defined and estimated.
  • A coarse-grained test plan is defined.
  • Everybody agrees to switch it to ready for dev.

Dev done, or ready for tests

  • All tasks are finished.
  • The CI is green.
  • Developers verified the test plan should pass, and the dev shouldn’t cause regressions.
  • Refactoring to be done is done.
  • “Enough” unit tests are written.
  • Developers agree that the code is ok.

Tests done, or ready for validation

  • The test plan passed and is green.
  • Exploratory tests were done.
  • “Enough” tests were automated.
  • Everybody agrees that the story is ok for validation.
  • If an issue is found but is quick to fix, the story remains in tests until a fix is provided. If only non-blocking issues are found, defects are opened and prioritized, and the story is accepted. Otherwise (i.e. blocking issues were found that can’t be fixed quickly), the story goes back to ready for dev.

Validation done, or ready for business

  • The story was demonstrated during a Review (same as in Scrum).
  • Everybody attending the presentation agrees. If people disagree, we discuss to see whether the story should be put back somewhere in the stream, or whether another backlog item should be created and prioritized.

WIP limits

We have set WIP (Work In Progress) limits for all steps, including queues (because VersionOne shows red columns when WIP limits are exceeded). WIP limits on queues let us check that queues remain at a reasonable size. I won’t talk much about this; it’s very classical, and I don’t have any particular comment on it so far… except maybe that limiting WIP definitely looks like a freaking good idea.

Explanations and comments

We focused on trying to make sure everyone is on the same page: QA is always involved, we foster communication at every step, we try to make sure we all agree at every step… And that’s what we actually do. We use phone, IM, and email a lot. Every day, we run a daily stand-up meeting remotely, on the phone. We spend as much time as necessary to ask or answer questions. It is very costly, but it’s the price to pay.

What we want to avoid at all costs is stories going back up the stream, e.g. from test to dev. But we only avoid it when it makes sense. If a story actually has issues that we consider blocking, we do put it back to dev. We want WIP limits to have a meaning. Therefore, when engineers actually work on an item, we want one of their slots to be actually occupied. And we don’t cheat on what a blocking issue is. If an acceptance criterion is not met, or if we just don’t agree, by default the story is not accepted, and it goes back to dev.

We kept many things from the iterative approach, like story points and tasks. There are several reasons for this choice:

  • We can somehow estimate/plan, based on our previous experience and metrics. We had a velocity and so on.
  • We can easily fall back to past approaches if kanban doesn’t fit, or if conditions change.
  • It’s not very expensive to maintain.
  • It might still be relevant (see below).

I hope it’s just gonna be transient. We should be able to get a lot of metrics from our kanban board very soon. In particular, I’d like to see lead times, throughputs, times within steps, variabilities, and round trips, depending on story types, sizes, and so on… I hope to see which figures influence cycle times and throughputs, and to get rid of the others (hoping the dropped figures don’t become meaningful once we get rid of some bottlenecks). From this analysis, I hope to confirm, for example, that estimates don’t really matter for the cycle time or the throughput, given that story sizes are in the same order of magnitude (i.e., in our project, from 1 to 8, which is already huge from my point of view).

What needs to be improved

We need our first figures: cycle time and throughput. This is a priority in order to be able to estimate. The goal of estimating is to be able to say: when is this item supposed to be done? “This item”, of course, can be anywhere in the backlog. The estimate should be something like:

estimate = (backlog size / throughput) + cycle time

where backlog size is the quantity of backlog that needs to enter the stream before “this item” does. So we desperately need info about cycle time and throughput.
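With made-up figures, the estimate above works out like this (a quick sketch; all numbers are illustrative):

```python
# Rough completion estimate from flow metrics (all numbers illustrative).
# estimate = (backlog ahead of the item / throughput) + cycle time

def estimated_days_until_done(backlog_ahead, throughput_per_day, cycle_time_days):
    """Days until the item is done: time for the backlog ahead of it to
    enter the stream, plus the time it spends crossing the stream."""
    return backlog_ahead / throughput_per_day + cycle_time_days

# Say 12 items must enter the stream before ours, the team finishes
# 1.5 items per day on average, and an item crosses the board in 4 days:
print(estimated_days_until_done(12, 1.5, 4))  # 12 / 1.5 + 4 = 12.0 days
```

Note that the cycle time only matters for the item itself; everything ahead of it is pure queue, which is why throughput dominates as soon as the backlog grows.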

It’s no surprise, but organizing remote retrospectives is complicated, not to say impossible. For now, we introspect informally, by talking. We insist that everyone should propose ideas, and we listen carefully to every point of view. And regarding continuous improvement, that’s about it. But we know we need to formally provoke introspection. We count on travel, in either direction, to trigger this, but it hasn’t happened yet. Finally, we might use A3s as well when it is time to do so.

For the moment, we have issues feeding the queues, because study takes a lot of time. We have already made some trade-offs in our process, but we still need to improve this. Once again, it’s no surprise: making sure we understand each other from different hemispheres takes a lot of time (touch time AND lead time).

Test automation was not a deep part of our culture, especially in pure QA teams like the one we have in India. Flex automation is a pain in the ear (censored). And, of course, short-term business is the short-term priority. In conclusion, we can say functional test automation is still to be done.

Improve, again and again. We must always keep this in mind. Our process must never be stable, because there’s always room for improvement, and conditions always change. The best process is the one that improves continually.

What we learned

Lean confirmed 3 problems I had with iterations, because it solved them. All of these issues are related to the fact that, in iterations, everything must always have the same pace.

For example, if we “pre-plan” (or present the stories to the team, or whatever) 3 days or a week before the actual iteration planning, we must take no more than this time to think about all the stories we’ll take into the iteration. But on big or less clear projects, questions will arise. If stories are not clear by the planning, they can’t be taken into the iteration, or they must be replaced by a spike. Causes are multiple: technical issues, functional or technical dependencies between stories or teams, functional questions, you can’t get the customer on time, the PO is not available enough at that moment, QA or developers think about cases the PO had never thought about, etc. If everything were clear the first time, we wouldn’t need to include everyone. So having a pre-defined agenda for understanding a story has been a problem on every iterative project I have worked on.

Another example? Stories need to be planned for the whole iteration. But that’s sometimes difficult, because a 2-week iteration, for example, doesn’t fit all cases. For some sets of stories it’s not enough; for some it’s too much. The first case is only sub-optimal; it’s not necessarily a problem. The second can be a real problem. Let me explain. In an iteration, I have several stories related to a common theme, because we want our reviews to be as epic as possible. We plan and estimate them, and the iteration starts. Generally, the first story will contain some tasks to prepare the field for all the stories, because they relate to the same part of the code, and a bit of architecture and tooling is needed. While we do this, we discover a lot of things. Sometimes good ideas, often difficulties. They can be technical or functional, by the way. So once we’ve done this first story, we often realize we should re-plan all the other stories. But we never do, and the velocity never gets predictable enough to be trustworthy.

Kanban solves this by getting rid of sets of stories to prepare. We prepare stories just in time, so we can wait for the discoveries of the first story before studying the others.

Another problem is that some necessary steps are not formally defined in iterative approaches, because they don’t belong to the iteration. I’m specifically referring to pre-planning, but, on big projects, there might be other steps, like integration testing. We can add specific ceremonies for these steps, but I have always had issues with them. The team is focused on its iteration, and it’s too often difficult to interrupt them.

In my past projects, pre-planning was too often postponed, if run at all. And the team couldn’t think about the stories of the iteration to come anyway, because they had to finish the current iteration. In the end, planning was always improvisation, and estimates were crap.

When I worked on projects where integration testing was needed, we had a dedicated team for it, which worked in iterations with a 2- or 3-day offset compared to the other teams. And when issues were found during integration testing, it was too late for the dev teams, because their iteration was over and the stories accepted anyway.

With kanban, as you will have understood, we solved our problems by adding the study step. This step is fully part of the story’s development, so it is actually and always done. And it takes the time it must take. If we had multiple teams with integration testing, we could use a multi-tier board including integration testing. It wouldn’t change the problem that much, but at least you would see it specifically.

The other solution brought by kanban is removing iterations. While I agree iterations can be great for motivating the team, I’ve personally never seen their benefit so far. Committing to a content for the iteration is too often a fantasy, and we quickly know that our commitment is not a real one. We discover too many things during the iteration, which is great, but which changes the rules of the game too soon. And a pace driven by iterations is as regular as following a stream, so it doesn’t really wake us up once 2 or 3 iterations are done.


So far, everybody is happy with kanban. We mapped the reality of our work. The team has great visibility. We’re gonna get great metrics for management very soon. I’m sure we’ll be able to make release plans and timely estimates. Everybody clearly knows what they need to do just by following the board.

As you can see, I’m not selling any church: we haven’t applied kanban by the book, we are recording a lot of figures compared to what we should, and we will check what needs to be kept. Hopefully, I will reach the same conclusions as David J. Anderson. If there are differences, I will be able to explain objectively why we changed things. I’m really excited about it!

Many things need to be improved at the moment, but I’m sure we are using the right approach for our project.

And to conclude this conclusion, I realize this article is really from the trenches: muddy, too long, deadly. If you reached this point, you can be proud, soldier. We are fighting for a good cause, my friend.

Who wants to criticize this, suggest improvements, or just discuss it? I’m excited to see your point of view.