Category: kanban

People are better in the unknown

We are currently discussing adding a team in another country to our product (today a single co-located team in Paris). This is the kind of conversation we’re having:

Program Manager: “We need to deliver Feature A in 6 months, and I don’t want to allocate more than 40% of our throughput to it. 40% needs to remain for expedite items, and 20% for support. Feature A alone would take about 80% of your current throughput, so we need a plan to double your throughput.”

Dev Director: “We have issues hiring here. And even if we can, it takes about 6 months from opening the job offer to having the dev in the office in Paris. So we think we’d rather open the job offers in Romania where it takes about 2 months instead of 6.”

The first thing that strikes me in this kind of conversation is that we treat people as resources (I didn’t use the term in the discussion, to avoid offending these virtual personas). By resource, I mean a linear resource, with a predictable and somewhat linear behavior. I’m not offended at being compared to a keyboard or a chair when I’m referred to as a resource. I’m just always surprised to be arguing with someone with experience and responsibility who never realized that 1+1 is not 2 when talking about people’s throughput. 1+1 might be 1, it might be 3, or 5, or sometimes 2. It depends.

A team’s behavior is complex and very hard to predict. If you add X% more people to a team, you won’t add X% throughput. In general, you’ll even lose Y% for a while (Y not necessarily being <100). Even if you account for interactions, you won’t be able to come up with an equation that predicts a team’s output.

While I was thinking about it, I bumped into two awesome [series of] articles: people are resilience creators, not resources, by Johanna Rothman, and the principles not rules series, by Iain McCowatt. They helped me put words to my point of view.

People don’t have a linear behavior. They learn, they socialize, they create bonds, interact, and create together. We are not predictable. Let’s just accept it and deal with it. And you know what? That’s what we are great at! We are great in the unknown, at adapting collectively.

We won’t be more predictable or efficient by following a process or a precise plan. Or at least not very often: actually, only in Cynefin’s simple domain. And when we adapt ourselves to bounded processes and predictability, we become good candidates for replacement by a machine, which would do better than us. Think of automated tests: they are great, but mostly for known knowns. The organization’s interest is not to get predictable behaviors out of people. At best, you may get somewhat predictable throughput out of stable teams, but you don’t want more than that.

Back to our original discussion. Whatever the plan, we don’t know what will happen. Given the time frame, the business context, and the code base we’re working on, we are quite sure that creating a team in a different country, speaking a different language, will have a negative effect on the release. But we’re not sure. We think creating a team in Romania will have a more negative impact than growing the team in Paris, but we don’t know. We think it might have a positive effect on throughput after some ramp-up period… I could go on for pages.

The thing is, the system doesn’t want to be sure about any of these assumptions. It’s not in the system’s interest. If the system could verify these assumptions, then people would have predictable behaviors, and that would be a bad thing for the organization.

So let’s start with a belief (e.g. “we can’t hire in Paris”), a value (e.g. “face to face collaboration”), a hypothesis (e.g. “a remote team could improve the throughput of this project”), a strategy (e.g. “scale that team”), and experiment/iterate on it.

Sweet WIP limit

Like many teams, we adopted a WIP limit. And like most of them, we saw the benefits of this practice right away.

WIP stands for work in progress. Limiting WIP thus means limiting the number of items the team works on at the same time. When your team reaches the maximum number of items it can work on, it simply stops starting new ones.

By limiting the WIP we create slack. Which is exactly how I finally managed to introduce it to the team: “what if, when we reach the limit, we just did nothing?” Though it’s not quite that simple, fear of the void is the main issue to overcome when you want a team to adopt a WIP limit: ultimately, when you reach the WIP limit, you stop working on the top-priority items. Instead, you help keep the items you’ve started flowing towards done-doneness. And when you can’t help anymore, you just do nothing. The rules, when we reach the WIP limit, are:

  • You don’t start any additional item.
  • You help finish started items.
  • If nothing can be done, you can do whatever you want, as long as it can be interrupted anytime to work on the project’s priorities.

So we created slack. And, as often described, the benefits to slack are huge:

  • Started items get done faster.
  • As a consequence, there is less uncertainty.
  • More reactivity.
  • Fewer interruptions.
  • Less WIP means less dependency issues within the team.
  • We have a tendency to pair more. Thus more quality, more tests, more refactoring.
  • More knowledge sharing, because we help each other more.
  • We do more to facilitate other teams’ flow.
  • We develop more tools that help us speed up our flow.

Let’s look at the concrete stuff. We did it in a slightly different way than the academic way, or than other teams on the project. Our flow is quite standard for a dev team.


Usually, you would have one WIP limit on Dev and Dev Done, and another one on Test. We tried something else. As we are a team of 4, we set a single WIP limit of 3 spanning Dev, Dev Done, AND Test (actually it’s 2 backlog items + 1 bug of at least major severity, but that doesn’t matter here).

Flow with WIP limit

We did this for several reasons:

  • As we have no dedicated tester in the project, we often test our backlog items ourselves. 
  • When a backlog item is rejected in test, it goes back to dev directly.

A WIP limit works because it’s applied to a production unit: a team. So it’s natural to apply the WIP limit to everything the team covers. And even when testing isn’t in our scope, we still have to take it into account, as an item being tested can come back to dev. Testing doesn’t always answer yes. Kanban comes from manufacturing, where an invalid item is usually scrapped or sent to a specialized team. In our case, an invalid item consumes a slot in the dev team that produced it.
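The team-wide limit described above (a single limit of 3 spanning Dev, Dev Done, and Test) can be sketched in a few lines. The `Board` class below is purely illustrative, not any real tool’s API:

```python
# Sketch of a WIP limit spanning several columns. The column names and the
# limit of 3 mirror the setup described above; the Board class is hypothetical.

WIP_COLUMNS = ("Dev", "Dev Done", "Test")
WIP_LIMIT = 3

class Board:
    def __init__(self):
        self.columns = {c: [] for c in ("Backlog",) + WIP_COLUMNS + ("Done",)}

    def wip(self):
        # Count every item sitting in any WIP-limited column.
        return sum(len(self.columns[c]) for c in WIP_COLUMNS)

    def pull(self):
        # A new item may only be started while the combined WIP is below the limit.
        if self.wip() >= WIP_LIMIT or not self.columns["Backlog"]:
            return None  # limit reached: help finish started items instead
        item = self.columns["Backlog"].pop(0)
        self.columns["Dev"].append(item)
        return item

board = Board()
board.columns["Backlog"] = ["story A", "story B", "story C", "story D"]
for _ in range(4):
    board.pull()
print(board.wip())  # 3: the fourth pull was refused
```

Note that an item bounced from Test back to Dev keeps its slot, which is exactly why the limit counts all three columns together.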

After a couple of iterations with this rule, I can confirm that the fear of the void is still the main impediment to overcome. We are cheating the WIP limit here and there, especially when managing bugs. We are still tweaking the WIP limit in order to make it fully functional.

I’ll keep you posted when I have some significant news. In the meantime, if you have any experience to share, please feel free to help.

Why technical user stories are a dead-end

As said in my previous post, our team works in the intermediate layers of the product. I didn’t get into details, but one of the reasons we need horizontal slices is that we have a huge job to do to improve the performance of a 4-year-old, very technical product. The task will take several months. We have a direction to go in, but we don’t know how to get there in detail. We need to iterate on it anyway, as a tunnel is the biggest risk in development. The common way of doing this is to iterate on small user stories. This is where problems begin.

First of all, user stories start with a user and a story. We have neither. We just have a big technical tool to implement. If we wanted to write user stories for each iteration, with a relevant title, each of them would be something like “as any user, I would like the product to be more reactive, because fast is fun”. Maybe we could suffix the titles with 1, 2, …37 to tell them apart. Some would prefer stating technical stuff like “as Johnny the developer of XYZ module, I would like more kryptonite in the temporal convector, because I need a solar field”… and would call it a user story. You see my point.

And we need a good title for a user story… sorry… a backlog item. Because it’s the entry point for the PO, the customer, the developers, the testers,… to understand WHAT is to be done and, more importantly, WHY (which is why I prefer beginning with the why in the title). Stating how we do stuff is irrelevant, because it’s only a dev matter; it doesn’t help us understand each other in any way. Which means we have no clue about how we will agree on its doneness. In other words, it’s nonsense to commit on such a story. So we hit a first dead-end: we need good descriptions, but we can’t provide them in a “user story” way.

The performance project is big because the new engine, which “caches” the default one, needs to implement a lot of the default engine’s rules. And releasing it is not relevant until it implements enough of these rules. Dividing the work by rules makes no sense: first because we need to build the basis of the new engine, which is already a big job; secondly because implementing a rule in the new engine is not equivalent to implementing it in the default one. The algorithms, the engines, the approaches are different. What is natural in one environment is a lot of work in the other, and vice versa. In addition, the work is so huge that no one can wrap their brain around the godzillions of details we need to implement. Finally, we don’t know in detail which rules the default engine implements. Said another way, we continuously discover what needs to be done, along the way. Even within each backlog item, we are discovering so much that we can’t estimate precisely enough at the beginning, or even make sure we agree on the same thing to be done. Not to mention longer-term planning (which we will see in another post). Why deny it? Discovery is our everyday reality.

So we have no user, no story, no clear short-term scope, but we agree that we need small items to iterate on. But wait! Agile has the solution: spikes! We timebox a short study, which we might iterate once or twice, and then we know how to go on. But that’s what we’ve already been doing for 3 or 4 iterations with a team of 4 developers; we are progressing, and we still don’t know clearly what we will do in the next iteration. So spikes might sound nice, but when you are at the bottom of a technical Everest, they need to be taken to the next level. We are not picking a logging library among 3 candidates.

So following the basic agile rules, we’re stuck. But there is a solution we’ll talk about in the next post.

Perfection is a verb

Ok, this sentence sounds like a fortune cookie. Maybe I’m turning Buddhist, but I love it. And it applies especially well to software development. As a craft based on human interactions, software development is a set of compromises. There’s no such thing as a best practice, a definitive process, or a methodology that works for all contexts. I don’t even understand how words like agile and process or methodology could ever have been put together in anyone’s mind.

Agile and lean are the approaches that work best according to what we know today. But they are just containers of values and practices that constantly evolve. They don’t tell you what words to use in discussions, or what lines of code to write for every single situation. They need to be adapted to every context. Scrum and lean startup by the book might be good enough to design a simple shopping cart, but not necessarily to design a car, a distributed NoSQL database, or nuclear plant software complying with all required regulations.

You always need more than what you can learn in trainings or books. Your situation is always unique and new. If it’s not new, why bother doing it? If you didn’t need to adapt the framework, why would Scrum define the basic team around the opposing powers of dev, backlog, and test? Why would lean be based on kaizen in the first place? A set of definitive rules, a process, would be enough.

In the end, you always need to adapt the way you build, collaborate, understand each other, find solutions, analyze gaps, code… and to get better at it. Always. There’s no end to getting better, because you can always be better, and because the context keeps evolving. And that’s what I love about it. You always can and must do better. Every time you’re not trying to get better, you’re getting a bit closer to the death of your team, project, organization, company. The world won’t wait for you.

So perfection is the goal, but you can never reach it. How exciting! That’s what the best approaches known so far tell us right from the beginning. You need to find the right compromise for every situation: test coverage vs test campaign maintainability, code performance vs readability, taking the time to discuss and understand each other vs saving meeting time, writing just enough documentation vs focusing on working software, self-organizing teams vs facilitating their progress, releasing features vs stabilizing the product… There are hundreds of parameters you need to balance. And you always need to tweak the knobs as the situation evolves.

In the next posts, I’ll try to talk a bit more about the tradeoffs we face day by day. What the books don’t say, and the concrete problems they don’t give solutions to. I’ll try to describe the approach we’re taking to oil the wheels. I hope it will save me a trip to the shrink.

Estimating, what is it good for?

We just had a very interesting discussion about estimates. We started by re-assessing the weight of our story points, but it led us to talking about why we need estimates at all.

I think the ideal goal of any framework or methodology, as for managers, is to disappear, leading to anarchy (but not chaos). Therefore, I look for any occasion to get rid of iterations, estimates, follow-up, or any other method or tool. I’d like to get to a minimal process focused on communication, and then make it disappear as it becomes natural.

Estimates can be useful, but they are expensive. And we weren’t clear about where they were useful. So we tried to think out of the box, and rebooted our thinking about the reasons why we needed estimates.

In a recent post about transitioning to agile, Dean Stevens said:

[We can] explain some benefits of an Agile Transformation this way:

  • Business wants predictable and improved throughput.
  • Customers want quick lead times for quality product.
  • Teams want a good working environment where they can contribute and succeed.

I thought the ultimate goal of estimating was to come up with a long-term plan (a few months) with some confidence. From my experience, customers demand some planning for their requirements.

But it was explained to me that getting to that end just to sell something was a commercial failure. Long-term planning is never reliable. It is not when it comes from the provider, and it is not when it comes from the customer! I know you rarely deliver exactly what you planned a few months ago. And does your customer still need exactly what he requested 6 months ago? Is he still ready to use it now that you’re delivering it?

Negotiating such plans precisely is a poker game where everybody loses.

You can still have a high-level long-term plan, but be clear on the fact that it is nothing more than a fuzzy wish-list based on the knowledge you have today. Don’t constrain your development process for this objective. Manage long-term planning outside of the development process, using tools like portfolio management or an epic/theme backlog, where you estimate your epics/themes in comparison with other [done] epics/themes, and not with development items like user stories.

The agile community agrees on the usefulness of the discussion around estimates. So do I. Estimating is a great way to make sure we have a discussion. It’s a great way to detect comprehension gaps between team members (devs, QA, business,…). It also allows devs to agree on smaller chunks of the story (tasks) to run in parallel in order to swarm. By having this discussion, with the objective of a consensus around estimates, we have a better chance of aligning our comprehension of the why, the what, and the how.

But estimating is only one way to get there. We could get to the same point with a different objective, like colorful or timely drawings, a list or a number of tests, mock-ups,… So again, if the goal of the estimate is only the discussion, if it’s one game among others to run during the iteration planning, we should not make the rest of the process heavier for it.

What about the process? Through estimates, you get a throughput. By making this throughput predictable, you make the process and the backlog predictable. By optimizing your throughput, you optimize your whole process. By detecting deviations in the throughput, you detect exceptions to your standard process (i.e. deviant deviations), and thus optimize it. And so on…

But there are other ways, like following up on cycle time, WIP, and thus bottlenecks. You can monitor your throughput as a number of items, not a number of story points. If your items are of a reasonable size, the standard deviation of your throughput and cycle time should be acceptable (compared to estimating error or your constraints). Agreeing on whether an item is of a “reasonable size” is estimating, but I believe it is a lot lighter than formal methods like planning poker.
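As an illustration of monitoring items rather than story points, here is how such figures could be derived from a log of finished items. The dates and items are made up:

```python
# Sketch: deriving throughput and cycle-time statistics from finished items,
# counting items rather than story points. Dates are made up for illustration.
from datetime import date
from statistics import mean, pstdev

# (started, finished) pairs for items completed over a 4-week window
items = [
    (date(2013, 3, 4), date(2013, 3, 8)),
    (date(2013, 3, 5), date(2013, 3, 12)),
    (date(2013, 3, 11), date(2013, 3, 14)),
    (date(2013, 3, 12), date(2013, 3, 20)),
    (date(2013, 3, 18), date(2013, 3, 26)),
    (date(2013, 3, 20), date(2013, 3, 29)),
]

cycle_times = [(done - start).days for start, done in items]
weeks = 4
throughput = len(items) / weeks   # items per week
ct_mean = mean(cycle_times)      # average cycle time, in days
ct_std = pstdev(cycle_times)     # its standard deviation

print(f"throughput: {throughput:.1f} items/week")
print(f"cycle time: {ct_mean:.1f} days, std dev {ct_std:.1f}")
```

If the standard deviation stays small relative to your constraints, the item count alone is enough to forecast with.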

That leaves us with short-term planning. Estimates are useful to know what you’ll do in the next few weeks. As for long-term planning, I’m not clear on the goal of this. Making sure you respect due dates is definitely a fair objective. Motivating teams with an objective to reach is another one. Prioritizing considering constraints and slack is also important. Having a prioritized backlog where teams can pull work without bothering a PO is great. Fine, we’ll suppose you need to know what you’ll be able to produce in the next few weeks. And drum roll…

BUT is estimating useful for having a prioritized development backlog and a short-term plan? It is if your throughput and cycle time standard deviations are too large. If they are, you need to sort your items into categories where these standard deviations are acceptable.

As a conclusion, what you need is buckets where cycle time and bandwidth occupation are close enough across all items. If you only have one bucket, it’s perfect. If you have more than 4 or 5 buckets, you should really think about it. Estimating is knowing which bucket fits your item, or splitting an item so that all sub-items fit one of your buckets.

For different levels of plannings, you can have different buckets. For example, you may have development buckets for the development backlog, and release buckets for the long-term backlog. Those buckets are not of the same nature. Neither are the items they contain. Don’t try to fit development items in long-term backlogs, or long-term items (epics/themes) in the development backlog, without further analysis.

As a conclusion conclusion: monitor your throughput, your cycle times, and their standard deviations, identify your buckets, and keep them as few as possible.

What do you think?

Coding dojo – a great team builder

Lately another team organized a coding dojo. The feedback was great.

The idea of a dojo is quite simple. You set an objective for about 1h of dev (games are cool). You begin with a pair of developers. All the others must be able to see them and what they do (e.g. using a projector). The developers are asked to say what they’re doing, and to apply practices like TDD. The navigator should also do his job, of course. Every 5 minutes approximately, the pair changes: the pilot leaves, the navigator becomes pilot, and someone else comes in as navigator.
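The rotation described above can be sketched as a small simulation (the names are made up):

```python
# Sketch of the dojo pair rotation: every ~5 minutes the pilot leaves,
# the navigator becomes pilot, and the next attendee joins as navigator.
from collections import deque

def rotations(attendees, rounds):
    """Yield (pilot, navigator) pairs for each rotation."""
    queue = deque(attendees)
    pilot, navigator = queue.popleft(), queue.popleft()
    for _ in range(rounds):
        yield pilot, navigator
        queue.append(pilot)  # the leaving pilot goes back in line
        pilot, navigator = navigator, queue.popleft()

for pair in rotations(["Ana", "Ben", "Chloe", "Dan"], 5):
    print(pair)
```

With 4 attendees and 5-minute rounds, everyone pilots and navigates within about 20 minutes, which is part of what makes the format so effective.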

Just looks like a game, right? But it is helpful for many aspects, not only for technical stuff. In this team particularly, most of the devs attending the dojos were not very experienced in software engineering and agile. Of course, they learned technical things from their peers, like:

  • patterns
  • IDE features and shortcuts
  • coding techniques
  • refactoring techniques

As everybody could experiment these in a safe environment and witness their benefits, it was also a great way to foster XP practices like:

  • pair programming
  • unit test first
  • always refactor

And last but not least, quite surprisingly, most of the feedback was also about observing how the team interacted:

  • what devs knew or not
  • how they analyzed problems or found solutions
  • how they interacted with their pilot/navigator

I was surprised by the outcome of this experiment… not because it didn’t meet my expectations, but because it went way further.

Thinking about it, the coding dojo is close to the optimal compromise between safety and realism for software engineering. You can try new things, observe their benefits, observe your teammates. You can do it without bad consequences but hands-on, in a very short but effective time. I will definitely use it for passing on dev messages.

Has anyone tried coding dojo before? Do you have any feedback about it?

Blow up agile methodology

In my last surreal list I hid a bonus non-sense: agile methodology. I already referred to it in previous posts, but hearing these words together makes me feel more and more upset. As part of my therapy, I need to exorcise this, so here we are.

Agile, lean, or even Scrum, Kanban, or XP, are not methodologies or processes. By this, I mean that they are not a predefined set of practices or metrics that you should apply by the book. It may sound like a puny fight about words, but it’s way more important than that to me. If you consider them rigid, you will find flaws for sure.

Agile, lean, you name it, are sets of principles. They can help you point out common problems, and they explain their approach to solving them. They often do this by turning traditional software beliefs upside down, by the way. That doesn’t mean they’re counter-intuitive: at least from my standpoint, they’re putting common sense back in the loop.

Even if we talk about applications of these principles, we’re still not talking about methodologies. We refer to Scrum, XP, or Kanban, as frameworks. Why is it so?

Because a central aspect of these is continuous improvement. How can we improve if our process must be constant? The way we organize, communicate, share, validate, and so on, is a central part of our performance. It must be adaptable to the situation. And guess what: it is!

Scrum talks about retrospectives, but is not prescriptive about their content or outcome. Lean says that “your lean process should be a lean process”. All they say is that even your practices should change regularly. You must introspect, evolve, in all directions, and adapt to your situation.

Some aspects of our work are not explained in detail by these approaches. And your context is different from others’, for sure. If it’s not new to some extent, it’s useless. Lean comes from manufacturing, and cannot be applied to software as is. So you must adapt these principles. And adaptation is central to them. The agile manifesto states “individuals and interactions over processes and tools” and “responding to change over following a plan”. You can’t sacrifice your performance to stick to an unadapted “process”.

And there’s a great aspect in this. Many things still need to be discovered. By trying, thinking, widening your horizon, you will find some great ideas that serve the whole community. You might even discover a profound flaw in these principles and propose a totally different approach that would be a revolution for the industry. You might invent the new heavy metal or techno and become a rock star. That’s what people did when they proposed the principles we’re working with today. And as odd as it may sound, they are still not perfect. So keep trying, break the walls to make sure you become and remain hyper productive, and share your experience with us.

What’s your opinion?

The world is red as a dead-end

A project well planned will be delivered as requested.

A project that fails is a badly planned project.

Only a precise contract and specifications can ensure the customer gets what he expects.

To maximize value, concentrate on features.

A good test campaign at the end of the project will solve issues.

A good engineer can produce reliable estimates.

Management is more important than production.

An experienced manager has the overall point of view, and can thus produce even better estimates.

The agile methodology is about sprinting on features.

Hardware is expensive.

Always start with a good architecture specification and development.

You can’t work with a customer that always changes his mind.

To maximize productivity, make sure your team includes specialists.

To maximize productivity, make sure your specialists work on what they do the best.

An automatic testing framework is a lot more expensive than running manual test campaigns.

An automatic testing framework can automatically test any software.

Clear specifications make everyone understand what must be done, and how.

Explain that work is a value. Working leads to employee happiness.

To maximize productivity, use your resources at least 100%.

Human resource.

Developers must maximize their time on their IDE.

Software engineering is an industrial process.

The manager’s schedule is so tight, engineers must understand and fit his constraints.

You always work better if you’re quietly isolated.

Your project will be more profitable if you minimize resource cost.

Projects are cheaper if you outsource development or QA.

A precise set of specifications allows developers to produce the expected software.

Problems occur when employees don’t follow the process tightly.

The process must be the same for every part of the company.

You achieve software consistency by sharing a framework that covers every aspect of the platform.

To sprint on features, you can’t share code with other teams.

The manager is the single point of decision.

The manager must check that the process is tightly applied.

To modify a piece of code, you must ask the architect.

The more you work, the more you produce.

To control your company expenditures, plan a budget yearly and tightly.

I think it’s more than enough for today. Who wants to go on?

Kanban experiments from the trenches

This article is gonna be too long again, so no introduction, right on… oh yeah, hi!


Our team is made up of 3+ software engineers, 1 requirement analyst (or proxy PO), and half a team leader in Paris, plus 2 QA engineers in India… I know, please don’t insist. Until this project, all agile projects in our company were managed with a purely iterative, Scrum-like approach. With 5 people in Paris and 2 in India, we couldn’t apply these principles anymore. It was the perfect reason for me to try lean formally. I was already very interested in lean; all I needed was a good excuse. And it allowed me to confirm a few problems I had with agile.

Once upon a time, we mapped our value stream, i.e. our development process, into a Kanban board. As we are a distributed team, we use an electronic tool. As we are already using VersionOne in our company, which somehow enables kanban, VersionOne is our reference kanban board.


The steps of our stream are:

  • (None): The prioritized backlog.
  • Study: The goal of this step is to make sure everyone agrees on what needs to be done. We study and estimate the story, split it if it is too big, break it into tasks, and define tests. I’ll also come back to estimates, as they are not supposed to be part of kanban. We defined this step to make sure everyone is involved in the story’s definition and agreement.
  • Ready for dev: A queue.
  • Dev: We do the thing right, and we verify tests will pass.
  • Ready for test: Another queue.
  • Test: We verify the right thing was done. Stories can be blocked here, get back to Ready for Dev, or we can open tickets (issues, defects, whatever). I won’t get into detail here, because we are still experimenting. But I have a point of view, and we can talk further about this step and issues if you’re interested.
  • Validation: Strange step, but an interesting one. Everyone loves Reviews, i.e. demonstrations of what we did. Therefore, we formally kept this in the stream, by adding a step that is a mix of a queue (items waiting for review) and something that we do (the review). But as we consider the review as an atomic event compared to the cycle time, a queue is ok.
  • Accepted: Done done.
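The stream above could be modeled roughly like this; the `wip_limit` values shown are placeholders for illustration, not our actual figures:

```python
# Rough model of the value stream above: each step is either a queue (a buffer)
# or a work step carrying a Definition of Done. The wip_limit values are
# placeholders, not our real limits.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    name: str
    is_queue: bool
    wip_limit: Optional[int]  # None means unlimited

STREAM = [
    Step("Backlog", is_queue=True, wip_limit=None),
    Step("Study", is_queue=False, wip_limit=2),
    Step("Ready for dev", is_queue=True, wip_limit=3),
    Step("Dev", is_queue=False, wip_limit=3),
    Step("Ready for test", is_queue=True, wip_limit=3),
    Step("Test", is_queue=False, wip_limit=2),
    Step("Validation", is_queue=True, wip_limit=5),
    Step("Accepted", is_queue=True, wip_limit=None),
]

# The steps where active work happens (and which carry a Definition of Done)
work_steps = [s.name for s in STREAM if not s.is_queue]
print(work_steps)
```

Validation is marked as a queue here, as the post suggests, even though it still gets its own definition of done below.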

Definitions of Done

For every step that is not a queue, we have a Definition of Done, or a Definition of Ready for the next step.

Study done, or ready for dev

  • The story is estimated, and is no more than 8 story points.
  • Acceptance criteria are defined, understood, and agreed upon by everyone.
  • Tasks are defined and estimated.
  • A coarse-grained test plan is defined.
  • Everybody agrees to switch it to ready for dev.

Dev done, or ready for tests

  • All tasks are finished.
  • The CI is green.
  • Developers verified the test plan should pass, and the dev shouldn’t cause regressions.
  • Refactoring to be done is done.
  • “Enough” unit tests are written.
  • Developers agree that the code is ok.

Tests done, or ready for validation

  • The test plan passed and is green.
  • Exploratory tests were done.
  • “Enough” tests were automated.
  • Everybody agrees that the story is ok for validation.
  • If an issue is found, but is quick to fix, the story remains in tests until a fix is provided. If only non-blocking issues are found, defects are opened and prioritized, and the story is accepted. Otherwise (i.e. blocking issues that can’t be fixed quickly were found), the story gets back to ready for dev.

Validation done, or ready for business

  • The story was demonstrated during a Review (same as in Scrum).
  • Everybody attending the presentation agrees. If people disagree, then we discuss in order to see if the story should be put back somewhere in the stream, or if another backlog item should be created and prioritized.

WIP limits

We have set WIP (Work In Progress) limits for all steps, including queues (because VersionOne can show columns in red when WIP limits are exceeded). WIP limits on queues allow us to check that queues remain at a reasonable size. I won’t talk much about this, it’s very classical, and I don’t have any particular comment on it so far… except maybe that limiting WIP definitely looks like a freaking good idea.

Explanations and comments

We focused on trying to make sure everyone is on the same page: QA is always involved, we foster communication at every step, we try to make sure we all agree at every step… And that’s what we actually do. We use phone, IM, and email a lot. Every day, we run a daily stand-up meeting remotely, on the phone. We spend as much time as necessary to ask or answer questions. It is very costly, but it’s the price to pay.

What we want to avoid at all costs is stories going back up the stream, e.g. from test to dev. But we only avoid it when it makes sense. If a story has issues that we consider blocking, we actually put it back in dev. We want WIP limits to have meaning: when engineers actually work on an item, we want one of their slots to actually be occupied. And we don’t cheat on what a blocking issue is. If an acceptance criterion is not met, or if we just don’t agree, by default the story is not accepted, and it goes back to dev.

We kept many things from iterative, like story points and tasks. There are several reasons for this choice:

  • We can somehow estimate/plan, based on our previous experience and metrics. We had a velocity and so on.
  • We can easily fall back to past approaches if kanban doesn’t fit, or if conditions change.
  • It’s not very expensive to maintain.
  • It might still be relevant (see below).

I hope it’s just gonna be transient. We should be able to get a lot of metrics from our kanban board very soon. In particular, I’d like to see lead times, throughputs, times within steps, variabilities, and roundtrips, depending on story types, sizes, and so on. I hope to see which figures influence cycle times and throughputs, and get rid of the others (hoping the dropped figures don’t become meaningful once we remove some bottlenecks). From this analysis, I hope to confirm, for example, that estimates don’t really matter for cycle time or throughput, as long as story sizes stay within the same order of magnitude (in our project, from 1 to 8, which is already huge from my point of view).
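As a sketch of the kind of analysis I have in mind: lead time and throughput can already be derived from just two timestamps per card. The card history and dates below are invented for the example:

```python
from datetime import date

# Hypothetical card history: (story, entered the stream, accepted).
history = [
    ("S1", date(2010, 3, 1), date(2010, 3, 9)),
    ("S2", date(2010, 3, 2), date(2010, 3, 14)),
    ("S3", date(2010, 3, 5), date(2010, 3, 12)),
]

# Lead time per story, in days, and the average over the sample.
lead_times = [(done - entered).days for _, entered, done in history]
avg_lead_time = sum(lead_times) / len(lead_times)   # (8 + 12 + 7) / 3 = 9.0

# Throughput: stories finished per week over the observed window.
window_days = (max(done for *_, done in history)
               - min(entered for _, entered, _ in history)).days
throughput_per_week = len(history) * 7 / window_days  # 3 stories over 13 days

print(avg_lead_time, round(throughput_per_week, 2))
```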

What needs to be improved

We first need figures: cycle time and throughput. This is a priority, because it’s what lets us estimate. The goal of estimating is to be able to answer: when is this item supposed to be done? “This item”, of course, can be anywhere in the backlog. The estimate should be something like:

estimate = (backlog size / throughput) + cycle time

where backlog size is the amount of backlog that needs to enter the stream before “this item”. So we desperately need data about cycle time and throughput.
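The estimate above translates directly into code. The numbers in the usage example are invented, just to show the shape of the calculation:

```python
def estimated_days(backlog_ahead, throughput_per_day, cycle_time_days):
    """Rough ETA for an item: the time needed for the backlog ahead of it
    to enter the stream, plus the time the item itself spends in the stream."""
    return backlog_ahead / throughput_per_day + cycle_time_days

# 12 items ahead of ours, half a story finished per day, 9-day cycle time:
print(estimated_days(12, 0.5, 9))  # → 33.0
```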

That’s no surprise, but organizing remote retrospectives is complicated, not to say impossible. For now, we introspect informally, by talking. We insist that everyone should propose ideas, and we listen carefully to every point of view. Regarding continuous improvement, that’s about it. But we know we need to formally provoke introspection. We count on travel, in either direction, to provoke it, but it hasn’t happened yet. Finally, we might use A3 as well when the time comes.

For the moment, we have trouble feeding the queues, because study takes a lot of time. We have already made some trade-offs in our process, but we still need to improve this. Once again, it’s no surprise: making sure we understand each other from different hemispheres takes a lot of time (touch time AND lead time).

Test automation was not a deep part of our culture, especially in pure QA teams like the one we have in India. Flex automation is a pain in the ear (censored). And, of course, short-term business is the short-term priority. In conclusion, functional test automation is still to be done.

Improve, again and again. We must always keep this in mind. Our process must never be stable, because there’s always room for improvement, and conditions always change. The best process is the one that improves continually.

What we learned

Lean confirmed 3 problems I had with iterations, because it solved them. All of these issues stem from the fact that, in iterations, everything must always move at the same pace.

For example, if we “pre-plan” (or present the stories to the team, or whatever you call it) 3 days or a week before the actual iteration planning, we get no more than that time to think about all the stories we’ll take for the iteration. But on big projects, or ones that are not perfectly clear, questions will arise. If stories are not clear by planning time, they can’t be taken into the iteration, or they must be replaced by a spike. The causes are multiple: technical issues, functional or technical dependencies between stories or teams, functional questions, you can’t reach the customer in time, the PO is not available enough at that moment, QA or developers think of cases the PO had never thought about, etc. If everything were clear the first time, we wouldn’t need to involve everyone. So having a pre-defined agenda for understanding a story has been a problem on every iterative project I’ve worked on.

Another example? Stories need to be planned for the whole iteration. But that’s sometimes difficult, because a 2-week iteration, for example, doesn’t fit all cases. For some sets of stories it’s not enough, for others it’s too much. The first case is only sub-optimal; it’s not necessarily a problem. The second can be a real one. Let me explain. In an iteration, I have several stories related to a common theme, because we want our reviews to be as epic as possible. We plan and estimate them, and the iteration starts. Generally, the first story contains some tasks that prepare the ground for all the others, because they touch the same part of the code, and a bit of architecture and tooling is needed. While doing this, we discover a lot of things: sometimes good ideas, often difficulties, technical or functional by the way. So once we’ve done this first story, we often realize we should re-plan all the others. But we never do, and the velocity never gets predictable enough to be trustworthy.

Kanban solves this by getting rid of sets of stories to prepare. We prepare stories just in time, so we can wait for what the first story reveals before studying the others.

Another problem is that some necessary steps are not formally defined in iterative approaches, because they don’t belong to the iteration. I’m specifically referring to pre-planning, but on big projects there might be other steps, like integration testing. We can add specific ceremonies for these steps, but I’ve always had issues with them: the team is focused on its iteration, and it’s too often difficult to interrupt them.

In my past projects, pre-planning was too often postponed, if run at all. And the team couldn’t think about the stories of the coming iteration anyway, because they had to finish the current one. In the end, planning was always improvisation, and estimates were crap.

When I worked on projects that needed integration testing, we had a dedicated team for it, working in iterations offset by 2 or 3 days from the other teams. And when issues were found during integration testing, it was too late for the dev teams, because their iteration was over and the stories had been accepted anyway.

With kanban, as you will have understood, we solved our problems by adding the study step. This step is fully part of the story’s development, so it is actually and always done. And it takes the time it must take. If we had multiple teams with integration testing, we could use a multi-tier board including integration testing. It doesn’t change the problem that much, but at least you see it explicitly.

The other solution brought by kanban is removing iterations. While I agree iterations can be great for motivating the team, I’ve personally never seen their benefit so far. Committing to a content for the iteration is too often a fantasy, and we quickly know that our commitment is not a real one. We discover too many things during the iteration, which is great, but which changes the rules of the game too soon. And a pace driven by iterations is as regular as floating down a stream, so it doesn’t really wake us up once 2 or 3 iterations are done.


So far, everybody is happy with kanban. We mapped the reality of our work. The team has great visibility. We’re gonna get great metrics for management very soon. I’m sure we’ll be able to build release plans and timely estimates. Everybody clearly knows what they need to do just by following the board.

As you can see, I’m not selling any church: we haven’t applied kanban by the book, we are recording a lot more figures than we probably should, and we will check what needs to be kept. Hopefully, I will reach the same conclusions as David J. Anderson. If there are differences, I will be able to explain objectively why we changed things. I’m really excited about it!

Many things need to be improved at the moment, but I’m sure we are using the right approach for our project.

And to conclude this conclusion, I realize this article is really from the trenches: muddy, too long, deadly. If you reached this point, you can be proud, soldier. We are fighting for a good cause my friend.

Who wants to criticize this, suggest improvements, or just exchange about it? I’m excited about seeing your point of view.

I talked with an alien

This summer, a lifetime ago, I was with my girlfriend in Bilbao. As we were trying to find our way back to our hotel, we bumped into someone who could finally help us. Of course, he could only speak Spanish. Of course, neither of us could. Anyway, by dancing and singing we managed to make our point. When he took the lead, he explained where to go as best he could. Quite surprisingly, we understood each other easily without a word in common.

We generally hear that 50 to 80% of human communication is non-verbal. When you’re abroad, it can go up to 100%, and you can still understand each other about simple things. You generally don’t believe this until you experience it. So it’s quite complicated to explain to people who don’t [want to] believe you that a conf call is not enough. OK, you can still argue that your brain has to work a lot harder to make up for the missing cues when you don’t face the people you’re talking with, which is what makes you dangerous when driving with a phone. Everybody has experienced this. That’s not much of an argument compared to the price difference between a conf call and plane tickets, but who knows. If someone has a good game or argument to suggest, please don’t hesitate.

In the meantime, yes: encourage face-to-face meetings, eat and drink with your peers, try decreasing the number of your emails and phone calls, have fun! I’ll also try to make my posts shorter and save my time to talk with you face to face.