Category: agile

People are better in the unknown

We are currently discussing adding a team in another country to our product (currently a single collocated team in Paris). This is the kind of conversation we’re having:

Program Manager: “We need to deliver Feature A in 6 months, and I don’t want to allocate more than 40% of our throughput to it. 40% needs to remain for expedite work, and 20% for support. Feature A alone would take about 80% of your current throughput, so we need a plan to double your throughput.”

Dev Director: “We have issues hiring here. And even if we can, it takes about 6 months from opening the job offer to having the dev in the office in Paris. So we think we’d rather open the job offers in Romania where it takes about 2 months instead of 6.”
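For the record, the PM’s numbers are at least self-consistent: if the current throughput is T, Feature A needs 0.8 T; for that to fit into 40% of a future throughput T’, you need 0.8 T = 0.4 T’, i.e. T’ = 2 T. The real problem lies elsewhere: in the hidden assumption that throughput adds up linearly when you add people, which is exactly what the rest of this post questions.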

The first thing that strikes me in this kind of conversation is that we consider people as resources (I didn’t use the term in the discussion, to avoid offending these virtual personas). When I say resource, I mean a linear resource, with a predictable and somewhat linear behavior. I’m not offended to be compared to a keyboard or a chair when I’m referred to as a resource. I’m just always surprised to be arguing with someone with experience and responsibility who has never realized that 1+1 is not 2 when talking about people’s throughput. 1+1 might be 1, it might be 3, or 5, or sometimes 2. It depends.

A team has complex behavior, and it’s very hard to predict. If you add X% more people to a team, you won’t add X% throughput. In general, you’ll even lose Y% for a while (and Y is not necessarily below 100). Even if you account for interactions, you won’t be able to come up with an equation that predicts a team’s output.

While I was thinking about it, I bumped into two awesome articles (series of articles, actually): “People are resilience creators, not resources” by Johanna Rothman, and the “Principles not rules” series by Iain McCowatt. They helped me put words to my point of view.

People don’t behave linearly. They learn, they socialize, they create bonds, interact, and create together. We are not predictable. Let’s just acknowledge it and deal with it. And you know what? That’s what we are great at! We are great in the unknown, at adapting collectively.

We won’t be more predictable or efficient by following a process or a precise plan. Or at least not very often: actually, only in Cynefin’s simple domain. And when we’re confined to bounded processes and predictability, we’re good candidates for being replaced by a machine, which would do better than us. Think of automated tests: they are great, but mostly for known knowns. The organization’s interest is not to get predictable behaviors out of people. At best, you may get a somewhat predictable throughput out of stable teams, but you shouldn’t want more than that.

Back to our original discussion. Whatever the plan, we don’t know what will happen. Given the time frame, the business context, and the code base we’re working on, we are quite sure that creating a team in a different country, speaking a different language, will have a negative effect on the release. But we’re not sure about it. We think creating a team in Romania will have a more negative impact than growing the team in Paris, but we don’t know. We think it might have a positive effect on throughput after some ramp-up period… I could go on for pages.

The thing is, the system doesn’t want to be sure about any of these assumptions. It’s not in the system’s interest. If the system could predict the outcome of these assumptions, then people would have predictable behaviors, and that would be a bad thing for the organization.

So let’s start with a belief (e.g. “we can’t hire in Paris”), a value (e.g. “face to face collaboration”), a hypothesis (e.g. “a remote team could improve the throughput of this project”), a strategy (e.g. “scale that team”), and experiment/iterate on it.

A veto killed the team

You’ve worked for several days on a complex problem with a couple of teammates. You are proud of your approach and your code, and the three of you high-five when the CI server and the tests confirm everything works fine. But then the guy shows up, has a look at your code, and tells you that this detail is not the way things are done here. You ask him for details and try to convince him your solution is OK, but he doesn’t agree, and he tells you how to rewrite that part of the code. You know he’s considered right, so you just do as he says. The consequences are huge. You get 2 additional days of refactoring to comply with the rules, and you end up with code you’re not so proud of.

Next time, you know you will ask the guy first. You will wait for his approval up-front, take his instructions into account, and do as he says. As you ask, he will tell you all the details of how he would have done it. You’d better take notes of every detail, because you don’t want to rewrite your stuff next time. And if anything doesn’t work as expected, you know you’ll ask for his advice.

You got a veto.

Veto is a smell: it strikes after something has already been decided, discussed, or done. Which means waste.

Veto tends to prevent team empowerment. If you need approval from someone who is likely to veto, the outcome comes quickly: people start waiting for detailed specifications, or micro-management, from this guy. He becomes the only brain in the team, and the bottleneck.

Counter-measures:

  • Allow mistakes. A team may be wrong when it decides something, and that’s OK. If you accept mistakes, you may get good surprises: what you consider a mistake up-front might turn out to be a great solution.
  • Limit batch size. Small batch size means lower risk. Lower risk means higher resilience to mistakes.
  • Make sure vetoers coach beforehand instead of playing the card afterwards. With great power comes great responsibility. Everybody’s job is to make themselves useless. If you have veto power, ask questions like “what if”, “how will you know”, “how should we mitigate”… Get along with people, coach them.
  • Make sure vetoers use their veto only on actual issues with high risk and high impact: not on potential issues, not on issues that would be detected right away anyway, not on issues with low impact, and not on issues with known counter-measures.
  • Make sure your team is made up of opposing powers. This is the only guarantee of balance. If the vetoer shuts down a silent minority, you have a smell, and only with attention can you detect it. If you have as much power as the vetoer, pay attention that the veto only occurs in extreme situations.
  • If alignment is needed between teams, explain why and discuss it. Note that alignment is far less needed than you may think.
  • Limit veto power.
  • Include people with veto power from the start of decisions (which is rarely possible, as sufficiently powerful people are rarely available).
  • If nothing else works, remove the vetoer from the team. Every time I saw a supposedly indispensable person leave a team, willingly or not, the outcome was surprisingly better. At the very least, you can run the experiment and see how it works.

The veto is a very dangerous tool. If you use it in your team, use it with extra care, only when all other tools have failed, and only when the consequences of not using it are too serious. If you couldn’t convince people of your perspective beforehand, or make sure they were able to reach what you consider the right decision on their own, you should consider that a failure. And you shouldn’t resort to dictatorship to make up for it. Your teammates don’t have this tool.

Everybody has great ideas and their own perspective. Take advantage of that. Help the teams become autonomous, and you’ll get great results.

Sweet WIP limit

Like many teams, we adopted WIP limits. And like most of them, we saw the benefits of this practice right away.

WIP stands for work in progress. Limiting WIP thus means limiting the number of items the team works on at the same time. When the team reaches the maximum number of items it can work on, it simply stops starting new ones.

By limiting WIP, we create slack. Which is exactly how I finally managed to introduce it in the team: “what if, when we reach the limit, we just did nothing?” Though it’s not as simple as that, fear of the void is the main issue to overcome when you want a team to adopt a WIP limit: ultimately, when you reach the WIP limit, you stop picking up the top-priority items. Instead, you help keep the items you’ve started flowing towards done. And when you can’t help anymore, you just do nothing. Actually, the rules, when we reach the WIP limit, are as follows (a minimal sketch of the rule as code comes after the list):

  • You don’t start any additional item.
  • You help finish started items.
  • If nothing can be done, you can do whatever you want, as long as it can be interrupted anytime to work on the project’s priorities.
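To make the rule concrete, here is a minimal sketch of the pull policy as code. The Board class and its names are mine, for illustration; they are not our actual tooling:

```python
# A minimal sketch of the pull rule above. Board and its methods are
# illustrative names, not our actual tooling.

class Board:
    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.in_progress = []

    def try_start(self, item):
        """Pull a new item only if the team is below its WIP limit."""
        if len(self.in_progress) >= self.wip_limit:
            # Rule: don't start anything new. Help finish started items,
            # or pick interruptible slack work instead.
            return False
        self.in_progress.append(item)
        return True

    def finish(self, item):
        self.in_progress.remove(item)


board = Board(wip_limit=3)
for item in ["item-1", "item-2", "item-3", "item-4"]:
    started = board.try_start(item)
    print(item, "started" if started else "blocked: WIP limit reached")
```

The whole point is that `try_start` returning False is not an error condition: it’s the signal that creates slack.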

So we created slack. And, as often described, the benefits of slack are huge:

  • Started items get done quicker.
  • As a consequence, there is less uncertainty.
  • More reactivity.
  • Fewer interruptions.
  • Less WIP means fewer dependency issues within the team.
  • We tend to pair more, hence more quality, more tests, more refactoring.
  • More knowledge sharing, because we help each other more.
  • We do more to facilitate other teams’ flow.
  • We develop more tools that help us speed up our flow.

Let’s look at the concrete stuff. We did it in a slightly different way than the textbook way, or than other teams on the project. Our flow is quite standard for a dev team.

Flow

Usually, you would have one WIP limit on Dev and Dev Done, and another one on Test. We tried something else. As we are a team of 4, we set a single WIP limit of 3 spanning Dev, Dev Done, AND Test (actually it’s 2 backlog items + 1 bug of at least major severity, but that doesn’t matter here).

Flow with WIP limit

We did this for several reasons:

  • As we have no dedicated tester in the project, we often test our backlog items ourselves. 
  • When a backlog item is rejected in test, it goes back to dev directly.

A WIP limit works because it’s applied to a production unit: a team. So it’s natural to apply the WIP limit to everything the team covers. And even when testing isn’t in our scope, we still have to take it into account, as an item being tested can still come back to dev. Testing doesn’t always say yes. Kanban comes from manufacturing, where an invalid item is usually scrapped, or sent to a specialized team. In our case, an invalid item keeps consuming a slot in the dev team that produced it.
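Here is a sketch of what the spanning limit means in practice, with the 2-backlog-items + 1-bug split mentioned above. Column names, item types, and the data layout are illustrative:

```python
# Sketch of the spanning limit: one count across Dev, Dev Done and Test.
# Column names, item types and the dict layout are illustrative.

SPANNED_COLUMNS = {"Dev", "Dev Done", "Test"}
LIMITS = {"backlog_item": 2, "bug": 1}  # our 2 + 1 split

def can_pull(board, item_type):
    """May the team pull a new item of this type?"""
    count = sum(1 for item in board
                if item["column"] in SPANNED_COLUMNS
                and item["type"] == item_type)
    return count < LIMITS[item_type]

board = [
    {"id": "A", "type": "backlog_item", "column": "Dev"},
    {"id": "B", "type": "backlog_item", "column": "Test"},
]
# Item B is "only" in Test, but it still holds a slot: it may come back.
print(can_pull(board, "backlog_item"))  # False
print(can_pull(board, "bug"))           # True
```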

After a couple of iterations with this rule, I can confirm that the fear of the void is still the main impediment to overcome. We cheat on the WIP limit here and there, especially when managing bugs. We are still tweaking the limit to make it fully functional.

I’ll keep you posted when I have significant news. In the meantime, if you have any experience to share, please feel free to help.

Known or unknown unknowns we know we don’t know or not

In this video, Adrian Howard talks about replacing academic user stories with hypotheses to validate. In short, he proposes asking more questions about what you need to do. You even need to question why, as this role, Bob wants that so that blah blah, or whether he really wants or needs it at all. This is a great proposal. He justifies it by the fact that we need to focus more on what we know we don’t know: the known unknowns. We tend to treat user stories as a specification; he wants us to question even what the user story asks the team to implement.

The hypothetical user story doesn’t contradict my previous post about iterative experiments. I had actually thought about this possibility, without going as deep as Adrian Howard does. But I thought it wasn’t enough for our case, where we don’t know what we don’t know. Anyway, thinking about it, and putting it in parallel with known/unknown knowns/unknowns, helped me understand something about what we do. When we tackle a topic, we go through several phases:

We explore. In this phase, we need to wrap our brains around why we need to do something, what we want to do, and how we should do it. We mostly don’t know. We don’t even know what we don’t know. We need to experiment and learn, about anything: the scope, the user, the stakeholders, the technology, the constraints… As we don’t know what we don’t know, we can’t have a precise goal to reach. As we can’t rely on the result, we timebox. As we want to learn, we focus on what we learnt at the end, and re-assess where to go from there.

We validate hypotheses. In this phase, we know that we don’t know stuff, and we need to clarify what we know we don’t know. The best way we know to validate a hypothesis is the lean startup way: as cheap and as scientific as possible.

We know what to do, how to do it, and why. Yes, this can also happen. In this case, a user story works best, when there’s an actual user with a story. Acceptance tests are also great. A specification might also do the trick.

Note that these phases are not sequential. We may start with a hypothesis to validate, or with technical backlog items to install a CI server. Nor do these phases describe the lifecycle of a project. A project is a sequence of releases, epics, and topics, and each of these requires exploration, hypotheses, and more linear implementation phases. Finally, you may need to experiment on what to do while being confident about the technology. In other words, the why, the how, and the what may each be tackled in different ways. As a consequence, these phases are interlaced in a project, from the beginning to the end (I’ve never seen an end, but I hear it happens).

Now we know we need these three types of backlog items, so we need to know when to apply each of them. The problem is that we can’t know when we don’t know enough: one of these categories is about UNKNOWN unknowns. You could always pick the iterative experiment, but it’s not really the most efficient option, and not the most predictable at all in terms of planning. So you need to pick the right tool when you start a backlog item. And this is where I will leave you with your guts: I don’t have a formula for choosing what a backlog item will mainly be about. After all, if you make the wrong choice, you’ll learn from your mistake. This is the beauty of doing small iterative things.

So what do you think? What do you know I don’t know I don’t know, or don’t know I know? Please share your experience.

Why technical user stories are a dead-end

As I said in my previous post, our team works in the intermediate layers of the product. I didn’t get into details, but one of the reasons we need horizontal slices is that we have a huge job to do improving the performance of a 4-year-old, very technical product. The task will take several months. We have a direction to go in, but we don’t know how to get there in detail. We need to iterate on it anyway, as a tunnel is the biggest risk in development. The common way of doing this is to iterate on small user stories. This is where the problems begin.

First of all, user stories start with a user and a story. We have neither. We just have a big technical tool to implement. If we wanted to write user stories for each iteration, with a relevant title, each of them would be something like “as any user, I would like the product to be more reactive, because fast is fun”. Maybe we could suffix the titles with 1, 2, … 37 to tell them apart. Some would prefer stating technical stuff like “as Johnny the developer of the XYZ module, I would like more kryptonite in the temporal convector, because I need a solar field”… and would call it a user story. You see my point.

And we do need a good title for a user story… sorry… a backlog item. Because it’s the entry point for the PO, the customer, the developers, the testers… to understand WHAT is to be done and, more importantly, WHY (which is why I prefer beginning with the why in the title). Stating how we do stuff is irrelevant, because it’s only a dev matter; it doesn’t help us understand each other in any way. Which means we don’t have any clue about how we will agree on the item’s doneness. In other words, it’s nonsense to commit on such a story. So here is our first dead-end: we need good descriptions, but we can’t provide them the “user story” way.

The performance project is big because the new engine, which “caches” the default one, needs to implement a lot of the default engine’s rules. And releasing it is not relevant until it implements enough of them. Dividing the work by rules makes no sense: first because we need to build the basis of the new engine, which is already a big job; second because implementing a rule in the new engine is not equivalent to implementing it in the default one. The algorithms, the engines, the approaches are different. What is natural in one environment is a lot of work in the other, and vice-versa. In addition, the work is so huge that no one can wrap their brain around the godzillions of details we need to implement. Finally, we don’t know in detail the rules implemented by the default engine. Said another way, we continuously discover what needs to be done, along the way. Even within a single backlog item, we discover so much that we can’t estimate precisely enough at the beginning, or even make sure we agree on the same thing to be done. Not to mention longer-term planning (which we’ll see in another post). Why deny it? Discovery is our everyday reality.

So we have no user, no story, no clear short-term scope, but we agree that we need small items to iterate on. But wait! Agile has the solution: spikes! We timebox a short study, which we might iterate once or twice, and then we know how to go on. Except that’s what we’ve already been doing for 3 or 4 iterations with a team of 4 developers; we are progressing, and we still don’t know clearly what we will do in the next iteration. Spikes might sound nice, but when you are at the bottom of a technical Everest, they need to be taken to the next level. We are not picking a logging library among 3 candidates.

So following the basic agile rules, we’re stuck. But there is a solution we’ll talk about in the next post.

Code re-use can be as bad as code duplication

Code duplication is one of the top evils in software development. But as strange as it may sound, code re-use can be a pain in the eyes too. Let me explain.

Our team works on the intermediate layers of the product. Yes, sometimes horizontal slices are the way to go. Our product is about monitoring. It’s made up of a home-made database, an intermediate layer that defines and computes analytics, a framework to define specific applications, and the applications built on top of the platform. Our team works on the analytics. The main point here is that we build frameworks. It’s actually what I’ve always done. The problem with frameworks is that you can never anticipate every use case. They are there to provide tools, not end-user features.

For this kind of code, BDD-like tools are difficult to apply. BDD requires you to know precisely enough what you’re going for when you commit on a backlog item, and it requires coming up with a limited set of features to test. With such technical code, though, you can hardly state the new required behaviors as user stories. And the main thing you want is for the whole to remain stable and efficient. Before validating new features, you want to test for non-regression. After a while, it becomes difficult to do that manually. And automated tests are not perfect by nature: the more stuff you embed in an automated test, the less maintainable and the slower the campaign becomes. You thus make trade-offs, which is what testing is all about. You can hardly test a re-used tool correctly.
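One pattern that fits this constraint is a golden-master style non-regression test: record a trusted output once, and assert the framework still reproduces it. This is my example, not something the post prescribes, and everything in it is hypothetical:

```python
# A golden-master style non-regression check: run the framework on a
# fixed scenario and compare against a recorded reference output.

def compute_analytics(values):
    """Stand-in for the framework code under test."""
    return {"count": len(values), "total": sum(values)}

# The "golden" output, recorded once from a version we trusted.
GOLDEN = {"count": 3, "total": 6.0}

def test_non_regression():
    result = compute_analytics([1.0, 2.0, 3.0])
    # Any intentional behavior change forces a conscious update of GOLDEN:
    # broad protection against regressions, but coarse diagnostics when red.
    assert result == GOLDEN

test_non_regression()
```

The trade-off is exactly the one described above: one coarse assertion covers a lot of behavior cheaply, at the cost of telling you very little about what broke when it goes red.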

We can even go down another step. A piece of code that is re-used by a flip load of modules is a nightmare to maintain, because we never really know how it’s used. Even if you run a set of unit (or otherwise automated) tests you rightfully trust, you are not sure the piece of code is used the way you test it. It’s particularly true for code that’s been there for a while: when an additional module uses your code, it’s a miracle if the person implementing the calling code thinks of adding the automated test that guarantees the called code does what they expect. And please allow me to look somewhere else with a strange face if you tell me that 100% code coverage guarantees your test base is nuclear-bomb proof.
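A contrived sketch of the failure mode; the helper and both callers are hypothetical, and the point is the untested caller:

```python
# A shared helper, re-used by two modules. Module A's author tests the
# behavior A relies on; module B silently relies on something else.

def normalize(values):
    """Shared helper: scale values so they sum to 1.

    Recently made strict for module A, which wanted bad input rejected.
    The previous version quietly returned [] when the sum was zero.
    """
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize: values sum to zero")
    return [v / total for v in values]

# Module A's test pins down the behavior A relies on, and it passes:
assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]

# Module B, written long ago, has been calling normalize([]) and relying
# on the old silent behavior. Nobody added a test for that expectation,
# so the "improvement" for A is a regression for B, found at runtime:
normalize([])  # raises ValueError
```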

A few years ago, when we were having regression issues on our code base, I was surprised to realize we wouldn’t have had such problems if we had not shared a particular piece of code. It went against a basic rule I had believed in for years. And then I realized I was simply facing another trade-off we always need to balance: sustainability vs sustainability, the maintainability you gain by re-using code vs the maintainability you lose by coupling modules to it.

Perfection is a verb

OK, this sentence sounds like a fortune cookie. Maybe I’m turning Buddhist, but I love it. And it applies especially well to software development. As a craft based on human interactions, software development is a set of compromises. There’s no such thing as a best practice, a definitive process, or a methodology that works for all contexts. I don’t even understand how words like agile and process or methodology could ever have been put together in anyone’s mind.

Agile and lean are the approaches that work best according to what we know today. But they are just containers of values and practices that constantly evolve. They don’t tell you what words to use in discussions, or what lines of code to write in every single situation. They need to be adapted to every context. Scrum and lean startup by the book might be good enough to design a simple shopping cart, but not necessarily to design a car, a distributed NoSQL database, or nuclear plant software complying with all the required regulations.

You always need more than what you can learn in trainings or books. Your situation is always unique and new. If it weren’t new, why bother doing it? If you didn’t need to adapt the framework, why would Scrum define the basic team around the opposing powers of dev, backlog, and test? Why would lean be based on kaizen in the first place? A set of definitive rules, a process, would be enough.

In the end, you always need to adapt the way you build, collaborate, understand each other, find solutions, analyze gaps, code… and to get better at it. Always. There’s no end to getting better, because you can always be better, and because the context keeps evolving. And that’s what I love about it. You always can, and must, do better. Every time you’re not trying to get better, you’re getting a bit closer to the death of your team, project, organization, company. The world won’t wait for you.

So perfection is the goal, but you can never reach it. How exciting! That’s what the best approaches known so far tell us right from the beginning. You need to find the right compromise in every situation: test coverage vs test campaign maintainability, code performance vs readability, taking the time to discuss and understand each other vs saving on meetings, writing just enough documentation vs focusing on working software, letting teams self-organize vs facilitating their progress, releasing features vs stabilizing the product… There are hundreds of parameters to balance, and you always need to tweak the knobs as the situation evolves.

In the next posts, I’ll try to talk a bit more about the trade-offs we face day by day: what the books don’t say, and the concrete problems they don’t give solutions to. I’ll try to describe the approach we’re taking to oil the wheels. I hope it will save me a visit to the shrink.