Just in case. These words, and all their cousins, predict disasters.
We add features, code, processes, just in case. I prefer adapting to new requirements rather than anticipating every detail.
We create laws that paralyze whole sectors, to protect ourselves from limited harm caused by one black sheep. I prefer detecting problems, rather than preventing their hypothetical causes.
We create rigid processes to control what people produce. This may be useful in Cynefin's obvious or complicated domains, not in the complex one. I prefer allowing good surprises, rather than avoiding bad ones.
We monitor proxy indicators because it’s easy. Sir Tony Hoare, the inventor of the null reference, who called it his billion-dollar mistake, added it because it was easy to do. I prefer adding something because the need and the solution were validated, rather than because it’s easy.
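As a minimal sketch of the alternative to the "easy" null, here is a hypothetical Python example (the `find_user` and `greet` names are invented for illustration) where absence is an explicit, typed possibility that callers must handle:

```python
from typing import Optional

# Hypothetical example: making absence explicit instead of "easy".
# The Optional return type documents that the user may not exist.
def find_user(users: dict, user_id: int) -> Optional[str]:
    return users.get(user_id)  # None is a documented, typed possibility

def greet(users: dict, user_id: int) -> str:
    name = find_user(users, user_id)
    if name is None:  # the signature forces callers to handle absence
        return "Hello, stranger"
    return f"Hello, {name}"

print(greet({1: "Ada"}, 1))  # Hello, Ada
print(greet({1: "Ada"}, 2))  # Hello, stranger
```

The point is not the mechanism but the discipline: the missing-user case was thought about and validated, rather than left implicit because a null was easy to return.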
What a relief when you remove some process, a 300-line method, or a third of the backlog. You can instantly feel your cognitive load decreasing, the weight of blood diminishing in your brain. Afterwards, you can’t even remember why those things were there in the first place. Let’s anticipate that relief: verify that what you add is useful, remove what is not, slim it down.
A process tells us what to do in a given context. It states pre-conditions for applicability, post-conditions of success, variations, points of attention. A process always exists, be it implicit or explicit. A process has multiple nested levels of granularity: to produce, to understand production, to update the process, to manage conflicts, to communicate…
Company culture is everything that makes people’s reactions in a given context mandatory or forbidden, encouraged or discouraged. We can thus consider a company culture as a process. As with behaviors, we can list and modify it (more or less easily, if you want the change to stick).
The process is huge. We don’t want to make it more complex by adding arbitrary clauses to it. We need to factorize it, to make it as small as possible. If the process becomes too complex, we need an additional process to interpret it. We don’t need that.
A process can accelerate things by:
- Guiding people, so that they don’t need to search for the right way to go.
- Sparing people from wondering about the same things again and again.
- Helping people do things right the first time.
A process can also ruin your life when:
- It complicates more than necessary what needs to be done.
- It prevents you from doing what needs to be done, unless you work around it.
An important quality of a process is its adaptability. Situations vary and evolve, so a process must remain plastic. We’re back to the same considerations as code: simple things are easier to adapt.
- When the process is a 200-page document, nobody will get back to it to adapt it to the new context.
- When the process requires 200 people to synchronize in order to update it, nobody will make the effort.
- When the process is implicit, nobody can discuss it.
In addition, a process has to be more or less prescriptive, depending on the context. The Cynefin model, the most useful one I know, helps us understand this and choose a strategy to handle a situation, depending on its nature:
- In the obvious domain, where you can easily predict consequences from the context, define and follow a list.
- In the complicated domain, you can predict consequences given some analysis. Ask experts to tell you what to do.
- In the complex domain, the system has too many, too dynamic relations to be predictable, and they change when you touch them. State hypotheses, and validate them through experimentation. This domain is the most frequent in software. In the complex domain, you need a more abstract process, one that gives you clues to design experiments, gather feedback, and verify you didn’t overlook some perspective.
- In the chaotic domain, it’s fire. Get out of there as quickly as you can.
- To which we add the cliff, from obvious to chaotic, where you violently lose your illusions. You thought your situation was comfortable, and competition shows you a very different reality. Falling into chaos is hard, and you need to get out while you’re still on Kodak’s board.
- And the disorder, where you don’t know where you stand. It could deserve a whole series of its own.
The process must take all these domains into account, in a relevant way. Follow IKEA-style instructions in the complex domain, and you will make too many useless errors. Experiment on methods to assemble a piece of IKEA furniture, and you’ll waste more than the traditional weekend of family chaos.
Note that lean proposes a very useful tool to handle this: the standard. The standard is the best way we know today to do something. It is the support for continuous improvement, because it documents our current knowledge, and constantly evolves from there. We get back to it as soon as we observe a gap with the objective (i.e. often), in order to understand what can be improved. It documents pre- and post-conditions, variations, points of attention. A standard can apply to anything, provided its level of abstraction corresponds to the task at hand.
- Take the time to understand Cynefin. My current level of understanding took me a few minutes of epiphany, and several years of deeper study. Every second was worth it.
- Adapt your procedures to the context.
- Make your procedures explicit.
- Don’t add useless procedures.
I feel most like a good dev when I delete stuff. But attachment to code is the most painful challenge to overcome when I try to help my colleagues adopt an engineering culture, i.e. an experimentation culture. For reasons I don’t understand, devs cling to their code, every line of it, from the first minute of its existence (at least for reasons I don’t understand *anymore*, as I probably had the same attachment to code in a life I don’t remember). And the more devs write code, the more attached to it they become.
Organizations need to adapt to needs. Societies evolve. People grow up, and their needs change. Our comprehension of these needs, and of our work environment, becomes more relevant. We cannot freeze the world during the months or years product development requires.
Our job is to modify code. We thus need code that is plastic enough to adapt to these changes. In a problem-solving activity like dev, this is what I call quality: the capacity to keep a satisfying pace over a desired period. To be agile, this capacity is not negotiable. As devs, it is our duty to always enforce this capacity, without asking anyone.
We find code plasticity in patterns, modularity, high cohesion and low coupling, and so on. But you need to be aware that each time you choose a pattern, you pick a compromise: you make one axis stiffer to loosen another.
Take the DRY principle, for example (Don’t Repeat Yourself). It is universally accepted as a base principle; I rarely hear it questioned. Still, it participates in creating dependencies, a source of evil. This is a compromise to understand. In addition, two pieces of code that look alike are not the same code. Think semantics before factoring code. Martin Fowler popularized the rule of three, suggesting you should write similar code three times before extracting the common part. Neal Ford teaches us that “The more reusable something is, the less usable it is”. I recently discovered the acronym WET, Write Everything Twice. Let’s not remain stuck on DRY, and think about the compromise.
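A hypothetical sketch of "think semantics before factoring": the two functions below (invented names, invented business rules) are textually identical today, but they encode different rules that will evolve independently, so merging them under DRY would couple them for no good reason.

```python
# Hypothetical example: two computations that look identical today
# but carry different semantics.

def invoice_total(amount: float) -> float:
    # Tax rule: 20% VAT added on top of the amount.
    return amount * 1.2

def priority_fee(amount: float) -> float:
    # Shipping rule: 20% surcharge... today. Tomorrow it may become
    # a flat fee, while the VAT rate stays put.
    return amount * 1.2

# Rule of three: wait for a third occurrence, and for confirmed
# identical semantics, before extracting a shared helper.
```

If the shipping rule changes to a flat fee, the duplicated version needs a one-line edit; the "DRY" version would need to be un-factored first.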
Design patterns are also compromises to understand. They favor plasticity along one dimension while sacrificing another. For example (provocation intended), inheritance allows multiplying implementations of a given class of behavior. But it makes modifying that class of behavior harder.
Even if you don’t see which axis you’re making less plastic, you need to be aware that a pattern is an indirection. Indirections get in the way of seeing details, while making high-level understanding easier. With indirections, you scan principles better, details less. This is a compromise; it has good and bad sides. You need to be aware of it.
To make code more adaptable, my first approach is to not write it. The easiest code to modify is no code. I always try to add as little code as I need; to add indirections only if they bring value; to delete code when it brings less value than complexity.
Note that we have tools to help us limit the quantity and complexity of code:
- With TDD, you only write the code that makes one test pass, no more.
- With DDD, you split the problem into bounded contexts. Each of these contexts is thus smaller. Since you don’t share code between bounded contexts, you limit the number of dependencies, and the overall complexity.
- With BDD, you better assess the problem boundaries. You avoid useless hypotheses.
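The TDD point above can be sketched in a few lines. This is a hypothetical toy example (the cart domain is invented): the tests come first, and the implementation contains only what those tests justify.

```python
# Minimal TDD-style sketch: write a failing test, then only the code
# that makes it pass.

def test_empty_cart_total_is_zero():
    assert cart_total([]) == 0

def test_cart_total_sums_prices():
    assert cart_total([3, 4]) == 7

# Only this much code is justified by the tests above: no discounts,
# no currencies, no "just in case" parameters.
def cart_total(prices):
    return sum(prices)

test_empty_cart_total_is_zero()
test_cart_total_sums_prices()
```

Every extra parameter you are tempted to add is, by construction, code nobody asked for yet.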
What a delight to refactor no code. We call it greenfield, and it makes every nerd’s eyes shine. This is an extreme, but it gives an idea of what you get when tending towards that ideal: serenity, happiness, smiles.
If you want to approach that ideal, limit the quantity of code to modify:
- Don’t write code “just in case”.
- Don’t add patterns before code complexity requires it.
- Don’t anticipate too much code flexibility. Wait to know which axis needs freedom of movement.
- To support all this, learn refactoring and emergent design.
Let’s think about this enterprise philosophy hit song:
- I want to satisfy the needs of users, buyers, managers, stakeholders.
- Therefore I want features.
- Therefore I want to optimize the production of features.
- In other words I want to maximize the production of features.
Imagine that big bank. It wants to solve problems for its advisors, or even its customers. With some help from specialists at every step, the bank formalized requirements in a backlog, organized RFIs and RFPs, selected a software package, started a project, set up constraints to make sure everything was delivered on time and on budget, and assembled and disbanded every corresponding team.
At every step, the bank relied on the result of previous investments, results which were validated, and thus considered right. If everything done before is right, we only have one thing to measure when developing: the speed of production according to specifications. It may be user stories, man-days, lines of code, green tests in a campaign, whatever.
We maximize production of features because we don’t want to come back to what was already validated, and because we don’t know how to do all the steps at the same time. In other words, because it is easy. As a consequence, we measure the accumulation of things on top of accumulated things. This logic is the cause of many a failed project.
Jeff Patton teaches us that the goal is to optimize outcome (i.e. changing behavior) to improve impact (i.e. consequences for our organization), while minimizing output (i.e. production).
How to do that concretely? As said earlier, we maximize output because it is easy. The alternatives will therefore be hard. This is good news: it is an opportunity to gain an advantage over your competitors.
We must move calmly. Take the time to verify that what we throw out the door is useful. Consolidate foundations before adding new floors.
As Jim Benson says, “Software being ‘Done’ is like lawn being ‘Mowed’”. Software gets interesting when it gets into users’ hands. It is finished when it is decommissioned. Make this official by adding a validation/understanding/learning/feedback step at the end of the value stream.
A feature to release is not code to produce. It is an outcome hypothesis. John Cutler insists on talking about bets. Once the code is released, we need to evaluate its consequences, and decide where to go from there:
- It’s perfect, we can stop iterating.
- We should try modifying this or that.
- We need more info.
- Let’s deactivate or remove it.
- And so on.
By the way, if you doubt, as you should, a feature’s usefulness, you should limit the number of experiments you run in parallel. An experiment takes time to reveal its secrets. This delay is actually an interesting topic to think about. Among other things, it helps explain why our so-called experiments are not scientific. Cynefin rather talks about probes.
Limit your WIP (Work In Progress, i.e. the number of things being worked on) in all steps of the stream, including studying/prioritizing. By doing so, you will avoid preparing items when the next step is not ready to take them. Your backlog will thus remain under a reasonable limit.
- The first step of the stream is a prioritized list of problems to solve, ideas, wishes, unverified assumptions. You study these topics by priority, when you have the capacity to take them on. That is to say, when it’s useful to think about them.
- Then you can think further about them: ask “what for”, split, make the goal and constraints explicit, share understanding…
- Then development if needed, test, deployment, and so on.
- And then, validate the need, gather feedback to iterate.
So, step by step, you pull features from the last step: impacting the world.
Of course, this only works if items are small. How small? If you’re beginning, it’s never small enough. If you have more experience with flow, and you can tell why an item is too small, then improve your process to decrease the transaction cost, and make items smaller still.
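The WIP-limit mechanics above can be sketched as follows. This is a hypothetical model (the `Step` class and its names are invented): a step refuses to start new items until one is finished, so upstream queues stop piling up.

```python
# Hypothetical sketch of a WIP-limited step in a value stream.

class Step:
    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.in_progress = []

    def can_pull(self) -> bool:
        return len(self.in_progress) < self.wip_limit

    def pull(self, item: str) -> bool:
        if not self.can_pull():
            return False  # upstream waits instead of piling items up
        self.in_progress.append(item)
        return True

    def finish(self, item: str) -> None:
        self.in_progress.remove(item)

dev = Step("development", wip_limit=2)
assert dev.pull("feature A")
assert dev.pull("feature B")
assert not dev.pull("feature C")  # over the limit: finish something first
dev.finish("feature A")
assert dev.pull("feature C")      # capacity freed, work can flow again
```

The refusal in `pull` is the whole point: the limit turns "start everything" into "finish something".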
Let’s fix the logic described above. Instead of:
- Users have needs.
- So we must produce features.
- So we must optimize feature production.
- I.e. we must maximize feature production.
Let’s prefer:
- Users have needs.
- We might satisfy those needs with features.
- So we must optimize feature production and impact.
- I.e. we must minimize feature production while maximizing impact.
When you use a product, you are delighted when you can see right away the feature you need. It’s a nice surprise to use the feature fluidly, the way you expected. It’s a change compared to those products crawling under menus of sub-menus proposing every potential option, endless forms supporting every possibility, just in case.
Propose the product you love using:
- Verify features’ usefulness.
- Do fewer things in parallel, and finish them.
- Focus on users.
It’s never too late to do things properly. It’s always time to validate the hypotheses induced by upfront investments, however huge they are.
We consume too much. We eat, throw away, heat, send e-mails, spend, earn, too much. We need to learn how to do more with less.
We have limitless backlogs. We look for ways to produce more, faster. Production is our main indicator. Like a company or a country measuring its income growth to invest more to grow more, we measure our production to release more to have more features to earn more. Simple, isn’t it?
We miss two parameters here:
- Complexity doesn’t grow linearly with size. It tends to grow in a chaotic and explosive way, sometimes independently of growth, and surely in an unpredictable way. The worst news is, complexity has no maximum. It is not capped by your capacity, anyway.
- You can’t predict how the system you’re creating will evolve. It is a kid, growing in a chaotic way. You can’t predict the consequences of your system’s evolution as soon as it gets a little bit complex.
I can only see one way of keeping that under control: move slowly, carefully, checking how the system evolves while you touch it. In other words, evolve frugally.
Because frugality deserves loads of ink, and I’m paid by the article, I’ll try exploring this topic in four steps.
Most agile posts I see are about finding the right trade-off on this question:
When should I gather requirements/specify/test/merge to main line/document/integrate/deploy/communicate progress to customers/<insert any phase gate you’re used to and concerned about when you think about iterative/continuous software development>?
The answer is: always!
If it hurts, do it often. If there is risk, do it often.
I won’t go through every possible question, you’ll find them in every consultant’s blog. There are two short answers to all of them:
- It depends: from your situation to the ideal one, there is a gap. You must deal with it and find the right trade-off.
- Do everything more often. Every time you delay something, like working on a branch, you don’t decrease risk: by delaying risk, you increase it.
These answers come together. It’s an and, not a xor.
The ideal situation is: every character you change in your code works, brings value, is crystal clear to developers and users, and is instantly available to your stakeholders. This is ideal. But we can be ambitious: it is the goal. Everything we do must tend towards it.
In the previous post, we saw why we couldn’t apply user stories or spikes to a big technical epic we’re working on: developing an efficient engine, in addition to the default one, for a subset of what the default one supports. With user stories, we have no control over what we’re doing, and we can’t commit to what we’ll deliver. Ultimately, they do not allow us to show progress, because they don’t correspond to what we do.
The key here is that we are progressing. We know it. We could implement a basic engine, and validate that the performance improvement is what we could expect from it. We have an overall idea of the big chunks of stuff that will need to be done to make it functional. We just can’t make it real according to agile by the book. But there is a solution: since this is what we do, let’s make iterative experiments come true.
I’d like to introduce a new type of backlog item: an experiment iteration. It’s a kind of spike slice, or lean startup applied to technical stuff:
- Define an objective, that you may or may not achieve.
- Timebox it. This timebox is the main acceptance criterion. The backlog item is over when the timebox is over, period.
- Just do it until the timebox is over.
- Validate non-regression.
- When it’s over, the key is that you check what you learned. This is what you do in the validation/test/acceptance column. What you learned might be:
- Ideally, a backlog.
- Worst case, give up and rollback. You must always keep this option open.
- We added this and that tool or feature and it’s ok.
- We should add this to the non-regression campaign.
- We couldn’t support that case.
- This or that piece of code must be re-designed.
- That part will take time.
- We need to check what we want to get regarding this.
- You should check if this works as you expect.
- Decide where to go next, based on what you learned.
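The shape of such a backlog item can be sketched as a small data structure. This is a hypothetical model (the `ExperimentIteration` class and field names are invented): the timebox is the acceptance criterion, and the learnings are the deliverable.

```python
# Hypothetical sketch of an "experiment iteration" backlog item.

from dataclasses import dataclass, field

@dataclass
class ExperimentIteration:
    objective: str        # what we aim at; may or may not be achieved
    timebox_days: int     # when it's over, the item is done, period
    learnings: list = field(default_factory=list)

    def record(self, learning: str) -> None:
        self.learnings.append(learning)

    def close(self) -> list:
        # The deliverable is what we learned, not the objective itself.
        return self.learnings

probe = ExperimentIteration("basic efficient engine", timebox_days=5)
probe.record("x3 speedup confirmed on the happy path")
probe.record("case Y needs a redesign of component Z")
print(probe.close())
```

Notice there is no `done` flag tied to the objective: closing the item only asks "what did we learn?", which is exactly what gets reviewed in the validation column.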
Note that code branching or conditional compilation are not valid ways to avoid risks of regression. They are only ways to delay risk, and thus to increase consequences. All experiments should be implemented in the trunk, using feature flags if necessary.
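A minimal sketch of keeping an experiment in the trunk behind a toggle (all names here are hypothetical): the experimental engine is integrated and tested continuously, but switched off for users until the experiment says otherwise.

```python
# Hypothetical feature-toggle sketch: experimental code lives in the
# trunk, behind a flag, instead of on a long-lived branch.

FLAGS = {"efficient_engine": False}  # flipped per environment or per user

def default_engine(data):
    return sum(data)

def efficient_engine(data):
    # Experimental implementation; must honor the same contract.
    return sum(data)

def run(data):
    if FLAGS["efficient_engine"]:
        return efficient_engine(data)
    return default_engine(data)

assert run([1, 2, 3]) == 6          # flag off: default path
FLAGS["efficient_engine"] = True
assert run([1, 2, 3]) == 6          # flag on: experimental path, same result
```

Both paths are exercised by the same tests on every integration, so the risk is paid continuously in small installments instead of all at once at merge time.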
The main difference with a spike or a user story is that we focus on learning. It is transparent to everyone. You won’t be expected to achieve the objective, because it is not the priority. We also make discoveries about the product and the code base more transparent, because there is no limit to what you might declare as learned. It might also save you some time during retrospectives or andon meetings, because you have already introspected on many technical topics.
Iterative experiments should be run when large chunks of software are to be implemented without knowing where to go in detail. It’s not too big to fail, i.e. too big to succeed, but too big to know all the details in advance. They should lead to more deterministic work, like an actual backlog.
What do you think? Have you ever focused on what you learned? Do you have some feedback about it?