Naming code

Arolla’s name-generation dice

Once you set aside dad jokes, one main universal problem remains in software: naming things. Naming is modeling. It is core to our job. It is difficult, and subject to endless discussion. Thankfully so, because it is what makes us hard to replace with robots. I’ll list a few heuristics I use that I rarely see in the wild.

Naming scope

We name things to differentiate them. If there were only one thing in the universe, naming wouldn’t exist.

We name things to differentiate them in a context. We don’t need to differentiate things beyond the context we consider them. In a room, we can talk about the table without differentiating it from all the tables in the world. The discussion also helps us know we are talking about furniture, and not the table of contents of the book sitting on it, or Mendeleev’s table hanging on the wall.

In code, this means we only need to differentiate names within a given namespace. The variable i is perfectly valid inside a small loop. We don’t need to differentiate a variable of a function from all the variables of all functions.

I’ll introduce here the concept of scannability, which I got from GeePaw Hill. To make it short, we spend most of our time reading code, and in particular finding which piece of code has the impact we’re looking for. Among all the concerns readability covers, I try to optimize code so that devs can quickly find where code does this or that. One must be able to skim through code and enter its subparts recursively, in an effective and efficient way, in order to find the impact they’re looking for.

In particular, one must be able to read code as naturally as possible. Code must tell a story, with simple words and sentences, short paragraphs, in the business language.

Because I like scannable code, and thus short names, I try to avoid details that are not necessary for differentiation in the context.

Corollary: the smaller the context of a name, the shorter the name. The bigger the context, the longer the name.

Corollary: when the context of a name grows bigger, the name should grow longer as well. When the context shrinks, the name should shrink as well.

That is why I try to reconsider the content of a piece of code when its scope changes. In particular, I reevaluate all names the code contains, exposes, or interacts with.

For example, if a variable i is OK in a small loop, it is relevant to rename it rowIndex and columnIndex inside two nested loops. The pair can even become the fields of a Coordinates instance when the variables are promoted to class fields.
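To make that concrete, here is a minimal Java sketch of the progression (the Coordinates record and find method are hypothetical names, not from any real code base):

```java
class Grid {
    // Once row and column escape the loops, they deserve a type of their own.
    record Coordinates(int rowIndex, int columnIndex) {}

    // Returns the coordinates of the first occurrence of value, or null if absent.
    // Inside a single small loop, i would have been enough; with two nested
    // loops, rowIndex and columnIndex differentiate the counters.
    static Coordinates find(int[][] grid, int value) {
        for (int rowIndex = 0; rowIndex < grid.length; rowIndex++) {
            for (int columnIndex = 0; columnIndex < grid[rowIndex].length; columnIndex++) {
                if (grid[rowIndex][columnIndex] == value) {
                    return new Coordinates(rowIndex, columnIndex);
                }
            }
        }
        return null;
    }
}
```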

For example, I can call a class RandomLoadBalancingStrategy because, globally, I need to differentiate it from RoundRobinLoadBalancingStrategy or RandomNamingStrategy. But locally, in the load balancer, once the strategy has been selected, naming the variable strategy is enough.
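A sketch of that situation, with hypothetical class names; globally, the class name carries the full differentiation, while locally the field is just strategy:

```java
import java.util.List;
import java.util.Random;

interface LoadBalancingStrategy {
    String pick(List<String> servers);
}

// Globally, the long name differentiates this class from
// RoundRobinLoadBalancingStrategy or RandomNamingStrategy.
class RandomLoadBalancingStrategy implements LoadBalancingStrategy {
    private final Random random = new Random();

    public String pick(List<String> servers) {
        return servers.get(random.nextInt(servers.size()));
    }
}

class LoadBalancer {
    // Locally, once the strategy has been selected, "strategy" is enough.
    private final LoadBalancingStrategy strategy;

    LoadBalancer(LoadBalancingStrategy strategy) {
        this.strategy = strategy;
    }

    String route(List<String> servers) {
        return strategy.pick(servers);
    }
}
```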

Business names

We all like names that let us guess what things correspond to, through cultural or synesthetic association. But many names are arbitrary. For example, we all know what a dog or a cat is, even though these names have no relationship with the appearance or the sound of these animals.

DDDers try to use business words in the code. Still, we may need additional words. In particular, we often map words like group, category, space, set, etc. onto very precise dimensions of the business or of the technical solution we implement. But in my experience, no objective criterion helps us differentiate those words outside of a given context. And that’s OK, because jargon has great value for mutual understanding within a group.

However, beware of jargon. When a group’s jargon is used with the rest of the world, implicit conventions can be harmful.

For example, each team has its own definition of the role of a BA. Let’s suppose team tiger considers BAs to be testers, and team lego considers them to be domain experts participating in use case distillation. Each team has its jargon. If these teams need to collaborate, and thus agree on a collaboration protocol, there will be much misunderstanding if they don’t make these central roles explicit to each other.

Naming by contract or content

We can name a piece of code by:

  • What it does, i.e. its content. For instance, saveIfNotExists or persistIfValid.
  • The reason why we want to use it. For instance, persist or save.

I prefer the second option by far. First because it makes names shorter, as my dishonest example proves undoubtedly. We can also invoke a more theoretical point of view, like the fact that reducing coupling encourages encapsulation, so that the caller doesn’t need to know the internals of the called code.

But I prefer option 2 for more concrete reasons:

  • Names based on why instead of how tend to be more stable. They change less, if at all, when the content of the code evolves.
  • By focusing on the caller when naming things, we favor storytelling, and thus scannability.

I’m not the only one to prefer that option, but I can’t find which hero gave me these arguments. Clues are welcome.
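A sketch of the preferred option, with hypothetical names; the caller’s story only needs the why:

```java
import java.util.HashMap;
import java.util.Map;

// Named for the reason callers use it (option 2), not for its content
// (option 1 would have called it saveIfNotExists or insertOrUpdate).
class OrderRepository {
    private final Map<String, String> rows = new HashMap<>();

    void save(String id, String order) {
        // The "if not exists" detail lives here, encapsulated:
        // insert when absent, update otherwise. Callers never need to know.
        rows.put(id, order);
    }

    String find(String id) {
        return rows.get(id);
    }
}
```

If the content later changes, say to add validation, the name save still tells the caller’s story unchanged.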

Hungarian notation from frameworks

We often see suffix conventions in code bases. For example, in Spring, classes handling REST requests are suffixed with Controller, services (as in DDD services) with Service, JPA entities with Entity, JSON DTOs with DTO, and so on. But these suffixes make humans stumble when they try to scan code.

From my point of view, such suffixes look like the Hungarian notation we had in the 2nd millennium, when we used to prefix variables with single letters to make their type explicit (e.g. s for string, i for integer…). Though changing people is hard, teams got rid of this habit as languages, development environments, culture, and team setups evolved.

Similarly, I think the benefits of framework suffixes are very limited, especially in an IDE. We could choose to highlight other dimensions of things in their names, like their reliability or performance, and not repeat information we may already have in folders, namespaces, class hierarchies, tags or annotations, or any other source of information that is already there. Yes, I’m looking at living documentation.

I think this convention comes from the universal way Spring-like framework tutorials are written. It has become second nature for everyone. These suffixes can be useful, but they are certainly not useful in every case I bump into.

In general, in my code, I only introduce Hungarian suffixes when they are painfully necessary and I can’t find a decent alternative. Not before.

And I sometimes do. For example, in hexagonal architecture by my book, despite entities and domain models living in distinct packages, environments may struggle with homonyms. We often have to deduplicate names for classes that are mapped one to one, independently of their packages. Unless the domain or the architecture guided names towards natural differentiation, which, in the end, happens more often than not when the domain makes its way into the code.

As I rarely code on my own, and teams generally cling to Hungarian suffixes, I clearly won’t die on that hill. We often have higher priorities in terms of changing points of view.

Finally, I try to get rid of words that bring nothing more to the table than “I didn’t take the time to pick an actual name”, like tools, utils, and all the words from the sarcastic and genius Arolla dice. When one finds these words in names, they almost always realize that just deleting them doesn’t change a bit of their meaning. And if you encounter an IUtilsManagerTools, which can thus be replaced with nothing, it’s really time to pause and think.

Of course, this advice falls down when these empty words take on a strong, clear, and explicit meaning in a formal jargon, a convention.

Wrapping up

Naming is an important part of modeling business and architecture. It also makes our code habitable, for our future selves and colleagues. It is a delicate art that makes us walk a steep and winding ridge. No rule is absolute. We try to optimize along axes like code scannability, mutual understanding within the team and with users, or model plasticity. We iterate, we discuss, it depends™. My heuristics help me make decisions and structure discussions with the team.

What are your naming heuristics?

Consider information where it is rather than bend reality to my will

This is the 17th post from my sincere dev manifesto series.

Taylorism and lean are very close. Taylorism is scientific management: its proposition was to analyze how we work and improve it in a scientific way. Rings a bell for lean folks, right? But Taylorism also split people into brains and arms. Some analyze and think, while others stupidly apply the conclusions with their sweat. There’s a quote I can’t find saying something like “I needed pairs of arms, which unfortunately came with brains”. Taylorism was great progress towards paying people fairly and building an infrastructure for humans. But it had this limit on the quantity of brains, and it left workers close to slavery.

Lean changed that by pushing analysis and decisions as close to the information as possible. That is, onto the field, where people work. Problem solving now has infinite potential, and the scientific approach is now actually based on every single piece of detailed data from the field.

OK, dev is not about nuts and bolts. But it is about problem solving. And if you observe traditional environments, you will notice our tendency to push decisions towards an imaginary world rather than reality:

  • specs in written documents
  • decision boards in meeting rooms
  • authorizations from management
  • asking users what they want
  • pre-production environments
  • reporting up the hierarchy

Though some of these are necessary for organizational alignment, we must not sacrifice actual data to our superstition of climbing upwards. Every time data is communicated, gathered, interpreted, summed up, presented, told, heard, understood, a portion of it is lost, or even replaced by something else. Reality quickly becomes fiction.

I tend to lean towards the anarchy team, so I prefer autonomy and facing reality:

  • test in production
  • observe users in their habitat
  • help people who need information write documentation themselves
  • help workers learn problem solving themselves
  • push autonomy as close to the gemba as possible
  • iterate on hypotheses validation
  • top-down TDD as much as possible
  • observability over integration tests
  • gather usage metrics
  • measure before optimizing

Reality is where the reliable information is. I try to push the cursor towards this side of the spectrum.

I commit to pushing gateways away in order to observe reality, rather than rely on fortune tellers.

Share code rather than pass pull requests

This is the 16th post from my sincere dev manifesto series.

There’s been a lot of traction lately for trunk based development vs pull requests. I’ve also been wary of pull requests for a long time.

A long time ago, in Subversion days, there was a product called Crucible. It let you do pre-commit or post-commit code reviews.

  • Pre-commit reviews were basically the same as what we do today. The code waited for approval before reaching the mainline.
  • Post-commit reviews were done directly on trunk (i.e. the main or master branch). You selected the commits you wanted to add to a review, and off you went for an asynchronous discussion.

Believe me, the incentives of post-commit reviews were a lot more in favor of flow than those of phase-gate pre-commit reviews.

Since then, open source has become the model everybody could see. In open source, it is perfectly relevant to tightly filter what you allow into your code base. You generally don’t know the person proposing a PR, and you don’t want to be the victim of a supply chain attack, or to see your product degraded.

Almost nobody knew about post-commit reviews when git, git flow, or GitHub took over the world. So they went into limbo. Everyone knew the agile manifesto. Some of us even knew its principles. XP had been around for quite some years. CI/CD was seriously making its way to the mainstream. Yet, git flow and PRs were such a tsunami that no one looked for alternatives.

But within a team, you want reactivity and flow. And phase gates are their worst enemies. We all know the downsides of phase gates:

  • less reactivity
  • less trust
  • less initiative and autonomy
  • less boy scout rule compliance. When there’s a risk someone else is working on the same piece of code, we don’t rename things, move code around, or split files, to avoid painful merges. And the longer dev is parallelized, the higher the risk of touching the same code in parallel.
  • more context switching and waste

And these downsides grow exponentially as branches live longer.

Trunk based development (TBD for short) is the solution, and it’s pretty simple: push at least once a day to the main branch. You can’t do that if an asynchronous review is in the way of merging your code for hours or days.

TBD requires tooling, like careful data model and contract management, or feature toggles to isolate unpolished features from users. It requires some continuous effort. But the alternative, putting code in a fridge until it is perfect and the environment is ready for it, is even more painful.

Hey, but if the new behavior is hidden behind a feature toggle, why push it to the main branch? Because at least you share the same code with your teammates; it is compiled, packaged, and automatically checked, and that is already a decent part of lowering the risk and encouraging continuous code improvement. Besides, it’s quicker to roll back a change by deactivating a toggle than by reverting a commit and going back through the CI/CD pipeline.
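A minimal sketch of that mechanism, with a hypothetical Toggles lookup; flipping the flag is the rollback, no pipeline involved:

```java
import java.util.Map;

// Deliberately naive toggle store; a real team would back this with
// configuration that can change at runtime without a redeploy.
class Toggles {
    private final Map<String, Boolean> flags;

    Toggles(Map<String, Boolean> flags) {
        this.flags = flags;
    }

    boolean isEnabled(String name) {
        return flags.getOrDefault(name, false);
    }
}

class Checkout {
    private final Toggles toggles;

    Checkout(Toggles toggles) {
        this.toggles = toggles;
    }

    // The unpolished flow is on the main branch, compiled and checked,
    // but users only see it once the toggle is flipped.
    String flow() {
        return toggles.isEnabled("new-checkout") ? "new flow" : "old flow";
    }
}
```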

More than anything else, TBD requires trust. You need to trust devs to push code that will not violate your team’s constraints. Code quality and patterns, of course, but also performance, security, you name it. And that’s exactly what a team is supposed to be: a bunch of people trusting each other’s work. So we need to help all teammates step up quickly and share the same feel for our constraints.

Hello collaboration. When you want faster homogeneity in understanding, there’s only one tool: working together. Ideally, work as much as you can as an ensemble, the whole team on one keyboard. If you can’t, rotating pair programming is a decent alternative. If you can’t, well, you will need explicit, clear, strong, and evolving rules for asynchronous understanding, with all the negotiation, time, and effort it implies.

A team shares more alignment than a department. This is why we call it a team. It is made to collaborate daily. So why split it into individuals by building walls between them?

By the way, if you really want to stick to pull requests, I can at least share my rules for accepting a request. I try to avoid blocking code as much as I can. So I click the green button if the product is at least slightly better than before without increasing the risk significantly. If the new code brings any benefit, and if it does not significantly impact the performance or security of an existing feature, even if it is not pixel perfect, then I validate. The code is an imbalanced, evolving creature, so be it.

I commit to encouraging synchronous collaboration with my team, rather than abiding by passive-aggressive phase-gate pull requests.

Ignore rather than respect estimates

This is the 15th post from my sincere dev manifesto series.

Estimates are more or less elaborated guesses. They are not commitments or indicators.

They can be about cost, risk, or expected benefits. They are the best we can get about these three properties of a task. For example, we can estimate:

  • The date a feature might be ready.
  • The cost of a task.
  • The expected value of a feature.
  • The opportunity cost of not doing something.
  • The risk of adding one more feature or line of code.
  • The risk of missing a deadline.

Estimates are everywhere. For example, even if you don’t use man-days or story points, deciding whether a ticket is small enough is an estimate. If you use lead time, you estimate that the tickets in progress look enough like the past ones. Discussions about refactoring are only about estimating cost, benefits, and risk, even when they contain actual stories.

Estimates can have several benefits:

  • The act of estimating helps us understand the problem and its constraints.
  • Estimates help us make go/no-go decisions, or prioritize.

Once these decisions are made, the estimates that helped make them are useless, if not harmful. Knowing while developing that we are three times slower than the estimates is useless. Doing better than the estimates is a waste. Past estimates are now obsolete, and considering them when discussing future decisions is like planning a trip to the moon from a flat earth.

Once we have committed, knowing that we are not progressing at a rate that would allow us to meet our commitment is useful. It is a signal of the need to discuss or renegotiate commitments. But this is a new estimate, based on the current context.

I commit to ignoring estimates once they are communicated and digested.

Leave a test campaign that accelerates refactoring rather than be exhaustive

This is the 14th post from my sincere dev manifesto series.

I don’t try to increase code or branch coverage. I don’t try to balance the test pyramid. I try to have a harness that allows me to modify code with more confidence more quickly.

I consider a modification done when it has brought its value without increasing risks; when it has passed all the stages to reach users’ hands. The quicker I can detect a risk increasing, the cheaper I make modifications.

The harness is the main tool helping me accelerate problem detection. The harness goes from compiler indicators to team understanding, and includes code analysis tools, linters, all types of automatic checking, code reviews, coffee machine discussions, testing, letting my mind wander, and more.

The big topic we always wonder about while writing automated checks is granularity.

  • A test that only embeds an isolated class runs quickly. But when I need to update the code, if I also need to modify tons of test stubs, I waste time.
  • On the other hand, I recently experienced a very smooth refactoring, with confidence, because I had written tests at a higher granularity level than the team was used to.

I only know one heuristic for choosing the right granularity level, and I only learned it in 2022: test the interfaces you are ready to commit to (the talk is in French, sorry).

Note that you have fewer problems choosing the right granularity level in a functional core corresponding to the right aggregate of the right bounded context: there is only one level of granularity, and it is side-effect free, functionally relevant, autonomous, and quick to run. That sounds like an ideal, right? Between that ideal and a spaghetti plate, there is a whole range of code bases, including hexagonal, micro-services, or multi-layered architectures. They all come with their own trade-offs.
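A sketch of the heuristic, with a hypothetical Pricing contract; checks target the interface the team commits to, so everything behind it can be refactored without touching the harness:

```java
// The contract the team is ready to commit to: test against this.
interface Pricing {
    int totalInCents(int unitPriceInCents, int quantity);
}

// Today's implementation. Its internals, like the bulk discount below,
// are free to be split, merged, or rewritten without breaking tests
// written against Pricing.
class StandardPricing implements Pricing {
    public int totalInCents(int unitPriceInCents, int quantity) {
        int raw = unitPriceInCents * quantity;
        // Internal detail: 10% off from 10 units. Tests that poked at this
        // helper logic directly would turn into refactoring friction.
        return quantity >= 10 ? raw * 90 / 100 : raw;
    }
}
```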

One last thing. Feedback speed is the whole test campaign’s job. The team must maintain a consistent campaign. It is one of the many artifacts you need to cherish in a pragmatic way. Please don’t add an automated test for every ticket you implement. Modify the existing campaign so that it tests each aspect of the product in a necessary and sufficient way, and only add tests when it makes sense.

I commit to adapting each test to the level of granularity that looks the most stable to me, and to maintaining a consistent and frugal campaign.

Promote testing rather than monkeys

This is the 13th post from my sincere dev manifesto series.

Professional testers make a big difference between a product that makes people’s lives easier and a sausage factory. They also draw a sharp distinction between tests and checks.

Checking is confirming that the software gives the expected results in a given scenario. It’s what we get when we practice BDD, for example. We agree on actual examples, with values for inputs and outputs, for anticipated use cases. These are the scenarios we want to check again and again to uncover functional gaps. To keep computers from making fun of us, we automate checks as much as we can. Checking is only one of the many tools testers use to extract information from our creature and its context.
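A check in that sense is just an agreed example, automated; for instance, a hypothetical VAT rule with the values the team settled on:

```java
// Example agreed during a BDD session: given a net price of 100.00
// and a 20% VAT rate, the gross price is 120.00. The check below
// replays exactly that example, again and again.
class Vat {
    static long grossInCents(long netInCents, int ratePercent) {
        return netInCents + netInCents * ratePercent / 100;
    }
}
```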

Testing is a journalist’s job. Testers prepare test sessions on given topics, gather information, compile it, and render the results in a way that is relevant to given audiences. Testing can be checking expected behaviors, but it is also, and mainly, giving information about unpredictable things, like performance, ergonomics, security, limits of the system, instabilities, surprises… Tons of things that can’t just be checked.

Testing is as manual as developing with an IDE and a computer, or running continuous integration and deployment with analysis tools (i.e. it is manual, or rather intellectual). You can automate testing as much as you can automate development, management, or proofreading this article (i.e. it can’t be automated). Testers use many automated tools to gather information, but they also define strategies for using tools wisely, select the information to extract, and compile this information intelligently to reach their target audiences.

Testing is central to all problems of all stakeholders. It is about all constraints of the environment, technical, political, functional, you name it. Testers need to understand details and generalities. In my opinion, it is the most difficult job in software. And probably the most valuable, because testers help you understand the most.

I commit to promoting the job of testing, and to promoting our need for testers who are professional, competent, influential, and well paid.

Disclaimer: though my wife is a professional tester, I’m writing this article under neither threat nor greed.

Leave plastic or disposable rather than solid code

This is the 12th post from my sincere dev manifesto series.

“When you don’t have the time to do things right, you have the time to do them twice”

I can work on code in one of these two states: plastic enough, or disposable.

Plastic code has several of these properties:

  • it represents the team’s current understanding of the domain
  • it is highly cohesive and barely coupled
  • it is scannable
  • you get its intentions
  • it has a sufficient harness to detect regressions quickly
  • it has clear and maintainable contracts
  • it can be debugged
  • you can observe what it did at runtime
  • you can reproduce most use cases

These properties are subjective and debatable. Members of the same team might not have the same boundaries between maintainable and unmaintainable code. Anyway, I need to trust code enough to help it evolve and adapt to requirements to come. Then I can commit to maintaining it.

Disposable code is immutable, read-only. When its requirements evolve, I can throw it away and rewrite it. Writing disposable code is a totally valid option, and I use it on a regular basis. For the option to work, the organization of the code must allow it: the contract linking this code to the rest of our creature must be clear and stable.

Code that is neither plastic enough nor disposable is crappy code. People usually refer to it as technical debt. So do I, when the debt is taken on explicitly. If the code is written in a crappy way without explicit agreement, it’s plain incompetence, negligence at best.

Only we developers can decide whether the code is plastic or disposable enough. We are responsible for the quality of our creature. The level of quality we adopt in our code base and practices is nobody’s business but ours. Like surgeons washing their hands or sterilizing tools. Quality needs to be baked into every practice, including estimates. It is not extra work, but day-to-day work. It is not negotiable.

The practices we use to get quality code are not negotiable either. They may differ between team members, between contexts for a single member, or between modules of the same application. For example, I might explore with a REPL and copy-paste the result into disposable code in a chaotic domain, and use TDD in a complicated or complex domain. It depends™.

I commit to writing code that I’m confident to maintain, or to organizing it such that I can delete it without blinking.

Explore limits rather than follow a cult

This is the 11th post from my sincere dev manifesto series.

Like some Melanesian people during WWII, we live among cults. Cults generally start from decent points of view and get generalized too broadly. They become the subject of jokes, but they have their truths. It’s all about limits and constraints. Heuristics are true within contextual limits. Beyond these limits, you can have fun.

A few examples:

  • Every person is different. Just forget about MBTI or astrology.
  • Cross functional teams don’t embed HR or finance. Some functions remain transversal.
  • All models are wrong, some are useful. By definition, models are a compression of the world, they omit dimensions on purpose.
  • The levels of your test pyramid depend on your technologies, your dependencies, your tools, your knowledge. There are more than 2 dimensions. Level boundaries may even change from test to test.
  • DRY (Don’t Repeat Yourself) is true within a bounded context, and should be used with much more care across bounded contexts.
  • I start with WET (Write Everything Twice) before DRYing the relevant things. After all, I care more about coupling and dependencies than about saving lines of code. Effectiveness over efficiency.
  • DRY applies to semantics, not syntax. You don’t extract a plus function because one piece of code adds 2 ints and another one concatenates 2 strings. But the syntax of a granularity level might be the semantics of the smaller one…
  • Every design pattern allows easier evolution along a dimension, while making another dimension more rigid (if not all dimensions when the rigid dimension is observability).
  • Software without side effects is useless. There has to be some side effect somewhere. Also applies to my favorite tool: immutability.
  • Start with why. Also, the why of a granularity level is the how of its parent. There’s always another why, which is why science doesn’t explain things, but describes, at best predicts them.
  • If taking people out of an ensemble programming team doesn’t change the outcome, how long can we keep them out of the ensemble? Why can’t we just take them out of the team? What is the lower limit of the ensemble? The optimal team setup?
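The semantics-not-syntax point above can be sketched; the two methods below (hypothetical names) are syntactic twins, x + y, but semantic strangers, so DRY should leave them alone:

```java
class Invoice {
    // Shape: a + b. Meaning: money arithmetic.
    static int totalInCents(int netInCents, int taxInCents) {
        return netInCents + taxInCents;
    }
}

class Customer {
    // Same shape, unrelated meaning. Extracting a common "combine"
    // would couple invoicing to naming for no benefit.
    static String fullName(String firstName, String lastName) {
        return firstName + " " + lastName;
    }
}
```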

There’s always a limit to the optimal application zone of heuristics. These limits are multidimensional and contextual. It’s a trade-off.

Like for architecture decisions, when taking any kind of decision, you always need to ask yourself what you lose, not only what you gain. You always lose something when you draw a line in the sand.

I commit to exploring the boundaries of beliefs, challenging their applicability conditions, and promoting itDependsing™.

Become useless rather than a hero

This is the 10th post from my sincere dev manifesto series.

I will never be essential. Whether you say bus factor or lottery factor, never mind. This number is the number of people your team can lose before meeting a disaster. You don’t want it to be 1, and I won’t be responsible for a value of 1. Yet 1 is very frequent, and it comes with the necessary hero of the team. If she leaves, you’re lost. Managers love heroes, because heroes always save the day. But they condemn tomorrow.

Necessary heroes are bottlenecks. They are a source of stress for themselves and their environment. They decrease everybody’s engagement. And yet, I have a secret: everyone does fine when they leave. It might hurt for a while, and then you find ways. But still, we don’t want this period of pain. “If it hurts, do it more often”, right? So find ways more often: every single day.

We must share our practices and know-how, but also our comprehension of the domain, the environment, and our creature.

When I code, I want anybody to understand the code, and be able to modify it in a quick and reliable way.

When I manage, I want people to climb up the delegation scale, so that they can take more and more decisions closer to where the information is.

When I coach, I want people to be able to continue progressing on their own when I leave.

When I design, I want people to talk to each other without my intermediation, to take more and more decisions at small, and then bigger and bigger scales, and guarantee smooth evolutions of our creature.

We need everyone to take more and more decisions, with the right level of alignment in mind. So we need to:

  • Communicate goals, constraints, and values, on and on and on, so that everyone knows and understands them.
  • Help people verify that their decisions respect alignment constraints.
  • Help people put in place an environment where a mistake is not a big deal.
  • Trust people to put the machine back on track when the result of a decision was wrong.
  • Encourage everyone not to do the work they know how to do. The people doing the work should be the ones who don’t know it by heart, with the help of the people who do. In medicine there is a 3-step process for learning gestures:
    1. Observe
    2. Do
    3. Teach
  • Have poka-yoke all around. Errors should be hard to make, if not impossible.
  • Of course, pair, mob, discuss, BBL, lean coffee, run kaizen workshops in safe-to-fail environments, dojo.

You want to be a hero? Grow a team of heroes.

In medicine they have a defined practice for learning simple gestures, something like observe-do-teach. We need such ingrained practices in teams. The difference with medicine is that our mistakes don’t hurt people, so we don’t need to observe most gestures. We can do it together instead: undo, redo, back up, restore. For example, a rule I love is “if you know how to do it, you can’t do it”. When you know how to do something, you must find someone to teach, you being the shadow, until enough people know. Lean has its skills training matrix. Pick whatever practice you want, but deliberately spread knowledge out of the experts’ heads.

I commit to making sure nobody sees the difference when I’m not here.

Do nothing when it’s time rather than obstruct pipes

This is the 9th post from my sincere dev manifesto series.

Every time a task is blocked, we stop… and in general, we move to another task. Our biases make us scared of the void. We need to feel useful, by taking tickets from the board, by doing something. Every time we do so, we violate a more or less explicit agreement within the team: we agreed that the blocked task was a priority in the first place, so our duty is to concentrate our time and effort on unblocking it.

When we move lower-priority tasks while higher-priority tasks still need to move forward, we don’t only take capacity away from the high-priority tasks. We also add noise to the system, further reducing our overall capacity.

Fear of the void is central to flow management and WIP limits. A WIP limit means: we agree to have no more than N things in a given state at a given time. When we reach the limit for a state, we agree to stop pulling things into that state, because we know that when we push more water into a pipe than it can handle, it spits it back at us. “Stop” is the word to remember here.

In general, I see people stop… pulling tickets, and anticipate tickets from the previous column anyway. You can see this when there’s a rush to open pull requests as soon as a slot is freed by an item moving to the next column: several tickets had already been developed on devs’ machines.

When some constraint’s capacity is full, we can do more than our basic job, by:

  • Helping to accelerate downstream tasks. We all have preferences, but when we are more useful providing suboptimal help, we just need to do it.
  • Helping colleagues traditionally working in the same column as us, by pairing or mobbing.
  • Preparing future tasks, with the constraint that we must remain interruptible anytime, and be able to go back to priorities when the stream has room for them.
  • Learning, training, understanding.

All of these mean, from the backlog’s point of view, doing nothing. It’s the right thing to do, and it’s always a struggle, with yourself and with your managers. You can always try to explain that items flying around are immobilized cash, who knows.

The theory of constraints says that to optimize the flow of a system:

  1. Identify the system’s constraint(s).
  2. Decide how to exploit the system’s constraint(s).
  3. Subordinate everything else to the above decision.
  4. Elevate the system’s constraint(s).
  5. Go back to step 1.

Oh, by the way: in lean, when the WIP limits no longer block tasks, we decrease them on purpose, in order to expose the next problem to solve, and thus understand and improve.

I commit to leaving room for high priorities, by doing nothing when they already take the capacity of a constraint.