
The great systems thinker Russell Ackoff had a provocation that stayed with me.
A system is not the sum of its parts. It is the product of their interactions.
He used a simple example. Take the best engine from one car, the best transmission from another, the best brakes from a third. You will not get the best car. You will get no car at all. The behavior lives in the interactions, not the parts.
That provocation raises a question: if optimizing parts is dangerous, what makes an optimization safe? In today’s post, I want to begin with reversibility.
The Problem with Parts:
The temptation is familiar. Something is not performing well. Fix it. Move on to the next thing. The assumption underneath is that a collection of locally optimized parts produces an optimized whole.
An organization composed of optimally performing parts will itself likely perform suboptimally, particularly where the interactions between parts carry the behavior of the whole. In tightly coupled arrangements, optimizing a part in isolation may improve that part while degrading everything around it. In more modular arrangements, the risk is smaller. A rough measure of coupling is the degree of interaction between parts. The more the parts interact, the tighter the coupling, and the more a change in one part propagates into others. The question worth asking is always which kind of arrangement you are actually dealing with. If you get it wrong, it is not necessarily a failure of intelligence. Barry Clemson’s darkness principle reminds us that no observer can fully see the whole from within it. The blind spot is structural, not personal.
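To make this concrete, here is a toy sketch in Python, entirely my own illustration rather than anything from Ackoff: two parts share a resource, so each part’s local score improves with its own setting, but the whole pays a contention cost on the interaction. Locally optimizing both parts produces a worse whole than deliberately running one part below its local best.

```python
# A toy model of coupling (my own illustration): two parts share a resource,
# so the whole rewards local output but pays a cost on the interaction.
import itertools

SETTINGS = range(4)  # each part can run at level 0..3; higher looks better locally

def whole_score(a, b, coupling=0.5):
    """Whole = sum of local outputs minus a contention cost on the interaction."""
    return a + b - coupling * a * b

# Local optimization: each part maximizes its own output, ignoring the coupling.
local_a = max(SETTINGS)  # -> 3
local_b = max(SETTINGS)  # -> 3

# Global optimization: search over joint settings of the whole.
best = max(itertools.product(SETTINGS, SETTINGS),
           key=lambda ab: whole_score(*ab))

print("locally optimized parts:", whole_score(local_a, local_b))  # 1.5
print("globally optimized whole:", best, whole_score(*best))      # (0, 3) -> 3.0
```

The numbers are arbitrary; the point is structural. When the interaction term is strong enough, the best whole contains a part that is deliberately suboptimal, which is exactly Ackoff’s car.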
The implication here is that optimization is not neutral. It is always optimization with a particular boundary drawn around it. And the choice of boundary matters enormously.
The Reversibility Criterion:
What makes an optimization dangerous? The answer turns on reversibility. An optimization becomes dangerous when you cannot undo it without catastrophic loss. When you cannot course-correct. When the act of being wrong forecloses the ability to learn from being wrong. This is essentially a functional question, not a structural one. It shifts the question from “how large is the unit?” to “what do you lose if you cannot go back?”
This matters because optimization, by its nature, narrows the state space. You are collapsing toward more efficient configurations. Optimization here refers to any deliberate effort to improve performance toward a defined goal, whether through formal mathematical methods or managerial decisions to consolidate, streamline, or cut. The argument is that this pursuit, regardless of form, tends to reduce variety as a structural byproduct. Ashby’s insight was that variety is what allows a regulator to absorb disturbance. An organization that has optimized away its variety has optimized away its adaptive capacity. It performs well in the conditions the optimization assumed. It performs catastrophically in conditions it did not.
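Ashby’s law of requisite variety is often given in a counting form: the variety of outcomes is bounded below by the variety of disturbances divided by the variety of the regulator’s responses. A brute-force toy in Python, my own construction with made-up numbers, shows the bound biting:

```python
# A toy regulator (my own illustration of Ashby's counting bound):
# with fewer responses than disturbances, outcome variety cannot reach 1.
import itertools

def outcome(d, r, n=6):
    """Toy plant: the outcome is the disturbance shifted by the response, mod n."""
    return (d + r) % n

def min_outcome_variety(disturbances, responses):
    """Brute-force the regulator's best strategy: the assignment of a response
    to each disturbance that leaves the fewest distinct outcomes."""
    best = None
    for plan in itertools.product(responses, repeat=len(disturbances)):
        distinct = len({outcome(d, r) for d, r in zip(disturbances, plan)})
        best = distinct if best is None else min(best, distinct)
    return best

D = range(6)
print(min_outcome_variety(D, range(6)))  # 1: full variety absorbs every disturbance
print(min_outcome_variety(D, range(2)))  # 3: at least 6/2 outcomes leak through
```

With six responses the regulator can hold the outcome to a single value; with two, at least three distinct outcomes leak through no matter how cleverly it plays. Optimizing away responses is optimizing away the ability to absorb disturbance.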
Beyond a point, optimization produces fragility because it consumes reversibility, the freedom to be wrong without breaking. Not as an accident but as a structural consequence. This is not an argument against all irreversibility. Some commitments are irreversible by design and valuable for exactly that reason. A constitutional right, a safety standard, an ethical line: these foreclose options deliberately, to protect something more important than flexibility. The distinction worth making is between irreversibility that protects variety elsewhere and irreversibility that consumes it. The danger lies in the second kind.
Stafford Beer, working from Ashby’s insight, argued that viable organizations need internal slack, what he called relaxation. Relaxation is variety you have not yet collapsed. You preserve it by restraint, not by engineering it back in. Some may argue the solution is to optimize for redundancy instead. But this is not a coherent escape. Optimizing for redundancy is still an act of optimization. You are making a fixed bet about what kind of redundancy you need. That bet is itself a reduction of variety. You do not recover relaxation through a second act of optimization. You only displace the fragility elsewhere.
Reversibility is the check on this. It is the condition for remaining capable of learning rather than locked in. John Dewey, the American pragmatist, argued that the ability to revise a belief is not a weakness in inquiry but its essential character. A belief that cannot be revised is not held with confidence; it is held captive. An irreversible optimization forecloses inquiry in exactly this sense. It is efficient in the short run and anti-adaptive in the long one.
Two Complications:
But reversibility is not a simple property. It has at least two complications that matter.
The first is hysteresis. Let us look at an example. Take a stress ball and squeeze it hard in your fist. It visibly deforms under pressure. The shape changes; it compresses. But the moment you open your hand, it bounces back to its exact original shape. The deformation was real but reversible. Now take a lump of wet clay and squeeze it the same way. It also deforms under pressure. But when you open your hand, the clay stays crushed. The shape does not return. The path you took to deform it has fundamentally altered the internal state of the material. Removing the pressure does not restore the original condition. That difference, between the stress ball and the clay, has a name in physics and control theory: hysteresis.
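For readers who prefer the mechanism spelled out, here is a minimal Python sketch of hysteresis, my own toy rather than anything from the post’s sources: a relay whose output at a given input depends on which direction the input arrived from.

```python
# A minimal hysteretic relay (my own illustration): the output depends not
# just on the current input but on the path the input took to get there.
class HystereticRelay:
    """Switches ON above `high`, OFF below `low`; holds its state in between."""
    def __init__(self, low=0.3, high=0.7):
        self.low, self.high = low, high
        self.on = False

    def step(self, x):
        if x >= self.high:
            self.on = True
        elif x <= self.low:
            self.on = False
        return self.on          # between low and high, history decides

relay = HystereticRelay()
relay.step(0.9)                 # drive the input up: relay turns ON
print(relay.step(0.5))          # True  -- same input...
relay.step(0.1)                 # drive the input down: relay turns OFF
print(relay.step(0.5))          # False -- ...different output, different history
```

The same input, 0.5, yields different outputs because the internal state remembers the path. That memory is exactly what makes naive “just undo it” reasoning fail.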
Organizations are like that too. Consolidate a supplier network and the institutional knowledge distributed across it does not return when you decide to diversify again. The workforce skills that erode do not reconstitute on demand. The reversal is theoretically available. It is not available in the timeframe that matters.
The second complication comes from second-order cybernetics: reversibility is observer-relative.
A factory closure is reversible at the level of capital reallocation. The investor redeploys elsewhere. At the level of a regional community, it is not reversible. The tax base, the accumulated skills, the social fabric of a place: none of it follows the capital back. The decision-maker experiences optionality. The community experiences lock-in. The same event reads differently depending on where you are standing.
The same logic applies at the scale of nations. When politicians speak of making a country great, they are treating the nation as the unit of optimization. But the country is itself part of a larger interdependent whole: other nations, shared ecologies, global supply chains. And within the country, greatness is defined from a particular vantage point. Some populations bear the cost of the optimization others experience as gain. The question is not whether a country is too big a unit to optimize. Bigness is not the issue. The issue is whether the boundary drawn around the country excludes the parties who bear the cost of what is being optimized inside it.
This means the question “from whose viewpoint” is not a secondary clarification. It is load-bearing. An optimization is safe only if it is reversible for all parties who bear its consequences, across the timeframe they actually inhabit. Not the timeframe of the optimizer.
This is not an argument for permanent optionality. Keeping every door open is its own kind of fragility, the slow failure of an organization that never commits enough to learn anything. Reversibility is a capacity to revise, not a reason to defer. The point is to act in ways that preserve the ability to correct course, not to avoid acting at all.
The Boundary Question:
Von Foerster’s ethical imperative was pointed:
“I shall act always so as to increase the number of choices.”
Irreversible optimization is a violation of this. It purchases present performance by selling future possibility. What looks like efficiency from inside the boundary looks like the foreclosure of options from outside it.
The boundary of an optimization is always a claim about what counts as the relevant whole. That claim is made by someone, from somewhere, with a particular set of interests and a particular blind spot. The second-order cybernetic question is not just whether the optimization works. It is who drew the boundary that made it look like it did.
When it comes to complex networks, there is always a “loser”. The boundary does not make that fact disappear. It just decides who gets to see it.
Perhaps that is why Churchman and Ackoff moved away from operations research toward systems thinking, not because the math was wrong, but because the math kept leaving someone out.
Full Arc:
We started with Ackoff’s observation that behavior lives in interactions, not parts. The danger of optimization is not in the act itself but in what it forecloses. Hysteresis tells us that the path taken leaves a mark that cannot simply be reversed. Observer-relativity tells us that what looks like a clean optimization from one vantage point may be an irreversible loss from another.
These are not separate problems. They are the same problem seen from different angles. The boundary you draw around an optimization determines who experiences reversibility and who experiences lock-in. It determines whose variety counts and whose does not.
When it comes to complexity, this is mainly a structural condition rather than a moral one. The external environment has indefinite variety. It produces conditions the optimization did not anticipate. Optimization, by its nature, reduces internal variety. When internal variety shrinks and external variety does not, the gap falls somewhere. Whoever absorbs that gap becomes the loser.
Which raises a question that the move from operations research toward systems thinking has perhaps always been circling: is a zero-harm optimization possible at all?
If there is always a loser, the question that follows is not how to eliminate the loss but how to remain responsible to it. In an earlier post, I looked at this through the minimax principle: the most humane path is not to maximize benefit but to minimize the worst possible outcome. Von Foerster’s imperative connects the two ideas. To increase the number of choices available to people is to preserve their ability to recover, to redirect, to begin again. That is not an engineering problem either. I explored this further in Minimizing Harm, Maximizing Humanity.
Stay curious and always keep on learning…