On Viability as Truth:

The ideas discussed here will form part of the second edition of my book, Second Order Cybernetics. The second edition will include a new first half in which I introduce cybernetics and related ideas. This post is slightly longer than usual because of that.

In today’s post I am exploring what I think is a fundamental question in epistemology: what does it mean to say something is true? I want to approach this through the lens of cybernetic constructivism, and I will start with a question about pi, which feels fitting given that today is March 14.

What is pi? For most purposes, 3.14 is a very accurate value. But pi is an irrational number; its decimal expansion goes on forever. Should a truthist spend a lifetime reciting its digits? Where should they stop and say: this is pi?

That question is the crux of viability. A correspondence theory of truth has no stopping rule. It is committed, in principle, to the full expansion. But nobody does that, because it does not prove viable for any actual purpose. 3.14 works for most engineering calculations. More decimal places work for orbital mechanics. The stopping point is always set by the task at hand, not by some ideal of complete accuracy.

A correspondence theorist might reply that they can simply know that pi's expansion is infinite without reciting it. But that reply concedes the point. To know that the expansion is infinite is already to accept a stopping rule. The question is what determines where you stop. Viability puts the stopping rule at the heart of truth. Correspondence theory treats it as a practical afterthought.

What I find striking is what the history of pi actually shows. The Babylonian engineer used 3. The medieval architect used 22/7. NASA uses 15 decimal places. The mathematician holds the concept of an infinite irrational ratio. Mathematically, pi has a unique value, but the descriptions of it that we work with vary, and the description that proves most robust across the widest range of tests is the one that survives. That is viability selecting the winner. Classical correspondence theory has little to say about why our descriptions of pi changed over time or differ across contexts. It treats these shifts as practical side-effects rather than as central to what truth is.
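To make the stopping rule concrete, here is a small Python sketch. The tasks, radii, and tolerances are invented for illustration; the point is only that which approximation of pi counts as sufficient is decided by the task, not by the number of digits:

```python
import math

# Toy illustration of task-relative stopping rules for pi.
# Each task supplies its own radius and error tolerance
# (values are illustrative assumptions, not engineering figures).
TASKS = {
    # name: (radius in metres, acceptable circumference error in metres)
    "garden pond rim": (2.0, 0.01),        # tolerate a centimetre
    "interplanetary arc": (1.5e11, 1.0),   # tolerate a metre over ~1 AU
}

APPROXIMATIONS = {
    "3": 3.0,
    "22/7": 22 / 7,
    "3.14": 3.14,
    "15 decimals": 3.141592653589793,
}

def sufficient(pi_approx, radius, tolerance):
    """An approximation is 'pi enough' once the circumference error
    it induces falls inside the task's tolerance."""
    error = abs(2 * pi_approx * radius - 2 * math.pi * radius)
    return error <= tolerance

for task, (radius, tol) in TASKS.items():
    viable = [name for name, p in APPROXIMATIONS.items()
              if sufficient(p, radius, tol)]
    print(f"{task}: viable approximations -> {viable}")
```

For the pond, 22/7 and 3.14 both pass; for the interplanetary arc, only the 15-decimal value survives. Multiple descriptions are viable, and the context, not the decimal expansion, sets the stopping point.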

Coping, Not Copying:

Once you notice that every stopping rule is task-relative, you are already moving toward a cybernetic picture: knowledge as regulation under constraint rather than as static mirroring.

Let’s look at the example of a lock. A lock can be opened with a key, a lockpick, or a hammer. All three work. None of them is the one true solution. The opening of the lock is the criterion, not the method. This is what viability means in practice. It does not collapse into the view that anything goes. The lock still has to open. But multiple paths can satisfy the criterion. This is pluralism: many viable methods, one external criterion.

Correspondence theory assumes that an accurate copy of the world can be made in the mind. But this assumes we have direct access to the world in the first place. Cybernetic constructivism starts from the opposite standpoint. We do not have direct access. What we have is a context, an observer embedded in a situation, acting under constraints. The cybernetician is aware of the trap of relativism that follows if epistemic constructivism is taken too far. The answer therefore is not to abandon external reality but to let it push back. External constraints are not obstacles to truth. They are its guide.

What emerges from this is not a copy of the world but a structural coupling with it. The focus shifts from copying the world to coping with it. It is worth pausing on the word itself. In Old English, “trēowþ” meant faithfulness and keeping one’s word. The narrowing toward factual correctness came much later. Its deepest root means firm and solid, from the same family as tree and trust. This suggests not a mirror of reality but something that holds fast under strain. Even in its language history, truth is less about a disembodied view from nowhere than about reliability in a lived relationship.

Viability, as the cybernetician uses the term, is not a lazy shortcut to truth. It is a different account of what truth is for. Knowledge is not a mirror held up to reality. It is closer to a tool. A tool fits or it does not fit, relative to what you are doing. 3.14 is not first true and then useful. Its usefulness is the ground of its truth-claim.

As I was thinking about this, I could not help but notice how well this connects to a tradition that predates cybernetics. The American pragmatists, Dewey and James in particular, argued that the truth of a belief is inseparable from its consequences in practice. Dewey called knowledge a form of inquiry, not a form of contemplation. James spoke of the cash value of a truth, meaning what practical difference it makes to the person holding it. Cybernetics inherits this instinct and gives it a more precise mechanical account, grounding it in feedback, constraint, and structural coupling rather than in human psychology alone.

Readiness-to-Hand:

This is also what Heidegger points to with the notion of readiness-to-hand. A hammer is not first an object you contemplate and then use. It is defined by its role in a practice. You notice the hammer as an object only when it breaks or goes missing. Otherwise, it is simply part of how you are engaged with the world. In the same way, a number like 3.14 is not something you verify against reality and then apply. It is already inside the practice. Its truth is its grip on what you are doing.

This brings in a second aspect of viability. It is always relative to what we have on our hands. The constraints of the task determine what counts as sufficient. A cybernetic observer is always embedded in a context of action. What counts as a sufficient description depends on what the observer needs to do next. This is why cybernetics reaches for viability rather than correspondence.

Viability, in more precise terms, is about maintaining the ability to act when facing constraints. It is not about removing constraints. It is about sustaining the capacity to continue within them.

King Sisyphus:

The myth of Sisyphus makes this idea vivid. In Camus’s retelling, Sisyphus is condemned to roll a boulder up a hill, only to watch it roll back down, again and again, for eternity. The constraint does not change. The boulder always returns. But Camus insists we imagine Sisyphus happy. And this is not a small thing.

The punishment depends on Sisyphus experiencing it as a punishment. The gods are not only constraining him physically. They are constraining him through his own suffering. If Sisyphus finds joy in what he does, the punishment loses its force, because it ceases to be punishment at all. The rock still rolls back. But the meaning of that event changes entirely.

There are two constraints at work here, not one. The physical constraint belongs to the hill and the boulder. The second constraint belongs to the relationship between Sisyphus and his situation. This to me is what Ashby points to when he treats constraint as a relation between an observer’s possible descriptions and what actually happens. The gods were counting on both. They chose this task because they expected Sisyphus to find it intolerable.

When he finds joy instead, the second constraint dissolves. The punishment ceases, not because the rock stops rolling, but because the meaning of the rolling changes. Sisyphus is no longer measuring his situation against an impossible standard, namely the boulder staying at the top. The rolling back is no longer a failure. It is just the next push.

This is also where the line between reframing and delusion holds. Sisyphus dissolves the relational constraint, the one the gods constructed, while remaining fully exposed to the physical one. The rock still rolls back. Delusion works differently. It does not reframe what you measure yourself against. It avoids the test altogether. Reframing and avoidance look similar from the inside, but they are not the same thing.

There is something else here worth noting. The gods defined the task as punishment. Sisyphus, by finding joy, becomes his own observer. He steps outside the frame the gods imposed. In second order cybernetics terms, he observes his own observing and chooses differently. Viability, in this reading, is not just functional persistence. It is the capacity to reframe one’s relationship to a constraint.

The Madman of Naranam:

I want to bring in a figure I find even more striking than Sisyphus for this purpose. Consider Naranath Branthan, the madman of Naranam from Kerala. He too rolled a large rock up a hill and watched it roll back down. He did this again and again. But he was not being punished. Instead, he did it for fun. The stories say he would laugh with joy and clap his hands as the rock rolled down. Naranath is not denying the physical constraint. The rock rolls back and he knows it will. His reframing does not change what happens. It changes what he measures himself against. He stops applying a standard that was never appropriate to begin with. He needed no outside observer to imagine him happy. His reframing was self-generated.

The story of Naranath meeting the goddess Kali makes this even more vivid. Kali offered him any boon he wished. He asked only to move his elephantiasis from one leg to the other. He did not ask to be free of it. He had understood the absurdity of demanding that the world be other than it is. That understanding is viability in its deepest form.

Camus’s formulation still requires an observer. We must imagine Sisyphus happy. Naranath does not wait to be imagined. This is, perhaps, the deeper cybernetic point. Viability looks like a property of the organism. But the Sisyphus example reveals it as partly a property of the relationship between the organism and its observer. Naranath collapses that distance. He is the observer of his own condition, and he chooses joy without being told to.

The Community Problem:

If viability is partly about how an individual organism relates to its constraints, the question arises of how communities do this collectively. Charles Sanders Peirce proposed that truth is what a community of inquirers would converge on given enough time. But this raises a prior question. Who gets to be in the community? A group that agrees the earth is flat is also a community. Peirce’s community of inquirers works only if the community remains open to the tests imposed by its environment. Without that, coherence collapses into shared delusion. The flat earth holds together perfectly well as a coherent position until you try to launch a satellite. At that point the context decides, not the community.

This is also where viability parts ways with a purely psychological reading of coping. A belief that works by insulating itself from defeat is not viable in the cybernetic sense. It is a closed loop. Popper called this falsifiability: a claim should be viewed as knowledge only if it is open to some possible observation that could defeat it. Viability makes the same demand from a different angle. A viable belief is one that remains genuinely exposed to the push-back of its constraints. Structural coupling requires contact. A placebo, for example, can open the lock once or twice. But a belief that cannot be tested by the lock at all is not knowledge. It is insulation.

Coherence and Its Limits:

When you encounter something genuinely new, you cannot easily make sense of it. This is because you do not yet have a network of understanding into which it can fit. The new observation has no hook to hang on. A claim does not arrive in isolation. It either coheres with the network of details you already hold, or it does not land at all. Reading widely and across diverse domains is one way to build a network rich enough to receive new things.

From this standpoint, truth has two criteria working together. First, it must cohere with what you already understand. Second, it must sustain your viability rather than erode it. The two are not equal. Coherence is necessary: a completely incoherent set of beliefs cannot support action at all. But coherence is not sufficient. A perfectly coherent but closed network will eventually fail you. Viability is the deeper criterion. Coherence is subordinate to it.

Correspondence theory assumes that truth is already out there, waiting to be found. Viability inverts this. Truth is not found. It is enacted in the relationship between an observer and their constraints. The question is not whether something matches reality. The question is whether it allows you to continue.

Frames and Open Questions:

Consider a rocket launch. The rocket either reaches orbit or it does not. That criterion is external and unambiguous. Political and social evaluations are harder because the criteria are themselves contested. Two groups disagreeing about a leader are not just disagreeing about a fact. They are disagreeing about what counts as a good outcome. That is a disagreement about values, not just evidence: a disagreement about which outcomes count as success.

I have written about von Foerster’s ethical imperative before in the context of Ackoff. Von Foerster offers one possible criterion that sits outside tribal agreement. His ethical imperative is to act always so as to increase the number of choices available. “Choices” here means systemic variety and the capacity to respond to constraints not yet encountered, not merely an expanded menu of options. That is at least a criterion that does not depend on who is in the tribe. But even that criterion has to be chosen by someone. The honest cybernetic answer to who decides the context is: the observer does, always. But the observer decides the frame. The universe decides whether the frame survives. These are not the same thing.

Even if truth is a property of propositions, any ascription of truth is made from within a perspective. Every act of saying this is true is an act by some observer under constraints. There is no view from nowhere. Existentialism arrived at this conclusion from a different direction. Kierkegaard argued that the deepest truths tell the individual what to do, not what to know. Nietzsche went further, famously claiming that there are no facts, only interpretations, and that we cannot see around our own corner. Pragmatism made a parallel move, arguing that truth is not a fixed target but a process, something that happens to an idea as it is put to work.

Cybernetic constructivism inherits both traditions and gives them a more precise account of what the corner is made of and what putting an idea to work actually means. What we can ask is whether a perspective is richer, more generative, and more honest about its own assumptions than another. Viability is not a compromise on truth. It is a more honest account of what truth has always been doing.

One question remains open. This post has argued that the universe decides whether the frame survives. But what happens when a community is so digitally insulated that it never has to launch a rocket? When algorithmic walls are thick enough, the physical pushback never arrives. A flat earther, for example, does not need to test their belief if they never have to launch a satellite. This is perhaps the sharpest challenge viability faces in the present moment.

The answer the cybernetic constructivist would give is that insulation is itself a constraint. A community that never tests its beliefs against the world is not viable in any durable sense. It is merely coherent. And coherence, as this post has argued, is the subordinate criterion.

The Full Arc:

This post began with a simple question. Where should a truthist stop and say this is pi? That question opened into something larger. Truth, from a cybernetic constructivist standpoint, is not a mirror held up to reality. It is a relationship between an observer and their constraints. The stopping point is always set by what you need to do next.

The lock example showed that viability is not relativism. Multiple paths can work, but the lock still has to open. The Peirce detour showed that agreement is not the criterion either. A community can agree and still fail when the context changes. Sisyphus and Naranath showed that viability is not merely functional. It is the capacity to find a relationship to your constraints that allows you to keep going. Naranath, offered any boon he wished, asked only to move his elephantiasis from one leg to the other. He had understood the absurdity of demanding that the world be other than it is.

And the coherence point showed that for something new to register as true, it must first find a hook in what you already understand, and then sustain rather than erode your ability to act. A community that never tests its beliefs against the world is not viable in any durable sense. The universe does not argue. It simply decides which frames survive.

The universe is patient.

Stay curious and always keep on learning…

If you liked what you have read, please consider my book “Second Order Cybernetics,” available in hard copy and e-book formats. https://www.cyb3rsyn.com/products/soc-book

Notes:

In referencing the work of Martin Heidegger, I want to acknowledge the deeply troubling fact of his affiliation with the Nazi party. This aspect of his life casts a long and painful shadow over his legacy. While I draw on specific philosophical ideas that I find thought-provoking or useful, this is not an endorsement of the man or his actions. Engaging with his work requires ethical vigilance, and I remain mindful of the responsibility to not separate ideas from the broader context in which they were formed.

What Is the Right Word?

In today’s post, I want to sit with a question that keeps surfacing in discussions about artificial intelligence and cognition, and one I am increasingly convinced is itself part of the problem. The question is: does AI know? The reason it may be the wrong question is not that the answer is obviously “no”, but that the word “know” is doing work it was never designed to do when we apply it to these entities.

The World Perturbs but Does Not Instruct:

In an earlier post, I looked at what cybernetic constructivism means for how we understand knowledge and experience. One of its central claims, following Maturana, is that the world perturbs but does not instruct. Meaning is not something delivered from outside and stored inside. It is enacted through structural coupling, through the accumulated history of an organism maintaining its viability in a world that pushes back.

Before carrying this idea forward, it is useful to note that Maturana developed it specifically to describe living biological systems, organisms that produce and maintain their own boundaries and whose continued existence is genuinely at stake in every encounter with the world.

Let’s look at a simple physical example to clarify the principle before we apply it to anything more complex. Take a golf swing: a club meets a ball with a certain force and angle, and the ball soars through the air. Now take the same swing, the same force, the same input, and apply it to a basketball. The basketball, unlike the golf ball, skids away at an odd angle. In both cases, the action was identical. What was different was the internal structure of the object receiving it. The world did not send two different instructions. It applied one perturbation to two different structures, and each responded entirely according to what it was. This is what it means to say the world perturbs but does not instruct. The perturbation is necessary but not sufficient. What shapes the response most is the structure that meets it.
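A minimal sketch can make this precise. In the toy model below, the masses and restitution values are invented stand-ins for "internal structure"; the one thing held fixed is the perturbation itself:

```python
# Toy sketch of "the world perturbs but does not instruct":
# one identical perturbation (an impulse), two structures, two responses.
# Masses and restitution values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Ball:
    name: str
    mass_kg: float        # internal structure, part 1
    restitution: float    # internal structure, part 2 (0..1)

    def respond(self, impulse_ns: float) -> float:
        # The input is the same for every ball; the response velocity
        # is determined entirely by the receiving structure.
        return self.restitution * impulse_ns / self.mass_kg

impulse = 2.5  # the single, structure-blind perturbation (N*s)

golf_ball = Ball("golf ball", mass_kg=0.046, restitution=0.8)
basketball = Ball("basketball", mass_kg=0.62, restitution=0.8)

for ball in (golf_ball, basketball):
    print(f"{ball.name}: {ball.respond(impulse):.1f} m/s")
```

The same impulse sends the light golf ball flying and barely moves the basketball. Nothing in the impulse encodes either outcome; the difference lives entirely in the structure that receives it.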

Maturana’s claim, applied to living organisms, is that this structural responsiveness is not incidental but constitutive. Cognition just is this ongoing history of structurally shaped encounters with a world that pushes back. The environment cannot tell the organism what to do or what to become. It can only disturb, and the organism responds according to its own organization. The inanimate example of the golf club shows this in its starkest form, where the relationship between perturbation and response is straightforwardly causal. We have the same input, the same structure, the same output, each time. But as organizational complexity increases and as we move toward living organisms, something shifts. The response becomes less strictly causal and more dispositional. This means that even with a relatively stable internal structure, the output can vary depending on the current state and history of the organism.

One further nuance is that the structure does not treat all perturbations equally. Some disturbances are absorbed as structurally significant and others pass through without consequence, but this selectivity is not a filter applied from above. It emerges from the organism’s own history of structural coupling, from the accumulated shape of all the encounters that have formed it, which means that what counts as an informative perturbation is itself a product of living rather than an abstract sorting mechanism. The world does not simply push; the organism constitutes what counts as a push worth responding to, and it does so because of what it has already been through.

This framing makes the question about AI and knowledge considerably more precise, because it separates two things that are easy to conflate: linguistic competence in the domain of coordination, and the kind of structural shaping that emerges only in the domain of perturbation.

Language as Residue:

Large language models exist almost entirely in the domain of coordination. They have been trained on what Michael Polanyi might call the articulable fraction of human knowledge, everything that humans have stepped back from experience to render into language. But as I have written before, language without world is a serious philosophical problem, not merely a technical limitation.

Polanyi’s observation that we can know more than we can tell connects to Heidegger’s distinction in a way that sharpens the argument considerably. When we are in the ready-to-hand mode, absorbed in work, flowing through a task, we are doing things we know how to do without being able to say how we know. The carpenter handles the grain, the nurse reads the room, the experienced driver navigates traffic. The knowing is in the doing, and it is there precisely because we have not stepped back from it. The moment we step back, when the hammer breaks or we are asked to explain what we just did, we enter the present-at-hand mode, and that is when language becomes possible. We can now describe, analyze, articulate. But something has already been lost in the transition.

The limit Polanyi identifies is not practical but structural. To render the ready-to-hand into language, one has to leave the mode in which it was alive. The describing and the doing cannot occupy the same moment. The writing is produced from outside the coping, looking back at it, and what it captures is necessarily partial.

Let’s consider some examples to look at what happens when a human being produces a piece of text. A carpenter writes about joinery after years of handling wood, feeling grain, making mistakes, discovering through the resistance of the material what works and what does not. A nurse writes about patient care after countless encounters where something felt wrong before she could name it, where the knowing was already in her hands and her attention before it ever became a sentence. I have come to see that in every case, the writing is a residue, what remains after the person has stepped back from absorbed, ready-to-hand engagement and attempted to render some portion of it into language. The coping came first. The describing came second. And the describing, however careful, never fully captures what the coping contained, because the ready-to-hand does not survive the translation into the present-at-hand intact. Something is always left behind in the shift from doing to saying.

This is precisely what an LLM is trained on, the saying, not the doing. It receives the descriptions that people produced when they stepped out of engaged coping and tried to articulate what they had been doing. It never receives the coping itself, because coping does not involve retrospection and narration. Coping is absorbed, forward-moving engagement, and retrospection and narration require stepping out of that absorption.

The absorbed craftsman, the skilled nurse reading a room, none of what they are doing leaves a trace in language while it is happening. What they are doing only enters language afterward, partially, imperfectly, in the present-at-hand mode that Heidegger describes as a kind of breakdown of the original engagement. The LLM is trained entirely on those breakdown products, which means its entire organization of apparent knowledge is built from the layer of experience that was already one step removed from the living of it. The burn came before the word “hot,” and the word is all that was passed on.

One might object here that language is not merely a dead residue, that a beautifully written sentence about grief can make a reader’s chest tighten, that text has genuine generative power. This is true, but it actually confirms rather than challenges the argument. When a reader weeps at a sentence about loss, that response is internally generated. The text does not transmit the grief. It acts as a perturbation on an entity that already carries a body, a history of loss, a nervous system shaped by its own structural couplings with pain and absence. The reader brings the world to the text. The text triggers something that was already there. When an LLM processes the same sentence, there is no accumulated history of loss for it to trigger, no body shaped by living through something the words are pointing at. The perturbation lands on nothing.

One might press this point further and note that LLM outputs do in fact vary: ask the same question twice and you will often receive a different answer, because the generation process involves sampling from probability distributions over learned token sequences rather than deterministically retrieving a fixed response. This variability has a surface resemblance to the dispositional quality of biological response. The difference, though, is in what the variation is grounded in.

In a living organism, dispositional variation is an expression of a history of structural coupling, of a self with a particular past that is still alive in how it meets the present. In an LLM, the variation is stochastic, a property of the sampling algorithm applied to probability weightings derived from training data. The output shifts not because something in the entity is responding from within a lived history, but because the decoding process introduces controlled randomness. The resemblance is real enough at the surface level that it deserves to be named, but naming it clearly is also what shows why it does not close the gap.
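For readers unfamiliar with how that sampling works, here is a minimal sketch. The vocabulary and logit values are invented for illustration; the point is that the variation comes from a random draw over a fixed distribution, not from anything resembling a lived history:

```python
import math
import random

# Minimal sketch of why LLM outputs vary: the next token is sampled
# from a probability distribution over the vocabulary, so repeating
# the same "prompt" can yield different continuations.
# Vocabulary and logit values below are invented.

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax over logits, then one random draw. The variation is a
    property of the sampler, not of any history inside the system."""
    scaled = [l / temperature for l in logits.values()]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits.keys()), weights=probs, k=1)[0]

logits = {"rock": 2.0, "boulder": 1.6, "stone": 0.5}  # hypothetical

rng = random.Random(0)
draws = [sample_next_token(logits, rng=rng) for _ in range(5)]
print(draws)  # same distribution each time, potentially different tokens
```

Lowering the temperature concentrates the probability mass and makes the output nearly deterministic; raising it flattens the distribution. Either way, the knob being turned is a property of the decoding algorithm, which is exactly the contrast with dispositional variation drawn above.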

When the Body Enters the World:

The question that seems worth sitting with is not does AI know, but what happens when we try to ask whether an embodied AI, one that acts in a physical environment and receives consequences, might be doing something that begins to resemble knowing. This is a genuinely different question from the one we have been asking about LLMs, and it deserves to be treated as such.

Embodied AI is not simply an LLM with motors attached. An entity that learns locomotion through physical trial and error in an environment, whose internal organization is being shaped by a history of navigated perturbations rather than by gradient descent on text, is at least potentially accumulating something in the domain of perturbation rather than purely in the domain of coordination. This seems worth taking seriously. It is not nothing. The question is what, precisely, it is.

Two Orders of Seeing:

Here is where the distinction between first order and second order cybernetics brings more clarity, and it is a distinction that most discussions of AI and systems thinking quietly step over.

First order cybernetics, the tradition of Wiener, early Ashby, Shannon, approaches a “system” from the outside. The observer maps inputs, outputs, and feedback loops, and asks whether the system’s behavior matches a target. From this vantage point, the question of whether an embodied AI knows is a functional question: does it respond to its environment in ways that are indistinguishable from a system we already credit with knowledge? If a system navigates novel terrain, corrects errors, develops what look like anticipatory responses, and does all of this without explicit programming for each case, then within this framework there is no principled reason to withhold the attribution of knowledge. What walks like a duck and quacks like a duck is, for the first order cybernetician, a duck. This is a functionalist position, and it has genuine value within its own terms.

Second order cybernetics challenges this answer, not by denying the observations but by interrogating the observer. Heinz von Foerster insisted that the observer is always part of the system being observed, and that any account of a system which pretends to a view from nowhere is concealing its own constitutive role. When we say the embodied AI “knows,” we are making a distinction, and that distinction is drawn by an observer who is themselves a cognitive entity with a particular structure, history, and set of concerns. The duck is not a duck in itself as an observation. The duck is a distinction that a particular kind of observer draws within a particular kind of world.

A first order observer can say: the behavior matches the target, therefore attribute the property. A second order observer has to ask: who is doing the attributing, from within what structural coupling, and what does that attribution make invisible?

What it tends to make invisible is the question of autopoiesis. Maturana and Varela proposed that living cognitive entities are self-producing, that they continuously generate their own boundary, their own distinction between self and world, through the very processes that constitute them. A cell produces the membrane that defines it. A nervous system maintains the organization that makes it a nervous system. Cognition, on this account, is not something that happens in a system; it is inseparable from the system’s ongoing self-production. The knowing and the being are not separable events.

Current AI systems, including embodied ones, do not produce their own boundary in this sense. The hardware, the objective function, the definition of successful performance, all of this is determined externally by designers. The system’s closure is assigned, not generated. One might point to emerging architectures that modify their own weights, prune their own connections, or adapt their own structure during deployment, and ask whether these represent an intermediate case. The observation is fair and the edge is genuine. What such systems do not yet do, though, is generate the conditions of their own existence from within, which is what Maturana and Varela meant by autopoiesis.

A Record Is Not a History:

Polanyi’s account of tacit knowledge seemed to carry an implication about machines. If tacit skills like riding a bike cannot be fully articulated, and machines can only work from explicit instruction, then the tacit domain would remain closed to them. You cannot write down the rules for balancing on two wheels in a way that produces a cyclist. But robotic systems can now ride bikes. They do not follow rules. Through reinforcement learning and similar methods, the robot receives feedback and adjusts its parameters over many cycles of trial. Something has been acquired without articulation.
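The shape of that acquisition can be sketched in a few lines. This is not a bicycle controller, just a toy hill-climbing loop under invented assumptions: a single "balance gain" parameter, a made-up environment whose reward peaks at a value the learner is never told. No rule for balancing is ever written down; the parameter is simply nudged by feedback over many trials:

```python
import random

# Toy sketch of acquisition without articulation: a parameter is
# adjusted by trial-and-error feedback, with no explicit rule stated.
# The "environment" and its optimum (0.7) are invented for illustration.

def environment(lean_gain):
    # Stand-in for the world's push-back: reward peaks at a gain of 0.7,
    # a value the learner never sees directly.
    return -(lean_gain - 0.7) ** 2

def learn(trials=500, step=0.05, seed=0):
    rng = random.Random(seed)
    gain = 0.0                       # initial, uninformed structure
    best = environment(gain)
    for _ in range(trials):
        candidate = gain + rng.uniform(-step, step)
        reward = environment(candidate)
        if reward > best:            # keep what worked; nothing is articulated
            gain, best = candidate, reward
    return gain

print(f"learned gain: {learn():.2f}")  # settles near the unstated optimum
```

What accumulates here is exactly what the next paragraphs call a record: a number that drifted toward what worked. Nothing in the loop narrates, remembers, or carries the trials forward as a history.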

But observing a robot ride a bike raises a question that Polanyi might have considered even more fundamental than the acquisition problem. Each ride seems, in an important sense, new to the robot. And even if the robot accumulates weight updates across sessions, even if something of each ride persists in the network, what accumulates is a record, not a history. For a human, riding a bike becomes part of who you are. The first wobbling attempt, the moment it clicked, the rides since, these do not sit in a log. They have shaped a body and a self that carries them forward invisibly. Merleau-Ponty would say the body has a memory that is not representational, that the competence lives in the flesh, in the motor schema that fires before any conscious intention forms. The human cyclist is not retrieving a stored procedure. They are the person who learned to ride, and that learning is still active in how they inhabit their body and their world.

Heidegger’s notion of Gewesenheit, having-been-ness, is useful here. It is not a claim about data storage or persistent memory. Some artificial systems do maintain functional continuity. Weight updates accumulate, parameters persist, prior experience shapes future outputs. But that is not what Gewesenheit points to.

What it points to is the past not as stored information but as something the self is still living out, always already in the service of its projection forward. The having-been and the being-toward are a single movement. A robot that accumulates weight updates has functional continuity. It does not have this forward-carrying structure. What accumulates is a record, not a history.

There is a distinction here that the word “knowledge” obscures. Acquiring tacit competence is not the same as incorporating it into a continuous self. An embodied AI can do the first without the second. That gap may be more philosophically revealing than the acquisition question ever was.

There is a further dimension to this that the bike-riding example makes vivid, and it is one that tends to get lost when embodiment is treated simply as the addition of a physical body to an otherwise cognitive system. Merleau-Ponty’s central argument in the Phenomenology of Perception is that the mind is not a separate entity that inhabits or operates a body. The body is not the vehicle for a mind that does the real cognitive work from somewhere inside. Cognition is irreducibly embodied, which means that the body is not an instrument of thought but its very medium. When the experienced cyclist leans into a corner, she is not executing a mental calculation about angle and velocity. The body already knows what to do before any representation forms. The knowing and the bodily doing are not two events in sequence. They are a single event.

Von Foerster made a similar point when he said, “If you want to see, learn how to act.” Perception is not passive reception of a pre-given world that the mind then processes. Perception is constituted through action. The acting and the seeing are intertwined, and an entity that cannot act, cannot be perturbed in ways that have consequences for its own continuation, cannot perceive in the relevant sense either.

One might object that modern embodied AI systems using deep neural networks do not separate perception and action sequentially. Sensory input and motor output are processed simultaneously through shared weights. That begins to look like the unified loop von Foerster is describing. The observation is fair. But the question it raises is not architectural. It is autopoietic.

The issue is not whether perception and action are processed in parallel. It is whether that processing belongs to a self-producing system whose own continuation is at stake in every cycle. In a living organism, the perception-action unity is inseparable from the fact that the organism is continuously producing the very system that perceives and acts. The unity is not a design feature. It is a consequence of being alive in Maturana’s sense.

The deeper question remains. Is anything at stake for the robot in its own navigation? Does it maintain its viability, or does someone else maintain it on its behalf? A system whose boundary is externally defined and maintained is not viable in its own right. It is viable by proxy. That distinction matters when we are asking what kind of knowing such a system could be doing.

The Full Arc:

So what is the right word for what an embodied AI accumulates through its history of structural coupling, if knowledge is not quite right?

One candidate is operational trace. It is the structural residue left by perturbations navigated, shaped into patterns of response that fall short of knowledge but go beyond mere computation. It is a phrase rather than a single word, but perhaps that is fitting. What these entities accumulate may resist the compression that a single word implies.

It is not a record, which is passive and retrievable. It is not a representation, which stands for something external. A trace is active. It shapes what comes next from within the structure that carries it. What it lacks is existential continuity, the grounding in a self that lives it forward as a living entity would.

This is not a dismissal of what embodied AI is doing. It is an attempt to describe it better. What these developments may be teaching us is that the word “knowledge” contains at least three things we had not clearly separated: acquiring adaptive competence, incorporating that competence into a continuous self, and grounding both in a self-producing entity that maintains its own existence. Embodied AI can approximate the first, cannot achieve the second, and structurally cannot reach the third. This is not because of a technical limitation, but because no amount of architectural sophistication generates a self that lives its past forward, produces its own boundary, or maintains its own existence from within.

The distinction is about what these processes are, not about their value. The argument is not that operational traces are inferior. It is that they are different in a way the word “knowledge” obscures. Keeping that distinction clear is what allows the question to remain open.

Simply put, the accumulation of data without the accumulation of scars is not knowledge. The word “scars” is rhetorical, not precise. It points toward what the argument has been building throughout. Knowledge involves a self that has been genuinely marked by its encounters.

Finding the right word is not a terminological exercise. It is a way of keeping the question honest.

Stay curious, and always keep on learning…

If you liked what you have read, please consider my book “Second Order Cybernetics,” available in hard copy and e-book formats. https://www.cyb3rsyn.com/products/soc-book

Notes:

In referencing the work of Martin Heidegger, I want to acknowledge the deeply troubling fact of his affiliation with the Nazi party. This aspect of his life casts a long and painful shadow over his legacy. While I draw on specific philosophical ideas that I find thought-provoking or useful, this is not an endorsement of the man or his actions. Engaging with his work requires ethical vigilance, and I remain mindful of the responsibility to not separate ideas from the broader context in which they were formed.

An Introduction to Cybernetic Constructivism:

In today’s post, I want to offer an introduction to Cybernetic Constructivism. The ideas discussed here will form part of the second edition of my book, Second Order Cybernetics. The second edition will include a first half where I go into the introduction of cybernetics and related ideas. The post is slightly longer than usual.

I will be drawing on ideas from Martin Heidegger, Heinz von Foerster, Humberto Maturana, Francisco Varela, Maurice Merleau-Ponty, Gaston Bachelard, and Ludwig Wittgenstein. Each one has contributed a specific piece of the picture, and together they help clarify what Cybernetic Constructivism means and why it matters.

Before we proceed, it is worth saying what this post is not.

It is not an argument for solipsism. It is not a denial of constraint. It is not an attempt to dissolve the world into language.

Instead, it is an introduction to a framework, and a clarification of what that framework actually claims. Most criticisms of Cybernetic Constructivism arise from misunderstanding one central idea: informational closure.

Informational Closure:

As I have written before, each of us is informationally closed. In other words, meaning is not something we passively receive in a pre-formed state. It is worth pausing on the word itself. Information derives from the Latin “in-formare”, meaning “to give form”. But we must be clear about what this means for us as organisms. We do not simply take a formed meaning inside us. Changes occur in our nervous system through interaction with the world. There is no pre-formed content in what occurs. It is within us, through our structure, history, and embodied coping, that we actively give form to those perturbations. The world does not deliver meaning. We enact meaning out of these interactions. We will look at a simple example to explain this further.

When light strikes the retina, no meaning travels with it. What propagates through the organism is electrochemical activity, the same type of electrical impulse regardless of whether we are looking at a red rose or touching a hot stove. Even more striking, similar electrical stimulation routed to different cortical regions produces entirely different qualitative experiences. In one region it may produce visual phenomena. In another, tactile sensation or pain. The energy carries no label telling the organism what kind of experience to generate.

This is what von Foerster called “undifferentiated encoding”. It is what Maturana meant when he said the world perturbs but does not instruct.
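Undifferentiated encoding can be made concrete with a crude sketch. The "regions" below are just Python functions invented for illustration, not a model of cortex: the same spike pattern, handed to differently organized receivers, yields qualitatively different outcomes, because the signal itself carries no label.

```python
# Illustrative only: the spike pattern and "regions" are invented.
spikes = [1, 0, 1, 1, 0, 1]   # the same undifferentiated activity

def visual_region(signal):
    return f"brightness level {sum(signal)}"

def tactile_region(signal):
    return f"pressure level {sum(signal)}"

def pain_region(signal):
    return f"pain level {sum(signal)}"

# Identical input, routed to differently organized structures,
# generates entirely different "experiences".
for region in (visual_region, tactile_region, pain_region):
    print(region(spikes))
```

The energy arriving at each function is the same; what differs is the structure that receives it.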

Informational closure does not mean sensory isolation, or undifferentiated noise, or the absence of constraint. It means something precise: the organism’s state changes are determined by its own structure. External events do not instruct the organism what to become. They perturb it. The response is determined internally.

It does not deny constraint. Instead, it denies instruction. That is the distinction worth holding onto.

A natural objection may arise here. We will look at another simple example to explain this. Fiber optic cables also transmit pulses of light, and yet they clearly carry structured information. Does this not show that a signal can contain meaning independent of the receiver?

The key distinction for the fiber optic example is agreement. A fiber optic system works because sender and receiver were built around a shared code. We decided that a particular pattern of pulses means the letter “A”. The receiver was engineered in advance to interpret that pattern in a specific way. No such pre-agreed semantic code exists between the world and the organism. One can say that the organism’s brain sits in a dark, silent skull receiving electrical spikes, and it must generate significance from those spikes according to its own organization.
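The role of the pre-agreed code can be shown in a few lines. The bit pattern and both mappings below are invented for illustration: decoding succeeds only where a convention was engineered in advance, and a receiver with a different internal organization gives the same pulses an entirely different form.

```python
pulses = "01000001"                     # the same physical signal

# Sender and receiver were built around a shared convention
# (an ASCII-like mapping, chosen purely for illustration).
engineered_code = {"01000001": "A", "01000010": "B"}
print(engineered_code[pulses])          # the shared code recovers "A"

# A receiver with a different structure, and no shared code,
# still responds, but gives the perturbation its own form.
different_structure = {"01000001": "warmth"}
print(different_structure[pulses])      # same pulses, different form
```

The "meaning" of the pulses lives entirely in the mapping each receiver embodies, which is the point of the example: between world and organism, no such mapping was ever agreed.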

Consider how differently color is experienced across species. Bees see ultraviolet light that is entirely invisible to humans. Dogs have limited color discrimination compared to humans. Humans with typical trichromatic vision experience a richer color range than those with color blindness, whose photoreceptors are organized differently. All of these organisms inhabit the same physical world, encountering the same electromagnetic radiation. And yet each enacts an entirely different experiential reality from it. Color does not exist as transmitted content in the world. It arises through the specific organization of the organism encountering it. Same energy, different structures, different worlds. The wavelength distribution exists independently. The qualitative experience of color does not. The organism is not a passive receiver. It is an active participant in the generation of its own experience.

The Question We Should Be Asking:

Critics of Cybernetic Constructivism often ask: if we are informationally closed, how do we know reality as it really is?

That is not the most productive question.

The more fundamental question is: how do organisms remain viable under informational closure?

Viability, the capacity for continued existence under constraint, is the explanatory anchor. Rather than asking how the mind mirrors the world, we ask how a structurally determined organism manages to persist within a world of real constraints it does not mirror in representational form.

Structural Coupling and Evolutionary History:

We are not blank slates constructing a world from nothing. We are the result of millions of years of successful structural couplings between organisms and recurring environmental constraints.

A useful way to think about this is differential retention under constraint. We will use an example here. Imagine objects passing through a field of openings of varying sizes, such as a toddler’s shape sorter. Only certain structures pass through. The openings were not designed for them. They simply constrain what continues. Over vast stretches of time, this process sediments into the biological organization we now embody. Evolution did not write a semantic code into the organism. It filtered structures.

A reductionist might look at all of this and conclude that evolution is simply a slow form of instruction. That the environment has, over billions of years, written its code into the organism. But this misreads what is actually happening. Evolution does not instruct. It eliminates. There is no agent out there watching over organisms, and directing them toward viability. There is only the constraint, and what remains after the constraint has done its work. The organism that survived did not decode a message from the world. It was simply not removed. No meaning was transferred and no code was agreed upon. What remains is fit, not understanding. There was no instruction. Only elimination, and what survived it. To call this instruction is to smuggle representationalism back in through the side door, which is precisely what Cybernetic Constructivism is questioning.
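Elimination without instruction can be simulated directly. In the sketch below, with arbitrary and purely illustrative numbers, the constraint never transmits anything to the structures; it only removes those that do not pass, and what remains simply fits.

```python
import random

random.seed(1)
# A population of varied "shapes", generated with no reference
# to the constraint they are about to meet.
structures = [random.uniform(0.0, 10.0) for _ in range(1000)]

OPENING = 3.0   # the constraint: an opening of fixed size

# No message is sent, no code is agreed. Structures that do not
# pass are simply gone.
survivors = [s for s in structures if s <= OPENING]

# What remains fits the opening, yet nothing was transferred to it.
print(len(survivors), all(s <= OPENING for s in survivors))
```

Every survivor "fits", but no survivor decoded anything. The fit is a residue of removal, which is the distinction the paragraph above is drawing.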

The Earliest Life Forms:

It helps to think about the earliest organisms, which had no eyes and no ears, only chemical gradients and membrane perturbations. Their experiential reality, if that term can be cautiously applied, would not have been a world of objects or articulated shapes. There would have been no color, no sound, no texture as we know them. Just a field of intensities, regions of attraction and repulsion, without a subject standing apart from that field. As we saw with the color vision example, color is not delivered from the world. It is generated through the organism’s own structure. The earliest organisms had no such structure to generate it with.

Over evolutionary time, couplings grew more complex. With the emergence of nervous systems came multimodal integration. What we now call a flower is a historically stabilized enactment within a long lineage of viable coupling. But this does not make the flower fictional. It means that the flower as lived, with color, scent, texture, and cultural resonance, is inseparable from the history of coupling that makes such experience possible.

Our world feels rich because our coupling is rich. That richness has been built over an immense span of time. This is what generates the sense of unmediated experience of reality.

A Rose and the Accumulation of Meaning:

Consider encountering a rose for the first time. The initial encounter is visual, a red form against green. The next encounter adds touch, bringing softness and texture. Later, scent enters, and the pattern stabilizes further. Each interaction is integrated according to the organism’s structure and history. Over time, the rose becomes meaningful in a way that is neither purely internal nor purely external.

It may be tempting to treat the rose as a self-contained whole. But the rose is connected to a plant, the plant to the soil, the soil to a broader ecological network. The flower bed is in a yard, the yard within a landscape, the landscape within larger climatic and geological constraints.

Where, then, does the rose end?

The world does not present itself with highlighted boundaries. Distinctions are drawn relative to purposes, practices, and modes of engagement. The rose is not unreal. But the unity we call rose depends on how we carve the field of relations.

It is here that critics often invoke nominalism, claiming that if boundaries are drawn rather than discovered, then wholes must be mere names. But that move collapses something that is worth keeping distinct. The constraints are real. The enactment is real. The distinction is ours. We will come back to nominalism shortly.

There is no semantic packet traveling from flower to organism. But neither is there a free-floating projection generated entirely from within. The rose and the organism, in recurrent interaction, enact a stable experiential domain together. To experience a rose at all requires sedimented capacities shaped by prior coupling, and those capacities carry a history that stretches far beyond any individual lifetime.

The lived rose is a temporally sedimented pattern of embodied interaction. The word rose is a stabilizing coordination within language. These are not the same kind of thing, and confusing them is where much philosophical difficulty begins.

Structural Coupling: The Shoes:

Maturana offered an example that makes structural coupling concrete. Imagine two identical pairs of shoes. One is brand new. The other has been worn daily for months. Although they began with the same design and material, the worn pair now bears the history of its interactions. The leather has softened where the foot repeatedly pressed. The sole has compressed in patterns that mirror the gait of its wearer.

At the same time, the foot has changed as well. Skin has thickened into calluses. Subtle adjustments in posture and movement have developed. What we call a good fit is not a property that was inserted into the shoe from the outside. It is the outcome of recurrent interactions in which both shoe and foot have changed according to their own structures. Their congruence is historical.

This is structural coupling. The shoe does not instruct the foot how to change. The foot does not instruct the shoe. Each responds to perturbations in ways determined by its own structure. Over time, their changes become mutually coherent. If a blister appears, it is not given by the shoe but generated by the foot under frictional perturbation. The environment participates in triggering change, but the specification of that change lies within the structure of the system itself. Fit, comfort, or injury all emerge from this history of recurrent interaction. Not from transferred information.

The organism does not represent the world. It has been shaped by it. Structural congruence should not be confused with representation. Representation implies an internal model that stands in for an external object. Structural congruence is historical fit without internal mirroring. The organism does not contain a picture of the world. Instead, what remains is a historically shaped pattern of viable responses in ongoing coupling with the world.

Skilled Coping and the Body:

Before we describe, we are already navigating. Before we theorize, we are already responding. This is what Heidegger called the ready-to-hand. The world shows up not as objects to analyze, but as a field of practical engagement, situations to navigate, breakdowns to manage.

Maurice Merleau-Ponty deepened this insight. For him, the body is not a machine receiving instructions from a mind. The body is already immersed in a meaningful world before any explicit thought occurs. We reach for the cup before we decide to reach. We adjust our footing on uneven ground before we consciously register the unevenness. Meaning is lived before it is spoken.

The distinction between inside and outside, subject and object, arises when we step back and reflect. It is a product of reflection, not a feature of lived engagement.

The distinction between subject and object feels so natural that it is easy to mistake it for a primary feature of experience. But Heidegger’s account suggests otherwise. The split emerges secondarily, when engagement breaks down and we are forced to step back and examine what we were previously just using. Before the breakdown, there is no detached subject observing an independent object. There is only absorbed coping. Maturana pushes this further. If the organism is informationally closed, there is no view from nowhere, no transparent window between a subject and an independent world. The observer is never outside what they are describing.

This has consequences beyond philosophy. Most of our institutional and scientific language is built on the assumption of a detached observer describing an independent world. We design organizations, write policies, and build models as if the describer stands outside what is being described. Cybernetic Constructivism does not reject description. It asks us to remember that every description is made from somewhere, by a structurally determined organism, under constraint.

Viability in this realm is unforgiving. It means maintaining structural coupling within an environment of real constraints. If we misjudge the cliff’s edge, gravity does not negotiate. The world does not wait for our descriptions.

Communication Under Informational Closure:

When we step back to describe, categorize, and theorize, the world shifts into what Heidegger called the present-at-hand. In lived experience, relations are fluid and entangled. Language gathers them into unities. It stabilizes and smooths what was rough and resistant. This is where we speak of “systems” and wholes. The constraints belong to the world. The enacted coupling belongs to lived engagement. The bounded unity we call a system belongs to word.

Bachelard explored this through language and imagination. Words do not simply label reality. They reconfigure how we experience it. A term like “system” does not merely point to something pre-existing. It organizes a field of relations, highlights certain patterns, and backgrounds others. Language renders the world navigable in thought. But if we treat the conceptual map as identical to the terrain, we obscure the very dynamics it was meant to represent.

A common objection arises at this point. If we are informationally closed, how can language function at all?

Language is not a conveyor belt of inner content. It is coordinated behavior among structurally coupled organisms. When we speak, we perturb. Each listener reorganizes according to their own structure and history. Meaning emerges in the coordination, not in the transmission. The coordination is real. The transmission is not.

The persistence of the package metaphor illustrates why this is difficult to grasp. Because communication has stayed viable across generations, words began to feel like discrete objects carrying meaning from one mind to another. Viability created the impression. The package is a useful metaphor, but it should not be mistaken for a mechanism.

Wittgenstein provides more clarity on this. Meaning resides in shared practice, not in private inner images. To understand a word is not to possess the same experience as the speaker. It is to participate competently in a form of life. Public language feels objective precisely because it has been stabilized through repeated interaction among similarly structured organisms. But it never escapes informational closure. It is made possible by it.

All communication is imperfect. Or to put it bluntly, all communication is miscommunication at some level. No two organisms share identical structures or histories. Coordination is always approximate. And yet it works, because evolutionary tuning and shared histories of structural coupling create enough alignment to stabilize linguistic practices. Viability in the realm of words is social and symbolic. One can hold false beliefs for an entire lifetime, that the earth is flat, that unseen agents govern the skies, and still survive, so long as those beliefs do not disrupt embodied coping. Linguistic coherence is not the same as biological viability.

Word is not World and World is not Word:

At this point a reader might object: is this not just nominalism?

Nominalism is one of the oldest positions in philosophy, going back to medieval disputes about universals. The realist says that categories like redness, humanity, or system exist independently as real features of the world. The nominalist says no. Only particular things exist. The rose exists. The apple exists. But redness is just a name we apply to both because they produce a similar effect on us. Categories are linguistic conveniences, not real features of the world.

That is not the position here, and it is worth being clear about why.

What is being argued here is something more layered. The concern is not to describe reality as it is in itself, but to clarify how reality becomes meaningful within lived engagement. The world has real constraints and causal patterns that exist independently of our vocabulary. Organisms are genuinely coupled with those constraints through embodied coping. Gravity does not depend on what we call it. Structural failure does not wait for a description. Language operates as a third layer. It stabilizes and organizes the patterns already enacted in coping into named unities like system, organization, or institution.

So, the Cybernetic Constructivist position has three distinct layers. There is the world of real constraints. There is embodied enactment, the skilled coping through which we are already engaged with those constraints before any description occurs. And there is linguistic articulation, the words that stabilize what has been enacted into shareable, revisable conceptual unities. Nominalism collapses all three into the third layer. Naive realism collapses all three into the first. The position here keeps them differentiated.

The unity we articulate, the system we describe, belongs to the third layer. It is not arbitrary. It is rooted in real constraint and enacted through genuine coping. But it is not the same kind of thing as the constraint itself.

There is a world that resists us in practice. There is word that organizes that resistance in theory.

Bachelard reminds us that words smooth the terrain of experience. They make it easier to navigate in thought. But a smooth map is not the same as the rough terrain it represents. When we mistake the map for the territory, when we treat our descriptions as if they were the world itself, we risk losing touch with the very constraints our descriptions were meant to help us navigate. And that loss of touch can impact our viability in ways that no amount of conceptual tidiness can repair.

A Note on Scientific Status:

A common objection to constructivist positions is that they are not falsifiable in the Popperian sense. Cybernetic Constructivism does not operate at the level of first-order empirical hypotheses, and it is worth being clear about why.

This is not a hypothesis about the contents of the world. It is a second-order description of how structurally determined systems generate and stabilize explanatory frameworks under constraint. It operates at the level of epistemic condition rather than empirical prediction. Just as the concept of a scientific theory is not itself falsifiable, and the commitment to methodological naturalism is not either, Cybernetic Constructivism sits at the meta-level. It does not compete with empirical theories. It seeks to clarify the conditions under which theories arise, stabilize, and change. It does not replace falsifiability. It situates it.

Any act of falsification presupposes structurally determined cognition. It requires an observer, a perturbation, a reorganization, and a judgment that a hypothesis no longer holds. So, the framework is not refuted by the demand for falsifiability. It explains how falsification is possible in the first place.

The real worry behind this objection is usually something else. If everything is constructed, does that mean nothing can be wrong? It does not. My position is that the appeal to constraint and viability is precisely what prevents that collapse. Organisms that construct worlds incompatible with persistent environmental constraint do not endure. The space of viable constructions is narrow, not infinite. This is closer to evolutionary pragmatism than to relativism.

And this framework, while not falsifiable in the Popperian sense, is not immune to evaluation. It can be assessed on coherence, explanatory power, and practical utility. Does it clarify more than it obscures? Does it help us navigate the terrain more honestly? Does it open up more productive questions than it forecloses? Those are legitimate standards, and this framework is willing to be held to them.

With that clarified, we can return to the broader picture and bring this post to a close.

Final Words:

Each thinker in this post contributes a piece of the same picture. Von Foerster showed that meaning is generated within the organism rather than imported from outside. Maturana and Varela showed that the world perturbs but does not instruct, and that structural coupling stabilizes viable experience over time. Merleau-Ponty showed that meaning is first lived in the body before it is ever spoken. Heidegger showed that we are already coping before we describe. Wittgenstein showed that meaning lives in shared practice, not in transparent transmission. Together, they point toward the same conclusion. We do not mirror reality in representational form. We have histories of viable interaction with it, and that has been enough.

Cybernetic Constructivism is not naive constructivism. It does not say everything is made up. It is not realism in the naive sense of claiming direct access to things as they are in themselves. It is not nominalism in the dismissive sense of mere words. It is an account of how organisms generate stable experiential worlds through structural coupling under constraint.

When we name something a “system”, we are not discovering a pre-existing object. We are making a distinction that organizes our experience relative to what we care about. That distinction can be enormously useful, but it can also become a trap. Once the word is in place, it begins to feel like a thing. It acquires weight and independence. We start to treat the linguistic map as if it were the terrain itself. Organizations fail this way, and so do theories and policies. The words become internally coherent, the descriptions become persuasive, the maps become elaborate and refined, and all the while the terrain has shifted beneath them. The language was viable. The coping was not.

And here is the unavoidable reflexive point. This post is also not a piece of pure information. The intent here is to provide a perturbation, a disruption to the reader’s equilibrium. These words are not a final answer. They are a knock on the door, inviting you to reorganize, to shape, and to see how you give form to your own understanding.

Maturana spoke of what he called “aesthetic seduction” as the only honest way to share ideas. He did not want to convince or persuade through pressure. He wanted the beauty of the ideas to speak for themselves and invite the reader to reorganize their own understanding. He noted that any attempt to persuade applies pressure and destroys the possibility of listening. This post is offered in that same spirit. What remains outside these sentences is the resistant constraint of lived engagement, gravity, friction, posture, breath, the subtle adjustments of your body as you read. None of that is inside the words.

These sentences are word. The terrain is world.

Stay Curious, and keep forming your own meaning…

If you liked what you have read, please consider my book “Second Order Cybernetics,” available in hard copy and e-book formats. https://www.cyb3rsyn.com/products/soc-book

Note:

In referencing the work of Martin Heidegger, I want to acknowledge the deeply troubling fact of his affiliation with the Nazi party. This aspect of his life casts a long and painful shadow over his legacy. While I draw on specific philosophical ideas that I find thought-provoking or useful, this is not an endorsement of the man or his actions. Engaging with his work requires ethical vigilance, and I remain mindful of the responsibility to not separate ideas from the broader context in which they were formed.

References:

  1. Heidegger, M. Being and Time (1927)
  2. von Foerster, H. Understanding Understanding (2003)
  3. Maturana, H. and Varela, F. The Tree of Knowledge (1987)
  4. Merleau-Ponty, M. Phenomenology of Perception (1945)
  5. Bachelard, G. The Poetics of Space (1958)
  6. Wittgenstein, L. Philosophical Investigations (1953)

When Does a System Exist? The Myth of the Given System:

In today’s post, I want to look at a question that seems almost too simple to ask: when does a system exist? For this I will be drawing on ideas from Wilfrid Sellars, Ludwig Wittgenstein, Heinz von Foerster, and Martin Heidegger.

We talk about “systems” all the time. The healthcare system. The education system. The traffic system. We speak as though these are objects sitting in the world, waiting to be observed, measured, and fixed.

Let’s try a thought experiment. Reach out your hand and try to touch the healthcare system. You can touch a hospital bed. You can hear a monitor beeping. You can speak to a nurse. You can shake a doctor’s hand. But where is the system? You cannot knock on it. You cannot hold it in your palm.

And yet we speak about it as if it were an object in front of us. We say things like “The system is broken” or “The system needs fixing.” The language gives it a kind of independent standing, as if it were a malfunctioning engine sitting somewhere, waiting for a repair technician.

This way of speaking rests on a hidden assumption, the assumption that there are structures in the world simply waiting to be perceived and described. The philosopher Wilfrid Sellars called this the “Myth of the Given”.

Wilfrid Sellars:

The “Myth of the Given” is the idea that the world presents itself to us in raw, pre-interpreted form: that before language, before concepts, before any act of distinction-making, there are simply facts delivered to the mind. Knowledge, on this view, is built upward from this unmediated foundation.

Sellars argued that this is a fallacy. He noted that sensing is not the same as knowing. A sensation may trigger a belief, but it does not justify one. Knowledge belongs to what he called the space of reasons. This is a shared space where claims can be offered, challenged, and revised.

He used the example of a tie shop clerk to make this point. Under the store’s electric lighting, the clerk says “This tie is blue.” Later, when another clerk points out the effect of the lighting, he revises his claim to “It looks blue.” The sensation itself has not changed. What has changed is his understanding of the norms governing correct description. There is no raw, unmediated given beneath our experience. There is already a learned practice of making and correcting distinctions.

This matters for how we talk about “systems”. When someone says “The system is broken,” that statement is not a neutral observation. It presupposes standards. It implies that something is failing relative to shared expectations. The claim invites agreement, disagreement, and justification. It is a normative claim, not a factual report.

Ludwig Wittgenstein:

Ludwig Wittgenstein made a related point in his reflections on rule-following and meaning. He asked whether meaning could be grounded in something purely private, such as an inner sensation. His answer was that it cannot. Without shared, public criteria for correctness, there would be no way to distinguish between actually following a rule and merely believing one is doing so.

Meaning lives in shared forms of life, not in private inner episodes. When we describe something as a system, we are not reporting a neutral fact. We are placing ourselves within a shared field of expectations and norms. The word “system” already carries commitments about what ought to be happening.

Heinz von Foerster:

Heinz von Foerster made a parallel observation about cognition. He pointed to what he called “undifferentiated encoding”. A sensory signal does not encode categories like red or green directly. It encodes only differences in stimulation, the “how much” but never the “what”. The differentiation into meaningful categories happens inside the organism itself.

This is due to what cybernetics calls “operational closure”. An entity responds according to its own structure, not according to instructions delivered from the outside. Von Foerster noted that the environment contains no information as such. Information arises only through the distinctions the organism is capable of making.
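The point can be made concrete with a toy sketch. Everything here is invented for illustration: the signal is just a list of stimulation magnitudes, and the category labels belong to each observer, not to the signal. The same undifferentiated input yields different “information” depending on the distinctions each organism's structure is able to draw.

```python
# Toy illustration of "undifferentiated encoding": the signal carries only
# magnitudes of stimulation, never categories. Categories arise from the
# distinctions drawn inside each observer, not from the environment.

signal = [0.12, 0.14, 0.71, 0.69, 0.13]  # raw stimulation intensities

def categorize(signal, threshold, labels):
    """Partition the same signal according to one organism's own threshold."""
    return [labels[1] if s > threshold else labels[0] for s in signal]

# Two observers, two internal structures, one and the same environment.
organism_a = categorize(signal, threshold=0.5, labels=("dim", "bright"))
organism_b = categorize(signal, threshold=0.3, labels=("safe", "alarming"))

print(organism_a)  # ['dim', 'dim', 'bright', 'bright', 'dim']
print(organism_b)  # ['safe', 'safe', 'alarming', 'alarming', 'safe']
```

Nothing in `signal` says “bright” or “alarming”; those distinctions exist only on the organism's side of the encounter.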

Sellars shows there is no raw knowledge. Wittgenstein shows there is no private anchor for meaning. Von Foerster shows there is no raw information. Each, in a different register, removes the idea of a pre-given foundation beneath our experience of the world.

But even together, they leave something unaddressed. Even if knowledge and information are constructed, we might still imagine ourselves as observers standing outside the world, organizing it from a neutral distance. This is where Martin Heidegger becomes essential.

Martin Heidegger:

Heidegger does not begin with perception or information. He begins with care.

For Heidegger, we do not encounter the world as neutral spectators. We are already involved. We are embedded in projects, concerns, and purposes. The world shows up in relation to what matters to us.

His example of the hammer is well known and I have written about it before. When you are skillfully hammering a nail, the hammer withdraws into the activity. It is ready-to-hand, transparent to your purpose. So are the nail and the wood. None of them present themselves as objects. You are simply absorbed in the task.

The hammer only becomes visible as an object when something goes wrong. It breaks. It slips. It is missing. At that moment it becomes present-at-hand. You step back and look at it.

The object appears through breakdown.

The Myth of the Given System:

Now consider: what if systems work in exactly the same way?

You do not experience the healthcare system while care flows smoothly. You experience patients, conversations, treatments, and nurses. The coordination is ongoing and transparent. The practices are simply happening. Nobody says “the system.”

Now imagine a bed is unavailable. Or an insurance policy blocks a treatment. Or a drug is out of stock. Suddenly someone says: “The system is broken.”

Notice what has happened. A concern has been frustrated. And from that frustration, a boundary is drawn. A pattern of practices is gathered together and named. What was ready-to-hand, invisible in its smooth functioning, has become present-at-hand. The system has appeared.

The system is not given. It is disclosed through breakdown.

The traffic jam is another good illustration of this. Physically, there are cars and asphalt. But the system only appears relative to care. To the commuter, it is a failure of punctuality. To a vendor walking between the vehicles, it is an opportunity. To a city planner, it may be evidence of demand exceeding capacity. The cars are real. The asphalt is real. But the system is not the same kind of thing as the cars. It is a configuration that becomes visible when care encounters resistance.

To speak of the system as though it were an independently existing object, like the cars or the asphalt, is to extend the Myth of the Given into organizational life itself.

Once we see this, the ethical implications are hard to avoid. If systems arise through care, then system boundaries reflect what we care about. When we define healthcare in terms of efficiency, we foreground throughput. When we define it in terms of community wellbeing, we foreground relationships and continuity. Every articulation reveals some concerns and conceals others.

Second Order Cybernetics:

Second order cybernetics has always insisted on this point. The observer cannot be separated from the observed. When we define a system, we are not pointing at something that already existed independently of us. We are making a distinction. And as George Spencer-Brown noted, to draw a distinction is to create a world.

This is the shift that second order thinking invites. We do not ask “What is the system?” but rather:

“What do I care about such that this shows up as a system at all?”

That is not a technical question. It is an existential one. Every system in this regard is a human system.

Final Words:

When we draw a boundary and call it the “system”, we are not pointing to a thing that exists independently of our distinctions. We are organizing experience around what matters to us. We chose to draw the boundary there, instead of here. We chose to include that, instead of this. And we did so because our care was interrupted at that point, and not somewhere else.

This does not mean the consequences are unreal. When care is blocked, when treatment is denied, when livelihoods are threatened, when exhaustion becomes chronic, viability is genuinely at stake. To say that systems are articulated through concern is not to dismiss suffering. It is to take it more seriously.

If one’s care is endangered, one cannot remain in abstraction. Action then becomes necessary. But what kind of action?

If we treat the system as an external machine, we search for mechanical fixes. We adjust policies, change incentives, replace personnel. These may be necessary. Yet they often operate within the same set of distinctions that produced the difficulty in the first place.

Second order thinking asks a question in a deeper dimension. What distinctions are we relying on that make this outcome appear inevitable? What counts as success in this configuration? What has been excluded from the circle of relevance?

Instead of saying “The system is broken,” we might say “The current way we are coordinating our practices is undermining viability.” That wording does not place us outside the problem. It acknowledges participation without assigning simplistic blame.

Participation does not mean individual control. Many structures are vast and historically sedimented. But even within constraint, we contribute through compliance, resistance, redesign, conversation, refusal, and collaboration.

Ethics enters precisely here. Not as a moral add-on, but at the level of boundary drawing itself. Every time we define what the system is, we define what matters. Every time we measure performance, we privilege certain forms of viability over others.

Second order cybernetics does not provide a formula for what to do when care is threatened. It does not eliminate conflict between competing concerns. What it does is remove the illusion that responsibility lies elsewhere.

If viability is at risk, the task is not simply to repair an external object. It is to examine, collectively, how we are distinguishing, organizing, and sustaining the patterns that shape our shared world.

That work is slower. It is less dramatic. It requires conversation rather than diagnosis.

It keeps us inside the picture.

Stay curious, and always keep on learning…


References:

  1. Heidegger, M. Being and Time (1927)
  2. Sellars, W. Empiricism and the Philosophy of Mind (1956)
  3. Wittgenstein, L. Philosophical Investigations (1953)
  4. von Foerster, H. Understanding Understanding (2003)
  5. Spencer-Brown, G. Laws of Form (1969)

I Do Care, Don’t You?

In today’s post, I want to clarify what it means to care as a human being and why that clarification matters for how we think about “systems”. This is a continuation of my previous post; here I clarify the notion of “care” from a Heideggerian and existential standpoint. I hope this provides further food for thought for systems thinkers.

We often speak of care as if it were a choice, a sentiment, or a moral stance that one may adopt. However, if we examine our lived experience more closely, we discover that care is not optional and that it is not something we add to life from the outside. It is the structure through which life shows up for us as meaningful in the first place. To say that we care is not to say that we are kind or benevolent. It is to say that the world already matters to us before we decide that it does. This is not a psychological observation about emotions. It is a clarification of the existential structure within which any emotion, evaluation, or decision becomes possible.

Care as the Structure of Involvement:

We do not first encounter a neutral world and then assign value to selected parts of it. We are always already involved. We wake into concerns, obligations, projects, relationships, unfinished tasks, and implicit expectations that were operative before we began reflecting on them. Even indifference is directed toward something, and even withdrawal is a response to what has already failed to meet our concern. Our lives are therefore not collections of isolated objects but structured fields of relevance within which certain things stand out as significant and others recede into the background.

Let’s consider some simple examples to make this concrete. The floor can trip us, a comment can embarrass us, a machine can fail us, and a relationship can wound us. None of these events are neutral occurrences to which we later attach meaning. They strike us because they intersect with something that is already at stake. Care names this condition of being exposed to what matters, a condition in which our projects, identities, and possibilities can be supported or undermined.

Thrownness – Finding Ourselves Already In Progress:

This exposure is not chosen. Heidegger referred to it as thrownness. We find ourselves already in a world that is underway, shaped by histories, institutions, languages, and practices that we did not select. By the time we begin asking who we are, we have already been formed.

One way to grasp this condition is to imagine being placed under a spotlight on a stage where a play is already in progress. Other actors are speaking, the audience is present, and the plot has been unfolding long before we arrived. We have no script in hand, no rehearsal, and no complete understanding of the role we are expected to play, yet we cannot step off the stage or suspend the performance. We must respond within a scene whose conditions we did not author.

We are thrown into particular families, political orders, economic systems, and cultural expectations, and our responses both inherit and reshape what is already there. Care does not arise after this situation; it is what this situation feels like from within. It is the lived sense that something can go well or badly and that our participation matters.

Finitude and the Logic of Care:

Thrownness alone does not fully explain care. There is a deeper structural condition at work, namely our finitude. Our time is limited, our possibilities can close, opportunities can vanish, relationships can end, bodies can fail, and plans can collapse in ways that cannot always be reversed. The fact that our being is not guaranteed introduces weight into our decisions and seriousness into our commitments.

To clarify this, let’s imagine a being that is all-powerful, all-knowing, and immortal. Such a being could not be surprised, could not be overpowered, and could not ultimately lose anything irretrievably. No possibility would close permanently, no project would be exposed to final failure, and no decision would carry irreversible consequence. In such a condition, urgency would dissolve because everything could be deferred without cost. If nothing can be lost and no time limit constrains action, then nothing presses upon the being with genuine necessity.

Care belongs to creatures for whom something can go wrong and for whom time is not infinite. It emerges from exposure to loss, from the risk of non-being, and from the fragility of our projects. Finitude is not an unfortunate defect added to an otherwise complete existence; it is the condition that gives existence weight. We care because our possibilities are limited and because what we love, build, and pursue can be threatened or taken from us.

Anxiety and the Withdrawal of Meaning:

There are moments when the taken-for-granted coherence of our world loosens and the structures that normally guide us begin to feel uncertain. Projects that once seemed obvious lose their grip, and the background sense of stability that supports everyday action becomes less secure. This experience, often described as anxiety, does not simply signal fear of a specific object. It discloses the fragility of the meaningful framework within which our lives unfold.

In such moments, we recognize that our projects rest on no final guarantee and that the systems we inhabit are sustained only through ongoing commitment and coordination among finite beings. Meaning is not fixed in advance; it must be enacted and maintained. Care becomes visible precisely because what matters can withdraw, because the ground beneath our projects can shift, and because we ourselves are not permanent fixtures in the scene.

The Structural Tendency Toward Forgetting:

Although anxiety can disclose our condition, we do not live in constant existential intensity. Most of the time, we are absorbed in routines and roles that allow coordination to function smoothly. We rely on procedures, metrics, categories, dashboards, and models that stabilize expectations and guide action. This absorption is not a flaw but a practical necessity for collective life.

However, there is a structural tendency toward forgetting embedded in this stabilization. The structures that were originally created to respond to particular concerns can begin to conceal their own contingency. A model that once addressed a specific risk can come to appear as the world itself. A procedure designed to manage uncertainty can harden into inevitability. The “system” can present itself as though it were independent of the concerns and vulnerabilities that gave rise to it.

Let’s think about how this happens. Every “system” begins as a response to care. A policy is written because someone fears injustice or disorder. A metric is created because someone worries about waste or decline. A reporting structure emerges because coordination is fragile and failure is costly. These structures are attempts to stabilize patterns of concern under conditions of finitude. Over time, however, the origin of these structures recedes from view. Care crystallizes into durable forms, and those forms take on the appearance of objective reality.

When we forget this origin, the system appears neutral and self-sufficient. When we remember it, we see that what looks like neutrality is the sedimentation of past judgments about what matters.

Every System Is a Human System:

We often speak of “systems” as if they were external mechanisms that we observe from the outside, as if we could diagram them from a neutral vantage point. Yet every “system” is described from somewhere, and every model is drawn from within a field of concern shaped by thrownness and finitude.

Even the aspiration to objectivity is animated by care. The language of efficiency reflects awareness of limited resources. The language of safety reflects recognition of vulnerability. The language of growth reflects anxiety about stagnation or decline. Consider what this means in practice: when we choose to optimize for speed, we are implicitly saying that time matters more than some other dimension. When we prioritize safety, we are acknowledging that harm is possible and unacceptable. These are not neutral technical choices. They are expressions of what we care about.

The great cybernetician Heinz von Foerster emphasized that the observer is never fully separate from the system observed and that descriptions participate in the very processes they describe. He wrote:

The essential contribution of cybernetics to epistemology is the ability to change an open system into a closed system, especially as regards the closing of a linear, open, infinite causal nexus into closed, finite, circular causality.

Systems thinking is therefore circular in structure. We talk and think about “systems” because we care about certain outcomes, and those “systems” subsequently shape what we notice, measure, and prioritize. In this sense, every system is a crystallization of existential orientation. It stabilizes certain cares while marginalizing others, amplifies some signals while attenuating others, and renders some consequences visible while leaving others in shadow. To design a “system” is to formalize a pattern of care under conditions of limitation.
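The closing of an open causal chain into circular causality can be sketched in a few lines of code. The classic cybernetic image is a regulator: a toy thermostat, say, whose setpoint encodes what the observer cares about and whose output feeds back into its own input. All numbers below are illustrative, not drawn from the source.

```python
# A minimal sketch of circular causality: the regulator's action re-enters
# the sensed world, so cause and effect form a closed, finite loop rather
# than a linear, open, infinite chain.
def regulate(temp, setpoint=20.0, gain=0.5, steps=10):
    """Simple proportional regulator; the setpoint is the embedded 'care'."""
    history = [temp]
    for _ in range(steps):
        error = setpoint - temp      # the difference that matters to the observer
        temp = temp + gain * error   # action feeds back into what is sensed next
        history.append(round(temp, 3))
    return history

print(regulate(10.0))  # the loop settles toward the setpoint
```

The loop has no external instructor; it is steered entirely by the difference between what is and what is cared about, which is the circularity the quote describes.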

Boundary Judgments and Responsibility:

Boundary judgments, which determine what counts as relevant and what does not, are never merely technical decisions. Every boundary protects something and leaves something exposed. Every inclusion privileges a perspective. Every exclusion renders something less visible.

Consider a simple example: when we define the boundaries of a project, we decide which stakeholders matter, which outcomes we will measure, and which risks we will monitor. These decisions are made by finite beings trying to navigate a world that resists them. Responsibility is not an external addition to design but an acknowledgment that design is already an act of care structured by finitude.

We cannot step outside care in order to build a neutral system. Even indifference distributes concern elsewhere. To say that we do not care about one dimension is to care more about another. The question is never whether care is present. The question is how it is structured, and whose finitude it protects.

Systems Under Existential Exposure:

If every system is a human system, then systems thinking itself must remain exposed to the fragility it seeks to manage. We build structures to stabilize coordination, create procedures to reduce uncertainty, and develop metrics to guide action because without such efforts collective life would fragment. Yet no structure eliminates finitude, no optimization dissolves vulnerability, and no model exhausts the world it seeks to represent. The territory can still resist the map, time can still close possibilities, and loss can still occur.

To remember that every system is a human system is to remember that it rests on the shoulders of beings who were thrown into a scene they did not script and who will one day leave it. We design nonetheless, not because we are omnipotent, but because we care.

Final Words:

Systems thinking is not a technical escape from the human condition; it is one of its expressions. We build “systems” because we are vulnerable, we measure because we can fail, we optimize because resources are limited, and we coordinate because fragmentation threatens what we care about. The temptation is to imagine that better design will eventually free us from exposure, yet exposure is the source of design itself. If we were all-powerful, all-knowing, and immortal, there would be nothing to manage, nothing to protect, and nothing to lose.

We are finite beings improvising within an unfinished play whose beginning we did not witness and whose end we cannot avoid. In The Myth of Sisyphus, Albert Camus presents the image of a figure condemned to push a stone up a hill only to see it roll back down. He wrote:

The struggle itself toward the heights is enough to fill a man’s heart. One must imagine Sisyphus happy.

The incline does not disappear and the labor does not end, yet the task is not meaningless. It is lucid engagement with a condition that cannot be abolished. Systems thinking resembles this labor. We redraw boundaries knowing they are provisional, we optimize knowing conditions will shift, and we construct maps knowing the territory exceeds them. Finitude does not dissolve, but neither does care. In choosing to care lucidly and without fantasies of omnipotence, we do not escape the human condition. We inhabit it deliberately and responsibly.

Stay curious, and always keep on learning…

I highly recommend the NLM companion video for this post –
https://youtu.be/lxZcn33-NpM


When the Map Becomes More Coherent Than the Territory:

In today’s post, I am inspired by the ideas of cybernetics, Heidegger, and Taleb. I am looking at what I think is the greatest danger of large language models (LLMs).

LLMs are extraordinarily proficient in the domain of language, and that proficiency has quietly created a philosophical problem that most technical discussions fail to notice. They speak fluently, respond coherently, adapt style with ease, and generate text that fits seamlessly into human patterns of explanation and reflection. The danger does not lie in what they get wrong, but in how convincing they are when they appear to get things right. Fluency triggers attribution, and attribution tempts us to confuse linguistic competence with lived understanding. We humans are prone to jump to attributions. We seek purpose in anything and everything.

This confusion is not accidental. Throughout human history, language was never free-floating. It was always grounded in lives that could fail, bodies that could be injured, and situations that demanded response. To speak well was already evidence that one had survived something, endured something, or at least stood in a chain of experience anchored in the world. With LLMs, that historical coupling has been severed. What remains is language without a life.

Cybernetics offers a useful distinction here by separating the domain of coordination from the domain of perturbation. The domain of coordination is where language operates. It is the space of symbols, signs, instructions, and representations that allow systems to align behavior. The domain of perturbation is where the world asserts itself. It is the space of forces, constraints, breakdowns, and consequences that threaten the continued viability of a system. Living systems exist in both domains simultaneously. LLMs exist almost entirely in the former.

Please note that Maturana and Varela talked about languaging as a form of cognitive existence. To me, languaging is existing in both domains simultaneously. This is often missed in the discussion of AI or AGI. The domain of coordination must not be conflated with the domain of perturbation and of lived experience.

Hunger, pain, cold, loss, and social rejection are not messages waiting to be interpreted. They are disturbances that demand coping. They do not coordinate with us; they push back. A living entity must respond to these perturbations in order to remain viable, and it is precisely this necessity to cope that gives rise to meaning. Language emerges as a secondary achievement, a tool for coordinating responses to disturbances that have already been encountered. The “burn” comes before the word “hot.”

LLMs start in the inverse order and they stop where they start. They begin with language and never leave it. They operate in a closed probability space in which words refer only to other words and coherence is rewarded independently of consequence. They do not encounter resistance. They do not face breakdown. They do not have to repair themselves in response to failure in any existential sense. When they are wrong, nothing is at stake for them. No structure is threatened. No viability is endangered. There is no loss to mourn and no urgency to learn.
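What a “closed probability space” looks like can be shown with the simplest possible language model: a bigram generator whose entire world is the text it was given. The corpus and names below are invented for this sketch; the point is structural, not a claim about any particular model.

```python
# A toy bigram generator: words lead only to other words. It can produce
# locally coherent strings, yet nothing outside its corpus can perturb it,
# and being "wrong" costs it nothing.
import random
from collections import defaultdict

corpus = ("the map is not the territory and "
          "the territory resists the map").split()

# Transition table: each word "refers" only to the words that followed it.
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(start, n, seed=0):
    """Walk the closed space of word-to-word transitions."""
    rng = random.Random(seed)  # seeded so the sketch is deterministic
    words = [start]
    for _ in range(n):
        options = table.get(words[-1])
        if not options:        # a dead end: the closed space simply stops
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))  # fluent-looking, but consequence-free
```

Every word the generator emits comes from inside its table; no breakdown, friction, or loss can ever reach it. Scaled-up models are vastly more capable, but the closure of language on language is the same.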

The existential ideas from Heidegger are indispensable here, because he refuses to separate understanding from involvement. For Heidegger, human existence is not primarily a matter of representation or cognition, but of being-in-the-world. We do not stand outside the world describing it. We are thrown into it. We do not choose the conditions of our arrival, the historical moment, the social structures, or the biological constraints we inherit. We find ourselves already entangled in demands that must be met before they can be explained.

Thrownness is not merely a description of origin. It names the condition that makes understanding possible at all. Because we are thrown, we must cope. Because we must cope, we care. Care, for Heidegger, is not an emotional add-on. It is the structure of existence itself. To be human is to have something at stake, to be concerned with how things turn out, because how they turn out matters to whether we can continue at all. Simply put, LLMs do not care.

Coping is not a flaw or a limitation. It is the source of cognition. We understand the world not by representing it accurately in advance, but by engaging with it practically and discovering, often painfully, where our expectations fail. Heidegger’s notion of ready-to-hand captures this vividly. Tools disappear into use when coping is successful. They become visible only when something breaks. A hammer reveals itself as a hammer not when it works smoothly, but when the handle snaps and the task can no longer continue.

This breakdown is not an interruption of understanding. It is its condition. Reality teaches by resisting us. Please note that I am taking poetic license here; “teaches” should not be read as attributing intention to reality. Plans collapse, models fail, and language stops working. And in those moments, distinctions begin to matter. We learn what is important because something went wrong. Only cognitive entities that can be broken can learn what matters. This is simply the living condition.

LLMs cannot break in this way. They are not thrown and they do not cope. They do not have to maintain their own viability in the face of an indifferent world. They do not care, not because they are unethical or incomplete, but because care arises only where existence is at risk. This is why appeals to giving them more data or richer representations miss the point. The difference is not quantitative. It is categorical.

This also clarifies why certain philosophical slogans become dangerous when misapplied. Wittgenstein’s line, “The limits of my language mean the limits of my world,” is often invoked to suggest that language generates experience. But Wittgenstein was speaking about humans, beings already embedded in the world, already coping, already affected. Language limits what can be articulated about experience. It does not produce experience itself. To apply this claim to a system that has language without world is to collapse experience into description and mistake coordination for contact.

I have often noted the temptation to treat information as a commodity. We are prone to think that if language is sufficient for understanding, then knowledge can be accumulated without exposure, transferred without risk, and optimized without consequence. But cybernetics resists this move. Understanding is not stored in representations. It emerges in systems with histories of interaction, failure, and recovery. Meaning arises where distinctions have consequences.

This is where I bring in Taleb’s notion of “skin in the game,” which aligns naturally with both cybernetics and Heidegger. Knowledge without exposure to consequence is brittle. Assertions made without risk are cheap. LLMs produce language without skin in the game by design. The danger arises when humans, who do live in the domain of perturbation, begin to orient themselves toward that language as if it carried the weight of lived understanding.

Final Words:

The real risk is not that machines will become more human, but that humans will begin to forget what makes their own understanding possible. When linguistic fluency is privileged over coping, when clean summaries are trusted over messy lived accounts, when the map is preferred because it is easier than the territory, we drift away from the conditions that give meaning to knowledge at all.

Since LLMs do not cope, and have no need for care, we should not assign them responsibilities where care is required. Care is not a functional add-on that can be simulated through better language or richer models. It arises only where existence is exposed to risk, where failure has consequences, and where something can be lost. There is currently a great deal of discussion about the role of AI and the possibility of AI replacing humans. Much of this discussion quietly assumes that agency can be transferred wherever competence appears. This is a serious mistake!

The use of LLMs lies exclusively in the domain of coordination and language. They can generate possibilities, assist with articulation, and operate within representational spaces at a scale no human can match. But to conflate this capability with human agency, and to assign responsibilities that presuppose care, concern, or accountability, is a terrible idea. Responsibility belongs to entities that can be held answerable by the world, because the world can push back on them.

Only entities that can be broken can learn what matters. Humans are such entities. This vulnerability is the ground of responsibility, ethics, and meaning. The task, then, is not to teach machines how to live, but to remember, in the presence of their fluency, that living is something language can point to but never replace.

Stay curious and Always keep on learning…

If you liked what you have read, please consider my book “Second Order Cybernetics,” available in hard copy and e-book formats. https://www.cyb3rsyn.com/products/soc-book

Note:

In referencing the work of Martin Heidegger, I want to acknowledge the deeply troubling fact of his affiliation with the Nazi party. This aspect of his life casts a long and painful shadow over his legacy. While I draw on specific philosophical ideas that I find thought-provoking or useful, this is not an endorsement of the man or his actions. Engaging with his work requires ethical vigilance, and I remain mindful of the responsibility to not separate ideas from the broader context in which they were formed.

Information from a Cybernetic Viewpoint:

In today’s post, I want to revisit the notion of information from a cybernetic viewpoint, drawing primarily from Gregory Bateson’s well-known formulation that information is the difference that makes a difference. This definition does not merely redefine information. It quietly displaces where information is assumed to reside and how it is assumed to function. This post is part of a series examining a cybernetic approach to tackling misinformation.

In everyday discourse, information is commonly treated as a thing. We speak of information being transmitted, stored, corrupted, lost, or controlled. This language suggests that information exists independently of those who encounter it, as if it were a commodity that can be packaged and delivered. Cybernetics has long resisted this framing, not by denying the existence of data in the form of signals or messages, but by insisting that information cannot be separated from the consequences it produces within a system.

Bateson’s phrasing forces a pause because it contains two differences, not one. These two differences are often collapsed into a single gesture, which obscures what cybernetics is trying to bring to light. To understand information cybernetically, these differences must be held apart and examined in relation to the observer, the context, and the viability of the system involved.

The first difference concerns distinguishability, or the ability to make distinctions. For a difference to exist as a difference, it must be generated or recognized by an observer. This does not mean that the world lacks structure or regularity. It means that distinctions do not announce themselves independently of the capacities and concerns of the cognitive observer encountering them. An observer must be able to draw a distinction for it to count as a difference at all.

This ability to distinguish is not abstract or universal. It is shaped by history, embodiment, training, and present need. In cybernetic terms, this is a question of variety. An observer with limited internal variety cannot register certain distinctions, regardless of how obvious they may appear to another observer. What fails to be noticed is a mismatch between the variety available and the variety required.

This immediately situates information within the notion of context. A difference that matters in one situation may be invisible or irrelevant in another. The same signal can be richly informative for one observer and entirely inert for another. From this perspective, the problem of information overload is often misdiagnosed. What overwhelms is not the quantity of differences but the absence of appropriate distinctions and filtering mechanisms within the observer.

The second difference concerns consequence. Not every distinction that can be made will matter. A difference becomes information only when it participates in altering the state, orientation, or activity of the cognizing “system”. This is where the second difference enters, the difference made by the difference.

Cybernetically, this is best understood in terms of viability. A difference matters when it bears upon the conditions under which a cognizing “system” continues to operate. It may support stability, signal threat, invite adaptation, or require reorganization. A distinction that does not affect viability may still be noticed, but it does not rise to the level of information in Bateson’s sense.

In a pragmatic turn, this reframing moves information away from correctness and toward consequence. It is not enough for a distinction to be accurate or well formed. It must matter in practice. Information is therefore tied directly to action potential, even when that action takes the form of restraint, delay, or reconsideration.
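
A deliberately minimal sketch can hold the two differences apart in code. The observer, the signals, and the state-change logic below are my own illustrative assumptions, not Bateson's:

```python
# Toy sketch of Bateson's "difference that makes a difference".
# A perturbation becomes information only if (1) the observer can
# distinguish it at all, and (2) registering it alters the system's state.

from dataclasses import dataclass, field

@dataclass
class Observer:
    # Distinctions this observer is able to draw: its internal variety.
    distinctions: set
    state: str = "stable"
    log: list = field(default_factory=list)

    def perturb(self, signal: str) -> bool:
        # First difference: can the observer distinguish the signal at all?
        if signal not in self.distinctions:
            return False  # the signal is inert for this observer
        # Second difference: does registering it alter the system's orientation?
        new_state = "reorganizing" if signal == "threat" else self.state
        made_a_difference = new_state != self.state
        self.state = new_state
        if made_a_difference:
            self.log.append(signal)
        return made_a_difference

novice = Observer(distinctions={"loud-noise"})
expert = Observer(distinctions={"loud-noise", "threat"})

# The same event: the novice cannot even register it;
# the expert registers it and reorganizes.
print(novice.perturb("threat"))  # False: the first difference never occurs
print(expert.perturb("threat"))  # True: both differences occur
```

The same signal is information for one observer and nothing at all for the other, which is the point of holding the two differences apart.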

Between these two differences sits transduction. Whatever perturbation occurs in the environment does not arrive as meaning. It must be transformed through the structures of the observer. This transformation is neither passive nor optional. It is how a system turns disturbance into significance.

Transduction is deeply contextual and personal, without being arbitrary. It reflects the ways in which a system has learned to respond to its surroundings. Two observers may be perturbed by the same event, yet transduce it differently because their histories, expectations, and responsibilities differ. Meaning is not extracted from the world. It is enacted through ongoing structural coupling.

This is why information cannot be cleanly separated from the observer. What appears as the same input can lead to entirely different informational outcomes. To speak of information without speaking of transduction is to quietly reintroduce representational assumptions that cybernetics sought to set aside.

This leads naturally to the notion of informational closure. As Heinz von Foerster put it, the environment is as it is. It does not contain information waiting to be picked up. It contains events, regularities, and disturbances. Information arises only within operationally closed systems as a result of their internal changes in response to perturbation.

From this viewpoint, information is not transmitted. Signals may pass between systems, but information happens only when a system changes, in response to a perturbation, in a way that matters to it. What is stored are not units of information but traces that may later participate in new acts of distinction. This undermines the idea of information as a substance that can be accumulated or depleted independently of the systems involved.

Human communication introduces an additional layer through language and social coordination. For a difference to make a difference in a social context, participants must be engaged in overlapping language games. Meaning does not reside in words alone but in shared practices, expectations, and forms of life.

Error correction, in this sense, does not occur in the signal but in interaction. A message is understood not because it is decoded correctly, but because the receiver anticipates what is likely to be meant and adjusts that anticipation through feedback. Reading a doctor’s cursive prescription is a familiar example. The pharmacist does not decipher letters in isolation. They draw upon knowledge of past interactions with the doctor, medications, dosages, and common medical practice. Understanding emerges from participation, not from transmission.
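
The pharmacist example can be sketched in code. This is only an illustration of anticipation doing the work that decoding cannot; the prescription list and the fuzzy matching are my own assumptions:

```python
# Toy sketch of correction through anticipation (illustrative only).
# The "pharmacist" does not decode the scrawl letter by letter; a prior
# built from past interactions does most of the work of understanding.

import difflib

# What this doctor usually prescribes: the pharmacist's expectations.
known_prescriptions = ["amoxicillin 500mg", "atorvastatin 20mg", "metformin 850mg"]

def read_scrawl(scrawl: str) -> str:
    # Match the noisy message against anticipations, not against an alphabet.
    matches = difflib.get_close_matches(scrawl, known_prescriptions, n=1, cutoff=0.0)
    return matches[0]

print(read_scrawl("amoxcilin 500mg"))  # resolved by anticipation, not decoding
```

The garbled input never needs to be "correct"; it only needs to perturb an observer who already carries the right expectations.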

All of this brings us to a final consideration that is often neglected because it does not present itself as information at all. This is the question of slack. For a difference to make a difference, there must be sufficient room within the system for it to be taken up. This slack can appear in several forms. It may take the form of redundancy, where a distinction is encountered through multiple channels or repetitions. It may appear as amplification, where the manner of presentation gives the difference sufficient weight to register. It may also appear as relaxation time, where the system is afforded the temporal space to digest what has occurred.

Without some degree of slack, even meaningful distinctions fail to become information. When perturbations arrive faster than they can be transduced, the system does not become more informed. It becomes saturated. What follows is not heightened responsiveness but withdrawal. The system in many regards learns that responding no longer contributes to viability.
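
A toy simulation can make the role of relaxation time concrete. The single-channel transduction model and the arrival patterns below are my own illustrative assumptions:

```python
# Toy sketch of slack and saturation (an illustrative model, nothing more).
# Each perturbation needs `relaxation_time` ticks to be transduced.
# Perturbations arriving while the system is still digesting are simply
# lost: they never become information, however meaningful they were.

def transduce(arrivals, relaxation_time):
    """arrivals: tick numbers at which perturbations hit the system."""
    informed, lost = [], []
    free_at = 0  # the tick at which the system is next available
    for t in arrivals:
        if t >= free_at:
            informed.append(t)
            free_at = t + relaxation_time
        else:
            lost.append(t)
    return informed, lost

# A slow rhythm (the morning paper): every perturbation is digested.
print(transduce([0, 10, 20, 30], relaxation_time=5))
# A continuous feed: most perturbations saturate the system and are lost.
print(transduce([0, 1, 2, 3, 4, 5, 6], relaxation_time=5))
```

Notice that the second stream carries more events yet yields less information: exactly the saturation described above.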

Relaxation time is particularly important in this regard. There was a period when news arrived with built-in pauses. A morning paper or an evening broadcast created a rhythm that allowed distinctions to settle. Between these moments, there was time for discussion, reflection, and forgetting. That rhythm provided slack and perhaps allowed for a more congenial political climate.

The continuous, twenty-four-hour cycle of today’s media, in which opinion often masquerades as news, has steadily eroded this condition and altered the political landscape in ways that reward polarization and immediacy. Nowadays, perturbations arrive without pause, and the responsibility for digestion has been shifted entirely onto the observer. The result is a familiar paradox. As reports of suffering increase, the capacity to respond meaningfully diminishes. Perturbations may accumulate, but few of them make a difference.

This is often described as complacency or moral failure. From a cybernetic viewpoint, it is more accurately described as a collapse of the conditions under which information can occur. The system is overwhelmed beyond its capacity to transduce, and indifference emerges as a protective response. This leads to the conditions for the medium to become the message.


Final Words:

If information is not a commodity, then neither is attention. Both depend on proportion, timing, and care. Environments that destroy slack while demanding responsiveness do not produce better informed observers. They erode the very capacities required for differences to make a difference.

Seen this way, the preservation of informational conditions is not merely a technical concern. It is an ethical one, bound up with how we design systems, share responsibility, and allow meaning the time and space it requires to emerge.

Stay curious and Always keep on learning…


If you liked what you have read, please consider my book “Second Order Cybernetics,” available in hard copy and e-book formats. https://www.cyb3rsyn.com/products/soc-book

Throwing the Fish Back into the Water:

In today’s post, I am refining my thoughts on re-entry as a wonderful tool to tackle cognitive blind spots. A common saying goes that a fish does not know it is in water. The phrase is usually offered as a comment on unexamined assumptions. The fish is fully immersed in a medium that makes its life possible, and yet that very immersion renders the medium invisible. We, the observers standing outside the water, can easily point to what the fish cannot see.

The metaphor is useful, but only if we do not misunderstand what it implies. The problem is not ignorance in the sense of missing information. The problem is immersion, being inside the loop and not being aware of it. In other words, I am positing that cognitive blind spots arise not because we lack data, but because we fail to notice the conditions under which noticing itself takes place. We assume that observation is independent of the observer, and in doing so, we negate the very act that makes observation possible.

This negation is not accidental. It is built into many of our conceptual frameworks.

Cognitive Blind Spots and the Negated Observer:

In this view, a cognitive blind spot appears when a distinction is treated as if it exists independently of the act that produced it. We speak as though there is an object “out there” and an observer “in here,” and as though the observer merely reports what is already the case. This framing quietly removes the observer from the scene by denying that the act of description must re-enter the conditions it describes.

Once the observer is negated, the distinction hardens and begins to appear as a feature of the world itself. What began as a practical cut in experience is mistaken for something given rather than constructed. At that point, the blind spot is complete. There is nothing left to question because the conditions of questioning have disappeared.

This is precisely where re-entry becomes relevant.

Re-entry as a Mechanism for Error-correction:

Spencer-Brown’s notion of re-entry does not simply add complexity for its own sake. It forces a distinction to turn back upon itself. A form re-enters the space it distinguishes. The marked state is no longer allowed to pretend that the unmarked state is irrelevant or absent. Re-entry is an attempt to bring the act of distinction itself into view.

Re-entry is uncomfortable because it breaks the illusion of a clean separation. It exposes the fact that every distinction carries its own conditions inside it. What we thought was a stable category now reveals its dependence on an operation. This is why re-entry is such a powerful tool for revealing cognitive blind spots. It does not offer a better description of the world. It shows how our descriptions are made, and what they quietly exclude in order to function. Once this lens is applied, certain familiar structures begin to look less secure.
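
For readers who like to see a mechanism, here is a deliberately minimal sketch of re-entry as oscillation. This is not Spencer-Brown's calculus itself, only an illustration of how a form fed back into its own space refuses to settle:

```python
# Toy sketch of re-entry (illustrative; not Spencer-Brown's notation).
# A distinction that re-enters its own space is defined in terms of itself:
# "the state is the crossing of itself". Iterating that definition never
# settles into a value; it oscillates, the way Spencer-Brown's re-entrant
# forms give rise to "imaginary" values that can only exist in time.

def cross(state: bool) -> bool:
    # Crossing the boundary of a distinction flips marked/unmarked.
    return not state

def reenter(state: bool, steps: int):
    trace = [state]
    for _ in range(steps):
        state = cross(state)  # the form is fed back into itself
        trace.append(state)
    return trace

print(reenter(True, 6))  # the marked state never stabilizes
```

The category cannot hold still once it contains its own operation, which is exactly what makes re-entry uncomfortable for frozen distinctions.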

The Subject/Object Split and Being in the Water:

The subject/object dichotomy is one such structure that we can use to expand on this line of thinking. It assumes that there is a knowing subject on one side and a known object on the other, connected by representation. From a Heideggerian perspective, this is already a distortion. We are not subjects standing over against a world of objects. We are always already being in the world.

The fish is not first a subject and then later related to water as an object. Fish and water show up together. The relation is not secondary. It is constitutive. Remove the water and the fish does not remain as a fish that merely lacks an environment. It ceases to be what it is.

Re-entry makes this visible. When the observing system is reintroduced into the observation, the subject/object split begins to collapse. What remains is participation, involvement, and structural coupling. Observation is no longer a neutral act. It is an activity performed from within the medium it seeks to describe. We will use this line of thinking to examine another familiar idea in philosophy from Charles S. Peirce.

The Triad and the Problem of Firstness:

Peirce’s triad of firstness, secondness, and thirdness is frequently described as dynamic and non-linear. However, when examined through the logic of distinction and re-entry, the triad reveals a fundamental instability. That instability is most clearly exposed in the notion of firstness.

Consider a simple example: a red apple. Its redness is firstness, the immediate quality that appears without reference or comparison. The apple itself, as a physical object that resists gravity and interacts with us, illustrates secondness. The recognition that the apple is a fruit, part of a category, and meaningful within a broader system of relations exemplifies thirdness. Even here, we see the dependence of firstness on context; its pure quality only becomes intelligible through interaction and relation.

Firstness is described as pure quality, pure possibility, or pure feeling, intended to stand prior to relation, reaction, or mediation. What follows from this is not only an empirical difficulty but also a logical one.

From a Spencer-Brown standpoint, nothing can appear without a distinction. A distinction simultaneously produces a marked state and an unmarked state. There is no marked state by itself, just as there is no distinction that does not also imply what it excludes. When one speaks of “good,” the notion of “not good” is already present as its context. “Good” by itself has no meaning. Even our most absolute categories depend on what they deny, as the invention of God quietly presupposes the invention of Evil.

If firstness is spoken of at all, it has already been marked. The moment one says “firstness,” one has drawn a boundary around something and set it apart from what it is not. That act already presupposes contrast. It already invokes relation. It already smuggles in what Peirce would later call secondness and thirdness. The triad never leaves the water it claims to describe.

If there is no distinction, there is no information. Without contrast, there is nothing to register. Pure undifferentiated “information” is not information. It does not inform. It does not appear. It does not function. In that sense, pure firstness is not just unreachable in practice, it becomes incoherent in principle.

The problem is not one of interpretation but of structure. The triad depends on a move that collapses under re-entry. Firstness cannot exist in isolation, yet the triad requires it to.

Re-entry Exposes the Blind Spot:

Here is where the cognitive blind spot becomes “visible”. The triad purports to articulate the conditions of experience while remaining blind to the operation that makes them appear. Firstness is treated as if it could exist prior to distinction, while the very articulation of firstness performs the distinction it denies.

Re-entry forces the concept to confront its own conditions. When firstness re-enters the space of its own description, it collapses into relation. It cannot remain alone. It cannot stay pure. It cannot avoid invoking what it claims to precede.

In this sense, the triad is flatter than it appears. Not because it lacks movement, but because its movement never quite escapes the logic of classification. Re-entry reveals that the flow Peirce gestures toward is already constrained by the need to name and separate what is being described.

Final Words:

The point of this critique is not to replace one framework with another. It is to show how certain blind spots persist even in sophisticated theories. When distinctions are treated as if they precede the act of distinction, the observer disappears. When the observer disappears, responsibility disappears with it.

Re-entry restores that responsibility. It reminds us that our concepts are not mirrors of reality, but tools we use from within the world we inhabit. Like the fish in water, we do not escape the medium by describing it. We only learn to see it by noticing how our seeing works. That seems to be the deeper utility of re-entry. The goal is not to produce better categories, but to cultivate a deeper awareness of how categories emerge. It is not purity, but participation. It is not firstness untouched by relation, but the recognition that relation is always already present. Seeing the water does not mean leaving it. It means acknowledging that one was never outside it to begin with.

Stay curious and Always keep on learning…

Post script:

Further clarification on the following statement – Re-entry reveals that the flow Peirce gestures toward is already constrained by the need to name and separate what is being described.

Peirce presents the triad as something dynamic and flowing rather than static. Firstness flows into secondness, secondness into thirdness, and so on. However, when you apply re-entry, you see that this apparent flow is already limited by the act of naming the categories in the first place. The moment you say “firstness,” “secondness,” and “thirdness,” you have already separated what you claim is flowing. The movement is therefore happening inside a framework that has already been cut up by distinctions.

So the “flow” Peirce gestures toward is not free movement within experience itself. It is movement between pre-named compartments. Re-entry exposes that the triad cannot escape the logic of distinction because it depends on that logic to exist at all.

In other words, the triad looks process-oriented, but it still operates as a classificatory scheme. The flow is real only insofar as the categories have already been stabilized by naming and separation. That is the constraint.

On Diversity as a Cybernetic Necessity:

In today’s post, I want to explore an idea that often gets framed in moral terms but is actually a cybernetic imperative: the necessity of diversity for viable systems. Whether we are talking about societies, organizations, or even artificial intelligence systems, the principle remains consistent. A system that suppresses differences suppresses the very disturbances that give it life.

This insight comes from cybernetics, and it helps us understand why diversity matters beyond moral arguments.

The Cybernetic Case for Diversity:

A society’s resilience and therefore viability emerges more from difference than agreement. When I think about what makes communities sustainable over time, I keep returning to this basic insight from cybernetics: without variation, a system cannot absorb disturbance. This is of course a simpler rephrasing of Ashby’s Law of Requisite Variety. Without challenge, a system cannot correct itself. Without friction, a system cannot renew its distinctions.
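
Ashby's Law can be illustrated with a toy regulator. The disturbance and response sets below are invented examples of my own, not anything from Ashby:

```python
# Toy sketch of Ashby's Law of Requisite Variety (illustrative model).
# A regulator can hold outcomes within the goal set only if it commands
# at least as much variety as the disturbances it must absorb.

def regulate(disturbances, responses):
    """Count how many disturbances the regulator can cancel.
    A disturbance is absorbed only if some response matches it."""
    absorbed = sum(1 for d in disturbances if d in responses)
    return absorbed, len(disturbances) - absorbed

storms = {"flood", "drought", "frost", "pests"}

diverse_community = {"flood", "drought", "frost", "pests"}
monoculture = {"flood"}

print(regulate(storms, diverse_community))  # every disturbance absorbed
print(regulate(storms, monoculture))        # most disturbances pass through
```

The homogeneous regulator is not merely less effective; every disturbance outside its single response simply passes through uncorrected.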

This becomes clearer when we think about information, distinction, and correction. Every observer draws distinctions. Every distinction creates a horizon of what can be noticed next. Every act of understanding sets the conditions for future understanding. For this reason, no observer, no community, and no language can remain viable without exposure to other perspectives. A view from nowhere is an impossibility.

Difference is not an obstacle to communication. Difference is what makes communication meaningful. As Gregory Bateson described it, information is the difference that makes a difference.

A community that supports only one way of thinking, one way of speaking, or one way of being slowly loses the very conditions that allow it to remain viable. When everyone thinks alike, quality begins to decay. Ideas become smoother but thinner. Creativity does not disappear because people stop trying. Creativity disappears because nothing pushes back. Nothing resists. Nothing surprises.

I have written before about von Foerster’s ethical imperative to increase the number of choices. Here, I want to extend that thinking to show why diversity itself is a condition for viability, not merely a moral preference.

The Negotiable Space:

A society with many ways of speaking has many ways of seeing. It has many ways to reframe a problem, many ways to interpret events, many ways to challenge assumptions, and many ways to correct errors. It is able to sustain what I call a “negotiable space” in which ideas can be contested, sharpened, and sometimes abandoned.

This continual negotiation is what keeps concepts alive. It is what makes meaning robust. It is what makes a collective capable of navigating uncertainty.

The negotiable space is the environment in which language, ideas, and understanding evolve and error correction happens. It is created not by agreement but by the friction of difference. Human cognition is not viable in isolation. It is viable only when embedded in a world where every utterance is exposed to other minds – resisted, questioned, corrected, or refined.

I see this friction as the medium of viability. When someone challenges your idea, asks for clarification, or pushes back against an assertion, they are not merely disagreeing. They are sustaining the recursive loop that keeps understanding alive. Without friction, distinctions decay. Without challenge, knowledge becomes brittle.

A word is never alone. It survives only through the continual friction of conversation. It carries a lineage of previous uses and a horizon of possible future uses. It remains viable only because a social world holds it accountable.

When Homogeneity Replaces Diversity:

When we reduce the diversity of perspectives, the negotiable space begins to shrink and close. Without enough difference, language becomes flatter. Categories become rigid. Distinctions become dull. Error correction becomes weak. The collective loses the source of renewal that once made it resilient.

Attempts to homogenize societies have produced similar outcomes throughout history. They create environments that look orderly from the outside but are fragile from the inside. Homogeneity amplifies the illusion of stability while stripping away the mechanisms that produce actual stability. A system without variation becomes a system without resilience. It stops promoting learning and curiosity. It stops promoting error correction. Eventually it stops being able to sustain itself at all.

We see this pattern repeat across contexts. A social world in which every voice echoes the same pattern begins to collapse inward. Its range of distinctions shrinks. Its ability to adapt weakens. Its capacity to navigate uncertainty fades.

Recursion Requires Disturbance:

In cybernetics, stability is not the absence of disturbance. Stability is the capacity to absorb disturbance without collapse. This requires variation. It requires the presence of alternatives. It requires a dynamic interplay of perspectives.

A system that eliminates disturbance does not become more stable. It becomes brittle. Without contradiction, the recursive loop of understanding begins to stagnate. Without challenge, the distinctions that support cognition degrade. Without tension, the structures that produce meaning weaken.

Human cognition remains viable because its recursion is continually informed by a social world rich in disagreement. An individual does not refine understanding alone. Understanding is sharpened by exposure to other interpretations. These interpretations emerge from diverse backgrounds, diverse experiences, and diverse cognitive histories.

I want to now take this train of thought to Large Language Models.

The Case of Large Language Models:

Large language models are often described as systems that learn from vast amounts of data. But what they learn is not raw experience. They learn from the residue of human meaning-making. They learn from language that has already passed through the recursive loops of human correction. They inherit the stability produced by these loops, but they do not participate in the loops themselves.

At least, not in the same way.

An artificial intelligence does not inhabit a social world where its utterances are corrected by others. It does not participate in the negotiable space through which language evolves. It does not receive feedback proportional to the scale of its output. It does not face the resistances that keep human cognition aligned with the world. This is an important distinction that leads to interesting outcomes.

A human remains viable because every use of language is exposed to correction. An AI remains unchallenged because its output overwhelms the capacity for correction to flow back.

The Collapse of the Negotiable Space:

A living language depends on a balance between output and correction. Human linguistic communities have historically generated meaning at a rate the community can digest. New terms emerge. Old terms fade. Misunderstandings provoke clarification. Disagreements produce refinement.

This equilibrium is now being disrupted. The scale of machine-generated text has exceeded the capacity of human communities to critique it. The negotiable space, the space where meaning is contested and corrected, is now flooded. Variations in meaning that once signaled novelty are now drowned in statistical smoothness. The framework receives too much of its own output and too little balanced resistance.

A framework that receives little correction cannot maintain the integrity of its distinctions. It will start to drift. It will begin to feed on its own unchallenged productions. The range of distinctions therefore shrinks. The recursive loop that once sharpened meaning begins to flatten it.

At first the effects are subtle. Over time, the trajectory becomes clearer. A structure that cannot renew itself through grounded critique will drift toward diminishing returns. More scale will not resolve this. Faster generation will only accelerate the loss of the very conditions that once made the system appear intelligent.
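
This drift can be illustrated with a crude statistical toy, emphatically not a model of any actual LLM. Repeatedly fitting a distribution to samples of its own previous fit, with no outside correction flowing back, tends to lose spread over generations; the Gaussian, the sample size, and the generation count are arbitrary assumptions of mine:

```python
# Toy sketch of a system feeding on its own uncorrected output
# (an illustration of distributional drift, not of any real model).
# Each generation is fitted to samples drawn from the previous
# generation's fit. With no outside world pushing back, the estimated
# spread shrinks: variety is lost, generation after generation.

import random
import statistics

random.seed(0)  # fixed seed so the run is reproducible

mu, sigma = 0.0, 1.0
n = 20  # how much output the "community" can actually digest per generation

spreads = [sigma]
for generation in range(200):
    # Train on the previous generation's output only.
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # refit to its own productions
    spreads.append(sigma)

print(f"spread at start: {spreads[0]:.3f}, after 200 generations: {spreads[-1]:.4f}")
```

The range of distinctions narrows not because any single generation is weak, but because nothing outside the loop restores the variety each refit quietly discards.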

Here, we see what I call the amplification of constraints in action: the model grows in output yet declines in viability. It is simultaneously expansive and fragile.

The Coming Burst:

All this seems to indicate that the AI bubble may burst in the near future. The ability of LLMs to be trained fast and to generate fast may become their downfall. Paradoxically, the better the large language model becomes, the faster this downfall may approach. Each improvement accelerates the collapse of the negotiable space. Each refinement increases the volume of uncorrected output flowing back into the system. Each new iteration tightens the closure that limits its future.

This is also a cautionary insight for societies that reject diversity and embrace homogeneity. Any system that narrows its space of variation, whether a community or a computational model, risks collapsing under the weight of its own uniformity.

The burst may come not because the models are weak, but because they are strong in the wrong direction. They refine themselves into a narrowing corridor. They amplify a recursion that cannot sustain itself. They accelerate toward diminishing returns.

The Lesson for ‘Systems’ Design:

Human cognition has survived because it is recursive from the inside and embedded in a social realm. Artificial intelligence’s recursion is lifeless. This difference matters. A system that does not participate in the social negotiation that gives words their life cannot maintain the vitality of its distinctions. It cannot renew its closure through lived coordination with others. It can only repeat the patterns it has been given.

Large language models are unlikely to become artificial general intelligence by accelerating the very process that undermines their viability. They are not suited to replace the human capacity for negotiated meaning-making. Their true value lies in augmentation, not imitation. They support human thought. They do not replace the recursive, socially grounded, diversity-dependent mechanisms that make human thought viable.

Every viable system must remain open to disturbance. The observer must remain open to being surprised. The language community must remain open to contradiction. A system that avoids disturbance does not stabilize; it stagnates.

Final Words:

The warning is clear for both machines and societies: maintain openness, embrace difference, and preserve the friction that keeps life viable.

A system without diversity collapses.

A recursion without resistance decays.

A language without a negotiable space drifts into incoherence.

This is not merely about being open-minded or tolerant. It is about understanding the conditions that allow any system (biological, social, or computational) to remain viable over time. Diversity is a cybernetic requirement. Without it, we lose the capacity to correct ourselves, to adapt, and ultimately, to survive.

Always keep learning…

Minimizing Harm, Maximizing Humanity:

In today’s post, I am looking at a question that is rarely asked in management. What if the most responsible course of action is not to maximize benefit, but to minimize harm? In decision theory, this is expressed as the minimax principle. The idea is that one should minimize the worst possible outcome. In human systems, that outcome is best understood as harm to people, relationships, and the invisible infrastructure that sustains collective work.

The language of management is often dominated by the pursuit of gains. Leaders are taught to ask what is the best that can happen. They are told to optimize, to scale, and to seek advantage. The minimax principle turns this question around. It asks instead what is the worst that can happen and how do we prevent it. Every decision about maximization must be evaluated through the lens of minimizing harm. Harm minimization is not a boundary condition but the primary ethical directive that governs all other management decisions.
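The inversion described above is easy to make concrete. The sketch below uses invented options and harm figures purely for illustration: each action lists the possible harms it could cause under different futures, and the minimax rule picks the action whose worst case is smallest, which need not be the action with the best upside.

```python
# hypothetical actions and their possible harms under different futures
# (all numbers invented for illustration)
options = {
    "aggressive restructuring": [1, 9, 10],  # big upside, catastrophic worst case
    "gradual redesign":         [3, 4, 5],
    "status quo":               [4, 4, 6],
}

# minimax: for each action find its worst-case harm,
# then choose the action whose worst case is smallest
worst_case = {name: max(harms) for name, harms in options.items()}
choice = min(worst_case, key=worst_case.get)

print(choice)               # the action with the least-bad worst case
print(worst_case[choice])   # the harm we have committed to tolerating at most
```

A pure maximizer might well pick the aggressive option for its best case; the minimax question, what is the worst that can happen, selects differently.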

Russell Ackoff once observed that the more efficient you are at doing the wrong thing, the wronger you become. This statement captures the ethical inversion at the heart of many managerial failures. The pursuit of maximum gain often blinds organizations to the quiet forms of loss that accumulate in the background. Human systems depend on tacit networks of trust, communication, and mutual adjustment. When efficiency cuts too deeply, these invisible infrastructures collapse. The system loses its ability to adapt.

To minimize maximum harm is not to resist change. It is not an invitation to stand still. Rather, it is a recognition that progress and ethics operate according to different logics. Progress concerns improvement and expansion. Ethics concerns the protection of dignity, agency, and reversibility. Once we place harm minimization at the center of our decisions, progress becomes sustainable because it no longer depends on exploitation or exclusion.

The primary ethical directive to minimize harm requires a clear operational principle. Heinz von Foerster provided this principle with remarkable clarity: "I shall act always so as to increase the number of choices." This is not a secondary value. This is how harm minimization is operationalized.

Consider what happens when choices are available. When options remain open, people retain the capacity to move in different directions. They can experiment, observe the results, and if those results prove harmful or undesirable, they can try a different direction. This is reversibility. It is not that decisions are undone but that people are not locked into a single path with no way out. Reversibility means the system retains the capacity to self-correct. This becomes an integral part of being viable.

When choices are removed, a different logic takes hold. A decision made under constraint, with no alternatives available, becomes irreversible. The person cannot change course because there is no other path to take. The harm accumulates and cannot be addressed through adaptation or choice. This is an important distinction. To minimize harm is to preserve the optionality that allows people to respond when things go wrong. When you increase the number of choices available to people, you prevent harm from becoming locked in place. You maintain the possibility of recovery. You keep open the horizon of possibilities. The person is not left to say "I had no choice," which is the expression of the deepest form of harm, the harm from which there is no escape.
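The difference between these two logics can be shown with a deliberately simple sketch. The paths and per-period harm figures are invented: an agent starts on a path whose harm is unknown in advance; with options open, it can switch away after observing the result, while locked in, the same harm repeats.

```python
def total_harm(paths, reversible, periods=10):
    """Accumulate harm over several periods, starting on the first path.

    paths      -- per-period harm of each available path (invented numbers)
    reversible -- if True, the agent may switch paths after observing harm
    """
    current = 0
    harm = 0.0
    for _ in range(periods):
        harm += paths[current]
        if reversible:
            # self-correction: having observed the outcomes, move to
            # the least harmful path for the next period
            current = min(range(len(paths)), key=lambda i: paths[i])
        # if irreversible, there is no other path to take
    return harm

paths = [5.0, 1.0, 3.0]  # harm per period; unknown before the first step
print(total_harm(paths, reversible=True))
print(total_harm(paths, reversible=False))
```

The first trial costs the same either way; what differs is whether the system retains the capacity to respond to what it learns.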

This means that every decision about maximization or progress must be evaluated through this lens. Does it increase or decrease the number of choices available to people? Does it preserve reversibility or does it close off futures? Does it prevent irreversible harm or does it create conditions from which recovery is impossible? This is how we operationalize the primary ethical directive in practice.

Werner Ulrich’s Critical Systems Heuristics extends this insight into a framework for reflective practice. Ulrich reminds us that every system boundary includes some and excludes others. Those excluded often bear the consequences of decisions without having had a voice in making them. Ethics therefore requires that we identify who loses in the system we design. Ethics requires that we act in ways that allow their participation and emancipation. To preserve choice is to protect those at the margins of decisions. It is to recognize that moral responsibility lies in how boundaries are drawn. When we ask who loses, we are asking a minimax question. We are asking what is the worst that can happen for those at the margins.

To some, the minimax principle might sound like a cautious philosophy, one that restrains progress. This would be a misunderstanding. The aim is not to prevent change but to cultivate conditions under which change can occur without catastrophic harm. Here the insights of Magoroh Maruyama are valuable. In his work on second cybernetics, he distinguished between negative feedback processes that regulate deviation and positive feedback processes that amplify it. He noted that deviation amplification is the essence of morphogenesis. Not all deviations are errors to be corrected. Some are the sources of new order and innovation. Ethical design therefore should not eliminate deviation but create conditions in which positive deviation can be generative without catastrophic harm. To minimize maximum harm is not the same as to minimize deviation. It is about preserving the space in which positive deviation can arise safely.
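Maruyama's distinction can be illustrated with made-up dynamics in a few lines: the same small deviation is fed through a deviation-counteracting loop (gain below one, so it damps out) and a deviation-amplifying loop (gain above one, so it grows). Neither loop is an error; amplification is how new structure can emerge, which is why the design question is about bounding harm, not eliminating deviation.

```python
def run_loop(deviation, gain, steps=10):
    """Feed a deviation back through a loop with the given gain."""
    for _ in range(steps):
        deviation *= gain
    return deviation

damped    = run_loop(0.1, gain=0.5)  # deviation-counteracting: regulation
amplified = run_loop(0.1, gain=1.5)  # deviation-amplifying: morphogenesis

print(f"counteracting loop: {damped:.6f}")
print(f"amplifying loop:    {amplified:.3f}")
```

The gains here are arbitrary; the point is only the qualitative contrast between a loop that absorbs deviation and one that builds on it.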

Von Foerster’s imperative and Maruyama’s insight converge here. Both point toward the idea that ethics in complex systems must not suppress variety. Von Foerster’s view was that more freedom comes with more responsibility. When we create systems that expand choice, we simultaneously increase the responsibility of those who act within them. The ethical task is not to eliminate risk but to manage it in a way that nurtures diversity and growth while protecting the conditions of future choice. To design ethically is to create the space in which deviation, learning, and emergence can unfold without irreversible harm.

Behind every visible structure of management lies an invisible infrastructure. It consists of relationships, trust, informal knowledge, and the tacit coordination that keeps work alive. This infrastructure is often taken for granted. It is noticed only when it breaks down. In the pursuit of efficiency, organizations frequently erode these invisible supports. Staff reductions, rigid procedures, and mechanistic control can destroy the very human capacities that enable adaptability and resilience. The question therefore is not what can be gained but what can be lost without recovery. True resilience depends on maintaining the conditions that allow the system to heal itself. When we ask this question, we are asking what choices we are removing from people. We are asking what futures we are closing off.

It is important to distinguish ethics from progress. Ethics does not belong to the domain of progress. Progress concerns the expansion of capability. Ethics concerns the preservation of humanity. The two may coexist, but they are not the same. Progress without ethical constraint risks creating conditions from which recovery is impossible. Ethics without openness to change risks paralysis. The minimax principle, interpreted through von Foerster and Ulrich, provides a way to hold both. It calls for action that reduces maximum harm while sustaining the capacity for continued evolution.

Maruyama’s perspective deepens this understanding. By allowing positive deviation, we cultivate the potential for new forms of order. By preserving choice, we protect against harm that would close the future. The task of management therefore is not to optimize the present but to sustain the possibility of better futures without destroying the diversity from which they may emerge.

Ackoff’s view was that the future is not something to be predicted but something to be designed. The ethical responsibility of design is to ensure that this future remains open. To minimize maximum harm is to recognize the fragility of what is human in our systems. To preserve choice is to keep open the horizon of possibility. To embrace positive deviation is to invite emergence without destruction. Ethics in management is not about perfection or certainty. It is about maintaining the delicate balance between care and change.

Final Words:

When compromises are inevitable in human systems, the most humane path is to protect what allows us to begin again. The minimax principle is an invitation to ask different questions in our organizations. It is an invitation to be aware of who loses in the systems we design. It is an invitation to increase the number of choices available to people. It is an invitation to preserve reversibility and to protect the invisible infrastructure that sustains our collective work. We are responsible for our construction of these systems. We are responsible for the futures we foreclose and the futures we keep open. To be an authentic manager is to be aware of this responsibility and to strive, always, to minimize the harm we might do while creating conditions for emergence and learning.

Stay curious and always keep on learning.