The Persistent Unmarked Space:

In today’s post, I want to explore an observation about how we make distinctions and what this reveals about the structure of our thinking. I am inspired by the ideas in Spencer-Brown’s “Laws of Form” and broader themes in cybernetics about how observers construct meaning.

The starting point is simple. When we make a distinction, we create a boundary that separates what is inside from what is outside. Spencer-Brown formalized this with his notation of the Mark, showing how any act of indication simultaneously creates both the indicated and the non-indicated.

As we look closer, things get more interesting.

The Basic Operation of Distinction-Making:

When I make one distinction to mark “A,” I create two states. There is A (the marked state) and not-A (the unmarked state). This seems straightforward enough. We can depict this as below:

(A) not-A

Spencer-Brown showed that this basic operation has interesting algebraic properties. The unmarked state is not simply absence or void. It is the enabling condition that gives the marked state its meaning. Without the background of the unmarked, the mark itself would be meaningless.

This relationship between marked and unmarked is fundamental to how meaning emerges. The marked state exists only in relation to what it excludes.

We can take this further. Consider what happens when we make multiple distinctions. If I distinguish both A and B within the same unmarked space, Spencer-Brown’s notation shows this as ((A)(B)).

This actually creates three categories, not four. There is A, there is B, and there is everything else that is neither A nor B. We can represent this as ((A)(B))X, where X represents the remainder of the unmarked space.

In Spencer-Brown’s system, A and B are mutually exclusive by the nature of how the distinctions are made. They are separate marks within the same unmarked background, not overlapping regions as in classical set theory.

This gives us the pattern that n distinctions create n+1 categories. Three distinctions would create four categories, four distinctions would create five, and so on.
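To make the n-to-n+1 pattern concrete, here is a small illustrative sketch in Python. This is only an analogy, not Spencer-Brown's calculus itself (which is not set theory): each distinction is modeled as marking a disjoint region of a universe of items, and whatever no mark claims remains as the persistent unmarked remainder.

```python
# Illustrative analogy (not Spencer-Brown's calculus): each distinction
# marks a disjoint region of a universe, and everything never marked
# remains as the unmarked remainder.

def categorize(universe, marks):
    """Partition `universe` into one category per mark plus the
    unmarked remainder. `marks` maps a label to the items it marks;
    an item already claimed stays with the first mark that took it,
    keeping the marks mutually exclusive."""
    categories = {}
    remaining = set(universe)
    for label, items in marks.items():
        region = remaining & set(items)  # only still-unmarked items
        categories[label] = region
        remaining -= region
    categories["unmarked"] = remaining   # the persistent unmarked space
    return categories

universe = {"ice", "steam", "tea", "breeze", "lava", "dew"}
marks = {"hot": {"steam", "lava"}, "cold": {"ice"}}

result = categorize(universe, marks)
# n = 2 distinctions yield n + 1 = 3 categories:
# hot, cold, and the unmarked remainder (tea, breeze, dew).
print(len(result))  # 3
```

However many marks we add, the `"unmarked"` key never disappears; at most its contents shrink, which mirrors the point that the unmarked space persists through every act of marking.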

The Persistent Unmarked State:

What interests me most is how something remains unmarked regardless of how many distinctions we make. No matter how extensively we mark up our space with categories and boundaries, there is always an unmarked background that enables those markings to have meaning.

This unmarked background is not just everything else we have not thought of yet. It is the condition that makes thinking and categorizing possible in the first place. When we argue about categories like hot versus cold, we often treat these as exhaustive alternatives, as if they formed a dichotomy. But there is always the unmarked space that contains moderate temperatures, context-dependent judgments, and the framework of assumptions that makes temperature distinctions seem natural and meaningful.

Connection to Self-Reference Problems:

This observation about the persistent unmarked state connects to well-known problems in formal systems, though the connection is analogical rather than mathematically precise.

Russell discovered that attempts to create completely comprehensive sets run into contradictions when they try to include themselves. The set of all sets that do not contain themselves creates a paradox when we ask whether it contains itself. Gödel showed that consistent formal systems strong enough to express arithmetic cannot prove their own consistency without appealing to principles outside the system.

These results point to a general pattern. Complete self-inclusion appears to be impossible. There is always something outside the system that the system requires but cannot fully capture within its own terms.

The unmarked state in Spencer-Brown’s system suggests a similar limitation. The observer making distinctions cannot fully mark their own position as observer. There is always something unmarked that enables the marking process itself.

Implications for How We Think:

This has practical implications for how we approach knowledge and categories. It suggests epistemic humility. If our categorical frameworks always rest on unmarked assumptions and background conditions, then we should hold our categories lightly. They are tools for navigating experience, not mirrors of an independent reality.

In addition, it points toward the value of examining our own distinction-making processes. When we notice ourselves categorizing something, we can ask what remains unmarked in that process. What assumptions are we making? What alternatives are we not seeing?

And it suggests why different observers can legitimately make different distinctions. The unmarked background that enables distinctions varies with the observer’s purposes, biological capabilities, and cultural context, so different observers will mark the world differently. This viewpoint supports pluralism.

Final Words:

Spencer-Brown’s insight about the marked and unmarked states reveals something fundamental about the structure of thought itself. Every act of indication creates both what it marks and what it leaves unmarked. The unmarked is not simply absence but the enabling condition for meaning.

This leads to both epistemic humility and intellectual pluralism. Different ways of making distinctions reveal different aspects of complex situations. No single framework captures everything. The wisdom lies in working skillfully with multiple perspectives while recognizing what each obscures.

Most importantly, the unmarked space always exceeds our attempts to mark it completely. As Heinz von Foerster observed, “Objectivity is a subject’s delusion that observing can be done without him.” The observer making distinctions cannot fully step outside their own process of observation.

This is not a limitation to overcome but a fundamental feature of how minds engage with complexity. “The environment as we perceive it is our invention,” von Foerster also noted, pointing to the active role we play in constructing the realities we inhabit.

Understanding this process of distinction-making is essential for navigating complexity with wisdom. Think about how this affects the popular frameworks, with their neat triads and 2×2 matrices, that promise to carve up the world into manageable categories. Every one of these frameworks commits the same fundamental error. They erase the observer who created the distinctions and ignore the vast unmarked space of assumptions, context, and excluded possibilities that makes their tidy categories seem meaningful.

The unmarked state reminds us that thinking is always an ongoing process within contexts we can never fully transcend. This recognition opens us to continued learning and the possibility of seeing familiar situations in new ways.

Stay Curious and Always Keep on Learning.

If you found value in this exploration of thinking and categories, check out my latest book on the Toyota Production System, Connecting the Dots…

The soft copy is available here. And the hard copy is available here.

Rethinking Purpose: When Organizations Stop Having and People Start Being…

Part 1: The Reification Trap and What We Actually Observe

In today’s post, I am looking at the notion of organizational purpose in light of cybernetic constructivism. The ideas here are inspired by giants like Stafford Beer, George Spencer-Brown, Ralph Stacey, Werner Ulrich, Russell Ackoff and Erik Hollnagel.

The corporate world seems to be obsessed with organizational purpose. Mission statements adorn lobby walls. Consultants make fortunes helping executives discover their organization’s deeper calling, their “why”.

From a cybernetic constructivist perspective, this entire enterprise rests on a philosophical error: the notion that organizations have purposes. Organizations do not have purposes; people do.

Organizations are certainly created with specific objectives and goals in mind. For example, a company can be formed to develop software or a charity established to alleviate poverty. But the idea that these entities themselves possess purposes is what philosophers call reification, treating an abstraction as if it were a concrete thing.

Organizations have goals and objectives set by their founders or governing bodies. But purposes, the deeper sense of meaning and direction that drives behavior, belong to individuals. This distinction is crucial for understanding emergence in an organizational setting.

This is not semantic nitpicking. It is a fundamental reframe that helps us rethink how we understand organizational behavior and human experience within systems.

The Reification Trap and POSIWID:

When we say something like “our company’s purpose is to make the world more sustainable”, we commit reification. We treat an abstraction as if it were a concrete thing. Organizations are wrongly viewed as entities with intentions, values, and purposes of their own.

What organizations actually have are stated goals and objectives, declarations about what they aim to achieve. But when we strip away this corporate fiction, what remains is people. People with their own purposes, their own sense-making processes, their own constructed meanings about what matters and why.

Stafford Beer’s insight that “the purpose of a system is what it does” (POSIWID) helps us cut through the fog of stated intentions and mission statements. Yet POSIWID itself could be a reification trap. In criticizing the reification of organizational purpose, we risk reifying “the system” as something that “does” things.

A way to ease out of this apparent trap is Wittgenstein’s Ladder: POSIWID serves as a cognitive aid that helps us climb to better understanding and can then be discarded.

What we actually observe are patterns of human behavior and interaction. When we say “the system produces data harvesting behaviors”, we mean “we observe people engaging in data harvesting activities within particular structural contexts”. When we say “the system undermines individual viability”, we mean “we observe interactions between people that result in reduced individual flourishing”.

The value of POSIWID lies not in discovering what systems “really want” but in training our attention on emergent patterns of human behavior rather than declared organizational intentions. Once this shift in attention is accomplished, we can discard the system-as-actor metaphor and focus on the actual phenomenon. People with purposes interacting within conditions that constrain and enable certain patterns of behavior.

Applied to organizations, the refined principle is this: if we want to understand what is actually happening, we should observe the patterns of behavior and interaction that emerge from people’s purposes within particular conditions, not focus on declared organizational goals.

Patterns of Purpose Interaction:

From a cybernetic constructivist perspective, what we observe are patterns emerging from the interactions of individual purposes within structured contexts. When a software engineer’s purpose to solve elegant problems intersects with a marketer’s purpose to help people discover useful tools, and both operate within structures that reward customer satisfaction, we observe certain patterns of behavior and outcomes.

These patterns are dynamic, not fixed. As people’s individual purposes evolve, as new people join the system, as external conditions shift, the observable patterns shift too. The patterns become a living expression of ongoing purpose interactions rather than a static implementation of declared intentions.

But here’s the crucial insight from cybernetics. The observer is part of the system being observed. When we observe patterns of organizational behavior, we are not neutral external scientists. We are participants whose own purposes and perspectives shape what we see. This creates recursive loops that traditional management thinking often ignores.

The manager who observes that “people are not motivated” and implements new programs is not a neutral observer. They are a participant whose own purposes drive their observation and choice of interventions. These interventions then become part of the conditions within which other people’s purposes interact, potentially changing the very patterns the manager was trying to understand.

The refined POSIWID insight helps us see that if we want to change observable patterns, we need to understand and work with the actual purposes of the actual people involved, not impose new mission statements or organizational goals from above.

From Alignment to Resonance:

Traditional thinking seeks alignment, getting everyone pointed in the same direction toward the same stated organizational goals. But our refined understanding shows us that there is no collective entity that can choose a unified direction. In reality, there are only individuals with purposes engaging in ongoing interactions.

Some of these interactions create resonance, patterns where individual purposes amplify and support each other in ways that produce coherent behavioral patterns. Others create tension or conflict. The software engineer’s elegant problem-solving and the marketer’s user advocacy can resonate productively, creating emergent value. But this coherent behavior is not orchestrated by some collective consciousness. It emerges from how these specific people with these specific purposes interact within particular conditions.

What we observe are the behavioral patterns emerging from these ongoing purpose-interactions, not something chosen by “the organization.” Even when there are formal decision-making processes, you still have individual people making individual choices about whether to participate, how to contribute, what to support.

Understanding Recursive Viability:

When we talk about recursive systems, we mean something different from linear processes. In recursive systems, each loop is independently viable. Each person constructs their own purposes, observes their own interactions, and maintains their own capacity to adapt and respond. They are not merely components serving the larger system. They are complete systems in themselves.

People observe the patterns of interaction, including their own participation in those patterns. This observation changes how they construct their purposes, which changes their interactions, which changes the patterns, which changes what they observe. Each person completes this cycle independently while also participating in the larger patterns.

The viability of observable patterns emerges from the viability of individual participants, rather than being imposed upon them. When individual people can maintain their own purposefulness and adaptive responses, the larger patterns that emerge tend to be more resilient and creative.

Instead of asking “how do we get people to serve the organization’s purpose”, we ask “how do we create conditions where each person’s independent viability contributes to emerging patterns that enhance collective viability?”

Collective viability is not itself an entity or fixed goal. It is an emergent, dynamic pattern arising from the interactions of individual viable systems. It shifts as individual purposes evolve, as new people join the system, as conditions change.

Quality of Life and Practical Implications:

Quality of life is not something organizations provide to employees like a benefit package. It is something individuals construct through their lived experience of pursuing their purposes within particular conditions. But quality of life is both an input and output of the system. When people experience high quality of life, they bring different energy and capability to their interactions.

This reframe has practical implications. If we want to change observable patterns of behavior, we need to understand and work with the actual purposes of the actual people involved. What do people actually care about? How do their purposes complement or conflict? What conditions support the expression of these purposes?

Sustainable change happens through shifts in the interaction of purposes, not through compliance with new directives. People adapt their behavior when conditions change in ways that better enable them to pursue what they already care about, or when they develop new purposes through their lived experience of interaction with others.

Final Words:

Let go of the fiction that your organization has a purpose. Instead, get curious about the actual purposes of the actual people involved and observe the patterns of behavior that emerge through POSIWID analysis. What do they care about? How do their purposes interact? What behavioral patterns emerge from these interactions?

Then, experiment with conditions. What structures and processes support the kinds of interactions that produce the behavioral patterns you want to see more of? Pay attention to emergence while remaining aware of your position as observer. Use POSIWID as your reality check. If the observable patterns do not match the stated intentions, look to the interaction of individual purposes within current conditions for explanation.

This shift from organizational purposes to human purposes is not merely theoretical. It is practical. When we stop pretending that abstractions have agency and start working with the actual agency of actual people, we discover possibilities for organizing that honor both individual viability and collective capability.

In the next post, we will explore what this means for leadership as condition creation, boundary critique, and the challenge of supporting diverse purposes within structured contexts.

I will finish this post with a quote from Ralph Stacey:
There is no possibility of standing outside human interaction to design a program for it since we are all participants in that interaction.

Stay curious and always keep on learning…

Wittgenstein’s Ladder in Complexity: Why We Need Tools We Must Abandon

My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them as steps to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) – Ludwig Wittgenstein, Tractatus Logico-Philosophicus

In my recent post on the two dogmas of complexity science, I talked about ontological complexity realism and epistemological representationalism. These are the beliefs that complexity exists ‘out there’ to be measured and that our task is to create neutral representations of it. Today, I want to explore why these dogmas persist and why overcoming them requires something that seems paradoxical. We need conceptual tools that we must eventually abandon.

This is where Wittgenstein’s ladder becomes particularly relevant for complexity work. When reentry, in the sense of Spencer-Brown’s Laws of Form, is needed to achieve second-order understanding, the ladder offers a path through what might otherwise be an intractable problem.

The Reentry Problem in Complexity:
When talking about complexity, we often overlook the point that the observer cannot be separated from what they observe. Every attempt to map or measure complexity changes the observer-system relationship, which changes the ‘complexity’ itself. This creates what George Spencer-Brown called reentry: when a distinction folds back on itself.

Consider the Ashby Space framework I critiqued earlier. The moment we try to plot an organization on its coordinates, we encounter reentry. Who determines where the organization sits on the ‘variety of stimuli’ axis? The organization itself, through its own distinction-making processes. What counts as ‘variety of responses’? Again, this depends entirely on the distinctions the observer can make about meaningful action.

The framework cannot escape this recursion. It treats as measurable quantities what are actually dynamic processes of distinction-making between observer and observed. This recursion is not a bug to be fixed but a feature of complexity itself.

As I explored in my post on the form of decency, reentry reveals contradictions in systems that try to maintain rigid boundaries. When xenophobic ideologies apply their own criteria to themselves, when the form folds back, they collapse under their internal logic. The same dynamic occurs when complexity frameworks attempt to map the very processes of distinction-making that generate complexity.

Why Reentry Creates a Need for Ladders:
If our tools for understanding complexity are themselves subject to reentry effects, how do we develop more sophisticated ways of thinking about complex systems? We cannot simply abandon all conceptual tools, yet we cannot treat them as neutral representations either.

This is where we need to recognize a crucial distinction about when ladder consciousness becomes necessary. When we engage with situations in ways that generate significant recursive coupling between observer and observed (when our distinction-making substantially shapes what we are trying to understand, when our interventions change the system which changes us which changes our interventions), then treating our models as stable representations becomes counterproductive.

Consider the difference between using a roadmap to navigate familiar streets versus using a systems model to understand organizational dynamics. The roadmap engages with relatively stable relationships: the streets do not change position because we are looking at the map. But organizational systems modeling involves high degrees of recursive coupling. The very process of creating models changes how participants see their organization, which changes how they behave, which changes the organizational dynamics, which requires updating the models.

When we are complexifying our relationship with a situation through high degrees of recursive engagement, our models must become ladders. They cannot remain permanent reference tools because both we and the situation are co-evolving through the modeling process itself.

This is where Wittgenstein’s ladder becomes relevant. The ladder offers a way to use conceptual tools while remaining aware of their provisional nature. We need frameworks to help us think about complexity, but we also need mechanisms for transcending the limitations of those same frameworks.

The ladder works through what might seem like a contradiction: we use conceptual distinctions to develop awareness of the limitations of conceptual distinctions. We employ frameworks like Ashby Space not because they represent reality accurately, but because they can help us recognize how our own distinction-making processes shape what appears as ‘complex’.

This creates what Heinz von Foerster called second-order cybernetics, observing observation. First-order thinking assumes we can step outside the system and create objective maps. Second-order thinking recognizes that we are always already participants in the systems we are trying to understand.

The Ladder in Practice: From Tools to Meta-Awareness:
Consider how this works in organizational consulting. When we facilitate a systems mapping exercise, we might begin by treating the resulting diagram as if it represents the ‘real’ organizational structure. This first-order approach focuses on improving the accuracy of the map.

But when we are engaged in recursive coupling with the organization (when the mapping process itself changes how participants understand and enact their organizational reality), ladder consciousness suggests a different approach. The map becomes valuable not when it accurately represents the organization, but when the mapping process helps participants recognize how their own distinction-making participates in creating organizational dynamics. We use the tool to develop meta-awareness of how we collectively complexify organizational life.

This shift produces the needed meta-awareness. Instead of asking ‘Is our systems map accurate?’ we ask ‘How does the process of creating this map reveal and reshape our current ways of making distinctions about organizational life?’ The tool serves its purpose when it points beyond itself toward the processes by which we participate in creating organizational reality, then becomes disposable once we have developed more direct awareness of our participation.

This principle applies across complexity frameworks. When we use any analytical tool, ladder consciousness means recognizing that we are not discovering objective properties but enacting particular ways of making sense that bring certain possibilities into view while obscuring others. The framework becomes useful when we can use it to examine our own sense-making, then let it go.

Beyond Tools: What Emerges After the Ladder:
This raises an important question. What happens after we kick away the ladder? What replaces our conceptual tools once we have transcended their limitations?

The answer is not the absence of structure but a different relationship to structure. After using and abandoning frameworks, what can emerge is what John Dewey called ‘inquiry’, a more fluid, responsive way of engaging with situations that draws on conceptual resources without being constrained by them.

Dewey’s conception of inquiry is particularly relevant here because it transcends the subject-object dualism that creates many of our analytical problems. Instead of treating thinking as something that happens inside our heads while we observe an external world, Dewey understood inquiry as a transactional process between organism and environment. The inquirer and the situation inquired into are parts of a single unfolding transaction.

This means inquiry is not about representing a pre-existing reality but about transforming problematic situations into more settled ones. When we encounter what we call a ‘complex situation’, inquiry suggests we are not discovering complexity ‘out there’ but participating in an ongoing transaction that we might call ‘complexifying’. The situation becomes complex through our engagement with it, just as we become complex through our engagement with the situation.

For Dewey, genuine inquiry involves what he called ‘learning by doing’ coupled with reflection on that doing. We act, observe the consequences, and adjust our future actions based on what we learn. This creates a recursive cycle where our understanding evolves through engagement rather than through detached observation. The goal is not to achieve final truth but to develop more intelligent ways of acting within ongoing situations.

This approach naturally incorporates ladder consciousness. We use conceptual tools as hypotheses for action rather than as final descriptions of reality. We test these tools against their consequences in lived experience, keeping those that prove helpful and abandoning those that constrain effective action. The tools serve inquiry rather than replacing it.

This post-ladder engagement is characterized by several qualities. The list below is not exhaustive by any means; like the ladder itself, it should serve as an intuition pump.

Responsiveness over methodology: Instead of applying predetermined frameworks, we develop sensitivity to what each situation calls for. We maintain access to various conceptual tools while remaining free to abandon them when they no longer serve.
Process awareness: We become more conscious of how our own sense-making participates in creating the realities we encounter. This is not relativism but what Donna Haraway called ‘situated knowledge’: knowledge that acknowledges its own positioning.
Provisional commitment: We can act decisively based on our current understanding while remaining open to revision. This allows for a second-order approach to wisdom: intuitive knowledge of the limits of knowledge.

The Ethics of Temporary Tools:
There is an ethical dimension to ladder consciousness that connects to my earlier post on reentry and xenophobia. When we hold our conceptual tools too tightly, we risk treating our provisional distinctions as absolute truths, our temporary boundaries as permanent walls. This is one of the main reasons why we must discard the ladder rather than hold onto it.

The ladder teaches a different relationship to our beliefs and frameworks, firm enough to guide action, light enough to avoid becoming weapons. This balance is crucial and deserves deeper exploration.

What does it mean to hold beliefs firmly enough to guide action? It means we must be able to act decisively based on our current understanding, even while acknowledging that understanding is provisional. Without some degree of commitment to our frameworks, we become paralyzed by infinite doubt. We need enough conviction to move forward, to make choices, to take responsibility for our actions.

But what does it mean to hold these same beliefs lightly enough to avoid weaponizing them? It means maintaining what Keats called ‘negative capability’. This is the ability to remain in uncertainty and doubt without irritably reaching after fact and reason. It means recognizing that our strongest convictions might be wrong, our clearest insights might be partial, our most cherished frameworks might be limiting us in ways we cannot yet see.

This creates a paradoxical situation that the ladder helps us navigate. We must act as if our current understanding is enough to work with, while remaining open to its revision. We must commit without clinging. We must form strong opinions, but hold them lightly.

This becomes particularly crucial when working with others who hold different frameworks. Instead of engaging in battles over whose map is more accurate, ladder consciousness invites us to explore how different ways of making sense might serve different purposes. It asks us to treat our frameworks as offerings to collective inquiry rather than as territories to defend.

The ethical imperative here connects to von Foerster’s principle: ‘Act always so as to increase the number of choices’. When we hold our tools lightly, we create space for others to contribute their own sense-making resources. When we avoid weaponizing our frameworks, we keep possibilities open rather than shutting them down.

Our role becomes less about providing definitive maps and more about helping develop capacities for making better distinctions in the face of uncertainty. This suggests designing interventions that increase what von Foerster called ‘the number of choices’ rather than narrowing them down to predetermined solutions.

Climbing Toward Participatory Knowing:
This brings us back to my critique of complexity science’s foundational dogmas, but with an additional insight that shifts how we use language itself. We typically use complexity as a noun (‘this system has complexity’) or an adjective (‘this is a complex situation’). But it may be time to recognize complexity as a verb, something we do rather than something we encounter.

When we complexify a situation, we are not discovering pre-existing complexity but participating in an ongoing process of distinction-making and sense-making that brings complexity into being. The situation becomes complex through our engagement with it, just as we become complex through our engagement with the situation. Complexity emerges from what I have called epistemic coupling: the recursive interaction between knowing systems and their environments.

This verb-oriented understanding aligns with Dewey’s transactional thinking and Spencer-Brown’s emphasis on the observer’s role in creating distinctions. It suggests that when we say a situation is ‘complex’, we might more accurately say we are ‘complexifying’ our relationship with that situation through the particular ways we choose to engage with it.

This reframing has practical implications. Instead of asking ‘How can we manage this complex system?’ we might ask ‘How are we complexifying this situation, and how might we complexify it differently?’ Instead of treating complexity as a problem to be solved, we recognize complexifying as an ongoing process we participate in creating.

This perspective naturally leads to ladder consciousness. If complexity emerges from observer-system interactions, then studying complexity must include studying how we study. We cannot step outside the epistemic coupling that generates complexity in the first place.

The ladder provides a way to work with this recursion constructively. It allows us to use conceptual tools to bootstrap ourselves into meta-cognitive awareness, then abandon those tools once they have served their purpose of revealing our own participation in constructing what we take to be reality.

Final Words:
Wittgenstein’s ladder offers more than a philosophical metaphor for complexity work. It suggests a practical approach to navigating situations where traditional analytical tools reach their limits. In a world facing unprecedented challenges that resist conventional problem-solving approaches, we may need frameworks that can help us think more clearly while remaining open to possibilities we cannot yet imagine.

The ladder teaches us that sometimes the most sophisticated response to complexity is paradoxical, using our best analytical tools while remaining prepared to abandon them in favor of more direct engagement with emerging situations. Sometimes deeper understanding comes not from having better maps, but from developing better capacities for navigation in unmapped territory.

This suggests a form of wisdom that seems well-suited to our current historical moment: recursive and reflective, provisional and purposeful. Each of these qualities, which together represent a cybernetic constructivist approach, deserves elaboration.

Recursive wisdom acknowledges that we are always inside the systems we are trying to understand. It recognizes that our attempts to make sense of complexity are themselves part of the complexity we are trying to navigate. This leads to what we might call ‘meta-learning’: learning about how we learn, thinking about how we think. Recursive wisdom asks us to include ourselves in our analyses, to observe our own observing.

Reflective wisdom suggests that effective action in complex situations requires ongoing consideration of our own assumptions, biases, and blind spots. But this is not the paralysis of infinite self-doubt. Rather, it is the cultivation of the ability to think about what we are doing while we are doing it, to adjust our approach based on emerging feedback from the situation itself.

Provisional wisdom means holding our current understanding as our best guess given available information, while remaining genuinely open to revision. It means acting with conviction while maintaining epistemic humility. This is what we can call ‘fallibilism’: the recognition that any particular perspective, no matter how well-supported, might be incomplete or mistaken.

Purposeful wisdom suggests that this openness to revision is not aimless but directed toward some vision of beneficial outcomes. It means using our provisional understanding to work toward flourishing, justice, and expanded possibilities for all participants in the situation. Purposeful wisdom asks us to take responsibility for the worlds our actions help create.

Together, these aspects suggest an approach to complexity that is both humble and decisive, both open and committed. It invites us to use our best tools while holding them lightly, to think systematically while remaining open to surprise, to act decisively while staying curious about the consequences of our actions.

Perhaps most importantly, it reminds us that we are not outside observers of complex systems but participants within them. The ladder helps us climb to a perspective from which we can see this participation more clearly. And then, if we choose wisely, we can kick it away and engage more consciously with the complexity we help create.

Stay curious and Always Keep on Learning…

The Two Dogmas of Complexity Science: How Our Best Tools Can Mislead Us

I borrow the term ‘dogma’ from W. V. Quine’s classic essay Two Dogmas of Empiricism, where he showed that unquestioned assumptions can quietly shape an entire field. Complexity science, too, rests on its own dogmas that deserve examination.

In today’s post, I want to explore what I see as two fundamental dogmas in how we think about complexity science. These dogmas are deeply embedded in our thinking, and they shape how we create tools, design interventions, and understand organizational life without our realizing it.

To explain these dogmas, let me use the chart of Ashby Space by Max Boisot and Bill McKelvey. It appears clean, scientific, and objective, the kind of visualization that makes the science feel rigorous and mathematical.

This framework comes from Ross Ashby’s Law of Requisite Variety. It maps organizational viability across different complexity regimes. It seems to offer clear insights. Systems in the ordered regime operate through routine procedures. Those in the complex regime require learning and adaptation. Those in the chaotic regime lose coherence when environmental variety exceeds their response capacity.

The 45° diagonal represents Ashby’s famous law: only variety can absorb variety. Systems above this line face more environmental complexity than they can handle. Systems below it have excess capacity for response. From a conventional perspective, an organization might assess its position by measuring environmental turbulence against internal response capabilities. It might conclude it needs to increase internal variety to match external complexity.
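To make the matching requirement concrete, here is a minimal sketch in Python. The disturbance and response sets are hypothetical, invented purely for illustration; they do not come from Boisot and McKelvey’s chart.

```python
from math import log2

# Hypothetical regulator: each environmental disturbance needs its
# own counteracting response to hold the outcome at a target value.
disturbances = ["demand spike", "supplier delay", "staff shortage",
                "price shock", "regulation change"]
responses = ["add capacity", "switch supplier", "reassign staff"]  # only 3

# Best case: each response neutralizes exactly one disturbance type.
absorbed = min(len(disturbances), len(responses))
uncontrolled = len(disturbances) - absorbed
print(f"Disturbances passing through uncontrolled: {uncontrolled}")  # 2

# Ashby's law in entropy form: residual outcome variety is at least
# the variety of disturbances minus the variety of responses.
residual_bits = max(0.0, log2(len(disturbances)) - log2(len(responses)))
print(f"Residual outcome variety >= {residual_bits:.2f} bits")
```

The point of the sketch is only the arithmetic of the law: with fewer response types than disturbance types, some disturbances must pass through uncontrolled, no matter how the responses are assigned.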

It is worth noting that Ashby himself understood variety as observer-dependent. His cybernetic work emphasized that distinctions are made by observers, not discovered in objective reality. The challenge arises when we operationalize such insights into frameworks and tools. What began as a nuanced understanding of observer-enacted variety becomes translated into seemingly measurable coordinates. This transformation from process to representation exemplifies the dogmas I want to examine.

This transformation reveals two fundamental dogmas that have shaped complexity science.

The First Dogma: Ontological Complexity Realism

The chart treats “variety of stimuli” as if it were an objective quantity that exists independently in the environment. It waits to be measured and plotted on the Y-axis. This reflects what I call ontological complexity realism. This is the belief that complexity is an intrinsic property of systems that exists regardless of who observes them.

Here lies the fundamental problem. Variety does not exist “out there” in any objective sense. What counts as variety depends entirely on the distinctions made by the observer or system. The environment does not contain variety. Variety emerges through the interaction between system and environment, mediated by the system’s capacity for making distinctions.

Let me give you a concrete example from healthcare. Is an emergency room “complex”? For a patient’s family member, the ER appears chaotic and overwhelming. Multiple alarms sound. Staff rush between rooms. Medical terminology flies around that they cannot understand. Life-and-death decisions happen at bewildering speed.

For an experienced ER physician, the same environment reveals familiar patterns. They recognize the rhythm of triage protocols. They understand the meaning behind different alarm sounds. They know the standard procedures that guide most interventions. The complexity is not inherent in the ER itself. It emerges from the coupling between the medical environment and each observer’s capacity for clinical distinction-making.

But this observer-dependence extends equally to the horizontal axis. What counts as “variety of responses” depends entirely on the distinctions the observer can make about available actions. The same ER situation reveals entirely different response repertoires to different observers.

The family member might see only binary options. Panic or wait helplessly. The nurse sees a rich array of possible interventions. The attending physician distinguishes even more nuanced response possibilities. The hospital administrator observes yet another set of responses. None of these response varieties exists independently in the situation. Each emerges from the specific capacity of the observer to make distinctions about what constitutes meaningful action.

John Dewey understood this when he argued that organism and environment must be understood as parts of a single transaction rather than separate things that interact. Traditional thinking assumes we have an organism “here” and an environment “there.” Then we study how they interact. But Dewey argues this separation is itself an artificial division that obscures the primary reality. The ongoing transaction between organism and environment creates experience itself.

The key insight is that stimulus and response are not external to each other. They are “always inside a coordination and have their significance purely from the part played in maintaining or reconstituting the coordination”. The stimulus is not something that happens to the organism from outside. It is something “to be discovered,” something “to be made out.” It is “the motor response which assists in discovering and constituting the stimulus.”

As Dewey puts it, “The stimulus is that phase of the forming coordination which represents the conditions which have to be met in bringing it to a successful issue. The response is that phase of one and the same forming coordination which gives the key to meeting these conditions”.

This transactional view transforms how we understand knowledge. Instead of a mind representing an external world, we have knowing as a mode of transaction between organism and environment. Knowledge emerges from this transaction rather than copying something pre-existing. This is not purely subjective nor purely objective, but relational.

Applied to complexity science, Dewey’s approach reveals why Ashby Space fails. The chart treats “variety of stimuli” and “variety of responses” as if they were separate, measurable quantities. But these are artificial divisions of the ongoing transaction between system and environment. There is no variety “out there” waiting to be counted. There are no responses “in here” waiting to be catalogued. There is only the ongoing transaction through which system and environment mutually specify each other.

The Second Dogma: Epistemological Representationalism

The chart presents itself as a neutral representation of complexity regimes. This embodies what I call epistemological representationalism. This is the belief that our task is to discover and measure pre-existing complexity through better methods and tools.

This dogma assumes we can create objective maps of complexity that correspond to how the world really is. The clean boundaries between regimes suggest we are mapping objective territory. The precise diagonal line suggests objective measurement. The measurable axes suggest neutral observation rather than conceptual construction.

But the moment you try to actually use this framework, its claims about objectivity break down. Where exactly would you locate a specific organization on these coordinates? How would you measure “variety of stimuli” independently of the system’s own distinction-making processes?

The chart cannot answer these questions because it treats as measurable quantities what are actually dynamic processes of distinction-making. It tries to map what can only be enacted.

Humberto Maturana and Francisco Varela’s work on structural coupling reveals why this approach fails. Living systems do not represent an independent environment. They enact their world through their structure and history of coupling. As Maturana put it, “everything said is said by an observer to an observer.” The boundaries we draw around “systems” and “environments” are distinctions made by observers, not features of an objective world waiting to be mapped.

The Fundamental Contradiction: Mapping the Unmappable

Here lies the deeper issue that cuts to the heart of what we mean by complexity itself. The very notion that complexity can be mapped contradicts the fundamental nature of what it means for something to be complex.

If something is indeed complex, it resists reduction to mappable coordinates. Complexity implies emergence, unpredictability, context-sensitivity, and observer-dependence. These are not accidental features that better measurement tools might eventually overcome. They are defining characteristics of complexity itself.

Yet the frameworks prevalent in complexity science attempt to do precisely what complexity theory tells us should be impossible. They try to reduce emergent, context-dependent, observer-enacted phenomena to static, universal, objective coordinates. This creates a performative contradiction. We use the insights of complexity science to argue that phenomena are emergent and context-dependent. Then we immediately create tools that treat those same phenomena as mappable and context-independent.

The contradiction runs deeper still. If complexity truly emerges from the recursive coupling between observers and their domains of inquiry, then any attempt to create a universal map of complexity must necessarily fail. The observer drawing the map cannot step outside the epistemic coupling that generates the complexity in the first place.

Why These Dogmas Generate Persistent Puzzles

These two dogmas create persistent puzzles that are often ignored. The list below is by no means exhaustive.

The Expert-Novice Paradox: Why do experts and novices see different levels of complexity in the same system? If complexity emerges from epistemic coupling, then of course they enact different complexities. They have different capacities for distinction-making.

The Measurement Tool Problem: Why do different measurement tools reveal different complexities? If complexity is relational, then different tools necessarily enact different varieties by making different distinctions possible.

The Scaling Paradox: Why does complexity seem to change when we shift between levels of analysis? Different levels of observation necessarily enact different complexities.

The Intervention Prediction Failure: Why do interventions designed based on complexity mappings so often produce unexpected results? Because any intervention changes the observer-system relationship itself. This makes prediction inherently problematic.

These puzzles persist not because of inadequate methods. They persist because they are generated by the assumptions we bring to complexity science.

Beyond the Dogmas: Epistemic Coupling as Transaction

What if we abandoned these dogmas entirely? Instead of asking “How complex is this system?” we might ask: “How does complexity emerge from the recursive interaction between this knowing system and its environment?”

This shifts focus from measuring pre-existing complexity to understanding epistemic coupling: the dynamic process through which systems and environments mutually specify each other through ongoing interaction. Complexity becomes not a property to be measured but a relationship to be understood.

This framework synthesizes insights from three traditions.

Dewey’s Transaction Theory: Instead of separate entities that interact, we have organism-environment as a unified field. The “stimuli” and “responses” in Ashby Space are abstractions from this ongoing transaction.

Maturana and Varela’s Structural Coupling: Living systems do not represent an environment but enact their world through their structure. The coupling between system and environment is the source of complexity.

Ashby’s Cybernetics: Before the Law of Requisite Variety can even apply, an observer must create variety through distinction-making. The law cannot operate on raw reality. It requires an observer to carve up the world into meaningful categories.

This reinterpretation transforms Ashby’s contribution from a focus on objective regulatory mechanisms to an emphasis on the active and constitutive role of the knowing system in shaping the very “variety” it then seeks to regulate. Rather than discovering pre-existing variety that must be matched, systems participate in enacting the complexity they face through their own distinction-making capacities.

The Chart as Tool, Not Map

This does not mean frameworks like Ashby Space are useless. But we need to understand them differently. Not as maps of objective complexity regimes but as tools for thinking about epistemic coupling processes.

Used this way, the framework serves as what Wittgenstein called a ladder: something we climb up to reach a new perspective, then kick away once we no longer need it. It helps us think more clearly about complexity without pretending to be complexity itself.

Final Words: Complexity as Participation

The chart looked so clean and objective at first. But complexity is messier, more relational, and more participatory than any representation can capture. That is not a limitation to be overcome. It is the very nature of what we are trying to understand.

Understanding complexity as epistemic coupling opens different possibilities. For designing systems that can remain coherent while staying open to surprise. For cultivating capacities for distinction-making that can expand as we encounter new varieties. For taking responsibility for the complexities we participate in creating.

Heinz von Foerster understood this when he formulated his ethical imperative. “Act always so as to increase the number of choices”. If we are responsible for constructing our realities through our distinctions, then we are also responsible for ensuring that others can participate in that construction.

The challenge is not to model the world but to participate in it more wisely. That participation depends fundamentally on understanding that complexity emerges from epistemic coupling: the recursive interaction between knowing systems and their domains of inquiry. This makes us responsible not just for our actions but for the worlds those actions help bring forth.

I will finish with wise words from Quine:
No statement is immune to revision.

Stay curious and Always Keep on Learning…

The Road Not Taken: What It Means to Enact

Robert Frost’s “The Road Not Taken” was one of my favorite poems as a child growing up. It was taught in my high school English classes. In today’s post, I am exploring the idea of enacting, and I will use Frost’s poem as the background.

When we say we are enacting, it means that meaning is not something fixed or “out there” waiting to be discovered. Meaning is constructed in the very process of engagement with a situation. It arises through our participation, through the way we bring ourselves into the world. To enact is to bring forth a situation, to make it real to us, not as an abstract idea but as a lived, embodied experience. It is not about observing passively. It is about being implicated in the situation, shaping it as much as it shapes us.

The Walk in the Woods:

Consider what happens when you walk in a forest. The conventional view suggests that trees, paths, and birdsong exist as objective features that you then perceive and interpret. But from an enactivist perspective, the very capacity to distinguish “tree” from “not-tree,” “path” from “not-path,” emerges through your embodied history of interaction. Your visual system, shaped by evolution and development, structurally couples with light patterns in ways that bring forth the phenomenon we call “seeing a tree.” The tree as a meaningful entity and you as a perceiver of trees co-emerge through this coupling. Neither exists independently of this relationship.

The phrase “walking in nature” does not carry its own meaning. The rustle of leaves, the birdsong, the way sunlight falls on the path are not simply sensory inputs. Their significance arises through my participation. My posture, my attention, my breathing, the way I anticipate each step all enact the experience. I am not a detached observer. I am a co-creator of the moment.

When we say we are enacting, we mean something far more nuanced than simply interpreting or giving meaning to neutral objects. Enaction means that the very distinctions we perceive, the boundaries between self and world, the categories through which we understand experience, all emerge through our embodied coupling with our environment. We do not discover meaning that exists independently, nor do we project meaning onto a meaningless world. Instead, meaning and world co-arise through the history of our embodied interactions.

Now imagine that the path splits. Two trails stretch out before me. One appears more traveled, familiar, comfortable. The other appears less worn, less certain. But what does it mean to take the path that seems “less traveled”? The significance of this “less traveled” quality does not exist independently of my participation. It is inseparable from the observer. The meaning is enacted because I am there, making choices, paying attention, and engaging with the path in a particular way.

Beyond the Woods:

This subtle interplay appears everywhere. In traffic, for example, we often think we are passive observers, noticing congestion or delays as if they were external facts. Yet we are part of the system we observe. Our braking, accelerating, and positioning contribute to the very dynamics we perceive. Meaning, order, and significance arise through participation, through enactment, not detached observation.

We are never outside the system looking in. We are always already coupled, always already participating in the ongoing emergence of the world we inhabit. From an embodied mind perspective, cognition is not about representing a pre-given world inside our heads. It arises in the interaction between body, brain, and environment. Perception, action, and attention are inseparable. They shape and are shaped by the world we inhabit. Meaning is not discovered. It is enacted.

What Frost Shows Us:

“The Road Not Taken” is often misread as a celebration of choosing the unconventional path. But look closely at what the poem actually says. When the speaker encounters two roads in a yellow wood, he notes that one path “was grassy and wanted wear,” seeming to suggest it was less traveled. But then comes the crucial admission:

Though as for that the passing there
Had worn them really about the same.

The paths were equivalent. There was no meaningful difference between them at the moment of choice. The speaker even acknowledges this: “And both that morning equally lay / In leaves no step had trodden black.” Both paths were equally untraveled that morning.

So where does the meaning of taking “the road less traveled” come from? It emerges in the final stanza, where the speaker projects himself into the future:

I shall be telling this with a sigh
Somewhere ages and ages hence
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

Notice the verb tense: “I shall be telling this.” The speaker is anticipating the story he will tell, not describing what actually happened. The meaning of the fork is not fixed in the woods. It is enacted in memory, in the narrative he constructs to hold his world together. The road becomes “less traveled” only because he enacted it as such, giving shape to his experience after the fact.

Taking The Road of Enaction:

To enact is to participate in bringing forth the very world you inhabit. This is not about construction or interpretation in the usual sense. Construction implies a pre-existing subject who builds or creates something external to themselves, preserving the subject/object dualism that enactivism explicitly rejects. In enaction, both the “constructor” and what is “constructed” emerge simultaneously. There is no independent agent doing the constructing. The very capacity to be an agent emerges through the enactive process itself.

Rather than construction or representation, enaction involves reciprocal specification. World and mind co-specify each other through embodied interaction. Your perceptual world is not a representation of an independent reality, nor is it constructed from neutral materials, but the ongoing result of your coupling with your environment.

Every step you take participates in enacting the ground as walkable. Every glance brings forth the visible world through the coupling of eye and light. Every breath participates in enacting the boundary between organism and environment. These are not passive observations of pre-existing features but absorbed engagements in the ongoing emergence of your lived world.

The forest path that splits before you exists neither as pure objectivity nor pure subjectivity. It emerges through the structural coupling between your embodied capacities and the environmental configuration. Your choice to go left or right participates in enacting not just your path but the very nature of the fork as a meaningful juncture.

This is why the “road less traveled” becomes meaningful only through its enactment. Frost did not discover an objective fact about the path’s traffic patterns. Through walking, remembering, and narrating, he participated in bringing forth a world in which one path could become “less traveled” than the other. The poem does not describe a choice between pre-existing options but demonstrates the enactive process through which organism and environment co-specify each other.

Final Words:

Meaning is not inherent in the road, nor even in the moment of choice. It arises in the way we live, remember, and retell the path we have walked. We do not stand outside the system, weighing timeless truths. We are always already part of it, enacting coherence, sometimes even reshaping the past, in order to make sense of the present.

The “road less traveled” is not an objective fact. It is an enactment whose significance comes from our participation, our stories, our presence. The poem demonstrates how we bring forth meaning through the very act of engaging with our choices and telling ourselves which road we took. In recognizing this process, we glimpse our profound capacity as active participants in shaping the reality we inhabit. Every moment of attention, every step forward, every story we tell becomes an act of creation in the ongoing emergence of our world.

Stay curious and Always Keep on Learning…

Interested readers can check out the NLM podcast version here – https://youtu.be/rUEyiNEj4yE

The Art of ‘Somewhat’:

In today’s post, I am exploring Ashby’s Law of Requisite Variety and why it might be both more necessary and more slippery than most presentations suggest. Ashby’s Law might not be just another management principle. It could be a window into how we navigate complexity when the world refuses to be pinned down by our desire for certainty.

Stafford Beer once wrote something that might be more profound than it first appears:

Instead of trying to specify a system in full detail, specify it only somewhat. You can then ride on the dynamics of the system in the direction you want to go.

That word ‘somewhat’ could be carrying more weight than we realize. It might signal a kind of intellectual humility that most management theories avoid. It suggests that our relationship with complex systems is not one of mastery but of skillful navigation. Perhaps it is more like learning to surf than trying to control the ocean.

This brings us to Ashby’s Law of Requisite Variety, which is a simple statement: “Only variety can absorb variety.” It looks clean and mathematical, the kind of principle that promises hard, tangible answers in a soft world. We need to attenuate excess external variety so that we focus only on the relevant variety, and we need to amplify our internal variety so that we can respond adequately to the external variety.

Let us look at the nuances of this law more.

Ashby’s Law tells us that a regulator can only control outcomes it can distinguish and respond to. If environmental disturbances exceed the regulator’s response capacity, some disturbances will pass through uncontrolled. This is presented as a logical necessity. It appears as inevitable as gravity.

And in one sense, it is. Given any finite set of regulatory responses, there will always be environmental states that cannot be adequately handled. Mathematics seems to be unforgiving. The logic seems to be airtight.

But mathematics operates within assumptions, and assumptions are where humans enter the picture. Most presentations of Ashby’s Law miss this. The law is simultaneously necessary and observer-dependent. It might be a constraint that applies absolutely, but only within the frames we construct.

The Indefinite World:

There is a distinction that might change how we see everything. The external variety is not infinite. It is something else entirely. It is indefinite.

Infinite means without limits. It is a mathematical concept that extends forever. Indefinite here means without defined limits. It requires someone to do the defining.

This might not be academic hairsplitting. It could be the key to understanding why Ashby’s Law feels both rock-solid and frustratingly slippery to grasp.

The world contains countless differences, but only some matter for any given purpose. Gregory Bateson captured this. “Information is a difference that makes a difference.” The same principle applies to variety. Variety is not a raw count of states “out there.” It is a relational property that emerges when an observer draws distinctions that serve a purpose.

Think about managing a parking lot as an example. How many “states” might this system have? If you only care about full or empty, there are two states. If you track individual spaces, there might be hundreds. If you include weather patterns, time of day, driver behavior, and maintenance schedules, there could be thousands. The world contains all these potential distinctions simultaneously. But variety for control purposes might depend entirely on which distinctions you choose to make matter.
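The parking-lot example can be sketched directly. The lot data and the three framings below are hypothetical, chosen only to show that the state count belongs to the framing, not to the lot itself:

```python
# Hypothetical parking lot and three observer framings.
lot = {"spaces": 200, "occupied": 143, "hour": 17, "weather": "rain"}

framings = {
    # full / not full
    "binary": lambda lot: lot["occupied"] == lot["spaces"],
    # exact occupancy count
    "per_space": lambda lot: lot["occupied"],
    # occupancy coupled with time of day and weather
    "contextual": lambda lot: (lot["occupied"], lot["hour"], lot["weather"]),
}

# Variety = how many states each framing COULD distinguish,
# assuming 24 hours and 3 weather types {rain, sun, snow}.
variety = {
    "binary": 2,
    "per_space": lot["spaces"] + 1,
    "contextual": (lot["spaces"] + 1) * 24 * 3,
}

for name in framings:
    print(f"{name:10s}: state now = {framings[name](lot)!r}, "
          f"distinguishable states = {variety[name]}")
```

The same lot supports a variety of 2, 201, or 14,472 depending on which distinctions the observer chooses to make matter; nothing about the asphalt changes between framings.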

This creates a fundamental tension. Ashby’s Law holds as a logical necessity. If your frame ignores differences that turn out to matter, your system will fail. But the application of the law depends entirely on how you frame the situation.

When Frames Collide with Reality:

The COVID-19 pandemic might have given us a natural experiment in how different frames handle the same underlying reality.

Some governments approached the crisis with what we might call a narrow medical frame. The pandemic was fundamentally a healthcare capacity problem for them. The focus was on hospital beds, ventilators, testing infrastructure, and transmission control. Their variety management attempted to attenuate viral spread while amplifying medical response capacity. From this perspective, lockdowns might be seen as a straightforward attenuation strategy, and field hospitals as variety amplification.

This frame had a certain elegant simplicity. The problem was clearly defined, the metrics were measurable, and the interventions had precedent in public health history.

But other governments adopted what we could call a broad socio-economic-health frame. From this perspective, the pandemic was not just a medical crisis. It might be a system-wide disruption that threatened social cohesion, economic stability, and political legitimacy simultaneously. Their variety management involved coordinated interventions across multiple domains. Public health measures, economic support packages, mental health services, educational continuity, and social solidarity initiatives.

Both approaches were tested against the same underlying reality. The virus did not care about our framing preferences. But the broader frame generally proved more viable because it might have acknowledged more of the variety that actually mattered for maintaining social stability during the crisis.

The narrow medical frame was not wrong in many regards. It might have been incomplete. It failed to account for economic disruption, compliance fatigue, mental health deterioration, and social unrest. When these unacknowledged varieties of disturbance began overwhelming the system, control failures cascaded in directions the frame could not anticipate.

This might be where Ashby’s Law reveals its true nature. The law did not prescribe which frame to use. It simply ensured that inadequate frames would reveal themselves through control failures.

The Observer Inside the System:

Here is where the story might deepen into something more complex than most management textbooks are comfortable acknowledging.

Traditional cybernetics, what we might call first-order cybernetics, treats the observer as outside the system being controlled. From this perspective, variety could be objective. You count the states, build matching responses, and apply the law mechanically.

But second-order cybernetics recognizes something that might be more unsettling. The observer is always inside the system. The regulator is part of what is being regulated. Variety is not given. It might be constructed through distinctions that reflect purpose, context, and the observer’s own limitations.

This might mean Ashby’s Law operates at two levels simultaneously. At the operational level, your responses must match the variety you have acknowledged as relevant. If you identify ten types of disturbances, you might need at least ten different responses. This could be the familiar version of the law.

But at a deeper level, your capacity to make useful distinctions must itself be adequate to the situation’s demands. If your frame excludes crucial differences, operational control might fail regardless of how well you handle the differences you do recognize.

The law does not fail when you frame poorly. Your framework fails. The law simply describes what happens when your variety is inadequate, regardless of whether that inadequacy comes from poor responses or poor framing.
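A toy sketch can make the two levels concrete. The disturbance and response names below are invented for illustration: the dictionary represents the frame, and anything outside it cannot be regulated, however good the listed responses are:

```python
# Operational level: one response per acknowledged disturbance type.
# Framing level: disturbances outside the frame cannot be matched at
# all, regardless of how well the acknowledged ones are handled.
# All names here are hypothetical.

responses = {
    "demand_spike": "add_capacity",
    "supply_delay": "use_buffer_stock",
    "staff_shortage": "reassign_teams",
}

def regulate(disturbance):
    if disturbance in responses:
        return responses[disturbance]  # variety matched
    return "CONTROL FAILURE: disturbance outside the frame"

print(regulate("demand_spike"))   # add_capacity
print(regulate("cyber_attack"))   # CONTROL FAILURE: disturbance outside the frame
```

The failure branch is the law operating at the second level: no amount of tuning the three listed responses helps with a disturbance the frame never acknowledged.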

Back to Riding the Dynamics:

This brings us back to Beer’s insight about specification. If the world might be indefinite rather than infinite, if variety could depend on the distinctions we draw rather than existing independently, then total specification becomes not just impossible but potentially counterproductive.

The goal is not to capture all possible variety in advance. It is to develop the capacity to recognize when your current framing is failing and to generate alternatives before failure becomes catastrophic.

This reflexivity can be viewed as a type of variety amplification. Instead of just amplifying operational responses, we can amplify our capacity to reframe situations when current framings prove inadequate.

What might this look like in practice? Running scenario exercises that stress-test your assumptions. Monitoring for weak signals that could indicate emerging types of disturbance your current frame does not recognize. Institutionalizing checkpoints where teams question basic premises. Building relationships with people who might frame problems differently.

These are not just theoretical exercises but insurance policies against the kind of frame failure we saw in the early pandemic response.

The Paradox of Precision:

Here is something that might bother us about how Ashby’s Law is usually presented. It gets dressed up in mathematical clothing: formal models, game theory, Bayesian analysis, and the like. These might make the approach feel objective and precise.

But precision might be exactly what we need to be suspicious of. Those models feel rigorous because, once you set the assumptions, the math is unforgiving. But who defines the players in your game theory model? Who sets the priors in your Bayesian analysis? Who decides which payoffs matter?

Those are framing decisions. Ashby’s Law might apply before your math begins. If your framing excludes relevant variety, even perfect calculations could fail when they meet reality.

The law might remind us that objectivity begins after assumptions are set, but assumptions are never neutral. They could reflect purpose, context, and the inevitable limitations of the framers.

Living with Indefiniteness:

All this might be making the reader wonder… Are we condemned to relativism, where any frame could be as good as any other?

The answer, in my opinion, is: not quite. The test of a frame might not be whether it is objectively true. That standard is not necessarily available to us. The test is whether it enables viable action in pursuit of purposes we care about.

This provides a practical discipline. You cannot retroactively change your frame to explain away failure. Either your original frame enabled adequate control or it did not. The test could be prospective viability, not post-hoc rationalization.

And frames do get tested. Reality pushes back. Systems fail in ways that might reveal the blind spots in our framing. The pandemic was particularly instructive because it tested everyone’s frames simultaneously against the same underlying dynamics.

The countries that performed best were not necessarily those with the most resources or the smartest experts. They might have been those with the most adaptive framing capacity: the ability to recognize when initial approaches were not working and to generate alternatives, and the ability to use multiple approaches instead of adhering to one or the other. Variety gives you the grip needed to grasp and manage the situation.

The Art of Somewhat:

Which brings us full circle to Beer’s notion of specifying “only somewhat.”

This might not be about being vague or uncommitted. It could be about building systems that can evolve their own specifications as they encounter unexpected variety. It might be about designing for frame flexibility rather than frame optimization.

In practical terms, this means designing feedback loops that can detect when current framings are failing. Building redundancy not just in operational responses but in framing capacity. Distributing the work of making distinctions across multiple agents. Creating safe spaces for questioning fundamental assumptions before those assumptions might lead to failure.

Most importantly, it means accepting that our relationship with complexity could be more like navigation than engineering. We might influence direction, but we cannot control destination with the precision our engineering metaphors suggest.

The question is not whether we can master complexity. The question is whether we can learn to move skillfully within it, specifying only somewhat and riding the dynamics in directions we care about.

That might not be a limitation of Ashby’s Law. That could be its gift. It might free us from the impossible burden of total specification while preserving the discipline of logical constraint. It invites epistemic humility, because we can never have complete information, especially when the external world has indefinite variety and is dynamic.

Final Words:

Ashby’s Law teaches us something deeper than just variety management. It shows us how to live with indefiniteness without abandoning the pursuit of viable action. In a world that refuses to hold still for our theories, the art of somewhat might be the most important skill we can develop.

The law is neither a rigid formula nor empty relativism. It is a constraint that operates within human-constructed frames, testing whether those frames prove adequate for achieving intended purposes. Its power lies not in prescribing solutions but in revealing inadequacies. It forces us to confront the relationship between our conceptual maps and the territories we are trying to navigate.

Stay Curious and Always Keep Learning…

The -isms of a Man Who Rejected -isms:

In today’s post, I am exploring one of the most fascinating aspects of Heinz von Foerster’s work: his complete rejection of philosophical labels and -isms. Von Foerster, the Austrian-American physicist and cybernetician, in his later years did not want to be pinned down by any single philosophical position. This was not philosophical indecision but a carefully crafted stance that reflected his deepest insights about observation, responsibility, and the nature of knowledge itself.

Von Foerster held that he was an -ist only of the -isms he could laugh at. While there is no definitive record of this exact phrase, in my opinion it perfectly captures his approach to philosophical thinking. He would only commit to philosophical positions that he could maintain with lightness and humor, positions that did not take themselves too seriously. This prevented his thinking from becoming rigid or dogmatic. He treated thought as an ongoing exploration, not a fixed doctrine.

To understand why this matters, let me walk through the major -isms that von Foerster consistently stepped around, and show you how his alternative approach offers something far more powerful than any single philosophical position.

Objectivism – The View from Nowhere:

Objectivism claims there is a world “out there,” independent of us, that we can know through careful observation and measurement. It insists on a sharp separation between the observer and what they observe.

Von Foerster had no patience for this illusion. As he put it:

Objectivity is the delusion that observations could be made without an observer.

This was not just philosophical wordplay. Von Foerster understood something deeper about responsibility:

Objectivity is one of the great tricks to get rid of responsibility.

When we claim to simply observe “what is,” we avoid acknowledging our role in bringing forth what we see. This is particularly important in his work on self-organizing systems. As he demonstrated, “There are no such things as self-organizing systems!” What we observe is always a system in close contact with an environment, in a state of perpetual interaction. The observer and the observed emerge together.

This insight reverses the logic of classical science. Instead of trying to eliminate the observer, von Foerster insisted that the observer must be included in the description of the observing system.

Subjectivism – The Prison of the Self:

You might think that if objectivism is wrong, then subjectivism must be right. Subjectivism is the idea that reality is purely a personal interpretation. Von Foerster rejected this view as well.

Von Foerster explicitly refuted the notion that reality is solely the product of an individual’s imagination. He used what philosophers call a “reductio ad absurdum” argument to show the logical impossibility of pure subjectivism.

As he put it:

If I assume that I am the sole reality, it turns out that I am the imagination of somebody else, who in turn assumes that he is the sole reality.

This paradox is easily resolved by postulating the reality of the world in which we happily thrive.

He also addressed the idea of isolated experience directly. When people talk about being alone with their thoughts, von Foerster pointed out: “The man alone? He would just have to remember that he is only alone when compared to others.” Even the concept of being “alone” requires the existence of others as a reference point.

Subjectivism treats knowledge as trapped inside individual minds. But von Foerster understood that knowledge emerges through interaction, not isolation. When discussing cognition, he clarified that subjectivism fundamentally misunderstood what knowing is. As he explained, your nervous activity is just your nervous activity and, alas, not mine. Knowledge and information cannot simply be “passed on” as commodities from one person to another because they are processes of individual nervous systems.

Instead, von Foerster showed us that Reality appears as a consistent reference frame for at least two observers. This is crucial. Meaning does not reside in isolated subjects but arises between them, through their coordinated actions and mutual orientation.

This insight connects to what von Foerster called the fundamental structure of communication. Following Maturana’s theorem that “Anything said is said by an observer,” von Foerster added his corollary: “Anything said is said to an observer.” These two propositions establish what he called a nontrivial connection between three concepts: first, the observers; second, the language they use; and third, the society they form by the use of their language. He compared this to the chicken, egg, and rooster problem: You need all three in order to have all three.

Human consciousness, including self-awareness and self-reflection, emerges from this social foundation. As von Foerster explained: “Self-awareness and self-reflection arise in ‘languaging’, which is necessarily a social affair.” We are conscious, he argued, because “we ‘know with’ ourselves” precisely because we “‘know with’ others.” This awareness of mutual interdependence is “the root of conscience.” The “other” is what makes us a “self.”

Even the objects we perceive are not pre-existing entities but what von Foerster called “tokens for eigenbehaviors”. They are “indications of processes” that arise from our interactions. “In the process of observation, we interact with ourselves and with the world to produce stabilities that become the objects of our perception.”

Von Foerster distinguished between “the reality” (as confirmed by independent observations) and “a reality” (as constructed through correlations). He preferred the latter approach: “My sensation of touch in correlation with my visual sensation generate an experience that I may describe by ‘here is a table.'”

This is why subjectivism fails. There is no pure inner experience independent of interaction with others and environment. The subject and the world they know arise together through recursive processes of coordination. As von Foerster put it: “without language and outside language there are no objects, because objects only arise as consensual coordinations of actions in the recursion of consensual coordinations of actions that languaging is.”

Relativism – The Collapse of Commitment:

Many people assume that if you reject both objectivism and subjectivism, you must be a relativist. A relativist is someone who thinks all views are equally valid. Von Foerster avoided this trap too.

He did not believe all truths were equal. He believed we are responsible for the truths we construct. This led to his famous ethical imperative:

Act always so as to increase the number of choices.

This was not tolerance born from “anything goes” thinking. It was responsibility born from understanding that we are the ones drawing distinctions, and we must accept responsibility for what those distinctions allow and exclude.

Von Foerster’s approach to education illustrates this perfectly. He distinguished between “legitimate questions” (questions to which the answers are unknown) and “illegitimate questions” (questions to which the answers are already known). Some ways of questioning are simply more generative than others. A relativist might say all questions are equally valid. Von Foerster insisted that only legitimate questions open up new possibilities for learning and growth.

The Constructive Alternative – Cybernetic Constructivism:

So what was von Foerster’s alternative? If we must give it a label, we might call it cybernetic constructivism. But he would probably laugh at that label too.

His key insight was this: “The environment contains no information. The environment is as it is. Information is a cognitive function.”

This leads to a profound recognition. Meaning does not exist “out there” waiting to be discovered, nor is it trapped “in here” within individual minds. It emerges through the recursive process of observation itself.

As von Foerster put it: “If you want to see, learn how to act.”

This brings us to his most challenging statement: “I shall act always as if I were the creator of the world I perceive.”

This is not solipsism, the idea that only your mind exists. It is about responsibility. The world we bring forth is shaped by our choices, our distinctions, our attention, our participation with others.

The Ethics of Observation:

This is where von Foerster’s approach becomes deeply ethical. If there is no privileged position outside the system, then every observer is responsible for their constructions. There is no place to hide.

Von Foerster expressed this beautifully: “At any moment we are free to act toward the future we desire.”

You are the Copernican revolution. The observer is not the center of the universe, but every observation necessarily has them at its center. That realization is not humbling or arrogant but liberating. Because if you are responsible for your world, you are also free to change how you perceive and engage with it.

This is the deepest insight of von Foerster’s anti-philosophy. In taking responsibility for our constructions, we discover our freedom to construct differently. The future “will be as we wish and perceive it to be” not because we can impose our will on reality, but because future and perceiver arise together through the choices we make and the possibilities we keep open.

Communication as Dance:

Von Foerster understood communication differently too. As he explained:

“Language for me is an invitation to dance… When we are talking with each other, we are in dialogue and invent what we both wish the other would invent with me.”

We are not separate minds trying to transmit fixed meanings, but partners in an ongoing creative process. This view of communication aligns perfectly with his rejection of fixed philosophical positions. Every conversation is an opportunity to create something new together.

Final Words:

Here we encounter the deepest irony. In trying to understand von Foerster’s rejection of -isms, we risk creating the very thing he warned against: a fixed doctrine, a systematic position, an -ism of anti-isms.

But perhaps this is exactly the point. We are free to choose our -ism, and in choosing, we become responsible for what that choice enables and forecloses. Von Foerster would not exempt us from this responsibility, not even when engaging with his own work.

Von Foerster’s approach embodies what we might call epistemic humility. He understood that our knowledge is always partial, always constructed from a particular perspective, always open to revision. This humility does not lead to paralysis but to a productive pluralism. Multiple ways of knowing can coexist without one needing to eliminate the others. His ethics of observation, always acting to increase choices, becomes particularly relevant when our information systems often work to narrow them.

This is why von Foerster’s thinking remains provisional. He demonstrated that thought need not crystallize into fixed positions. We can remain responsive to what emerges in the very act of thinking. This provisionality is not philosophical indecision but intellectual courage. It is the willingness to stay with questions that matter more than the answers they might produce.

In our current moment of polarized certainties, von Foerster reminds us that we are not spectators of the world, but co-creators of it.

Von Foerster would likely laugh at being treated as the final word on anything. His laughter would remind us that the best thinking often begins where our certainties end. And if that insight threatens to become an -ism? Von Foerster might simply smile and remind us that we are only -ists of the -isms we can laugh at.

Stay curious and always keep on learning…

The Right Thing and the Right Reason:

In today’s post, I am exploring the notion of “doing the right thing.”

We encounter this expectation everywhere in workplaces, personal relationships, and civic life. The phrase appears in mission statements, performance reviews, and everyday conversations. At first glance, it feels simple and reassuring. Of course we should do the right thing.

In regulated industries, this mantra becomes even more pronounced. Every procedure, every record, and every audit echoes that expectation. It appears in training sessions, quality policies, and compliance frameworks.

I want to add an important layer: do the right thing for the right reason.

The distinction may seem subtle, yet it initiates a reflexive turn. It moves us from mechanical compliance to ethical responsibility.

A statement by itself carries no value. “Do the right thing” means nothing until someone makes it their own. The phrase appears to describe a fact, but it actually expresses a value judgment. Value enters only when a person acts from conviction, not from blind obligation. The second part, “for the right reason,” is where responsibility begins. It asks a crucial question about why I am doing this. That question transforms an empty slogan into a deliberate act grounded in personal values.

If I follow orders or check boxes without reflection, I might appear to do the right thing. But in truth, I have surrendered ownership. From the perspective of cybernetic constructivism, meaning is not handed down from the outside. It emerges within the observer. As Heinz von Foerster showed in his work on observing systems, we do not simply receive reality but construct it through our interactions and decisions.

When we speak of “the right thing,” the phrase suggests precision, as if a decision could fit reality without error. In practice, this rarely happens. Thought and reality belong to different domains. A decision formed in thought appears complete because ideas do not encounter resistance until they are acted out. The flaws surface only when they meet real conditions.

This is the illusion of completeness in the right thing, the comforting belief that something can be fully correct. It persists because thought gives us a sense of closure that reality cannot guarantee.

Here is where the phrase “for the right reason” matters. It does not make the decision perfect; it acknowledges that it never was. Adding this second part challenges the belief in absolute correctness and invites humility about what we can know. It says you cannot guarantee the outcome, but you can own the reasoning. That ownership gives the action its integrity. The emphasis shifts from claiming completeness to accepting responsibility. This matters because it prevents us from confusing the clarity of thought with the complexity of life.

I want to focus on this more with a question: When the time comes, can I do the right thing? This question seems simple, but it hides a deeper issue. What exactly is the right thing? We often talk as if the right thing exists “out there,” waiting for us to discover it, a fixed fact like the boiling point of water. But this assumes that what appears complete in thought will remain complete in practice. That assumption is an illusion.

In many situations, the right thing is not given. It is what von Foerster calls an undecidable.

The Nature of the Undecidable:

Von Foerster introduced this term for questions that cannot be answered by logic, rules, or computation alone. An undecidable resists algorithmic resolution. Regulations provide structure and consistency, and they are essential. Yet they do not eliminate undecidables. They never will.

Undecidables exist because the variety of real-world situations far exceeds what any rulebook can anticipate. In cybernetics, variety means the number of possible states a system can take. The more possible situations, the greater the variety. And the world does not just throw edge cases at us. It quite often generates entirely new scenarios. Each innovation, each unique user context, and every unexpected failure mode creates conditions no standard procedure can fully capture.

No rulebook, whether corporate policy or government regulation, can provide ready-made answers to every question. Rules may reduce some complexity and provide crucial guidance, but they cannot close the gap between their finite scope and the indefinite creativity of reality. That gap is where undecidables live, and where human judgment becomes indispensable.
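The gap where undecidables live can be sketched as a finite lookup that falls through to human judgment. The rule names here are hypothetical and not drawn from any real regulation:

```python
# A finite rulebook confronting indefinite situations.
# The entries are invented for illustration.

rulebook = {
    "documented_defect": "follow the established procedure",
    "labeling_error": "initiate the documented assessment",
}

def decide(situation):
    if situation in rulebook:
        return rulebook[situation]
    # The undecidable: no rule applies, so a human must own the call.
    return "undecidable: requires human judgment and responsibility"

print(decide("labeling_error"))
print(decide("novel_ai_failure_mode"))   # falls into the gap
```

However many entries are added, the fallback branch never disappears; it can only be pushed around.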

Von Foerster put it clearly:

“Only those questions that are in principle undecidable, we can decide.”

This is not a logical contradiction. It is an ethical imperative. The undecidable is not an error to fix or a loophole to close. It is an invitation to take responsibility. And responsibility cannot be delegated to systems or rules.

Many people resist this truth. We want the comfort of certainty. We prefer to believe the right thing exists as a fixed point, like a law of physics. If that were true, we would not bear the weight of decision. But ethics begins where algorithmic certainty ends. When we say “Just tell me the rule,” we try to trade agency for comfort. And in doing so, we risk betraying the very principles we claim to uphold.

The uncomfortable insight is this: the right thing has validity only as something we decide and own.

A Practical Question:

In the medical device industry, when I encounter an undecidable, my first question is always:

“How does this help or hurt the end user?”

That question brings the undecidable into focus. Regulations cannot cover every nuance. They can only guide. The decision remains mine. The responsibility cannot be outsourced.

Doing the right thing for the right reason is not about perfection. It is not about moral grandstanding. It is about intentionality, the choice to act from internal commitment rather than external command. It is the courage to decide when certainty is impossible and when existing protocols do not apply.

Von Foerster understood this deeply. When he spoke of undecidables, he was not describing a flaw in logic or a failure of system design. He was describing the essence of ethical life: that there are decisions no one can make for us. This insight formed the heart of his second-order cybernetics, which places the observer and their responsibility at the center of any system.

The Ladder We Must Throw Away:

Here I must acknowledge an irony. In adding the phrase “for the right reason,” I am still using the word “right.” By doing so, I risk introducing the very assumption I wanted to question: that rightness exists as something fixed and pre-given. This reflects a pattern that runs throughout the article: language itself carries the very complexities we grapple with as we attempt to grasp or cope with the external world.

This is where Wittgenstein helps. In the Tractatus Logico-Philosophicus, he wrote that the propositions in his book were like a ladder. Once you have climbed it, you must throw it away. These propositions were tools, not eternal truths. They guide you to a vantage point, and then you move beyond them.

The phrase “do the right thing,” and even my expanded version, “do the right thing for the right reason,” works the same way. These are useful as orienting principles in regulated industries. They provide direction in moments of uncertainty. But if we cling to them as ultimate truths, we miss their purpose.

Like Wittgenstein’s ladder, their role is pragmatic and temporary. They guide us to a place where we can make responsible decisions. Once we understand that responsibility cannot be outsourced to a phrase or a rule, we can discard the ladder, not by abandoning the principle, but by letting go of the illusion that the phrase absolves us of thinking.

The deeper insight is this: the right thing does not exist as a given. It exists as something we must decide. And that decision, by its very nature, will always belong to us.

The next time you hear the phrase “do the right thing,” pause and ask:

What undecidable am I facing, and will I have the courage to decide it for the right reason, knowing that even the word “right” is only a ladder?

Final Words:

The tension between following rules and taking responsibility is not a flaw to fix. It is a fundamental condition of ethical life in complex systems. Von Foerster’s cybernetics teaches us that we cannot escape this tension by creating better rules or more comprehensive procedures. The variety of situations we face will always exceed the variety our systems can anticipate.

This does not diminish the value of regulations. They provide the backbone of responsible practice and create the conditions for ethical decisions. But they cannot substitute for judgment when the genuinely novel situation arises.

The courage to decide undecidables belongs to every professional who encounters the limits of the rulebook. When we recognize that meaning emerges within the observer, we are called to decide thoughtfully, with full awareness of our role in shaping the meaning of our actions.

This is neither comfortable nor easy. But it is the price of genuine ethical responsibility. The ladder remains useful until we no longer need it. The goal is to reach the place where we can make decisions worthy of the trust placed in us.

Always keep on learning…

If you enjoyed this post and find my work valuable, I would appreciate your support. You can explore more of my ideas in my latest book, Second Order Cybernetics, Essays for Silicon Valley, hard copy available at the Lulu Store.

A Good Enough Post:

In today’s post, I am exploring the notion that viability depends on our capacity for action, and that this capacity may not entirely rely on having a perfect grasp of “Truth.” This possibility, drawn from evolutionary theory, invites us to reconsider a deeply rooted assumption in human thought: that knowledge aims to reflect the world as it is. Perhaps organisms do not carry mirrors of an objective environment. Perhaps they generate workable patterns that allow action. If so, truth in the sense of full correspondence might be not only unnecessary for survival but impossible to achieve.

This shift from truth to adequacy might be more than a semantic difference. It suggests we could reconsider how perception, cognition, and action evolve under the pressure of complexity. Our nervous systems may not have emerged to catalog every detail of reality. They might have emerged to enable viable engagement. They filter, reduce, and transform. They make the unmanageable manageable. This economy of attention could be what allowed life to persist in an environment whose complexity always exceeds the capacity of any single organism.

The Evolutionary Logic of Selective Attention:

The earliest organisms had comparatively simple structures. Their survival depended on detecting a few vital differences: light and dark, motion and stillness, hunger and satiation. These differences were not representations of reality in its full richness. They were pragmatic distinctions, selected by evolution because they mattered for survival.

As ecosystems diversified, so did the organisms within them. Greater complexity in the environment favored organisms with richer internal structures. These structures allowed them to absorb more variety and generate more flexible responses. But this expansion had limits. No organism could ever match the full complexity of its environment. Every adaptation remained selective.

Yet evolution’s relationship with cognitive economy appears more nuanced than simple efficiency maximization. Many organisms maintain seemingly “wasteful” capacities (elaborate plumage, complex social behaviors, or redundant sensory systems) that prove crucial during rare but catastrophic events. This apparent contradiction might reveal something deeper. Evolution does not eliminate selectivity; it shapes what gets selected and how. The peacock’s tail represents a different kind of cognitive economy, one that trades metabolic efficiency for reproductive advantage. Even redundancy involves choices about what to duplicate and what to ignore.

Here we see why the word “better” is always contextual. An organism appears better only in relation to its ecological niche and temporal horizon. There may be no universal scale of improvement. Adequacy remains local, contingent on the demands of the situation, and provisional across time scales.

The Law of Requisite Variety and Regulatory Challenges

This principle finds a formal expression in W. Ross Ashby’s Law of Requisite Variety: only variety can absorb variety. A regulator must have as much variety in its responses as exists in the disturbances it faces. If the environment can vary in ten ways and the organism can respond in only five, some disturbances will remain unchecked, threatening viability.

Ashby’s law applies specifically to regulatory systems maintaining homeostasis, but its insights extend to cognitive systems facing similar challenges. Both must manage variety mismatches between their internal organization and environmental complexity. Yet matching variety does not mean copying the environment. No finite system can track every detail. Instead, regulation depends on attenuation and amplification. Organisms attenuate the vast variety of the environment into a reduced set of distinctions. They amplify the significance of certain cues to prioritize action.
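To make the variety arithmetic concrete, here is a toy regulation game in Python. The game, names, and numbers are my own illustration, not Ashby's formalism: disturbances arrive from n states, the regulator can counter with only the first k of them, and an outcome counts as absorbed when the response cancels the disturbance.

```python
import random

def simulate(n_disturbances, n_responses, trials=10_000, seed=0):
    """Toy regulation game: each step a disturbance d arrives and the
    regulator picks a response r; the outcome (d - r) mod n is absorbed
    only when it equals 0. A regulator with fewer responses than
    disturbances cannot cancel every disturbance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        d = rng.randrange(n_disturbances)
        # Best available response: match d if the repertoire reaches it,
        # otherwise fall back on the last response we have.
        r = d if d < n_responses else n_responses - 1
        if (d - r) % n_disturbances == 0:
            hits += 1
    return hits / trials

print(simulate(10, 10))  # 1.0
print(simulate(10, 5))   # roughly 0.5
```

With matched variety every disturbance is absorbed; with half the variety, roughly half slip through, which is exactly the mismatch the law describes.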

This does not seem to be a flaw in design. It might be a condition of survival. The key point is this: attenuation may not be about discovering truth but about achieving functional adequacy within specific contexts and time frames. And here is a critical implication: what works today may fail tomorrow. Adequacy is dynamic because the variety we face today may not be the variety we face tomorrow. If we are not able to adapt to new disturbances, viability collapses. Our current struggle to integrate artificial intelligence into the workplace illustrates this point. Many organizational models were built on assumptions of human exclusivity in cognitive labor. Those assumptions worked for decades. Today, they are brittle because the environment has changed. Ashby’s law prevails.

The Shortcut Analogy: Logarithms and Cognitive Compression

To appreciate the elegance and risk of attenuation, consider a good-enough historical analogy. Before the age of electronic calculators, navigation and astronomy depended on logarithmic tables. Multiplying large numbers was time-consuming and error-prone. Logarithms offered a remarkable shortcut: turn multiplication into addition. By converting numbers into their logarithmic values, sailors could compute distances and bearings quickly, reducing the cognitive load of calculation.

Crucially, these tables were extremely accurate within their domain of application. Lives depended on precise calculations, and navigators understood both the power and limitations of their tools. They built in multiple redundancies and cross-checks. This compression did not deliver the full detail of multiplication, but it delivered enough precision for safe passage across oceans when used with appropriate awareness of its boundaries.
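The navigator's shortcut is easy to reenact. In this minimal sketch (the numbers and four-decimal rounding are my own, chosen to mimic a printed four-figure table), multiplication becomes addition of rounded logarithms: fast, close, and never exact.

```python
import math

def table_log(x):
    """A tiny 'four-figure log table': log10 rounded to 4 decimal places,
    mimicking the precision of printed navigation tables."""
    return round(math.log10(x), 4)

def table_multiply(a, b):
    """The navigator's shortcut: add the rounded logs, then take the
    antilog. Compression trades exactness for speed."""
    return 10 ** (table_log(a) + table_log(b))

exact = 3847 * 2956
approx = table_multiply(3847, 2956)
print(exact, round(approx))          # close but not identical
print(abs(approx - exact) / exact)   # relative error well under 0.1%
```

The rounding is the attenuation: four decimal places of log are "enough variety" for safe passage, and no more.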

Our minds seem to prefer operating in a linear way. Sequential thinking appears natural most likely because it proves cognitively economical. It reduces overwhelming complexity to manageable sequences we can follow. Like logarithmic tables, our conceptual frameworks trade completeness for efficiency. They allow us to act without drowning in detail. But there is an important difference: logarithmic tables are mathematically precise within their defined limits, whereas human cognitive shortcuts are bias-prone, culturally shaped, and rarely come with warning labels. When we mistake our tools for the territory itself, the cost becomes invisible. Information is lost. Subtleties disappear. And when the environment changes, what once worked can become dangerous. This is the paradox: what enables us to cope also constrains what we can see. Our abstractions could be both our superpower and our vulnerability.

Pragmatism and Cybernetic Constructivism

This brings us to the philosophical dimension of the topic. Pragmatism, particularly as articulated by William James and John Dewey, treats knowledge as a tool for action rather than a mirror of reality. A belief is “true” not because it corresponds to some ultimate fact but because it proves useful in guiding behavior within a specific context. Truth is redefined as what works, but this “working” must be understood across multiple time scales and contexts. Adequacy is not fixed. It requires constant revision as the environment shifts.

This is not a license for arbitrary belief or wishful thinking. Pragmatic truth remains constrained by consequences. A bridge designed on faulty engineering principles will collapse regardless of the designer’s confidence. A medical treatment based on wishful thinking will fail regardless of the practitioner’s intentions. The pragmatic test is whether our frameworks enable effective action in the world as it actually responds to our interventions. Reality provides feedback, even if we cannot access it directly.

Cybernetic constructivism shares this orientation. Heinz von Foerster reminds us that “the environment contains no information”. What we call information arises in the interaction between an organism and its surroundings. The world does not impose meaning; meaning is enacted. Maturana and Varela describe this as structural coupling. Organisms and environments co-determine each other through ongoing interactions.

Seen in this light, our nervous system does not passively record inputs but brings forth distinctions through its own organization, maintaining coherence in continuous interaction with its surroundings. Knowing becomes an adaptive dance rather than a passive recording. The goal is not to represent an independent world but to maintain viability within a world that is partially brought forth by the act of knowing. This does not mean stability is irrelevant. Reliable patterns of interaction matter. Some regularities can be engaged in ways that allow prediction and engineering. Scientific methodology succeeds not because it removes simplification but because it manages it systematically, using feedback processes such as replication and peer review to adjust and refine adequacy over time and in a social realm.

The Double-Edged Sword: Superpower and Kryptonite

The ability to compress complexity seems to have made life possible. Yet this same ability becomes dangerous when compression becomes rigidity. When abstractions are treated as final truths, systems lose their capacity for adaptation. Stafford Beer captured this danger when he observed that ignorance becomes “the lethal attenuator”. When we lose track of what our simplifications exclude, adequacy transforms into vulnerability.

Let’s look at some examples. The use of algorithms in hiring often reduces the complexity of human potential to a few simplified metrics, which can perpetuate bias. Climate models, although highly advanced, still miss certain feedback loops and critical tipping points. Social media recommendation engines compress human interests into engagement-focused categories, which can push users toward more extreme views by filtering out moderating influences. These dynamics are plainly visible in the world today.

Heinz von Foerster reminded us that although the map may not be the territory, the map is all we have. Our ways of making sense are always partial and limited, yet they are the only tools we can use to navigate complexity. Recognizing this helps us remain aware of our cognitive blind spots.

In each case, the problem is not the use of shortcuts but forgetting their limits combined with insufficient feedback. The map is never the territory. When we mistake our ways of making sense for reality itself, fragility follows. What helps us stay viable can also make us blind.

Ethical Implications: What Do We Choose to Ignore?

If we accept that knowledge is constructed for adequacy, not truth, then the question of responsibility becomes unavoidable. Every act of attenuation involves a choice about what to include and what to ignore. These choices shape not only individual survival but collective futures.

In social systems, ignoring complexity can marginalize voices that do not fit dominant abstractions. In technological systems, it can produce biases that perpetuate injustice. The ethic of constructivism is not to abandon simplification (without it, we could not act) but to cultivate awareness of its costs and remain open to revision.

At the individual level, deliberate exposure to dissenting views, reflective journaling on hidden assumptions, and iterative sensemaking can help maintain cognitive flexibility.

We can restate Ashby’s law by saying that viability requires variety. A society that suppresses diversity of thought and perspective reduces its internal variety and becomes brittle in the face of unforeseen challenges. To design for resilience, we must design for plurality.

Final Words:

Survival does not seem to require perfect knowledge. It requires workable distinctions, compressed into forms that enable timely action. This logic of adequacy explains why our minds favor shortcuts, why linear thinking feels natural, and why abstraction is indispensable. Yet it also warns us that what we simplify to live by can, in time, limit what we live for.

The challenge, or more precisely the necessity, might be to balance economy with humility. To remember that our conceptual logarithms, like the tables once used by navigators, are tools for a journey, not the journey itself. They serve us best when we keep them provisional, open to correction, and sensitive to the richness they cannot capture.

Managing attenuation wisely is itself a complex adaptive challenge without simple solutions. It requires not just awareness of our limitations but active practices that surface hidden costs and maintain cognitive flexibility. It demands that we ask not whether our ways of making sense mirror reality, but whether they continue to support effective action in the conditions we now face, and whether we have ways to notice when they no longer do.

Engaging with complexity means getting better at being good enough, continuously. Our task is not to eliminate attenuation but to manage it wisely. And that begins with a question we often neglect. What do we choose to ignore, and how do we ensure that choice remains conscious, provisional, and responsive to feedback?

Always keep learning…

If you enjoyed this post and find my work valuable, I would appreciate your support. You can explore more of my ideas in my latest book, Second Order Cybernetics, Essays for Silicon Valley, hard copy available at the Lulu Store.

A Tale of a Thousand Models:

In today’s post, I am further exploring the notion of models and mental models. We often speak of mental models as though they are neat packages of knowledge stored somewhere in the mind. These models are typically treated as internal blueprints and as simplified representations of the world that help us navigate and make decisions. But what exactly do we mean when we call something a model? And are we always speaking about the same kind of thing?

The term model, in both technical and informal contexts, carries more ambiguity than we often acknowledge. In classical cybernetics, W. Ross Ashby gave the concept a central role. For Ashby, a model was a representation that could simulate the behavior of a system. A good regulator, he argued, must contain a model of the system it seeks to control. This model did not need to be a literal image or a complete mirror. It simply needed to have the right kind of functional correspondence with just enough structure to predict and act upon.

Ashby’s definition is rigorous and functional. The model need not share the same physical form or medium as the system it regulates. What matters is not material resemblance but structural correspondence across selected variables. The model must preserve the relations and transformations that enable viable regulation. Ashby called this ‘isomorphism’. This isomorphism does not demand total replication. It requires that the model preserve only those relations necessary for viable control. This is the basic premise of First Order Cybernetics.

This isomorphic correspondence is what makes the model useful for regulation. The regulator can manipulate the model, run it forward, test interventions, explore possibilities, and trust that the results will map back to the actual system. The model becomes a kind of structural analogue: a way of capturing pattern without requiring material similarity.
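A toy sketch of that regulatory use of a model, with a plant and numbers invented for illustration: the regulator carries only one functional relation of a "leaky tank" (how the level changes per step) and selects actions by running that internal analogue forward.

```python
def plant_step(level, inflow, leak=0.1):
    """The 'plant': a leaky tank whose level rises with inflow and
    drains in proportion to the current level."""
    return level + inflow - leak * level

def regulator_choose(model, level, target, actions):
    """Run the internal model forward for each candidate action and
    pick the one whose predicted outcome lands closest to the target."""
    return min(actions, key=lambda a: abs(model(level, a) - target))

level, target = 0.0, 5.0
actions = [0.0, 0.25, 0.5, 0.75, 1.0]
for _ in range(50):
    a = regulator_choose(plant_step, level, target, actions)
    level = plant_step(level, a)
print(round(level, 2))  # settles near the target
```

The regulator never inspects the plant's material form; it only needs the level-update relation to be preserved, which is Ashby's point about isomorphism across selected variables.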

When we look deeper, something about this view of models can feel distant. It risks separating the observer from the observed, the knower from the known. It tends toward a view of knowledge that is separated from lived experience. What does it mean for an organism to contain a model of its world, if that organism is not a computer but a living, breathing being?

This is where the Thousand Brains Hypothesis (TBH) offers a helpful contrast. Jeff Hawkins, in developing this hypothesis, suggests that intelligence arises not from a single unified model of the world, but from many partial models working in parallel. Here, however, Hawkins seems to use ‘model’ in a markedly different sense than Ashby’s isomorphic structures. For Hawkins, a cortical column’s model is not a representation that stands apart from experience but a learned pattern of prediction embedded within sensorimotor engagement itself.

Each cortical column builds what Hawkins calls a model of objects in the world, but this model is constituted by the column’s capacity to predict sensory sequences as the body moves through space. The column does not store a picture of a coffee cup. Instead, it develops expectations about what sensations will follow from particular movements when encountering cup-like patterns. Some of these may be visual, some tactile, while others may be of a different sense altogether. The model is not a static thing, but a dynamic process. It is a way of being attuned to specific sensorimotor regularities.

While Hawkins retains the term “model,” his usage stretches its meaning. These patterns may not be models in the traditional sense at all. When we say a cortical column builds a model or learns expectations, we may still be trapped in representational thinking. The cortical column does not store information about objects. It maintains patterns of connectivity shaped by experience. These patterns do not represent the world per se. Instead, they enact a way of being responsive to it. A column’s knowledge of a coffee cup is not a stored description, but a readiness to engage with cup-like affordances. This is the key nuance I would like to offer.

This view of modeling resonates with Heidegger’s phenomenological understanding of being-in-the-world. Heidegger once noted that a hammer is not first known through its shape or composition, but through its use. It becomes present to us as ready-to-hand, as something we know by doing. Similarly, a cortical column knows an object by interacting with it, not by storing a detached image of it. As Heinz von Foerster once said, if you want to see, learn how to act.

In earlier reflections, I explored the limitations of treating mental models as internal representations. When we interact with a system or object, we are not retrieving stored pictures. Instead, we are drawing upon a history of lived engagement. Our orientation is not merely cognitive, but bodily and situated. The notion of a model here becomes something that reveals itself through action, not inspection.

The Thousand Brains Hypothesis reinforces this idea by showing how perception and prediction are distributed. A single cortical column may only know part of an object in a specific sensory dimension, but through movement and integration with other columns, it participates in a kind of collective intelligence. There is no master map but only partial perspectives constantly updating and coordinating with one another.

The columns are not comparing models. They are participating in a dynamic process of mutual constraint and coordination. This is what Maturana and Varela would recognize as structural coupling. Each column’s activity is shaped by its coupling with other columns, with the body, and with the environment. The result is a network of mutual specification rather than a collection of independent representations.
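The picture of partial perspectives constraining one another can be caricatured in a few lines of Python. This is my own toy, not Hawkins's actual algorithm: each "column" knows a single sensory channel, and recognition falls out of intersecting partial candidate sets rather than consulting any master model.

```python
# Made-up objects and features, chosen only for illustration.
OBJECTS = {
    "cup":  {"touch": "curved", "sight": "cylinder",  "weight": "light"},
    "book": {"touch": "flat",   "sight": "rectangle", "weight": "light"},
    "pan":  {"touch": "curved", "sight": "cylinder",  "weight": "heavy"},
}

def column_candidates(channel, sensation):
    """One column's partial verdict: every object consistent with the
    sensation in its single channel."""
    return {name for name, feats in OBJECTS.items()
            if feats[channel] == sensation}

def recognize(sensations):
    """Mutual constraint across columns: intersect the partial verdicts
    as sensorimotor evidence accumulates."""
    candidates = set(OBJECTS)
    for channel, sensation in sensations.items():
        candidates &= column_candidates(channel, sensation)
    return candidates

print(sorted(recognize({"touch": "curved"})))                     # ['cup', 'pan']
print(sorted(recognize({"touch": "curved", "weight": "light"})))  # ['cup']
```

No column holds the whole cup; the narrowing happens only in the coordination between them.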

Intelligence, in this view, emerges not from the integration of discrete models but from the ongoing attunement of multiple sensorimotor streams. This attunement is guided not by accuracy but by viability. Viability is the organism’s capacity to maintain its structure and continue its pattern of living. A common misunderstanding is that accuracy directly correlates with viability. The external world presents more complexity than any cognitive system can represent in full. The response, shaped by both constraint and energetic efficiency, is not to build exhaustive models but to maintain abstractions that are good enough. These are not symbolic summaries, but embodied dispositions formed through recurrent interaction.

This is not a flaw, but a feature of adaptive beings. Cognitive structures are not designed to capture the world exhaustively, but to filter it selectively. The principle of structural coupling rests on repetition. It rests on the organism’s ability to reinforce useful patterns over time. What endures are not accurate representations but habits of orientation that have proven viable. Cortical columns do not construct truthful depictions of the world. They cultivate ways of engaging that preserve continuity and coherence within the organism’s domain of living.

This stands in contrast to the classical view where the model is assumed to be singular, coherent, and representational. The model is not something we hold apart from the world, but something we become a part of through interaction with it*. This framing aligns with the constructivist view that organisms are informationally closed. An organism does not passively receive information from an objective world. It brings forth a world through its own structural coupling. What we call a model, then, is not a mirror of external reality but a structure of engagement, a dynamic fit between the organism and its environment.

The language of structure is important. Rather than thinking of models as things organisms have, we might think of them as patterns organisms are. A cortical column’s responsiveness to a coffee cup is not something it possesses but something it enacts. The pattern of connectivity is not a representation of the cup, but a way of being coupled to the cup’s affordances. Whether we call these models, structures of prediction, or patterns of skilled engagement, what unites them is that they are not static descriptions. They are emergent dispositions, formed through repeated interaction. Each term foregrounds a different aspect such as structure, process, or habit. However, they all point to intelligence as enacted rather than mirrored.

This is not to dismiss Ashby’s insight. His use of the term model was never about mirroring for its own sake. It was about enabling viable regulation and constructing just enough structure to explain and act. Perhaps it is more accurate to think of such models as habits of expectation. They are not representations but anticipations. They do not describe the world as it is but orient us toward what is likely to come. They are pragmatic, situated, and always in motion. Or perhaps the term model itself is too burdened. What we call a model may be better understood as a form of skilled attunement. It becomes a pattern of responsiveness that is cultivated through history, shaped by constraints, and sustained by viability. The cortical column does not model the coffee cup. It simply becomes responsive to it.

This reframing opens up deeper questions. If intelligence is not the construction of better representations but the cultivation of more viable engagements, what does this mean for artificial intelligence? Can machines learn to be responsive rather than simply predictive? Can they participate in the world, rather than map it?

The Thousand Brains Hypothesis, interpreted through the lens of structural coupling and lived engagement, suggests that intelligence emerges not from central models but from richly distributed interactions. It implies that robust intelligence does not require more accurate representations, but more diverse ways of being coupled to the world.

To model, in this deeper sense, is to engage. It is to live into a world that reveals itself not all at once, but gradually through action, adjustment, and care. Perhaps, the real power of what we call a model may not lie in what it represents, but in what it enables us to do. Or more accurately, in what it allows us to become.

Final Words:

This shift from models as internal representations to models as patterns of skilled engagement challenges deeply held assumptions about knowledge, cognition, and intelligence. It is not merely a technical redefinition. It is a philosophical turning. If cognition is not about mirroring the world but about maintaining a viable relation to it, then intelligence becomes a matter of fitting rather than mapping. It is not about what we store, but about how we respond. Even this post is not free of modeling. It draws distinctions, frames structures, and builds conceptual pathways. But it does so with an orientation toward viability, not toward finality. The second order reflexive nature of this inquiry (modeling the limits of models) underscores the point. Intelligence is not found in having the final answer, but in remaining open to reframing, recoupling, and reengaging as the world shifts around us.

This reframing also casts new light on the ambitions of artificial intelligence. If intelligence is not the construction of better representations but the cultivation of more viable engagements, then it becomes clear that AI systems, as currently conceived, may be fundamentally limited. The limitation is not merely technical. It is existential. Intelligence, in this deeper sense, emerges from embodied interaction, historical coupling, and recursive responsiveness to a world that matters. Machines that manipulate symbols or detect statistical regularities may approximate aspects of intelligent behavior, but they remain ungrounded in the affective, bodily, and experiential dynamics that make living cognition what it is. Responsiveness is not a product of prediction alone. It emerges from vulnerability, concern, and the need to maintain coherence amid complexity.

Without an environment whose changes shape how they persist, machines may simulate participation, but they do not truly engage. They act without inhabiting. They process without perspective. Perhaps this is one of the main reasons artificial intelligence may fall short of achieving sentience. It relies on static internal representations and lacks the embodied, experiential living necessary for understanding, concern, or care. Without lived coupling, there may be behavior, but not presence. There may be processing, but not perspective.

While navigating complexity, my hope is that this reframing offers both humility and hope. Humility, because it reminds us that our understanding is always partial and situated. Hope, because it suggests that intelligence is not a fixed capacity, but a living process co-created and transformed through our engagements with the world and with each other in a social realm. I will finish with an excellent quote from Di Paolo, Rohde and De Jaegher:

Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems are simply not in the business of accessing their world in order to build accurate pictures of it. They participate in the generation of meaning through their bodies and action often engaging in transformational and not merely informational interactions; they enact a world.

Always keep learning…

* Hat tip to Heinz von Foerster’s wonderful quote. Am I apart from the universe or am I a part of the universe?