Minimizing Harm, Maximizing Humanity:

In today’s post, I am looking at a question that is rarely asked in management. What if the most responsible course of action is not to maximize benefit, but to minimize harm? In decision theory, this is expressed as the minimax principle. The idea is that one should minimize the worst possible outcome. In human systems, that outcome is best understood as harm to people, relationships, and the invisible infrastructure that sustains collective work.

The language of management is often dominated by the pursuit of gains. Leaders are taught to ask what is the best that can happen. They are told to optimize, to scale, and to seek advantage. The minimax principle turns this question around. It asks instead what is the worst that can happen and how do we prevent it. Every decision about maximization must be evaluated through the lens of minimizing harm. Harm minimization is not a boundary condition but the primary ethical directive that governs all other management decisions.
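As a minimal sketch of the minimax rule (the option names and harm scores below are entirely hypothetical), the logic fits in a few lines: score each option by its worst possible outcome, then choose the option whose worst case is least bad.

```python
# Minimax over hypothetical management options: each option maps to the
# harm it could cause under different scenarios (higher = worse).
options = {
    "aggressive_restructuring": {"best_case": 1, "downturn": 9, "attrition": 8},
    "gradual_change":           {"best_case": 3, "downturn": 5, "attrition": 4},
    "status_quo":               {"best_case": 4, "downturn": 6, "attrition": 5},
}

def minimax_choice(options):
    """Pick the option whose worst-case harm is smallest."""
    worst_case = {name: max(outcomes.values()) for name, outcomes in options.items()}
    return min(worst_case, key=worst_case.get)

print(minimax_choice(options))  # → gradual_change (worst case 5, vs 9 and 6)
```

Note that a gain-maximizer would rank these same options by their best case and pick the aggressive path; inverting the question is the whole point.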

Russell Ackoff once observed that the more efficient you are at doing the wrong thing, the wronger you become. This statement captures the ethical inversion at the heart of many managerial failures. The pursuit of maximum gain often blinds organizations to the quiet forms of loss that accumulate in the background. Human systems depend on tacit networks of trust, communication, and mutual adjustment. When efficiency cuts too deeply, these invisible infrastructures collapse. The system loses its ability to adapt.

To minimize maximum harm is not to resist change. It is not an invitation to stand still. Rather, it is a recognition that progress and ethics operate according to different logics. Progress concerns improvement and expansion. Ethics concerns the protection of dignity, agency, and reversibility. Once we place harm minimization at the center of our decisions, progress becomes sustainable because it no longer depends on exploitation or exclusion.

The primary ethical directive to minimize harm requires a clear operational principle. Heinz von Foerster provided it with remarkable clarity: “I shall act always so as to increase the number of choices.” This is not a secondary value. This is how harm minimization is operationalized.

Consider what happens when choices are available. When options remain open, people retain the capacity to move in different directions. They can experiment, observe the results, and if those results prove harmful or undesirable, they can try a different direction. This is reversibility. It is not that decisions are undone but that people are not locked into a single path with no way out. Reversibility means the system retains the capacity to self-correct, and that capacity is integral to its viability.

When choices are removed, a different logic takes hold. A decision made under constraint, with no alternatives available, becomes irreversible. The person cannot change course because there is no other path to take. The harm accumulates and cannot be addressed through adaptation or choice. This is an important distinction. To minimize harm is to preserve the optionality that allows people to respond when things go wrong. When you increase the number of choices available to people, you prevent harm from becoming locked in place. You maintain the possibility of recovery. You keep open the horizon of possibilities. The person is not left to say “I had no choice,” which is the expression of the deepest form of harm, the harm from which there is no escape.

This means that every decision about maximization or progress must be evaluated through this lens. Does it increase or decrease the number of choices available to people? Does it preserve reversibility or does it close off futures? Does it prevent irreversible harm or does it create conditions from which recovery is impossible? This is how we operationalize the primary ethical directive in practice.

Werner Ulrich’s Critical Systems Heuristics extends this insight into a framework for reflective practice. Ulrich reminds us that every system boundary includes some and excludes others. Those excluded often bear the consequences of decisions without having had a voice in making them. Ethics therefore requires that we identify who loses in the system we design. Ethics requires that we act in ways that allow their participation and emancipation. To preserve choice is to protect those at the margins of decisions. It is to recognize that moral responsibility lies in how boundaries are drawn. When we ask who loses, we are asking a minimax question. We are asking what is the worst that can happen for those at the margins.

To some, the minimax principle might sound like a cautious philosophy, one that restrains progress. This would be a misunderstanding. The aim is not to prevent change but to cultivate conditions under which change can occur without catastrophic harm. Here the insights of Magoroh Maruyama are valuable. In his work on second cybernetics, he distinguished between negative feedback processes that regulate deviation and positive feedback processes that amplify it. He noted that deviation amplification is the essence of morphogenesis. Not all deviations are errors to be corrected. Some are the sources of new order and innovation. Ethical design therefore should not eliminate deviation but create conditions in which positive deviation can be generative without catastrophic harm. To minimize maximum harm is not the same as to minimize deviation. It is about preserving the space in which positive deviation can arise safely.
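Maruyama’s distinction can be made concrete with a toy iteration (the gain values are illustrative, not drawn from his work): a deviation-counteracting loop shrinks a perturbation at every step, while a deviation-amplifying loop grows it.

```python
def run_feedback(deviation, gain, steps=10):
    """Iterate x -> gain * x. |gain| < 1 counteracts deviation; |gain| > 1 amplifies it."""
    for _ in range(steps):
        deviation *= gain
    return deviation

damped    = run_feedback(1.0, 0.5)   # deviation-counteracting (negative) feedback
amplified = run_feedback(1.0, 1.5)   # deviation-amplifying (positive) feedback

print(f"counteracting: {damped:.4f}")    # shrinks toward zero
print(f"amplifying:    {amplified:.1f}") # grows roughly 58-fold in ten steps
```

The ethical question in the post is not which loop to eliminate but how to let the amplifying one run inside bounds where its growth cannot become irreversible harm.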

Von Foerster’s imperative and Maruyama’s insight converge here. Both point toward the idea that ethics in complex systems must not suppress variety. Von Foerster’s view was that more freedom comes with more responsibility. When we create systems that expand choice, we simultaneously increase the responsibility of those who act within them. The ethical task is not to eliminate risk but to manage it in a way that nurtures diversity and growth while protecting the conditions of future choice. To design ethically is to create the space in which deviation, learning, and emergence can unfold without irreversible harm.

Behind every visible structure of management lies an invisible infrastructure. It consists of relationships, trust, informal knowledge, and the tacit coordination that keeps work alive. This infrastructure is often taken for granted. It is noticed only when it breaks down. In the pursuit of efficiency, organizations frequently erode these invisible supports. Staff reductions, rigid procedures, and mechanistic control can destroy the very human capacities that enable adaptability and resilience. The question therefore is not what can be gained but what can be lost without recovery. True resilience depends on maintaining the conditions that allow the system to heal itself. When we ask this question, we are asking what choices we are removing from people. We are asking what futures we are closing off.

It is important to distinguish ethics from progress. Ethics does not belong to the domain of progress. Progress concerns the expansion of capability. Ethics concerns the preservation of humanity. The two may coexist, but they are not the same. Progress without ethical constraint risks creating conditions from which recovery is impossible. Ethics without openness to change risks paralysis. The minimax principle, interpreted through von Foerster and Ulrich, provides a way to hold both. It calls for action that reduces maximum harm while sustaining the capacity for continued evolution.

Maruyama’s perspective deepens this understanding. By allowing positive deviation, we cultivate the potential for new forms of order. By preserving choice, we protect against harm that would close the future. The task of management therefore is not to optimize the present but to sustain the possibility of better futures without destroying the diversity from which they may emerge.

Ackoff’s view was that the future is not something to be predicted but something to be designed. The ethical responsibility of design is to ensure that this future remains open. To minimize maximum harm is to recognize the fragility of what is human in our systems. To preserve choice is to keep open the horizon of possibility. To embrace positive deviation is to invite emergence without destruction. Ethics in management is not about perfection or certainty. It is about maintaining the delicate balance between care and change.

Final Words:

When compromises are inevitable in human systems, the most humane path is to protect what allows us to begin again. The minimax principle is an invitation to ask different questions in our organizations. It is an invitation to be aware of who loses in the systems we design. It is an invitation to increase the number of choices available to people. It is an invitation to preserve reversibility and to protect the invisible infrastructure that sustains our collective work. We are responsible for our construction of these systems. We are responsible for the futures we foreclose and the futures we keep open. To be an authentic manager is to be aware of this responsibility and to strive, always, to minimize the harm we might do while creating conditions for emergence and learning.

Stay curious and always keep on learning.

Bundle Deal for Second Order Cybernetics and Connecting the Dots…

I have been writing blog posts for over 15 years now. In 2025, I was fortunate to have two of my books published by Cyb3rSyn Labs (cyb3rsynlabs.com), a wonderful community founded by Laksh Raghavan. One of these books was coauthored with my good friend, Venkatesh Krishnamurthy.

Second Order Cybernetics is an anthology that explores how observers shape the systems they observe. The role of the observer is often neglected, but it is a crucial part of sensemaking in general. This anthology tries to address that gap. Readers can explore thought-provoking ideas like understanding understanding, balancing autonomy and control, requisite variety, and viable systems. The essays cover key thinkers like Ross Ashby, Heinz von Foerster, Stafford Beer, and Humberto Maturana. The book was curated from my blog posts, which are now available exclusively in this book; they have been removed from the blog.

Connecting the Dots… takes a different approach to the Toyota Production System. It looks beyond the familiar tools (kanban boards, value stream maps, standardized work) to explore the thinking that created them. The book examines how the principles that shaped TPS in post-war Japan remain powerful for navigating today’s challenges with automation, AI, and organizational change. It is my view that if you want to learn a subject deeply, you should look for the common threads across different domains; that gives you insight you would not otherwise have had. That was our inspiration behind the book. Bruce Hamilton, the Toast Kaizen teacher, was kind enough to write the foreword for this book.

Both books are available as a bundle on LeanPub: https://lnkd.in/dUZVxDHn

Purchased separately, they cost $39.98 total, but as a bundle you can get them for $29.99.

Hard copies are also available at Lulu: https://lnkd.in/dV9Fg-BA

I have heard it said that if you are writing, you should write as if you are writing for yourself. Write about things you would want to read. That is exactly what I have tried to do here. I hope you find insights that stay with you, insights that prove formative, and insights worth sharing with others.

Stay Curious and Keep on Learning…

The Persistent Unmarked Space:

In today’s post, I want to explore an observation about how we make distinctions and what this reveals about the structure of our thinking. I am inspired by the ideas in Spencer-Brown’s “Laws of Form” and broader themes in cybernetics about how observers construct meaning.

The starting point is simple. When we make a distinction, we create a boundary that separates what is inside from what is outside. Spencer-Brown formalized this with his notation of the Mark, showing how any act of indication simultaneously creates both the indicated and the non-indicated. This is shown below:

As we look closer, things get more interesting.

The Basic Operation of Distinction-Making:

When I make one distinction to mark “A,” I create two states. There is A (the marked state) and not-A (the unmarked state). This seems straightforward enough. We can depict this as below:

(A) not-A

Spencer-Brown showed that this basic operation has interesting algebraic properties. The unmarked state is not simply absence or void. It is the enabling condition that gives the marked state its meaning. Without the background of the unmarked, the mark itself would be meaningless.

This relationship between marked and unmarked is fundamental to how meaning emerges. The marked state exists only in relation to what it excludes.

We can take this further. Consider what happens when we make multiple distinctions. If I distinguish both A and B within the same unmarked space, Spencer-Brown’s notation shows this as ((A)(B)).

This actually creates three categories, not four. There is A, there is B, and there is everything else that is neither A nor B. We can represent this as ((A)(B))X, where X represents the remainder of the unmarked space.

In Spencer-Brown’s system, A and B are mutually exclusive by the nature of how the distinctions are made. They are separate marks within the same unmarked background, not overlapping regions as in classical set theory.

This gives us the pattern that n distinctions create n+1 categories. Three distinctions would create four categories, four distinctions would create five, and so on.
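The n + 1 pattern can be checked with a small sketch (the numeric items and the predicates for A and B are made up): classifying items against n mutually exclusive marks always yields the marked buckets plus the persistent unmarked remainder X.

```python
def categorize(items, marks):
    """Assign each item to the first mark whose predicate it satisfies,
    or to the unmarked remainder 'X' if it satisfies none."""
    buckets = {name: [] for name in marks}
    buckets["X"] = []  # the persistent unmarked space
    for item in items:
        for name, predicate in marks.items():
            if predicate(item):
                buckets[name].append(item)
                break
        else:  # no mark matched: the item stays unmarked
            buckets["X"].append(item)
    return buckets

marks = {"A": lambda t: t < 10, "B": lambda t: t > 30}  # two distinctions
result = categorize([5, 20, 35, 25, 8], marks)
print(result)       # {'A': [5, 8], 'B': [35], 'X': [20, 25]}
print(len(result))  # 3 categories from 2 distinctions: n + 1
```

However many predicates we add to `marks`, the `X` bucket never disappears; at best it grows quieter, which is the point of the next section.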

The Persistent Unmarked State:

What interests me most is how something remains unmarked regardless of how many distinctions we make. No matter how extensively we mark up our space with categories and boundaries, there is always an unmarked background that enables those markings to have meaning.

This unmarked background is not just everything else we have not thought of yet. It is the condition that makes thinking and categorizing possible in the first place. When we argue about categories like hot versus cold, we often treat these as exhaustive alternatives, often as dichotomies. But there is always the unmarked space that contains the ideas of moderate temperatures, context-dependent judgments, and the framework of assumptions that makes temperature distinctions seem natural and meaningful.

Connection to Self-Reference Problems:

This observation about the persistent unmarked state connects to well-known problems in formal systems, though the connection is analogical rather than mathematically precise.

Russell discovered that attempts to create completely comprehensive sets run into contradictions when they try to include themselves. The set of all sets that do not contain themselves creates a paradox when we ask whether it contains itself. Gödel showed that formal systems strong enough to express arithmetic cannot prove their own consistency without appealing to principles outside the system.
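Russell’s construction can be mimicked loosely in code (an analogy only, not a formalization): represent a “set” as a membership predicate, and define the predicate that holds exactly of those predicates that do not hold of themselves. Asking it about itself never settles.

```python
import sys
sys.setrecursionlimit(100)  # keep the inevitable recursion short

# A "set" as a membership predicate; Russell's set R = {s : s is not in s}.
def russell(s):
    return not s(s)

try:
    russell(russell)  # "Is R a member of itself?" — neither answer is stable
except RecursionError:
    print("no consistent answer: the question recurses forever")
```

The non-terminating recursion plays the role of the contradiction: the system cannot answer a question about itself from within itself.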

These results point to a general pattern. Complete self-inclusion appears to be impossible. There is always something outside the system that the system requires but cannot fully capture within its own terms.

The unmarked state in Spencer-Brown’s system suggests a similar limitation. The observer making distinctions cannot fully mark their own position as observer. There is always something unmarked that enables the marking process itself.

Implications for How We Think:

This has practical implications for how we approach knowledge and categories. It suggests epistemic humility. If our categorical frameworks always rest on unmarked assumptions and background conditions, then we should hold our categories lightly. They are tools for navigating experience, not mirrors of an independent reality.

In addition, it points toward the value of examining our own distinction-making processes. When we notice ourselves categorizing something, we can ask what remains unmarked in that process. What assumptions are we making? What alternatives are we not seeing?

It also suggests why different observers can legitimately make different distinctions. The unmarked background that enables distinctions varies with the observer’s purposes, biological capabilities, and cultural context. The distinctions we make depend on the purpose(s) of the observer. This viewpoint supports the idea of pluralism.

Final Words:

Spencer-Brown’s insight about the marked and unmarked states reveals something fundamental about the structure of thought itself. Every act of indication creates both what it marks and what it leaves unmarked. The unmarked is not simply absence but the enabling condition for meaning.

This leads to both epistemic humility and intellectual pluralism. Different ways of making distinctions reveal different aspects of complex situations. No single framework captures everything. The wisdom lies in working skillfully with multiple perspectives while recognizing what each obscures.

Most importantly, the unmarked space always exceeds our attempts to mark it completely. As Heinz von Foerster observed, “Objectivity is a subject’s delusion that observing can be done without him.” The observer making distinctions cannot fully step outside their own process of observation.

This is not a limitation to overcome but a fundamental feature of how minds engage with complexity. “The environment as we perceive it is our invention,” von Foerster also noted, pointing to the active role we play in constructing the realities we inhabit.

Understanding this process of distinction-making is essential for navigating complexity with wisdom. Think about how this affects the popular frameworks with neat triads, 2×2 matrices, etc. that promise to carve up the world into manageable categories. Every one of these frameworks commits the same fundamental error. They erase the observer who created the distinctions and ignore the vast unmarked space of assumptions, context, and excluded possibilities that makes their tidy categories seem meaningful.

The unmarked state reminds us that thinking is always an ongoing process within contexts we can never fully transcend. This recognition opens us to continued learning and the possibility of seeing familiar situations in new ways.

Stay Curious and Always Keep on Learning.

If you found value in this exploration of thinking and categories, check out my latest book on the Toyota Production System, Connecting the Dots…

The soft copy is available here. And the hard copy is available here.

Leadership as Condition Creation and Boundary Critique:

Part 2: Boundary Critique and Condition Creation

Refer to my previous post here.

In today’s post, I am following up on what leadership means when we recognize that organizations do not have purposes, but people do. If we cannot simply align everyone to an organizational purpose, what does it mean to lead? How do we create conditions where diverse human purposes can interact productively?

I am drawing on insights from Critical Systems Heuristics, second order cybernetics, and systems thinking. The ideas here continue from my previous post on organizational purpose.

Leaders as Condition Creators Within the System:

If organizations do not have purposes, what does leadership mean? I believe leaders are people who take up the responsibility to create conditions so that desired patterns of behavior and interaction emerge.

But here is the crucial point from second order cybernetics that I find fascinating. Leaders are not neutral architects standing outside the system. They are participants whose own purposes drive their condition-creating. When a leader decides what outcomes are desired, they make that determination based on their own purposefulness, their own constructed sense of what matters.

This creates recursive loops that traditional leadership thinking ignores. I picture this as a spiral of mutual influence. Leaders create conditions based on their purposes. These conditions interact with others’ purposes. The resulting patterns influence what the leader observes as working or failing. This changes the leader’s purposes and their condition-creation. The cycle continues.

I should note that this recursive leadership operates at multiple time scales. Leaders need to maintain day-to-day viability by preserving conditions that allow current purpose interactions to function. This is the ongoing work of maintaining operational stability. But they must also monitor whether environmental changes threaten the essential variables that enable people to maintain their purposefulness and adaptive capacity.

When environmental shifts make current conditions unsustainable, leaders need to engage in what Ashby called ultrastable adaptation. For instance, when sudden regulatory changes undermine existing processes, stability requires maintaining day-to-day viability, but adaptation might mean restructuring the whole feedback system. The challenge is knowing when to maintain stability and when adaptation requires breaking and rebuilding the very conditions they have been protecting.
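Ashby’s ultrastability can be sketched as a toy control loop (all numbers hypothetical): first-order regulation nudges the state toward a target, and only when the essential variable leaves its safe bounds does the system make a second-order change to its own parameters, here by doubling its regulatory gain.

```python
def ultrastable_step(state, target, gain, bounds):
    """One control step. If the essential variable (state) has left its safe
    bounds, adapt the regulator itself (second-order change); then apply
    ordinary first-order regulation toward the target."""
    lo, hi = bounds
    if not (lo <= state <= hi):
        gain *= 2                        # restructure the feedback system
    state += gain * (target - state)     # routine regulation
    return state, gain

state, gain = 0.0, 0.1                   # weak regulator to start
target, bounds = 0.0, (-1.0, 1.0)
for _ in range(30):
    state += 0.5                         # recurring environmental disturbance
    state, gain = ultrastable_step(state, target, gain, bounds)
print(f"adapted gain: {gain}, state settled near {state:.3f}")
# → adapted gain: 0.8, state settled near 0.125
```

The point of the sketch is the two time scales: on most steps only the state changes, and the gain changes only when routine regulation has already failed.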

The leader is simultaneously observer and observed, designer and designed. Their responsibility does not come from some organizational mandate. It emerges from their own purposefulness and their relationships with other purposeful people in the system.

This raises critical ethical questions that I find compelling. Given that leaders’ individual purposes inevitably shape condition-creation, how do they prevent their strong personal purposes from overshadowing the genuine emergence of diverse patterns?

From a cybernetic constructivism standpoint, I believe the answer lies in the recursive nature of their role. As they create conditions for others to observe and influence the system, they must also create conditions for others to observe and influence their own condition-creating behavior. Leaders should engage in systematic practices of self-critique. They also need regular feedback loops that show how their condition-creating affects others’ viability. They need structured processes through which others can question their boundary-drawing decisions.

Aiming for Betterment Through Boundary Critique:

Rather than imposing abstract organizational goals, I see leadership as creating conditions to maximize the viability and flourishing of as many participants as possible. This includes ensuring transparent and just processes for navigating inevitable trade-offs.

I acknowledge the reality that in complex systems with genuinely conflicting purposes, achieving betterment for absolutely everyone may be impossible. Some purposes can prove incompatible. Some trade-offs can disadvantage certain participants. Some conflicts may require difficult choices about whose viability takes priority in specific contexts.

This is where Critical Systems Heuristics becomes essential. I believe the leader’s purpose becomes systematically questioning boundaries and stakeholder perspectives to prevent falling into benevolent paternalism. The focus turns to identifying who is not being served by current arrangements. Whose voices are not being heard? Who are the “losers in the game”?

Instead of “I will identify the losers and make their lives better,” the approach becomes “I will create conditions where people can identify when they are losing and have agency to change that.” This requires ongoing boundary critique. This might involve facilitated reflection sessions where excluded stakeholders name their concerns, or governance mechanisms where power asymmetries are explicitly surfaced.

Questions such as these become essential. Who ought to belong to the system of stakeholders? What ought to be the purpose of the system? Who is not being served by this system? Whose voices are not being heard? But these questions require systematic, repeated processes to prevent them from becoming empty rituals.

When purposes prove genuinely incompatible, I believe the leader’s role is not to force resolution but to create transparent processes for making trade-offs and supporting those whose purposes cannot be accommodated within the current system. This might involve restructuring teams. It might mean creating parallel tracks for different approaches. It could include helping people find more compatible contexts for their purposes, or providing transition support for those who need to leave.

Through this process, what we observe through POSIWID analysis becomes more aligned with supporting individual viability and collective flourishing. This is not because “the system” changes its behavior, but because the patterns of human interaction shift.

Purpose and Profit as Emergent Outcomes:

When conditions support individual recursive viability through ongoing boundary critique, when people can maintain their own purposefulness while engaging productively with others, the patterns of behavior often transcend simple profit maximization. Innovation, resilience, creativity, sustainability, and quality of life all emerge as natural expressions of viable recursive interactions. These become part of what we can observe through POSIWID analysis.

The profit motive does not disappear. It becomes one element in the larger emerging patterns of collective viability that arise from supporting individual viability. Profit becomes a signal that people are creating value that others are willing to pay for. But through our refined POSIWID approach we can see that it is a lagging indicator of the health of human interactions rather than the primary driver of behavioral patterns.

When we apply POSIWID to this approach, we can observe whether the conditions actually support individual viability and produce emergent collective benefit. Or do they just generate new forms of rhetoric while the same problematic patterns of interaction continue?

The question is not whether to choose profit or purpose. This is a false dichotomy. The question is this: how do we create conditions where human flourishing and value creation emerge together? How do we support people pursuing what matters to them in relationship with others, while systematically questioning who gets to define what flourishing and value mean?

Final Thoughts:

Leadership in this light requires epistemic humility and acceptance of pluralism. This approach exposes the myth of the benevolent, paternalistic leader. The leader cannot be all-knowing and all-powerful. No single person can understand all the purposes at play or predict how they will interact under different conditions.

Epistemic humility means acknowledging the limits of what any observer can know. When we recognize that our observations are shaped by our own purposes and position, we become more cautious about imposing our view of what is best for others. We focus instead on creating conditions where people can pursue their own definitions of flourishing while engaging constructively with others who have different purposes.

Acceptance of pluralism means recognizing that people legitimately hold different purposes and values. These differences are not problems to be solved but realities to be worked with. The art lies in creating conditions where diverse purposes can interact without requiring false unity or artificial harmony.

I find it meaningful that humans evolved as a species to rely on each other. As Heinz von Foerster observed, “A is better off when B is better off.” This insight from second-order cybernetics points toward creating conditions where mutual viability becomes possible. We should focus on building conditions where we can rely on each other rather than trying to control each other.

A wise leader focuses on minimizing harm first before maximizing benefits. In complex systems with genuinely conflicting purposes, I believe the first priority is ensuring that our condition-creating does not undermine the viability of those we claim to serve. Only then can we work toward enhancing collective capability.

When we work with the actual agency of actual people, guided by epistemic humility and acceptance of pluralism, we discover possibilities for organizing that honor both individual viability and collective capability.

Stay Curious, and Always Keep on Learning…

Rethinking Purpose: When Organizations Stop Having and People Start Being…

Part 1: The Reification Trap and What We Actually Observe

In today’s post, I am looking at the notion of organizational purpose in light of cybernetic constructivism. The ideas here are inspired by giants like Stafford Beer, George Spencer-Brown, Ralph Stacey, Werner Ulrich, Russell Ackoff and Erik Hollnagel.

The corporate world seems to be obsessed with organizational purpose. Mission statements adorn lobby walls. Consultants make fortunes helping executives discover their organization’s deeper calling, their “why”.

From a cybernetic constructivist perspective, this entire enterprise rests on a philosophical error. This is the notion that organizations have purposes. Organizations do not have purposes, people do.

Organizations are certainly created with specific objectives and goals in mind. For example, a company can be formed to develop software or a charity established to alleviate poverty. But the idea that these entities themselves possess purposes is what philosophers call reification, treating an abstraction as if it were a concrete thing.

Organizations have goals and objectives set by their founders or governing bodies. But purposes, the deeper sense of meaning and direction that drives behavior, belong to individuals. This distinction is crucial for understanding emergence in an organizational setting.

This is not semantic nitpicking. It is a fundamental reframe that helps us rethink how we understand organizational behavior and human experience within systems.

The Reification Trap and POSIWID:

When we say something like “our company’s purpose is to make the world more sustainable”, we commit reification. We treat an abstraction as if it were a concrete thing. Organizations are viewed wrongly as entities with intentions, values, and purposes of their own.

What organizations actually have are stated goals and objectives, declarations about what they aim to achieve. But when we strip away this corporate fiction, what remains is people. People with their own purposes, their own sense-making processes, their own constructed meanings about what matters and why.

Stafford Beer’s insight that “the purpose of a system is what it does” (POSIWID) helps us cut through the fog of stated intentions and mission statements. But when we think about what we have been saying so far, we can see that the idea of POSIWID itself could be a reification trap. In criticizing the reification of organizational purpose using POSIWID, we risk reifying “the system” itself as something that “does” things.

A way to ease out of this apparent trap is to use Wittgenstein’s Ladder. POSIWID serves as a cognitive aid helping us climb to better understanding, which we then discard.

What we actually observe are patterns of human behavior and interaction. When we say “the system produces data harvesting behaviors”, we mean “we observe people engaging in data harvesting activities within particular structural contexts”. When we say “the system undermines individual viability”, we mean “we observe interactions between people that result in reduced individual flourishing”.

The value of POSIWID lies not in discovering what systems “really want” but in training our attention on emergent patterns of human behavior rather than declared organizational intentions. Once this shift in attention is accomplished, we can discard the system-as-actor metaphor and focus on the actual phenomenon. People with purposes interacting within conditions that constrain and enable certain patterns of behavior.

Applied to organizations, the refined principle is this: if we want to understand what is actually happening, we should observe the patterns of behavior and interaction that emerge from people’s purposes within particular conditions, not focus on declared organizational goals.

Patterns of Purpose Interaction:

From a cybernetic constructivist perspective, what we observe are patterns emerging from the interactions of individual purposes within structured contexts. When a software engineer’s purpose to solve elegant problems intersects with a marketer’s purpose to help people discover useful tools, and both operate within structures that reward customer satisfaction, we observe certain patterns of behavior and outcomes.

These patterns are dynamic, not fixed. As people’s individual purposes evolve, as new people join the system, as external conditions shift, the observable patterns shift too. The patterns become a living expression of ongoing purpose interactions rather than a static implementation of declared intentions.

But here’s the crucial insight from cybernetics. The observer is part of the system being observed. When we observe patterns of organizational behavior, we are not neutral external scientists. We are participants whose own purposes and perspectives shape what we see. This creates recursive loops that traditional management thinking often ignores.

The manager who observes that “people are not motivated” and implements new programs is not a neutral observer. They are a participant whose own purposes drive their observation and choice of interventions. These interventions then become part of the conditions within which other people’s purposes interact, potentially changing the very patterns the manager was trying to understand.

The refined POSIWID insight helps us see that if we want to change observable patterns, we need to understand and work with the actual purposes of the actual people involved, not impose new mission statements or organizational goals from above.

From Alignment to Resonance:

Traditional thinking seeks alignment, getting everyone pointed in the same direction toward the same stated organizational goals. But our refined understanding shows us that there is no collective entity that can choose a unified direction. In reality, there are only individuals with purposes engaging in ongoing interactions.

Some of these interactions create resonance, patterns where individual purposes amplify and support each other in ways that produce coherent behavioral patterns. Others create tension or conflict. The software engineer’s elegant problem-solving and the marketer’s user advocacy can resonate productively, creating emergent value. But this coherent behavior is not orchestrated by some collective consciousness. It emerges from how these specific people with these specific purposes interact within particular conditions.

What we observe are the behavioral patterns emerging from these ongoing purpose-interactions, not something chosen by “the organization.” Even when there are formal decision-making processes, you still have individual people making individual choices about whether to participate, how to contribute, what to support.

Understanding Recursive Viability:

When we talk about recursive systems, we mean something different from linear processes. In recursive systems, each loop is independently viable. Each person constructs their own purposes, observes their own interactions, and maintains their own capacity to adapt and respond. They are not merely components serving the larger system. They are complete systems in themselves.

People observe the patterns of interaction, including their own participation in those patterns. This observation changes how they construct their purposes, which changes their interactions, which changes the patterns, which changes what they observe. Each person completes this cycle independently while also participating in the larger patterns.

The viability of observable patterns emerges from the viability of individual participants, rather than being imposed upon them. When individual people can maintain their own purposefulness and adaptive responses, the larger patterns that emerge tend to be more resilient and creative.

Instead of asking “how do we get people to serve the organization’s purpose”, we ask “how do we create conditions where each person’s independent viability contributes to emerging patterns that enhance collective viability?”

Collective viability is not itself an entity or fixed goal. It is an emergent, dynamic pattern arising from the interactions of individual viable systems. It shifts as individual purposes evolve, as new people join the system, as conditions change.

Quality of Life and Practical Implications:

Quality of life is not something organizations provide to employees like a benefit package. It is something individuals construct through their lived experience of pursuing their purposes within particular conditions. But quality of life is both an input and output of the system. When people experience high quality of life, they bring different energy and capability to their interactions.

This reframe has practical implications. If we want to change observable patterns of behavior, we need to understand and work with the actual purposes of the actual people involved. What do people actually care about? How do their purposes complement or conflict? What conditions support the expression of these purposes?

Sustainable change happens through shifts in the interaction of purposes, not through compliance with new directives. People adapt their behavior when conditions change in ways that better enable them to pursue what they already care about, or when they develop new purposes through their lived experience of interaction with others.

Final Words:

Let go of the fiction that your organization has a purpose. Instead, get curious about the actual purposes of the actual people involved and observe the patterns of behavior that emerge through POSIWID analysis. What do they care about? How do their purposes interact? What behavioral patterns emerge from these interactions?

Then, experiment with conditions. What structures and processes support the kinds of interactions that produce the behavioral patterns you want to see more of? Pay attention to emergence while remaining aware of your position as observer. Use POSIWID as your reality check. If the observable patterns do not match the stated intentions, look to the interaction of individual purposes within current conditions for explanation.

This shift from organizational purposes to human purposes is not merely theoretical. It is practical. When we stop pretending that abstractions have agency and start working with the actual agency of actual people, we discover possibilities for organizing that honor both individual viability and collective capability.

In the next post, we will explore what this means for leadership as condition creation, boundary critique, and the challenge of supporting diverse purposes within structured contexts.

I will finish this post with a quote from Ralph Stacey:
There is no possibility of standing outside human interaction to design a program for it since we are all participants in that interaction.

Stay curious and always keep on learning…

Wittgenstein’s Ladder in Complexity: Why We Need Tools We Must Abandon

My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them as steps to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) – Ludwig Wittgenstein, Tractatus Logico-Philosophicus

In my recent post on the two dogmas of complexity science, I talked about ontological complexity realism and epistemological representationalism. These are the beliefs that complexity exists ‘out there’ to be measured and that our task is to create neutral representations of it. Today, I want to explore why these dogmas persist and why overcoming them requires something that seems paradoxical. We need conceptual tools that we must eventually abandon.

This is where Wittgenstein’s ladder becomes particularly relevant for complexity work. When reentry, in the sense of Spencer-Brown’s Laws of Form, is needed to achieve second-order understanding, the ladder offers a path through what might otherwise be an intractable problem.

The Reentry Problem in Complexity:
When talking about complexity, we often overlook the point that the observer cannot be separated from what they observe. Every attempt to map or measure complexity changes the observer-system relationship, which changes the ‘complexity’ itself. This creates what George Spencer-Brown called reentry: when a distinction folds back on itself.

Consider the Ashby Space framework I critiqued earlier. The moment we try to plot an organization on its coordinates, we encounter reentry. Who determines where the organization sits on the ‘variety of stimuli’ axis? The organization itself, through its own distinction-making processes. What counts as ‘variety of responses’? Again, this depends entirely on the distinctions the observer can make about meaningful action.

The framework cannot escape this recursion. It treats as measurable quantities what are actually dynamic processes of distinction-making between observer and observed. This recursion is not a bug to be fixed but a feature of complexity itself.

As I explored in my post on the form of decency, reentry reveals contradictions in systems that try to maintain rigid boundaries. When xenophobic ideologies apply their own criteria to themselves, when the form folds back, they collapse under their internal logic. The same dynamic occurs when complexity frameworks attempt to map the very processes of distinction-making that generate complexity.

Why Reentry Creates a Need for Ladders:
If our tools for understanding complexity are themselves subject to reentry effects, how do we develop more sophisticated ways of thinking about complex systems? We cannot simply abandon all conceptual tools, yet we cannot treat them as neutral representations either.

This is where we need to recognize a crucial distinction about when ladder consciousness becomes necessary. When we engage with situations in ways that generate significant recursive coupling between observer and observed (when our distinction-making substantially shapes what we are trying to understand, when our interventions change the system which changes us which changes our interventions), then treating our models as stable representations becomes counterproductive.

Consider the difference between using a roadmap to navigate familiar streets versus using a systems model to understand organizational dynamics. The roadmap engages with relatively stable relationships: the streets do not change position because we are looking at the map. But organizational systems modeling involves high degrees of recursive coupling. The very process of creating models changes how participants see their organization, which changes how they behave, which changes the organizational dynamics, which requires updating the models.
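This recursive coupling can be caricatured in a few lines of Python. This is a toy model of our own, not a claim about any real organization: an observer publishes a model of the system, participants react to the published model, and so every model is already out of date the moment it appears.

```python
# Caricature of recursive coupling: the act of modeling shifts the
# behavior being modeled, so the model always lags the system.
def publish_and_react(behavior: float, reaction: float, steps: int):
    history = []
    for _ in range(steps):
        model = behavior                 # map the system as it currently is
        behavior = behavior + reaction   # seeing the map shifts behavior
        history.append((model, behavior))
    return history

for model, actual in publish_and_react(10.0, 1.0, 3):
    print(f"model says {model:.1f}, system is now {actual:.1f}")
```

However many iterations we run, the model never catches up, because each publication is itself one of the conditions the participants respond to.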

When we are complexifying our relationship with a situation through high degrees of recursive engagement, our models must become ladders. They cannot remain permanent reference tools because both we and the situation are co-evolving through the modeling process itself.

This is where Wittgenstein’s ladder becomes relevant. The ladder offers a way to use conceptual tools while remaining aware of their provisional nature. We need frameworks to help us think about complexity, but we also need mechanisms for transcending the limitations of those same frameworks.

The ladder works through what might seem like a contradiction: we use conceptual distinctions to develop awareness of the limitations of conceptual distinctions. We employ frameworks like Ashby Space not because they represent reality accurately, but because they can help us recognize how our own distinction-making processes shape what appears as ‘complex’.

This creates what Heinz von Foerster called second-order cybernetics, observing observation. First-order thinking assumes we can step outside the system and create objective maps. Second-order thinking recognizes that we are always already participants in the systems we are trying to understand.

The Ladder in Practice: From Tools to Meta-Awareness:
Consider how this works in organizational consulting. When we facilitate a systems mapping exercise, we might begin by treating the resulting diagram as if it represents the ‘real’ organizational structure. This first-order approach focuses on improving the accuracy of the map.

But when we are engaged in recursive coupling with the organization (when the mapping process itself changes how participants understand and enact their organizational reality), ladder consciousness suggests a different approach. The map becomes valuable not when it accurately represents the organization, but when the mapping process helps participants recognize how their own distinction-making participates in creating organizational dynamics. We use the tool to develop meta-awareness of how we collectively complexify organizational life.

This shift points to the meta-awareness we need. Instead of asking ‘Is our systems map accurate?’ we ask ‘How does the process of creating this map reveal and reshape our current ways of making distinctions about organizational life?’ The tool serves its purpose when it points beyond itself toward the processes by which we participate in creating organizational reality, then becomes disposable once we have developed more direct awareness of our participation.

This principle applies across complexity frameworks. When we use any analytical tool, ladder consciousness means recognizing that we are not discovering objective properties but enacting particular ways of making sense that bring certain possibilities into view while obscuring others. The framework becomes useful when we can use it to examine our own sense-making, then let it go.

Beyond Tools: What Emerges After the Ladder:
This raises an important question. What happens after we kick away the ladder? What replaces our conceptual tools once we have transcended their limitations?

The answer is not the absence of structure but a different relationship to structure. After using and abandoning frameworks, what can emerge is what John Dewey called ‘inquiry’, a more fluid, responsive way of engaging with situations that draws on conceptual resources without being constrained by them.

Dewey’s conception of inquiry is particularly relevant here because it transcends the subject-object dualism that creates many of our analytical problems. Instead of treating thinking as something that happens inside our heads while we observe an external world, Dewey understood inquiry as a transactional process between organism and environment. The inquirer and the situation inquired into are parts of a single unfolding transaction.

This means inquiry is not about representing a pre-existing reality but about transforming problematic situations into more settled ones. When we encounter what we call a ‘complex situation’, inquiry suggests we are not discovering complexity ‘out there’ but participating in an ongoing transaction that we might call ‘complexifying’. The situation becomes complex through our engagement with it, just as we become complex through our engagement with the situation.

For Dewey, genuine inquiry involves what he called ‘learning by doing’ coupled with reflection on that doing. We act, observe the consequences, and adjust our future actions based on what we learn. This creates a recursive cycle where our understanding evolves through engagement rather than through detached observation. The goal is not to achieve final truth but to develop more intelligent ways of acting within ongoing situations.

This approach naturally incorporates ladder consciousness. We use conceptual tools as hypotheses for action rather than as final descriptions of reality. We test these tools against their consequences in lived experience, keeping those that prove helpful and abandoning those that constrain effective action. The tools serve inquiry rather than replacing it.

This post-ladder engagement is characterized by several qualities. This is not meant to be an exhaustive list by any means. Just like the ladder, this should serve as an intuition pump.

Responsiveness over methodology: Instead of applying predetermined frameworks, we develop sensitivity to what each situation calls for. We maintain access to various conceptual tools while remaining free to abandon them when they no longer serve.
Process awareness: We become more conscious of how our own sense-making participates in creating the realities we encounter. This is not relativism but what Donna Haraway called ‘situated knowledge’: knowledge that acknowledges its own positioning.
Provisional commitment: We can act decisively based on our current understanding while remaining open to revision. This allows for a second-order approach to wisdom: intuitive knowledge of the limits of knowledge.

The Ethics of Temporary Tools:
There is an ethical dimension to ladder consciousness that connects to my earlier post on reentry and xenophobia. When we hold our conceptual tools too tightly, we risk treating our provisional distinctions as absolute truths, our temporary boundaries as permanent walls. This is one of the main reasons why we must discard the ladder rather than hold onto it.

The ladder teaches a different relationship to our beliefs and frameworks, firm enough to guide action, light enough to avoid becoming weapons. This balance is crucial and deserves deeper exploration.

What does it mean to hold beliefs firmly enough to guide action? It means we must be able to act decisively based on our current understanding, even while acknowledging that understanding is provisional. Without some degree of commitment to our frameworks, we become paralyzed by infinite doubt. We need enough conviction to move forward, to make choices, to take responsibility for our actions.

But what does it mean to hold these same beliefs lightly enough to avoid weaponizing them? It means maintaining what Keats called ‘negative capability’. This is the ability to remain in uncertainty and doubt without irritably reaching after fact and reason. It means recognizing that our strongest convictions might be wrong, our clearest insights might be partial, our most cherished frameworks might be limiting us in ways we cannot yet see.

This creates a paradoxical situation that the ladder helps us navigate. We must act as if our current understanding is enough to work with, while remaining open to its revision. We must commit without clinging. We must form strong opinions, but hold them lightly.

This becomes particularly crucial when working with others who hold different frameworks. Instead of engaging in battles over whose map is more accurate, ladder consciousness invites us to explore how different ways of making sense might serve different purposes. It asks us to treat our frameworks as offerings to collective inquiry rather than as territories to defend.

The ethical imperative here connects to von Foerster’s principle: ‘Act always so as to increase the number of choices’. When we hold our tools lightly, we create space for others to contribute their own sense-making resources. When we avoid weaponizing our frameworks, we keep possibilities open rather than shutting them down.

Our role becomes less about providing definitive maps and more about helping develop capacities for making better distinctions in the face of uncertainty. This suggests designing interventions that increase what von Foerster called ‘the number of choices’ rather than narrowing them down to predetermined solutions.

Climbing Toward Participatory Knowing:
This brings us back to my critique of complexity science’s foundational dogmas, but with an additional insight that shifts how we use language itself. We typically use complexity as a noun (‘this system has complexity’) or an adjective (‘this is a complex situation’). But it may be time to recognize complexity as a verb, something we do rather than something we encounter.

When we complexify a situation, we are not discovering pre-existing complexity but participating in an ongoing process of distinction-making and sense-making that brings complexity into being. The situation becomes complex through our engagement with it, just as we become complex through our engagement with the situation. Complexity emerges from what I have called epistemic coupling: the recursive interaction between knowing systems and their environments.

This verb-oriented understanding aligns with Dewey’s transactional thinking and Spencer-Brown’s emphasis on the observer’s role in creating distinctions. It suggests that when we say a situation is ‘complex’, we might more accurately say we are ‘complexifying’ our relationship with that situation through the particular ways we choose to engage with it.

This reframing has practical implications. Instead of asking ‘How can we manage this complex system?’ we might ask ‘How are we complexifying this situation, and how might we complexify it differently?’ Instead of treating complexity as a problem to be solved, we recognize complexifying as an ongoing process we participate in creating.

This perspective naturally leads to ladder consciousness. If complexity emerges from observer-system interactions, then studying complexity must include studying how we study. We cannot step outside the epistemic coupling that generates complexity in the first place.

The ladder provides a way to work with this recursion constructively. It allows us to use conceptual tools to bootstrap ourselves into meta-cognitive awareness, then abandon those tools once they have served their purpose of revealing our own participation in constructing what we take to be reality.

Final Words:
Wittgenstein’s ladder offers more than a philosophical metaphor for complexity work. It suggests a practical approach to navigating situations where traditional analytical tools reach their limits. In a world facing unprecedented challenges that resist conventional problem-solving approaches, we may need frameworks that can help us think more clearly while remaining open to possibilities we cannot yet imagine.

The ladder teaches us that sometimes the most sophisticated response to complexity is paradoxical, using our best analytical tools while remaining prepared to abandon them in favor of more direct engagement with emerging situations. Sometimes deeper understanding comes not from having better maps, but from developing better capacities for navigation in unmapped territory.

This suggests a form of wisdom that seems well-suited to our current historical moment: recursive and reflective, provisional and purposeful. Each of these qualities, which together represent a cybernetic constructivist approach, deserves elaboration.

Recursive wisdom acknowledges that we are always inside the systems we are trying to understand. It recognizes that our attempts to make sense of complexity are themselves part of the complexity we are trying to navigate. This leads to what we might call ‘meta-learning’: learning about how we learn, thinking about how we think. Recursive wisdom asks us to include ourselves in our analyses, to observe our own observing.

Reflective wisdom suggests that effective action in complex situations requires ongoing consideration of our own assumptions, biases, and blind spots. But this is not the paralysis of infinite self-doubt. Rather, it is the cultivation of the ability to think about what we are doing while we are doing it, to adjust our approach based on emerging feedback from the situation itself.

Provisional wisdom means holding our current understanding as our best guess given available information, while remaining genuinely open to revision. It means acting with conviction while maintaining epistemic humility. This is what we can call ‘fallibilism’, the recognition that any particular perspective, no matter how well-supported, might be incomplete or mistaken.

Purposeful wisdom suggests that this openness to revision is not aimless but directed toward some vision of beneficial outcomes. It means using our provisional understanding to work toward flourishing, justice, and expanded possibilities for all participants in the situation. Purposeful wisdom asks us to take responsibility for the worlds our actions help create.

Together, these aspects suggest an approach to complexity that is both humble and decisive, both open and committed. It invites us to use our best tools while holding them lightly, to think systematically while remaining open to surprise, to act decisively while staying curious about the consequences of our actions.

Perhaps most importantly, it reminds us that we are not outside observers of complex systems but participants within them. The ladder helps us climb to a perspective from which we can see this participation more clearly. And then, if we choose wisely, we can kick it away and engage more consciously with the complexity we help create.

Stay curious and always keep on learning…

The Two Dogmas of Complexity Science: How Our Best Tools Can Mislead Us

I borrow the term ‘dogma’ from W. V. Quine’s classic essay Two Dogmas of Empiricism, where he showed that unquestioned assumptions can quietly shape an entire field. Complexity science, too, rests on its own dogmas that deserve examination.

In today’s post, I want to explore what I see as two fundamental dogmas in how we think about complexity science. These dogmas are deeply embedded in our thinking, and they shape how we create tools, design interventions, and understand organizational life without our realizing it.

To explain these dogmas, let me use the chart of Ashby Space by Max Boisot and Bill McKelvey. It appears clean, scientific, and objective, the kind of visualization that makes the science feel rigorous and mathematical.

This framework comes from Ross Ashby’s Law of Requisite Variety. It maps organizational viability across different complexity regimes. It seems to offer clear insights. Systems in the ordered regime operate through routine procedures. Those in the complex regime require learning and adaptation. Those in the chaotic regime lose coherence when environmental variety exceeds their response capacity.

The 45° diagonal represents Ashby’s famous law. Only variety can absorb variety. Systems above this line face more environmental complexity than they can handle. Systems below it have excess capacity for response. From a conventional perspective, an organization might assess its position by measuring environmental turbulence against internal response capabilities. It might conclude it needs to increase internal variety to match external complexity.
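In its simplest counting form, the law can be sketched in a few lines of Python. This is our own toy illustration, not part of the Boisot and McKelvey chart: if we naively treat variety as the count of distinguishable states, a regulator with fewer responses than disturbances cannot drive the set of outcomes down to a single goal state.

```python
from math import ceil

def min_outcome_variety(n_disturbances: int, n_responses: int) -> int:
    # Counting form of Ashby's law: a regulator with R distinct responses
    # facing D distinct disturbances cannot reduce the variety of outcomes
    # below ceil(D / R). Only variety in responses can absorb variety
    # in disturbances.
    return ceil(n_disturbances / n_responses)

# Mismatched variety: 12 disturbances met by only 3 responses.
print(min_outcome_variety(12, 3))   # at least 4 outcome states survive

# Requisite variety: responses match disturbances, outcomes can collapse to 1.
print(min_outcome_variety(12, 12))  # 1
```

The sketch already hints at the trouble ahead: it only works once someone has decided what counts as a distinct disturbance or response, which is exactly the observer-dependence examined below.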

It is worth noting that Ashby himself understood variety as observer-dependent. His cybernetic work emphasized that distinctions are made by observers, not discovered in objective reality. The challenge arises when we operationalize such insights into frameworks and tools. What began as a nuanced understanding of observer-enacted variety becomes translated into seemingly measurable coordinates. This transformation from process to representation exemplifies the dogmas I want to examine.

This transformation reveals two fundamental dogmas that have shaped complexity science.

The First Dogma: Ontological Complexity Realism

The chart treats “variety of stimuli” as if it were an objective quantity that exists independently in the environment. It waits to be measured and plotted on the Y-axis. This reflects what I call ontological complexity realism. This is the belief that complexity is an intrinsic property of systems that exists regardless of who observes them.

Here lies the fundamental problem. Variety does not exist “out there” in any objective sense. What counts as variety depends entirely on the distinctions made by the observer or system. The environment does not contain variety. Variety emerges through the interaction between system and environment, mediated by the system’s capacity for making distinctions.

Let me give you a concrete example from healthcare. Is an emergency room “complex”? For a patient’s family member, the ER appears chaotic and overwhelming. Multiple alarms sound. Staff rush between rooms. Medical terminology flies around that they cannot understand. Life-and-death decisions happen at bewildering speed.

For an experienced ER physician, the same environment reveals familiar patterns. They recognize the rhythm of triage protocols. They understand the meaning behind different alarm sounds. They know the standard procedures that guide most interventions. The complexity is not inherent in the ER itself. It emerges from the coupling between the medical environment and each observer’s capacity for clinical distinction-making.

But this observer-dependence extends equally to the horizontal axis. What counts as “variety of responses” depends entirely on the distinctions the observer can make about available actions. The same ER situation reveals entirely different response repertoires to different observers.

The family member might see only binary options. Panic or wait helplessly. The nurse sees a rich array of possible interventions. The attending physician distinguishes even more nuanced response possibilities. The hospital administrator observes yet another set of responses. None of these response varieties exists independently in the situation. Each emerges from the specific capacity of the observer to make distinctions about what constitutes meaningful action.
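The point can be made concrete with a deliberately simple sketch. The `events` stream and the `family_member` and `nurse` observers below are hypothetical stand-ins of our own: the same event stream yields a different variety count for each observer, because the variety lives in the distinctions, not in the stream.

```python
# Toy illustration: "variety" of the same ER shift depends on the
# distinctions the observer can make, not on the events themselves.
events = ["cardiac alarm", "IV pump alarm", "cardiac alarm",
          "bed exit alarm", "IV pump alarm", "ventilator alarm"]

def variety(stream, observe):
    # Variety = number of distinct states this observer can tell apart.
    return len({observe(e) for e in stream})

family_member = lambda e: "alarming noise"   # everything blurs together
nurse = lambda e: e                          # distinguishes each alarm type

print(variety(events, family_member))  # 1
print(variety(events, nurse))          # 4
```

Plotting this shift on a Y-axis would require first choosing an observer, which is precisely what the chart leaves unstated.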

John Dewey understood this when he argued that organism and environment must be understood as parts of a single transaction rather than separate things that interact. Traditional thinking assumes we have an organism “here” and an environment “there.” Then we study how they interact. But Dewey argues this separation is itself an artificial division that obscures the primary reality. The ongoing transaction between organism and environment creates experience itself.

The key insight is that stimulus and response are not external to each other. They are “always inside a coordination and have their significance purely from the part played in maintaining or reconstituting the coordination”. The stimulus is not something that happens to the organism from outside. It is something “to be discovered,” something “to be made out.” It is “the motor response which assists in discovering and constituting the stimulus.”

As Dewey puts it, “The stimulus is that phase of the forming coordination which represents the conditions which have to be met in bringing it to a successful issue. The response is that phase of one and the same forming coordination which gives the key to meeting these conditions”.

This transactional view transforms how we understand knowledge. Instead of a mind representing an external world, we have knowing as a mode of transaction between organism and environment. Knowledge emerges from this transaction rather than copying something pre-existing. This is not purely subjective nor purely objective, but relational.

Applied to complexity science, Dewey’s approach reveals why Ashby Space fails. The chart treats “variety of stimuli” and “variety of responses” as if they were separate, measurable quantities. But these are artificial divisions of the ongoing transaction between system and environment. There is no variety “out there” waiting to be counted. There are no responses “in here” waiting to be catalogued. There is only the ongoing transaction through which system and environment mutually specify each other.

The Second Dogma: Epistemological Representationalism

The chart presents itself as a neutral representation of complexity regimes. This embodies what I call epistemological representationalism: the belief that our task is to discover and measure pre-existing complexity through better methods and tools.

This dogma assumes we can create objective maps of complexity that correspond to how the world really is. The clean boundaries between regimes suggest we are mapping objective territory. The precise diagonal line suggests objective measurement. The measurable axes suggest neutral observation rather than conceptual construction.

But the moment you try to actually use this framework, its claims about objectivity break down. Where exactly would you locate a specific organization on these coordinates? How would you measure “variety of stimuli” independently of the system’s own distinction-making processes?

The chart cannot answer these questions because it treats as measurable quantities what are actually dynamic processes of distinction-making. It tries to map what can only be enacted.

Humberto Maturana and Francisco Varela’s work on structural coupling reveals why this approach fails. Living systems do not represent an independent environment. They enact their world through their structure and history of coupling. As Maturana put it, “everything said is said by an observer to an observer.” The boundaries we draw around “systems” and “environments” are distinctions made by observers, not features of an objective world waiting to be mapped.

The Fundamental Contradiction: Mapping the Unmappable

Here lies the deeper issue that cuts to the heart of what we mean by complexity itself. The very notion that complexity can be mapped contradicts the fundamental nature of what it means for something to be complex.

If something is indeed complex, it resists reduction to mappable coordinates. Complexity implies emergence, unpredictability, context-sensitivity, and observer-dependence. These are not accidental features that better measurement tools might eventually overcome. They are defining characteristics of complexity itself.

Yet the frameworks prevalent in complexity science attempt to do precisely what complexity theory tells us should be impossible. They try to reduce emergent, context-dependent, observer-enacted phenomena to static, universal, objective coordinates. This creates a performative contradiction. We use the insights of complexity science to argue that phenomena are emergent and context-dependent. Then we immediately create tools that treat those same phenomena as mappable and context-independent.

The contradiction runs deeper still. If complexity truly emerges from the recursive coupling between observers and their domains of inquiry, then any attempt to create a universal map of complexity must necessarily fail. The observer drawing the map cannot step outside the epistemic coupling that generates the complexity in the first place.

Why These Dogmas Generate Persistent Puzzles

These two dogmas create persistent puzzles that are often ignored. The list below is by no means exhaustive.

The Expert-Novice Paradox: Why do experts and novices see different levels of complexity in the same system? If complexity emerges from epistemic coupling, then of course they enact different complexities. They have different capacities for distinction-making.

The Measurement Tool Problem: Why do different measurement tools reveal different complexities? If complexity is relational, then different tools necessarily enact different varieties by making different distinctions possible.

The Scaling Paradox: Why does complexity seem to change when we shift between levels of analysis? Different levels of observation necessarily enact different complexities.

The Intervention Prediction Failure: Why do interventions designed based on complexity mappings so often produce unexpected results? Because any intervention changes the observer-system relationship itself. This makes prediction inherently problematic.

These puzzles persist not because of inadequate methods. They persist because they are generated by the assumptions we bring to complexity science.

Beyond the Dogmas: Epistemic Coupling as Transaction

What if we abandoned these dogmas entirely? Instead of asking “How complex is this system?” we might ask: “How does complexity emerge from the recursive interaction between this knowing system and its environment?”

This shifts focus from measuring pre-existing complexity to understanding epistemic coupling: the dynamic process through which systems and environments mutually specify each other through ongoing interaction. Complexity becomes not a property to be measured but a relationship to be understood.

This framework synthesizes insights from three traditions.

Dewey’s Transaction Theory: Instead of separate entities that interact, we have organism-environment as a unified field. The “stimuli” and “responses” in Ashby Space are abstractions from this ongoing transaction.

Maturana and Varela’s Structural Coupling: Living systems do not represent an environment but enact their world through their structure. The coupling between system and environment is the source of complexity.

Ashby’s Cybernetics: Before the Law of Requisite Variety can even apply, an observer must create variety through distinction-making. The law cannot operate on raw reality. It requires an observer to carve up the world into meaningful categories.

This reinterpretation transforms Ashby’s contribution from a focus on objective regulatory mechanisms to an emphasis on the active and constitutive role of the knowing system in shaping the very “variety” it then seeks to regulate. Rather than discovering pre-existing variety that must be matched, systems participate in enacting the complexity they face through their own distinction-making capacities.

The Chart as Tool, Not Map

This does not mean frameworks like Ashby Space are useless. But we need to understand them differently. Not as maps of objective complexity regimes but as tools for thinking about epistemic coupling processes.

Used this way, the framework serves as what Wittgenstein called a ladder. Something we climb up to reach a new perspective, then kick away once we no longer need it. It helps us think more clearly about complexity without pretending to be complexity itself.

Final Words: Complexity as Participation

The chart looked so clean and objective at first. But complexity is messier, more relational, and more participatory than any representation can capture. That is not a limitation to be overcome. It is the very nature of what we are trying to understand.

Understanding complexity as epistemic coupling opens different possibilities. For designing systems that can remain coherent while staying open to surprise. For cultivating capacities for distinction-making that can expand as we encounter new varieties. For taking responsibility for the complexities we participate in creating.

Heinz von Foerster understood this when he formulated his ethical imperative. “Act always so as to increase the number of choices”. If we are responsible for constructing our realities through our distinctions, then we are also responsible for ensuring that others can participate in that construction.

The challenge is not to model the world but to participate in it more wisely. That participation depends fundamentally on understanding that complexity emerges from epistemic coupling. The recursive interaction between knowing systems and their domains of inquiry. This makes us responsible not just for our actions but for the worlds those actions help bring forth.

 I will finish with wise words from Quine:
No statement is immune to revision.

Stay curious and Always Keep on Learning…

The Road Not Taken: What It Means to Enact

Robert Frost’s “The Road Not Taken” was one of my favorite poems growing up. It was taught in my high school English classes. In today’s post, I am exploring the idea of enacting, and I will use Frost’s poem as the background.

When we say we are enacting, it means that meaning is not something fixed or “out there” waiting to be discovered. Meaning is constructed in the very process of engagement with a situation. It arises through our participation, through the way we bring ourselves into the world. To enact is to bring forth a situation, to make it real to us, not as an abstract idea but as a lived, embodied experience. It is not about observing passively. It is about being implicated in the situation, shaping it as much as it shapes us.

The Walk in the Woods:

Consider what happens when you walk in a forest. The conventional view suggests that trees, paths, and birdsong exist as objective features that you then perceive and interpret. But from an enactivist perspective, the very capacity to distinguish “tree” from “not-tree,” “path” from “not-path,” emerges through your embodied history of interaction. Your visual system, shaped by evolution and development, structurally couples with light patterns in ways that bring forth the phenomenon we call “seeing a tree.” The tree as a meaningful entity and you as a perceiver of trees co-emerge through this coupling. Neither exists independently of this relationship.

The phrase “walking in nature” does not carry its own meaning. The rustle of leaves, the birdsong, the way sunlight falls on the path are not simply sensory inputs. Their significance arises through my participation. My posture, my attention, my breathing, the way I anticipate each step all enact the experience. I am not a detached observer. I am a co-creator of the moment.

When we say we are enacting, we mean something far more nuanced than simply interpreting or giving meaning to neutral objects. Enaction means that the very distinctions we perceive, the boundaries between self and world, the categories through which we understand experience, all emerge through our embodied coupling with our environment. We do not discover meaning that exists independently, nor do we project meaning onto a meaningless world. Instead, meaning and world co-arise through the history of our embodied interactions.

Now imagine that the path splits. Two trails stretch out before me. One appears more traveled, familiar, comfortable. The other appears less worn, less certain. But what does it mean to take the path that seems “less traveled”? The significance of this “less traveled” quality does not exist independently of my participation. It is inseparable from the observer. The meaning is enacted because I am there, making choices, paying attention, and engaging with the path in a particular way.

Beyond the Woods:

This subtle interplay appears everywhere. In traffic, for example, we often think we are passive observers, noticing congestion or delays as if they were external facts. Yet we are part of the system we observe. Our braking, accelerating, and positioning contribute to the very dynamics we perceive. Meaning, order, and significance arise through participation, through enactment, not detached observation.

We are never outside the system looking in. We are always already coupled, always already participating in the ongoing emergence of the world we inhabit. From an embodied mind perspective, cognition is not about representing a pre-given world inside our heads. It arises in the interaction between body, brain, and environment. Perception, action, and attention are inseparable. They shape and are shaped by the world we inhabit. Meaning is not discovered. It is enacted.

What Frost Shows Us:

“The Road Not Taken” is often misread as a celebration of choosing the unconventional path. But look closely at what the poem actually says. When the speaker encounters two roads in a yellow wood, he notes that one path “was grassy and wanted wear,” seeming to suggest it was less traveled. But then comes the crucial admission:

Though as for that the passing there
Had worn them really about the same.

The paths were equivalent. There was no meaningful difference between them at the moment of choice. The speaker even acknowledges this: “And both that morning equally lay / In leaves no step had trodden black.” Both paths were equally untraveled that morning.

So where does the meaning of taking “the road less traveled” come from? It emerges in the final stanza, where the speaker projects himself into the future:

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

Notice the verb tense: “I shall be telling this.” The speaker is anticipating the story he will tell, not describing what actually happened. The meaning of the fork is not fixed in the woods. It is enacted in memory, in the narrative he constructs to hold his world together. The road becomes “less traveled” only because he enacted it as such, giving shape to his experience after the fact.

Taking The Road of Enaction:

To enact is to participate in bringing forth the very world you inhabit. This is not about construction or interpretation in the usual sense. Construction implies a pre-existing subject who builds or creates something external to themselves, preserving the subject/object dualism that enactivism explicitly rejects. In enaction, both the “constructor” and what is “constructed” emerge simultaneously. There is no independent agent doing the constructing. The very capacity to be an agent emerges through the enactive process itself.

Rather than construction or representation, enaction involves reciprocal specification. World and mind co-specify each other through embodied interaction. Your perceptual world is not a representation of an independent reality, nor is it constructed from neutral materials, but the ongoing result of your coupling with your environment.

Every step you take participates in enacting the ground as walkable. Every glance brings forth the visible world through the coupling of eye and light. Every breath participates in enacting the boundary between organism and environment. These are not passive observations of pre-existing features but absorbed engagements in the ongoing emergence of your lived world.

The forest path that splits before you exists neither as pure objectivity nor pure subjectivity. It emerges through the structural coupling between your embodied capacities and the environmental configuration. Your choice to go left or right participates in enacting not just your path but the very nature of the fork as a meaningful juncture.

This is why the “road less traveled” becomes meaningful only through its enactment. Frost did not discover an objective fact about the path’s traffic patterns. Through walking, remembering, and narrating, he participated in bringing forth a world in which one path could become “less traveled” than the other. The poem does not describe a choice between pre-existing options but demonstrates the enactive process through which organism and environment co-specify each other.

Final Words:

Meaning is not inherent in the road, nor even in the moment of choice. It arises in the way we live, remember, and retell the path we have walked. We do not stand outside the system, weighing timeless truths. We are always already part of it, enacting coherence, sometimes even reshaping the past, in order to make sense of the present.

The “road less traveled” is not an objective fact. It is an enactment whose significance comes from our participation, our stories, our presence. The poem demonstrates how we bring forth meaning through the very act of engaging with our choices and telling ourselves which road we took. In recognizing this process, we glimpse our profound capacity as active participants in shaping the reality we inhabit. Every moment of attention, every step forward, every story we tell becomes an act of creation in the ongoing emergence of our world.

Stay curious and Always Keep on Learning…

Interested readers can check out the NLM podcast version here – https://youtu.be/rUEyiNEj4yE

Connecting the Dots… Reflections on the Toyota Production System

My good friend Venky and I have completed writing “Connecting the Dots… Reflections on The Toyota Production System.” The book will be available exclusively through Cyb3rSyn (https://www.cyb3rsynlabs.com/feed) next month as an e-book. Hard copies will also be available for order next month.

This book is not another step-by-step manual or toolkit.

We provide a comprehensive exploration of the philosophical underpinnings, historical origins, and cultural roots of TPS, focusing on why Toyota implemented its methods rather than simply what those methods are.

We emphasize the underlying philosophy and spirit of TPS, the principles that give these tools meaning and power. The book covers: 

• The two foundational “Houses” of Toyota: the TPS House and The Toyota Way House. The TPS House represents the “what” and “how,” while The Toyota Way represents the “why” 

• Two pillars of TPS House: Just-in-Time (JIT) and Jidoka, tracing their origins and detailed explanations of their purpose and evolution 

• Core principles of The Toyota Way: Respect for People and Continuous Improvement (Kaizen), exploring their historical context, ethical implications, and practical manifestations 

• Other crucial concepts: Genchi Genbutsu (“go and see”), Taiichi Ohno’s adaptive production system, and the Thinking Production System

The detail extends to the influences and rationale behind these concepts, including Kiichiro Toyoda’s “daily bulletin system” for JIT, Sakichi Toyoda’s automatic loom for Jidoka, and the various forms of kaizen.

The title “Connecting the Dots…” describes our methodology and core goal. We recognize similar patterns across different domains to deepen understanding of TPS, drawing parallels from: 

• Philosophy: Zen Buddhism, Socratic method, Kantian ethics, Ubuntu 

• Science & Systems Theory: Cybernetics (Ross Ashby), Factory Physics (Kingman’s VUT equation), Psychophysics (Weber’s Law), behavioral economics, and the Forth Bridge principle 

• Art & Literature: How artists “see,” Chekhov’s Gun, and Japanese arts like Kintsukuroi and Karakuri Ningyō 

• Japanese Culture & History: Concepts like Wa, Nemawashi, Mottainai, Hansei, Shisa Kanko, and the historical context of post-WWII Japan 

• Training & Development: Insights from TWI programs

• Computer Science: Bootstrap Kaizen and Doug Engelbart’s work

This cross-disciplinary approach supports our argument that TPS is a “way of thinking” that resonates far beyond its immediate manufacturing context.

The phrase “Reflections on…” captures our purpose. This is a collection of reflections that invites readers to “read slowly, one chapter at a time, and reflect” on the ideas. The book encourages second-order thinking, prompting readers to ask “Why am I using this tool? What problem am I trying to solve?” rather than just “What tool should I use?” This reflective approach aims to provide deeper insight and more ethical guidance compared to interpretations of Lean and TPS that often focus on tools and cost reduction.

The book frames the rise of AI as a “Taiichi Ohno moment” for contemporary leaders, posing critical questions about jobs, ethics, and the preservation of human value in the workplace. It emphasizes that leaders must choose to reimagine work around uniquely human strengths to prevent human potential from becoming the “ultimate Mottainai” (wasted potential), rather than simply designing humans out of the equation.

In essence, what we are trying to achieve is to provide the reader a thoughtful, interdisciplinary journey to uncover the profound wisdom and ethical foundations that make TPS a dynamic and human-centered system, rather than a mere collection of operational techniques.

Interested readers can check out the NLM podcast for Chapter 10 here – https://youtu.be/qWcTVQUltHI.

#ToyotaProductionSystem #LeanThinking

The Art of ‘Somewhat’:

In today’s post, I am exploring Ashby’s Law of Requisite Variety and why it might be both more necessary and more slippery than most presentations suggest. Ashby’s Law might not be just another management principle. It could be a window into how we navigate complexity when the world refuses to be pinned down by our desire for certainty.

Stafford Beer once wrote something that might be more profound than it first appears:

Instead of trying to specify a system in full detail, specify it only somewhat. You can then ride on the dynamics of the system in the direction you want to go.

That word ‘somewhat’ could be carrying more weight than we realize. It might signal a kind of intellectual humility that most management theories avoid. It suggests that our relationship with complex systems is not one of mastery but of skillful navigation. Perhaps it is more like learning to surf than trying to control the ocean.

This brings us to Ashby’s Law of Requisite Variety, which is a deceptively simple statement: “Only variety can absorb variety.” It looks simple, clean, and mathematical. This is the kind of principle that promises hard, tangible answers in a soft world. We need to attenuate excess external variety so that we focus only on the relevant variety, and we need to amplify our internal variety so that we can adequately respond to the external variety.

Let us look more closely at the nuances of this law.

Ashby’s Law tells us that a regulator can only control outcomes it can distinguish and respond to. If environmental disturbances exceed the regulator’s response capacity, some disturbances will pass through uncontrolled. This is presented as a logical necessity. It appears as inevitable as gravity.

And in one sense, it is. Given any finite set of regulatory responses, there will always be environmental states that cannot be adequately handled. Mathematics seems to be unforgiving. The logic seems to be airtight.
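The counting logic behind this can be made explicit with a minimal sketch (the function name and the numbers are illustrative assumptions, not part of Ashby’s own notation): in the best case, a regulator’s responses can cut outcome variety only by a factor equal to its own response variety.

```python
import math

def min_outcome_variety(disturbance_variety: int, response_variety: int) -> int:
    """Ashby's Law in counting form: a regulator with R distinct responses
    facing D distinct disturbance types can, at best, hold the outcome down
    to ceil(D / R) distinct states. Full regulation requires R >= D."""
    return math.ceil(disturbance_variety / response_variety)

# Ten disturbance types, three responses: at least four distinct outcomes
# leak through, no matter how cleverly the responses are assigned.
print(min_outcome_variety(10, 3))   # 4

# Only when response variety matches disturbance variety can the outcome
# be held to a single goal state.
print(min_outcome_variety(10, 10))  # 1
```

Nothing in this arithmetic is negotiable, which is exactly why the law feels as inevitable as gravity once the counts are fixed.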

But mathematics operates within assumptions, and assumptions are where humans enter the picture. Most presentations of Ashby’s Law miss this. The law is simultaneously necessary and observer-dependent. It might be a constraint that applies absolutely, but only within the frames we construct.

The Indefinite World:

There is a distinction that might change how we see everything. The external variety is not infinite. It is something else entirely. It is indefinite.

Infinite means without limits. It is a mathematical concept that extends forever. Indefinite here means without defined limits. It requires someone to do the defining.

This might not be academic hairsplitting. It could be the key to understanding why Ashby’s Law feels both rock-solid and frustratingly slippery to grasp.

The world contains countless differences, but only some matter for any given purpose. Gregory Bateson captured this: “Information is a difference that makes a difference.” The same principle applies to variety. Variety is not a raw count of states “out there.” It is a relational property that emerges when an observer draws distinctions that serve a purpose.

Think about managing a parking lot as an example. How many “states” might this system have? If you only care about full or empty, there are two states. If you track individual spaces, there might be hundreds. If you include weather patterns, time of day, driver behavior, and maintenance schedules, there could be thousands. The world contains all these potential distinctions simultaneously. But variety for control purposes might depend entirely on which distinctions you choose to make matter.
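The parking-lot example can be made concrete with a small sketch (the frames and their state counts below are illustrative assumptions, not measurements of any real lot):

```python
def variety(states_per_distinction: list[int]) -> int:
    """Number of distinguishable system states under a given frame:
    the product of the states each chosen distinction can take."""
    n = 1
    for s in states_per_distinction:
        n *= s
    return n

# Frame 1: we only distinguish "full" vs "not full".
print(variety([2]))                     # 2 states

# Frame 2: 100 individual spaces, each occupied or free.
print(variety([2] * 100))               # 2**100 states

# Frame 3: add weather (4 conditions), hour of day (24),
# and maintenance status (2) on top of the 100 spaces.
print(variety([2] * 100 + [4, 24, 2]))  # larger still
```

Nothing about the lot changes between frames; only the distinctions the observer chooses to make matter do.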

This creates a fundamental tension. Ashby’s Law holds as a logical necessity. If your frame ignores differences that turn out to matter, your system will fail. But the application of the law depends entirely on how you frame the situation.

When Frames Collide with Reality:

The COVID-19 pandemic might have given us a natural experiment in how different frames handle the same underlying reality.

Some governments approached the crisis with what we might call a narrow medical frame. The pandemic was fundamentally a healthcare capacity problem for them. The focus was on hospital beds, ventilators, testing infrastructure, and transmission control. Their variety management attempted to attenuate viral spread while amplifying medical response capacity. From this perspective, lockdowns might be seen as a straightforward attenuation strategy, and field hospitals as variety amplification.

This frame had a certain elegant simplicity. The problem was clearly defined, the metrics were measurable, and the interventions had precedent in public health history.

But other governments adopted what we could call a broad socio-economic-health frame. From this perspective, the pandemic was not just a medical crisis. It might be a system-wide disruption that threatened social cohesion, economic stability, and political legitimacy simultaneously. Their variety management involved coordinated interventions across multiple domains. Public health measures, economic support packages, mental health services, educational continuity, and social solidarity initiatives.

Both approaches were tested against the same underlying reality. The virus did not care about our framing preferences. But the broader frame generally proved more viable because it might have acknowledged more of the variety that actually mattered for maintaining social stability during the crisis.

The narrow medical frame was not wrong in many regards. It might have been incomplete. It failed to account for economic disruption, compliance fatigue, mental health deterioration, and social unrest. When these unacknowledged varieties of disturbance began overwhelming the system, control failures cascaded in directions the frame could not anticipate.

This might be where Ashby’s Law reveals its true nature. The law did not prescribe which frame to use. It simply ensured that inadequate frames would reveal themselves through control failures.

The Observer Inside the System:

Here is where the story might deepen into something more complex than most management textbooks are comfortable acknowledging.

Traditional cybernetics, what we might call first-order cybernetics, treats the observer as outside the system being controlled. From this perspective, variety could be objective. You count the states, build matching responses, and apply the law mechanically.

But second-order cybernetics recognizes something that might be more unsettling. The observer is always inside the system. The regulator is part of what is being regulated. Variety is not given. It might be constructed through distinctions that reflect purpose, context, and the observer’s own limitations.

This might mean Ashby’s Law operates at two levels simultaneously. At the operational level, your responses must match the variety you have acknowledged as relevant. If you identify ten types of disturbances, you might need at least ten different responses. This could be the familiar version of the law.

But at a deeper level, your capacity to make useful distinctions must itself be adequate to the situation’s demands. If your frame excludes crucial differences, operational control might fail regardless of how well you handle the differences you do recognize.

The law does not fail when you frame poorly. Your framework fails. The law simply describes what happens when your variety is inadequate, regardless of whether that inadequacy comes from poor responses or poor framing.
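The two levels can be sketched in a few lines (the disturbance names and responses are hypothetical placeholders): operational failure would be a wrong response to an acknowledged disturbance, while frame failure is a disturbance the regulator never named at all.

```python
# A regulator modeled as a lookup from acknowledged disturbance types
# to responses. The frame is exactly the set of keys.
responses = {
    "demand_spike": "add_capacity",
    "supply_delay": "draw_on_buffer_stock",
    "quality_defect": "stop_and_fix",
}

def regulate(disturbance: str) -> str:
    # Anything outside the acknowledged frame passes through uncontrolled,
    # no matter how well the recognized disturbances are handled.
    return responses.get(disturbance, "UNCONTROLLED")

print(regulate("demand_spike"))  # add_capacity
print(regulate("pandemic"))      # UNCONTROLLED: the frame never named it
```

The law holds in both cases; only the location of the inadequacy differs.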

Back to Riding the Dynamics:

This brings us back to Beer’s insight about specification. If the world might be indefinite rather than infinite, if variety could depend on the distinctions we draw rather than existing independently, then total specification becomes not just impossible but potentially counterproductive.

The goal is not to capture all possible variety in advance. It is to develop the capacity to recognize when your current framing is failing and to generate alternatives before failure becomes catastrophic.

This reflexivity can be viewed as a type of variety amplification. Instead of just amplifying operational responses, we can amplify our capacity to reframe situations when current framings prove inadequate.

What might this look like in practice? Running scenario exercises that stress-test your assumptions. Monitoring for weak signals that could indicate emerging types of disturbance your current frame does not recognize. Institutionalizing checkpoints where teams question basic premises. Building relationships with people who might frame problems differently.

These are not just theoretical exercises but insurance policies against the kind of frame failure we saw in the early pandemic response.

The Paradox of Precision:

Here is something that might bother us about how Ashby’s Law is usually presented. It gets dressed up in mathematical clothing: formal models, game theory, Bayesian analysis, and the like. These trappings can make the approach feel objective and precise.

But precision might be exactly what we need to be suspicious of. Those models feel rigorous because, once you set the assumptions, the math is unforgiving. But who might define the players in your game theory model? Who sets the priors in your Bayesian analysis? Who decides what payoffs could matter?

Those are framing decisions. Ashby’s Law might apply before your math begins. If your framing excludes relevant variety, even perfect calculations could fail when they meet reality.

The law might remind us that objectivity begins after assumptions are set, but assumptions are never neutral. They could reflect purpose, context, and the inevitable limitations of the framers.

Living with Indefiniteness:

All this might leave the reader wondering: are we condemned to relativism, where any frame could be as good as any other?

The answer, in my opinion, is: not quite. The test of a frame might not be whether it is objectively true. That is not necessarily available to us. The test is whether it enables viable action in pursuit of purposes we care about.

This provides a practical discipline. You cannot retroactively change your frame to explain away failure. Either your original frame enabled adequate control or it did not. The test could be prospective viability, not post-hoc rationalization.

And frames do get tested. Reality pushes back. Systems fail in ways that might reveal the blind spots in our framing. The pandemic was particularly instructive because it tested everyone’s frames simultaneously against the same underlying dynamics.

The countries that performed best were not necessarily those with the most resources or the smartest experts. They might have been those with the most adaptive framing capacity: the ability to recognize when initial approaches were not working and to generate alternatives. It also means the ability to use multiple approaches instead of adhering to one or the other. Variety gives you the grip needed to grasp and manage the situation.

The Art of Somewhat:

Which brings us full circle to Beer’s notion of specifying “only somewhat.”

This might not be about being vague or uncommitted. It could be about building systems that can evolve their own specifications as they encounter unexpected variety. It might be about designing for frame flexibility rather than frame optimization.

In practical terms, this means designing feedback loops that can detect when current framings are failing. Building redundancy not just in operational responses but in framing capacity. Distributing the work of making distinctions across multiple agents. Creating safe spaces for questioning fundamental assumptions before those assumptions might lead to failure.

Most importantly, it means accepting that our relationship with complexity could be more like navigation than engineering. We might influence direction, but we cannot control destination with the precision our engineering metaphors suggest.

The question is not whether we can master complexity. The question is whether we can learn to move skillfully within it, specifying only somewhat and riding the dynamics in directions we care about.

That might not be a limitation of Ashby’s law. That could be its gift. It might free us from the impossible burden of total specification while preserving the discipline of logical constraint. It is inviting epistemic humility because we can never ever have complete information, especially when the external world has indefinite variety and is dynamic.

Final Words:

Ashby’s Law teaches us something deeper than just variety management. It shows us how to live with indefiniteness without abandoning the pursuit of viable action. In a world that refuses to hold still for our theories, the art of somewhat might be the most important skill we can develop.

The law is neither a rigid formula nor empty relativism. It is a constraint that operates within human-constructed frames, testing whether those frames prove adequate for achieving intended purposes. Its power lies not in prescribing solutions but in revealing inadequacies. It forces us to confront the relationship between our conceptual maps and the territories we are trying to navigate.

Stay Curious and Always Keep Learning…