Information from a Cybernetic Viewpoint:

In today’s post, I want to revisit the notion of information from a cybernetic viewpoint, drawing primarily from Gregory Bateson’s well-known formulation that information is the difference that makes a difference. This definition does not merely redefine information. It quietly displaces where information is assumed to reside and how it is assumed to function. This post is part of a series examining a cybernetic approach to tackling misinformation.

In everyday discourse, information is commonly treated as a thing. We speak of information being transmitted, stored, corrupted, lost, or controlled. This language suggests that information exists independently of those who encounter it, as if it were a commodity that can be packaged and delivered. Cybernetics has long resisted this framing, not by denying the existence of data in the form of signals or messages, but by insisting that information cannot be separated from the consequences it produces within a system.

Bateson’s phrasing forces a pause because it contains two differences, not one. These two differences are often collapsed into a single gesture, which obscures what cybernetics is trying to illuminate. To understand information cybernetically, these differences must be held apart and examined in relation to the observer, the context, and the viability of the system involved.

The first difference concerns distinguishability, or the ability to make distinctions. For a difference to exist as a difference, it must be generated or recognized by an observer. This does not mean that the world lacks structure or regularity. It means that distinctions do not announce themselves independently of the capacities and concerns of the cognitive observer encountering them. An observer must be able to draw a distinction for it to count as a difference at all.

This ability to distinguish is not abstract or universal. It is shaped by history, embodiment, training, and present need. In cybernetic terms, this is a question of variety. An observer with limited internal variety cannot register certain distinctions, regardless of how obvious they may appear to another observer. What goes unnoticed reflects a mismatch between the variety available and the variety required.
This immediately situates information within the notion of context. A difference that matters in one situation may be invisible or irrelevant in another. The same signal can be richly informative for one observer and entirely inert for another. From this perspective, the problem of information overload is often misdiagnosed. What overwhelms is not the quantity of differences but the absence of appropriate distinctions and filtering mechanisms within the observer.
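This variety mismatch can be sketched in a few lines of code. The example below is a toy illustration, not anything from Bateson or Ashby directly: two observers quantize the same continuous event into internal states, and the observer with fewer states simply cannot register a distinction that the other one can. All names and numbers are invented for illustration.

```python
# Toy illustration of variety mismatch: two observers with different
# internal variety encounter the same pair of environmental events.

def make_observer(levels):
    """Return an observer that maps a value in [0, 1) to one of `levels` states."""
    def observe(x):
        return int(x * levels)  # the observer's coarse internal state
    return observe

coarse = make_observer(2)   # low internal variety: only 2 distinguishable states
fine = make_observer(10)    # higher internal variety: 10 states

a, b = 0.61, 0.78           # two different events in the environment

# For the fine observer a difference exists; for the coarse observer it does not.
print(fine(a) != fine(b))      # True
print(coarse(a) != coarse(b))  # False
```

The point of the sketch is that the "difference" is not a property of the events alone. It exists only relative to the distinctions an observer is structured to draw.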

The second difference concerns consequence. Not every distinction that can be made will matter. A difference becomes information only when it participates in altering the state, orientation, or activity of the cognizing “system”. This is where the second difference enters, the difference made by the difference.
Cybernetically, this is best understood in terms of viability. A difference matters when it bears upon the conditions under which a cognizing “system” continues to operate. It may support stability, signal threat, invite adaptation, or require reorganization. A distinction that does not affect viability may still be noticed, but it does not rise to the level of information in Bateson’s sense.

In a pragmatic turn, this reframing moves information away from correctness and toward consequence. It is not enough for a distinction to be accurate or well-formed. It must matter in practice. Information is therefore tied directly to action potential, even when that action takes the form of restraint, delay, or reconsideration.

Between these two differences sits transduction. Whatever perturbation occurs in the environment does not arrive as meaning. It must be transformed through the structures of the observer. This transformation is neither passive nor optional. It is how a system turns disturbance into significance.

Transduction is deeply contextual and personal, without being arbitrary. It reflects the ways in which a system has learned to respond to its surroundings. Two observers may be perturbed by the same event, yet transduce it differently because their histories, expectations, and responsibilities differ. Meaning is not extracted from the world. It is enacted through ongoing structural coupling.

This is why information cannot be cleanly separated from the observer. What appears as the same input can lead to entirely different informational outcomes. To speak of information without speaking of transduction is to quietly reintroduce representational assumptions that cybernetics sought to set aside.
This leads naturally to the notion of informational closure. As Heinz von Foerster put it, the environment is as it is. It does not contain information waiting to be picked up. It contains events, regularities, and disturbances. Information arises only within operationally closed systems as a result of their internal changes in response to perturbation.

From this viewpoint, information is not transmitted. Signals may pass between systems, but information happens only when a system, in response to a perturbation, changes in a way that matters to it. What is stored are not information units but traces that may later participate in new acts of distinction. This undermines the idea of information as a substance that can be accumulated or depleted independently of the systems involved.

Human communication introduces an additional layer through language and social coordination. For a difference to make a difference in a social context, participants must be engaged in overlapping language games. Meaning does not reside in words alone but in shared practices, expectations, and forms of life.
Error correction, in this sense, does not occur in the signal but in interaction. A message is understood not because it is decoded correctly, but because the receiver anticipates what is likely to be meant and adjusts that anticipation through feedback. Reading a doctor’s cursive prescription is a familiar example. The pharmacist does not decipher letters in isolation. They draw upon knowledge of past interactions with the doctor, medications, dosages, and common medical practice. Understanding emerges from participation, not from transmission.

All of this brings us to a final consideration that is often neglected because it does not present itself as information at all. This is the question of slack. For a difference to make a difference, there must be sufficient room within the system for it to be taken up. This slack can appear in several forms. It may take the form of redundancy, where a distinction is encountered through multiple channels or repetitions. It may appear as amplification, where the manner of presentation gives the difference sufficient weight to register. It may also appear as relaxation time, where the system is afforded the temporal space to digest what has occurred.

Without some degree of slack, even meaningful distinctions fail to become information. When perturbations arrive faster than they can be transduced, the system does not become more informed. It becomes saturated. What follows is not heightened responsiveness but withdrawal. In many regards, the system learns that responding no longer contributes to viability.
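The saturation dynamic can be made concrete with a small toy model, assuming a fixed transduction rate and a finite buffer standing in for slack. Everything here is illustrative, not a claim about any real system: when arrivals outpace transduction and the buffer is full, perturbations are simply lost and never become information.

```python
# Toy model of slack: perturbations arrive at some rate, the system can
# transduce only `capacity_per_step` of them per time step, and `buffer_size`
# is the slack available for holding what has not yet been digested.

def simulate(arrivals_per_step, capacity_per_step, buffer_size, steps):
    buffered, transduced, lost = 0, 0, 0
    for _ in range(steps):
        room = buffer_size - buffered
        accepted = min(arrivals_per_step, room)
        lost += arrivals_per_step - accepted  # overflow never becomes information
        buffered += accepted
        done = min(buffered, capacity_per_step)
        transduced += done
        buffered -= done
    return transduced, lost

# With slack (arrivals below capacity), nothing is lost.
print(simulate(arrivals_per_step=2, capacity_per_step=3, buffer_size=5, steps=100))
# Saturated: most perturbations accumulate as loss, not as information.
print(simulate(arrivals_per_step=10, capacity_per_step=3, buffer_size=5, steps=100))
```

In the saturated case the system is not "more informed" by the larger inflow; it transduces the same fixed amount while the remainder simply fails to register.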

Relaxation time is particularly important in this regard. There was a period when news arrived with built-in pauses. A morning paper or an evening broadcast created a rhythm that allowed distinctions to settle. Between these moments, there was time for discussion, reflection, and forgetting. That rhythm provided slack and maybe allowed for a more congenial political climate.

The continuous, twenty-four-hour cycle of today’s media, in which opinion often masquerades as news, has steadily eroded this condition and altered the political landscape in ways that reward polarization and immediacy. Nowadays, perturbations arrive without pause, and the responsibility for digestion has been shifted entirely onto the observer. The result is a familiar paradox. As reports of suffering increase, the capacity to respond meaningfully diminishes. Perturbations may accumulate, but few of them make a difference.

This is often described as complacency or moral failure. From a cybernetic viewpoint, it is more accurately described as a collapse of the conditions under which information can occur. The system is overwhelmed beyond its capacity to transduce, and indifference emerges as a protective response. This leads to the conditions for the medium to become the message.


Final Words:
If information is not a commodity, then neither is attention. Both depend on proportion, timing, and care. Environments that destroy slack while demanding responsiveness do not produce better informed observers. They erode the very capacities required for differences to make a difference.

Seen this way, the preservation of informational conditions is not merely a technical concern. It is an ethical one, bound up with how we design systems, share responsibility, and allow meaning the time and space it requires to emerge.

Stay curious and always keep on learning…


If you liked what you have read, please consider my book “Second Order Cybernetics,” available in hard copy and e-book formats. https://www.cyb3rsyn.com/products/soc-book

On Self-Deception in Systems Thinking: A Kierkegaardian Mirror


In today’s post I want to spend time with Søren Kierkegaard. I have been interested in his ideas because he occupies an unusual place in the history of thought. He is considered a pioneer of existentialism, and yet he is also a man of faith. Most of the existentialist thinkers who followed him, including Sartre, built their philosophies around radical freedom and human responsibility without any reference to faith at all. Kierkegaard stands in the middle of this tension. He writes from a position of uncertainty and responsibility, but he also lets faith shape his understanding of what it means to be human. This combination gives his work a kind of depth that is difficult to classify. It also gives us a set of ideas that speak directly to the act of thinking, especially when we try to think in ‘systems’.

Thinking in ‘systems’ is often presented as an attempt to arrange the world into a coherent whole. We are encouraged to draw maps, diagrams, and loops that claim to show how everything is connected. These maps have their value, but they also create the illusion that understanding is a matter of fitting pieces together. They invite us to believe that if we only had the right model, the right picture, or the right mission statement, then clarity would follow. But the thinking domain is not the physical domain. Thoughts are not puzzle pieces, and ideas do not snap together neatly. There is no final picture on the box to guide us. There is only the ongoing work of trying to understand a world that will not hold still long enough to be captured by a diagram.

Kierkegaard seems to have understood this difficulty quite well. He believed that the greatest danger in human life is self-deception. We long for the comfort of clarity, so we often rush to declare purposes and principles. For Kierkegaard, becoming a self is not a matter of adopting a slogan. It is a lifelong task shaped by inwardness, responsibility, and the willingness to face ambiguity without trying to escape it. Our authenticity comes from this attempt. This is why he would be deeply suspicious of systems that claim to explain everything. For him such ‘systems’ flatten the complexity of human experience. They offer a kind of intellectual reassurance, but they do not help us live.

One of Kierkegaard’s most striking ideas is that life can be understood only by looking backward, but it must be lived forward. Understanding in this regard is a retrospective act. It is something we do when we look back and discover patterns in what has already happened. But living is always forward. It takes place in a stream of uncertainty, where choices must be made without guarantees and where the meaning of those choices often remains unclear until much later. This observation challenges the entire idea of systemic coherence. Systems maps work backward. They create a picture of causality after the fact. They explain what has been, but they do not show us how to live into what is unfolding. They provide a sense of structure, but this structure is largely retrospective.

This backward–forward tension reveals why the search for a perfectly coherent system is misguided. Human life does not unfold according to a diagram. Thinking does not progress by assembling pieces into a single whole. We understand our experiences only after we have lived through them. The clarity we draw from models and mission statements can therefore be misleading. It can be useful as a reflection, but it should not substitute for the lived experience of confronting ambiguity in the moment. Kierkegaard’s insight makes the entire project of declaring a mission or a golden why feel somewhat naive. These declarations claim to give direction, but direction is not something that can simply be proclaimed. Direction must be discovered through the way we participate in the world.

Another of Kierkegaard’s central ideas is that truth becomes meaningful only when it is appropriated inwardly. Truth is not something imposed from above. It must be taken up, lived, wrestled with, and made one’s own. A beautifully crafted mission statement does not create meaning. A polished systems map does not create understanding. These are only starting points. Understanding arises only when individuals confront their own limitations, their own anxieties, and the tensions that shape their experiences. For Kierkegaard, this inward appropriation is the essence of responsible living. It is also the key to responsible thinking.

Kierkegaard’s view of anxiety deepens this idea. Anxiety is not simply fear. It is the feeling that arises when one realizes one must freely choose. It is the dizziness of possibility. In the context of thinking, anxiety shows up when we face the limits of our understanding, when we recognize that we must choose what matters, and when we realize that there is no system neat enough to relieve us of that responsibility. Many organizational declarations are attempts to soothe this anxiety. They create a picture of direction that allows people to avoid the discomfort of thinking for themselves. But Kierkegaard would say that this discomfort is precisely where thinking begins.

This gives us a different language for cognitive blindness. Blindness is not only a matter of not seeing. It is often a refusal to see, a retreat into the comfort of prefabricated clarity. Thinking asks us to approach our blindness with curiosity rather than defensiveness. It invites us to engage with the friction that reveals what we had overlooked. Systems thinking, when practiced responsibly, is not about drawing neat maps. It is about cultivating the openness required to encounter what does not fit and the humility to revise our sense of the world when confronted with surprise.

Final Words:

In the end, Kierkegaard helps us see that thinking is not the work of fitting pieces together. It is the work of becoming a self, which requires inwardness, responsibility, and the willingness to live with ambiguity. He reminds us that life unfolds forward while understanding works backward. This simple observation exposes the limitations of any attempt to impose a coherent system on a world that is always in motion. Mission statements and golden whys can be helpful beginnings, but they often promise clarity without cultivating the character and perception that make clarity meaningful.

The point is not to reject purpose or systemic awareness. It is to hold our purposes lightly, to allow our thinking to be shaped by our experiences, and to accept that ambiguity is not a failure of insight but a condition of life. Systems thinking, when grounded in Kierkegaard’s lessons, becomes a stance rather than a diagram. It becomes a way of approaching the world with patience, honesty, and a readiness to see differently. This path is demanding, but it is also the one that keeps us awake to the depth and complexity of being human.

Stay curious and always keep on learning…

Minimizing Harm, Maximizing Humanity:

In today’s post, I am looking at a question that is rarely asked in management. What if the most responsible course of action is not to maximize benefit, but to minimize harm? In decision theory, this is expressed as the minimax principle. The idea is that one should minimize the worst possible outcome. In human systems, that outcome is best understood as harm to people, relationships, and the invisible infrastructure that sustains collective work.

The language of management is often dominated by the pursuit of gains. Leaders are taught to ask what is the best that can happen. They are told to optimize, to scale, and to seek advantage. The minimax principle turns this question around. It asks instead what is the worst that can happen and how do we prevent it. Every decision about maximization must be evaluated through the lens of minimizing harm. Harm minimization is not a boundary condition but the primary ethical directive that governs all other management decisions.
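The minimax rule itself is simple enough to state in a few lines of code. The sketch below is a minimal illustration of the mechanics described above; the options and harm estimates are hypothetical, invented only to show how the rule differs from gain maximization.

```python
# Minimax over hypothetical management options: pick the action whose
# worst-case harm is smallest, rather than the one with the best upside.

def minimax_choice(options):
    """`options` maps an action name to a list of harm estimates,
    one per scenario that might unfold. Returns the action whose
    maximum (worst-case) harm is minimal."""
    return min(options, key=lambda action: max(options[action]))

options = {
    # action: harm under [best case, likely case, worst case] (illustrative)
    "aggressive restructuring": [0, 40, 95],
    "gradual retraining":       [10, 25, 45],
    "status quo":               [20, 35, 60],
}

# A gain-maximizer might pick the restructuring (best-case harm of 0);
# the minimax rule picks the option with the mildest worst case.
print(minimax_choice(options))  # gradual retraining
```

Note the inversion: the rule never asks which option could deliver the most, only which option leaves the least to lose when things go badly.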

Russell Ackoff once observed that the more efficient you are at doing the wrong thing, the wronger you become. This statement captures the ethical inversion at the heart of many managerial failures. The pursuit of maximum gain often blinds organizations to the quiet forms of loss that accumulate in the background. Human systems depend on tacit networks of trust, communication, and mutual adjustment. When efficiency cuts too deeply, these invisible infrastructures collapse. The system loses its ability to adapt.

To minimize maximum harm is not to resist change. It is not an invitation to stand still. Rather, it is a recognition that progress and ethics operate according to different logics. Progress concerns improvement and expansion. Ethics concerns the protection of dignity, agency, and reversibility. Once we place harm minimization at the center of our decisions, progress becomes sustainable because it no longer depends on exploitation or exclusion.

The primary ethical directive to minimize harm requires a clear operational principle. Heinz von Foerster provided this principle with remarkable clarity: “I shall act always so as to increase the number of choices.” This is not a secondary value. This is how harm minimization is operationalized.

Consider what happens when choices are available. When options remain open, people retain the capacity to move in different directions. They can experiment, observe the results, and if those results prove harmful or undesirable, they can try a different direction. This is reversibility. It is not that decisions are undone but that people are not locked into a single path with no way out. Reversibility means the system retains the capacity to self-correct. This becomes an integral part of being viable.

When choices are removed, a different logic takes hold. A decision made under constraint, with no alternatives available, becomes irreversible. The person cannot change course because there is no other path to take. The harm accumulates and cannot be addressed through adaptation or choice. This is an important distinction. To minimize harm is to preserve the optionality that allows people to respond when things go wrong. When you increase the number of choices available to people, you prevent harm from becoming locked in place. You maintain the possibility of recovery. You keep open the horizon of possibilities. The person is not left to say I had no choice, which is the expression of the deepest form of harm, the harm from which there is no escape.

This means that every decision about maximization or progress must be evaluated through this lens. Does it increase or decrease the number of choices available to people? Does it preserve reversibility or does it close off futures? Does it prevent irreversible harm or does it create conditions from which recovery is impossible? This is how we operationalize the primary ethical directive in practice.

Werner Ulrich’s Critical Systems Heuristics extends this insight into a framework for reflective practice. Ulrich reminds us that every system boundary includes some and excludes others. Those excluded often bear the consequences of decisions without having had a voice in making them. Ethics therefore requires that we identify who loses in the system we design. Ethics requires that we act in ways that allow their participation and emancipation. To preserve choice is to protect those at the margins of decisions. It is to recognize that moral responsibility lies in how boundaries are drawn. When we ask who loses, we are asking a minimax question. We are asking what is the worst that can happen for those at the margins.

To some, the minimax principle might sound like a cautious philosophy, one that restrains progress. This would be a misunderstanding. The aim is not to prevent change but to cultivate conditions under which change can occur without catastrophic harm. Here the insights of Magoroh Maruyama are valuable. In his work on second cybernetics, he distinguished between negative feedback processes that regulate deviation and positive feedback processes that amplify it. He noted that deviation amplification is the essence of morphogenesis. Not all deviations are errors to be corrected. Some are the sources of new order and innovation. Ethical design therefore should not eliminate deviation but create conditions in which positive deviation can be generative without catastrophic harm. To minimize maximum harm is not the same as to minimize deviation. It is about preserving the space in which positive deviation can arise safely.
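Maruyama's contrast between the two feedback processes can be sketched numerically. In the toy model below, which is illustrative only, a small deviation from a reference value is scaled by a gain each step: a gain below one counteracts the deviation (regulation), while a gain above one amplifies it (morphogenesis).

```python
# Toy contrast between deviation-counteracting (negative feedback) and
# deviation-amplifying (positive feedback) processes, per Maruyama's
# second cybernetics. Gains and step counts are illustrative.

def run(gain, deviation=1.0, steps=10):
    history = [deviation]
    for _ in range(steps):
        deviation = gain * deviation  # each step rescales the deviation
        history.append(deviation)
    return history

counteracting = run(gain=0.5)  # |gain| < 1: the deviation dies out
amplifying = run(gain=1.5)     # |gain| > 1: a small difference compounds

print(round(counteracting[-1], 4))  # 0.001
print(round(amplifying[-1], 2))     # 57.67
```

The ethical point in the text maps onto the model's blind spot: the amplifying process is not inherently an error to be damped, but left entirely unbounded it can carry the system past any point of recovery, which is why the aim is safe space for deviation rather than its elimination.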

Von Foerster’s imperative and Maruyama’s insight converge here. Both point toward the idea that ethics in complex systems must not suppress variety. Von Foerster’s view was that more freedom comes with more responsibility. When we create systems that expand choice, we simultaneously increase the responsibility of those who act within them. The ethical task is not to eliminate risk but to manage it in a way that nurtures diversity and growth while protecting the conditions of future choice. To design ethically is to create the space in which deviation, learning, and emergence can unfold without irreversible harm.

Behind every visible structure of management lies an invisible infrastructure. It consists of relationships, trust, informal knowledge, and the tacit coordination that keeps work alive. This infrastructure is often taken for granted. It is noticed only when it breaks down. In the pursuit of efficiency, organizations frequently erode these invisible supports. Staff reductions, rigid procedures, and mechanistic control can destroy the very human capacities that enable adaptability and resilience. The question therefore is not what can be gained but what can be lost without recovery. True resilience depends on maintaining the conditions that allow the system to heal itself. When we ask this question, we are asking what choices we are removing from people. We are asking what futures we are closing off.

It is important to distinguish ethics from progress. Ethics does not belong to the domain of progress. Progress concerns the expansion of capability. Ethics concerns the preservation of humanity. The two may coexist, but they are not the same. Progress without ethical constraint risks creating conditions from which recovery is impossible. Ethics without openness to change risks paralysis. The minimax principle, interpreted through von Foerster and Ulrich, provides a way to hold both. It calls for action that reduces maximum harm while sustaining the capacity for continued evolution.

Maruyama’s perspective deepens this understanding. By allowing positive deviation, we cultivate the potential for new forms of order. By preserving choice, we protect against harm that would close the future. The task of management therefore is not to optimize the present but to sustain the possibility of better futures without destroying the diversity from which they may emerge.

Ackoff’s view was that the future is not something to be predicted but something to be designed. The ethical responsibility of design is to ensure that this future remains open. To minimize maximum harm is to recognize the fragility of what is human in our systems. To preserve choice is to keep open the horizon of possibility. To embrace positive deviation is to invite emergence without destruction. Ethics in management is not about perfection or certainty. It is about maintaining the delicate balance between care and change.

Final Words:

When compromises are inevitable in human systems, the most humane path is to protect what allows us to begin again. The minimax principle is an invitation to ask different questions in our organizations. It is an invitation to be aware of who loses in the systems we design. It is an invitation to increase the number of choices available to people. It is an invitation to preserve reversibility and to protect the invisible infrastructure that sustains our collective work. We are responsible for our construction of these systems. We are responsible for the futures we foreclose and the futures we keep open. To be an authentic manager is to be aware of this responsibility and to strive, always, to minimize the harm we might do while creating conditions for emergence and learning.

Stay curious and always keep on learning.

Rethinking Purpose: When Organizations Stop Having and People Start Being…

Part 1: The Reification Trap and What We Actually Observe

In today’s post, I am looking at the notion of organizational purposes in light of cybernetic constructivism. The ideas here are inspired by giants like Stafford Beer, George Spencer-Brown, Ralph Stacey, Werner Ulrich, Russell Ackoff and Erik Hollnagel.

The corporate world seems to be obsessed with organizational purpose. Mission statements adorn lobby walls. Consultants make fortunes helping executives discover their organization’s deeper calling, their “why”.

From a cybernetic constructivist perspective, this entire enterprise rests on a philosophical error. This is the notion that organizations have purposes. Organizations do not have purposes, people do.

Organizations are certainly created with specific objectives and goals in mind. For example, a company can be formed to develop software or a charity established to alleviate poverty. But the idea that these entities themselves possess purposes is what philosophers call reification, treating an abstraction as if it were a concrete thing.

Organizations have goals and objectives set by their founders or governing bodies. But purposes, the deeper sense of meaning and direction that drives behavior, belong to individuals. This distinction is crucial for understanding emergence in an organizational setting.

This is not semantic nitpicking. It is a fundamental reframe that helps us rethink how we understand organizational behavior and human experience within systems.

The Reification Trap and POSIWID:

When we say something like “our company’s purpose is to make the world more sustainable”, we commit reification. We treat an abstraction as if it were a concrete thing. Organizations are viewed wrongly as entities with intentions, values, and purposes of their own.

What organizations actually have are stated goals and objectives, declarations about what they aim to achieve. But when we strip away this corporate fiction, what remains is people. People with their own purposes, their own sense-making processes, their own constructed meanings about what matters and why.

Stafford Beer’s insight that “the purpose of a system is what it does” (POSIWID) helps us cut through the fog of stated intentions and mission statements. But when we think about what we have been saying so far, we can see that the idea of POSIWID itself could be a reification trap. In criticizing the reification of organizational purpose using POSIWID, we risk reifying “the system” itself as something that “does” things.

A way to ease out of this apparent trap is to use Wittgenstein’s Ladder. POSIWID serves as a cognitive aid, a ladder we climb to better understanding and then discard.

What we actually observe are patterns of human behavior and interaction. When we say “the system produces data harvesting behaviors”, we mean “we observe people engaging in data harvesting activities within particular structural contexts”. When we say “the system undermines individual viability”, we mean “we observe interactions between people that result in reduced individual flourishing”.

The value of POSIWID lies not in discovering what systems “really want” but in training our attention on emergent patterns of human behavior rather than declared organizational intentions. Once this shift in attention is accomplished, we can discard the system-as-actor metaphor and focus on the actual phenomenon. People with purposes interacting within conditions that constrain and enable certain patterns of behavior.

Applied to organizations, the refined principle becomes this: if we want to understand what is actually happening, we should observe the patterns of behavior and interaction that emerge from people’s purposes within particular conditions, not focus on declared organizational goals.

Patterns of Purpose Interaction:

From a cybernetic constructivist perspective, what we observe are patterns emerging from the interactions of individual purposes within structured contexts. When a software engineer’s purpose to solve elegant problems intersects with a marketer’s purpose to help people discover useful tools, and both operate within structures that reward customer satisfaction, we observe certain patterns of behavior and outcomes.

These patterns are dynamic, not fixed. As people’s individual purposes evolve, as new people join the system, as external conditions shift, the observable patterns shift too. The patterns become a living expression of ongoing purpose interactions rather than a static implementation of declared intentions.

But here’s the crucial insight from cybernetics. The observer is part of the system being observed. When we observe patterns of organizational behavior, we are not neutral external scientists. We are participants whose own purposes and perspectives shape what we see. This creates recursive loops that traditional management thinking often ignores.

The manager who observes that “people are not motivated” and implements new programs is not a neutral observer. They are a participant whose own purposes drive their observation and choice of interventions. These interventions then become part of the conditions within which other people’s purposes interact, potentially changing the very patterns the manager was trying to understand.

The refined POSIWID insight helps us see that if we want to change observable patterns, we need to understand and work with the actual purposes of the actual people involved, not impose new mission statements or organizational goals from above.

From Alignment to Resonance:

Traditional thinking seeks alignment, getting everyone pointed in the same direction toward the same stated organizational goals. But our refined understanding shows us that there is no collective entity that can choose a unified direction. In reality, there are only individuals with purposes engaging in ongoing interactions.

Some of these interactions create resonance, patterns where individual purposes amplify and support each other in ways that produce coherent behavioral patterns. Others create tension or conflict. The software engineer’s elegant problem-solving and the marketer’s user advocacy can resonate productively, creating emergent value. But this coherent behavior is not orchestrated by some collective consciousness. It emerges from how these specific people with these specific purposes interact within particular conditions.

What we observe are the behavioral patterns emerging from these ongoing purpose-interactions, not something chosen by “the organization.” Even when there are formal decision-making processes, you still have individual people making individual choices about whether to participate, how to contribute, what to support.

Understanding Recursive Viability:

When we talk about recursive systems, we mean something different from linear processes. In recursive systems, each loop is independently viable. Each person constructs their own purposes, observes their own interactions, and maintains their own capacity to adapt and respond. They are not merely components serving the larger system. They are complete systems in themselves.

People observe the patterns of interaction, including their own participation in those patterns. This observation changes how they construct their purposes, which changes their interactions, which changes the patterns, which changes what they observe. Each person completes this cycle independently while also participating in the larger patterns.

The viability of observable patterns emerges from the viability of individual participants, rather than being imposed upon them. When individual people can maintain their own purposefulness and adaptive responses, the larger patterns that emerge tend to be more resilient and creative.

Instead of asking “how do we get people to serve the organization’s purpose”, we ask “how do we create conditions where each person’s independent viability contributes to emerging patterns that enhance collective viability?”

Collective viability is not itself an entity or fixed goal. It is an emergent, dynamic pattern arising from the interactions of individual viable systems. It shifts as individual purposes evolve, as new people join the system, as conditions change.

Quality of Life and Practical Implications:

Quality of life is not something organizations provide to employees like a benefit package. It is something individuals construct through their lived experience of pursuing their purposes within particular conditions. But quality of life is both an input and output of the system. When people experience high quality of life, they bring different energy and capability to their interactions.

This reframe has practical implications. If we want to change observable patterns of behavior, we need to understand and work with the actual purposes of the actual people involved. What do people actually care about? How do their purposes complement or conflict? What conditions support the expression of these purposes?

Sustainable change happens through shifts in the interaction of purposes, not through compliance with new directives. People adapt their behavior when conditions change in ways that better enable them to pursue what they already care about, or when they develop new purposes through their lived experience of interaction with others.

Final Words:

Let go of the fiction that your organization has a purpose. Instead, get curious about the actual purposes of the actual people involved and observe the patterns of behavior that emerge through POSIWID analysis. What do they care about? How do their purposes interact? What behavioral patterns emerge from these interactions?

Then, experiment with conditions. What structures and processes support the kinds of interactions that produce the behavioral patterns you want to see more of? Pay attention to emergence while remaining aware of your position as observer. Use POSIWID as your reality check. If the observable patterns do not match the stated intentions, look to the interaction of individual purposes within current conditions for explanation.

This shift from organizational purposes to human purposes is not merely theoretical. It is practical. When we stop pretending that abstractions have agency and start working with the actual agency of actual people, we discover possibilities for organizing that honor both individual viability and collective capability.

In the next post, we will explore what this means for leadership as condition creation, boundary critique, and the challenge of supporting diverse purposes within structured contexts.

I will finish this post with a quote from Ralph Stacey:
There is no possibility of standing outside human interaction to design a program for it since we are all participants in that interaction.

Stay curious and always keep on learning…

The Monkey’s Prose – Cybernetic Explanation:

Imagine that you are on your daily walk in the park. You see a monkey on a park bench, busily typing away. Curious about what is happening, you slowly approach him from behind and try to see what is being typed on the paper. Strangely enough, what you see typed on the paper so far is legible prose, complete with grammar and semantics. What could be an explanation for this phenomenon?

This example was given by the great anthropologist and cybernetician, Gregory Bateson. He used the example to explain what he termed "cybernetic explanation". He said:

Causal explanation is usually positive. We say that billiard ball B moved in such and such a direction because billiard ball A hit it at such and such an angle. In contrast to this, cybernetic explanation is always negative… In cybernetic language, the course of events is said to be subject to restraints, and it is assumed that, apart from such restraints, the pathways of change would be governed only by equality of probability. In fact, the "restraints" upon which cybernetic explanation depends can in all cases be regarded as factors which determine inequality of probability. If we find a monkey striking a typewriter apparently at random but in fact writing meaningful prose, we shall look for restraints, either inside the monkey or inside the typewriter… Somewhere there must have been a circuit which could identify error and eliminate it.

Bateson's use of the word "restraints" is comparable to "constraints". Larry Richards notes that Bateson used the term "restraint" because the approach of cybernetics is "negative explanation", focusing on what is not desirable rather than what is. When there are no constraints, all events are equally likely. Given enough chances, a monkey will eventually type out a work of Shakespeare (the Infinite Monkey Theorem). But here we are looking at a cybernetic phenomenon where constraints are present, and they guide the outcome. In the case of the monkey's prose, one possibility is that the typewriter is programmed in such a fashion that no matter what key is pressed, a preprogrammed prose is generated. This would be an example of the kind of circuit that Bateson referred to.

Let's consider another example. Suppose that every hour you take two measurements, measurement A and measurement B. What you find is that measurement A goes up and down, while measurement B remains fairly steady. From this dataset, what correlation can you determine between A and B? Without any additional knowledge, the general consensus would be that there is no correlation between the two measurements. Now consider the mechanism of a thermostat. The thermostat does not turn ON until the temperature goes outside a tight range. It maintains the internal temperature of the house against the impact of the external temperature. In the example above, the external temperature was A and the internal temperature was B. Without knowledge of the thermostat, given just the two datasets, we would not be able to see any connection between them. This idea is sometimes referred to as Friedman's Thermostat, after the American economist Milton Friedman.
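We can make Friedman's Thermostat concrete with a toy simulation. Everything below is invented for illustration (the temperatures, the leak rate, the heater power); it is only a sketch of the idea. The heater follows a simple hysteresis rule, the indoor temperature B stays nearly flat while the outdoor temperature A swings thirty degrees, and the Pearson correlation between the two series comes out weak, even though A causally drives the heater's every action:

```python
import math

def simulate(steps=5000, dt=0.1, setpoint=68.0, band=0.5):
    """Toy thermostat: the heater turns ON only when the indoor
    temperature falls below (setpoint - band) -- a constraint-driven
    rule, not a continuous pursuit of a goal."""
    A, B = [], []                     # A: outdoor temp, B: indoor temp
    indoor, heater = setpoint, False
    for i in range(steps):
        outdoor = 40 + 15 * math.sin(2 * math.pi * i * dt / 24)  # daily swing
        # the house leaks heat toward the outdoor temp; the heater adds heat while ON
        indoor += dt * (0.05 * (outdoor - indoor) + (3.0 if heater else 0.0))
        if indoor < setpoint - band:
            heater = True
        elif indoor > setpoint + band:
            heater = False
        A.append(outdoor)
        B.append(indoor)
    return A, B

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

A, B = simulate()
# B barely moves while A swings 30 degrees, so the correlation is weak
# even though A is what drives the heater.
print(round(correlation(A, B), 2))
```

Given only the two series, the regulation hides the causal link, which is exactly Friedman's point.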

The thermostat is a very basic example of cybernetic explanation. Even though we may perceive the thermostat's goal as maintaining the room temperature at a constant value, the thermostat does not have a goal per se. It does not stay ON to ensure that the temperature is maintained at a constant value. Instead, it turns ON when the temperature goes outside a limit. The thermostat negatively "moves away" from the out-of-range temperature and stays ON until the temperature is back inside the determined range. The thermostat acts only when it hits a constraint, or is guided by the restraint, to use Bateson's language. It is not a movement towards a goal temperature of, say, 70 degrees F, but rather a movement away from a current temperature of, say, 68 degrees F. Larry Richards explained this wonderfully:

Any system with constraints appears to have a purpose as there are outcomes precluded from the set of possibilities. 

Another example we can consider is driving a car. When you drive, you apply gas or brake only when needed. You don't continuously steer the car to keep it running in a straight line. You engage only when the car is drifting towards the edges of your lane. Continuously working towards a goal requires high energy, and a human driver is not suited to that mode of operation.
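This constraint-style steering can be sketched in a few lines of code. The lane width, the "crosswind", and the size of the corrective nudge are all made-up numbers for illustration; the point is only that the driver does nothing while inside the lane and acts only when a constraint is approached:

```python
import math

def drive(steps=200, half_width=1.8, margin=0.4):
    """Constraint-style steering: do nothing while the car is inside
    the lane; nudge it back only when it drifts within `margin` of an
    edge. There is no continuous pursuit of the lane center."""
    pos, corrections, track = 0.0, 0, []
    for t in range(steps):
        pos += 0.15 + 0.1 * math.sin(t / 7.0)   # steady crosswind plus gusts
        if abs(pos) > half_width - margin:       # a constraint is hit
            pos *= 0.5                           # brief corrective nudge
            corrections += 1
        track.append(pos)
    return track, corrections

track, corrections = drive()
# The car never reaches the lane edge, yet steering acts only occasionally.
print(max(abs(p) for p in track) < 1.8, 0 < corrections < 200)
```

The behavior looks purposeful from the outside, but the code contains no goal, only an excluded region, which is Richards' point above.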

This idea of cybernetic explanation brings forth valuable insights when we look at social systems such as an organization. Richards proposes that assigning or designing a purpose for a social system can lead to problems.

I suggest avoiding or suspending… the idea of purpose. The idea of teleological systems – that systems have a purpose first, with structure following – implies that systems are created or evolve to achieve a goal or objective.

The problem in Second Order Cybernetics arises when the observers/designers specify the purpose of their designs, giving conscious intent to their actions. Gregory Bateson (1972a, 1972b) warned of the dysfunctions of conscious purpose when the actions taken do not and cannot account for all the ecological circularities of the situation and the unanticipated consequences inherent in taking such actions. Yet, humans have needs, desires, preferences and values; we are self-aware of our actions and alternatives; and, we can act with intent to satisfy our needs and desires. To act without self-awareness of our desires and the possible consequences of our actions would be irresponsible. 

Richards advises us to look for the present constraints that guide actions.

Specifying a set of constraints treats desires as a spatial concept, focusing attention on the states we wish to exclude from happening, leaving open a space of possible outcomes deemed currently acceptable. This approach is present-oriented, merging ends and means: the set of constraints that represent our desires and the actions we take to avoid what we do not want are here and now, and our evaluation of possible consequences is based on current best available knowledge. Our desires, actions and evaluations can change as we experiment, learn and change, making it important to be careful about excluding outcomes that could become useful as circumstances change. Treating desires as constraints and intention as an awareness of desires as constraints opens the door for an alternative to the consciousness of purpose about which Bateson was concerned.

The idea of cybernetic explanation and constraints raises the importance of dialogue amongst the coparticipants of the social realm. Rather than going after a narrow purpose, we may be better served by exploring the space of constraints to identify conditions that promote the outcomes we desire. When we insist on a constancy of purpose, we take a narrow view that cannot accommodate the various interpretations and desires of the many coparticipants of our social realm. Bateson viewed the pursuit of conscious purpose as damaging to the very ecology that supports being human (Klaus Krippendorff). Krippendorff offered an Empirical Imperative to support this idea:

Empirical Imperative: Invent as many alternative constructions as you can and actively explore the constraints on their affordances.

I will finish with more wise words from Richards that provide further insight about cybernetic explanation:

If I know what I want and I know it is possible to achieve it, I do not need cybernetics—I just go and do what I need to do to achieve the outcome. However, when I only have a vague idea about what I want or do not want and I do not know how to pursue or avoid it in the current society, the vocabulary of cybernetics can be useful. Cybernetics is not about success and the achievement of goals; it is about the reconfiguration of constraints (resources) in order to make possible what was not previously possible, including the avoidance of what was previously inevitable. 

Please maintain social distance and wear masks. Stay safe and always keep on learning…

In case you missed it, my last post was Complexity – Only When You Realize You Are Blind, Can You See:

The Cybernetic Aspects of OODA Loop:

[Image: John Boyd]

I briefly discussed the OODA loop in my previous post. In today's post, I will continue looking at the OODA loop and discuss its cybernetic aspects. The OODA loop was created by the great American military strategist, John Boyd. OODA stands for Observe-Orient-Decide-Act. The simplest form of the OODA loop, taken from Frans Osinga, is shown below.

[Figure: the simple OODA loop]

The OODA loop is a framework that can be used to describe how a rational being acts in a changing environment. The first step is to take in the available information as part of Observation. The rational being then has to gauge the newly gathered, analyzed, and synthesized information against previous sets of information and the relevant schemata and mental models, updating those models as needed. This allows the rational being to better Orient themselves for the next step – Decide. The rational being decides what needs to be done based on their orientation, and at that point Acts. The loop is repeated as the action triggers some reaction, which demands additional observation, orientation, decision, and action. The loop has to be repeated until a stable equilibrium is reached. Boyd was a fighter pilot and was often called "40-Second Boyd" because of his ability to get the better of his opponents in 40 seconds or less. The OODA loop was a formalization of his thoughts. See my previous post for additional information.
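As a rough illustration, the cycle can be rendered as a program. Everything below is invented for this sketch (the world as a single number, the blending weights, the action rule); it is not Boyd's formalism, only the Observe-Orient-Decide-Act steps made explicit in code:

```python
class Agent:
    """A toy rational being cycling through Observe-Orient-Decide-Act."""

    def __init__(self):
        self.model = 0.0              # internal estimate of the world's state

    def observe(self, world):
        return world["state"]         # take in the available information

    def orient(self, observation):
        # blend the new information into the existing mental model
        self.model = 0.7 * self.model + 0.3 * observation

    def decide(self):
        # choose an action that moves the (estimated) state toward 0
        return -0.5 * self.model

    def act(self, world, action):
        world["state"] += action      # acting changes the world itself

world = {"state": 10.0}
agent = Agent()
for _ in range(30):                   # repeat until a stable equilibrium
    obs = agent.observe(world)
    agent.orient(obs)
    agent.act(world, agent.decide())
print(abs(world["state"]) < 0.5)      # the loop has settled near equilibrium
```

Note that if `observe` were cut off from the world, `orient` would keep recycling a stale model, which is exactly the inward spiral Boyd warned about.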

The key points of Boyd’s teachings are:

  • A rational being has to have a link with the external world to keep updating their orientation.
  • The absence of this live link will trigger an inward spiral that leads to disorientation and entropy.
  • Based on this, a rational being has to ensure that they maintain their internal harmony, and stay in touch with the external environment.

Osinga summarized this beautifully as:

The abstract aim of Boyd’s method is to render the enemy powerless by denying him the time to mentally cope with the rapidly unfolding, and naturally uncertain, circumstances of war, and only in the most simplified way, or at the tactical level, can this be equated with the narrow, rapid OODA loop idea… This points to the major overarching theme throughout Boyd’s work: the capability to evolve, to adapt, to learn, and deny such capability to the enemy.

In “John Boyd and John Warden – Air Power’s Quest for Strategic Paralysis”, David S. Fadok explained Boyd’s ideas as:

Boyd’s theory of conflict advocates a form of maneuver warfare that is more psychological and temporal in its orientation than physical and spatial.  Its military object is “to break the spirit and will of the enemy command by creating surprising and dangerous operational or strategic situations.” To achieve this end, one must operate at a faster tempo or rhythm than one’s adversaries. Put differently, the aim of Boyd’s maneuver warfare is to render the enemy powerless by denying him the time to mentally cope with the rapidly unfolding, and naturally uncertain, circumstances of war.  One’s military operations aim to: (1) create and perpetuate a highly fluid and menacing state of affairs for the enemy, and (2) disrupt or incapacitate his ability to adapt to such an environment.

Cybernetic Aspects:

The simplest explanation of Cybernetics is (from Paul Pangaro):

Cybernetics is about having a goal and taking action to achieve that goal. Knowing whether you have reached your goal (or at least are getting closer to it) requires “feedback”, a concept that was made rigorous by cybernetics.

The term cybernetics comes from a Greek word that means "steering". Cybernetics is the art of steering towards the goal. The feedback loop allows the regulatory component of the system to adjust itself and steer the system towards the goal. An example is a thermostat where a set temperature is input as the goal: the thermostat kicks on when the temperature goes below the set point and stops once it reaches the set temperature. This is achieved through the feedback loop in the system. Pangaro continues:

The idea is this: You have goals and I have goals. If we’re in conversation, the way we find a shared goal is through probing, experimentation, alignment on means, revision of the goals, mistakes…and recursion. The recursive process of seeing a goal, aiming for it, seeing the “error” or gap and then moving to close the gap…that’s cybernetics. And the principles of cybernetics really are a way to think about everything. Or, rather…anything that has a purpose, goals, intention. So, orgs that need to shift business models, teams that need to tighten timelines…getting your friends to pick a restaurant for next week…So, everything that really matters!

Any closed loop is capable of feedback and thus has cybernetic functionality, and one can see that the OODA loop has cybernetic aspects to it. You, the rational being, are trying to get inside the opponent's OODA loop. This essentially means that you are working at a faster tempo than your opponent, and that you are able to go through your OODA loop more efficiently and effectively than they can. To do this, you need a better-equipped orientation that can also adapt as needed to the changing environment.

A key idea in cybernetics is Ross Ashby's Law of Requisite Variety (LRV). Variety in cybernetics means the number of available states of a system. In order for a system to control and regulate another system, the regulating system should have at least as much variety as the one being regulated. For example, a light switch has a variety of two (on or off). With these two states, the switch can control the light bulb to be either lit or not lit. If the demand is to have the brightness dimmed by the switch, the switch lacks the requisite variety. If we add an adjustable resistor to the switch, we increase its variety, and the switch now has the requisite variety to adjust the light across more states (off, dim, bright, fully lit).
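Here is a toy numeric version of the law; the disturbance and action sets are arbitrary numbers chosen for illustration. If the outcome is "disturbance minus action" and the regulator plays its best possible reply to every disturbance, a regulator with fewer actions than disturbances still cannot hold the outcome to a single value:

```python
def regulate(disturbances, actions):
    """Best-case regulator: for each disturbance, pick the action whose
    outcome (disturbance - action) is closest to the target value 0.
    The set returned is the residual variety the regulator cannot remove."""
    outcomes = set()
    for d in disturbances:
        best = min((d - a for a in actions), key=abs)
        outcomes.add(best)
    return outcomes

disturbances = [0, 1, 2, 3]                   # four distinct perturbations
print(sorted(regulate(disturbances, [0, 1])))        # low-variety "switch"
print(sorted(regulate(disturbances, [0, 1, 2, 3])))  # requisite variety
```

With only two actions against four disturbances, several distinct outcomes survive no matter how cleverly the regulator chooses; with four actions, every disturbance can be cancelled exactly and the outcome collapses to a single value.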

One of the ways the regulator can handle the excess variety from the environment is to attenuate it, in other words, filter out the excess variety. Our brains are very good at this. For example, when you are driving your car, most of the information coming at you gets filtered out by your brain. Your brain does not want you focusing on the color of the shirt of the driver coming in the opposite direction.
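Attenuation is, at bottom, just filtering. In this minimal sketch (the scene and its keys are invented for illustration), the regulator drops the input channels that are irrelevant to its task, cutting down the variety it must process:

```python
def attenuate(observation, relevant_keys):
    """Filter an observation down to the dimensions the task needs."""
    return {k: v for k, v in observation.items() if k in relevant_keys}

# A driver's "scene": only some channels matter for staying in the lane.
scene = {"oncoming_speed": 55, "lane_offset": 0.3, "shirt_color": "red"}
filtered = attenuate(scene, {"oncoming_speed", "lane_offset"})
print(filtered)  # shirt_color is gone: variety attenuated from 3 channels to 2
```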

Another way the regulator can attempt to control a system is to amplify its own variety so that it has a better capability to control certain factors. An example of this is the use of the sabermetric approach to assemble a baseball team, as narrated in the book and movie Moneyball.

Ultimately, in order to regulate a system, the regulating system must attenuate unwanted variety and amplify its own variety so that requisite variety is achieved.

John Boyd was aware of the power of cutting off the variety of the opponent.

Fadok explains:

Boyd proposes that success in conflict stems from getting inside an adversary’s OODA loop and staying there. The military commander can do so in two supplementary ways.

First, he must minimize his own friction through initiative and harmony of response. This decrease in friendly friction acts to “tighten” his own loop (i.e., to speed up his own decision-action cycle time).

Second, he must maximize his opponent’s friction through variety and rapidity of response. This increase in enemy friction acts to “loosen” the adversary’s loop (i.e., to slow down his decision-action cycle time). Together, these “friction manipulations” assure one’s continual operation within the enemy’s OODA loop in menacing and unpredictable ways. Initially, this produces confusion and disorder within the enemy camp. Ultimately, it produces panic and fear which manifest themselves in a simultaneous paralysis of ability to cope and willingness to resist.

Fadok's thesis details that Boyd was, in effect, performing variety attenuation and amplification, referred to as "variety engineering" in Management Cybernetics.

In cybernetics, information is of paramount importance. Information in many regards can be seen as the fuel in the "feedback engine". Stale or wrong information can steer the system in the wrong direction, sometimes at its own peril. The most important phase of the OODA loop is the Orientation phase: the phase where the internal schemata and mental models are reviewed and updated as needed based on incoming information. Boyd identified this really well. From Fadok:

The operational aim should be to ensure the opponent cannot rid himself of these menacing anomalies by hampering his ability to process information, make decisions, and take appropriate action. In consequence, he can no longer determine what is being done to him and how he should respond. Ultimately, the adversary’s initial confusion will degenerate into paralyzing panic, and his ability and/or willingness to resist will cease.

Final Words:

Most of us, I hope, are not engaged in wars. What can we then learn from the OODA loop?

The OODA loop gives us a good framework to understand how we make decisions and interact. It points out the utmost importance of staying connected to the source (gemba) and getting information as "fresh" as possible. We should keep our feedback loops short; this gives us a margin of safety even when our decisions are slightly imperfect, because the feedback allows us to steer as needed. A long feedback loop makes the information stale or incorrect, and we would not be able to steer away from trouble. We should update our mental models to match our reality, and check that each new piece of information coheres well with our constructed schemata and mental models. We should understand how to minimize our internal friction. We should attenuate unwanted variety and amplify our own variety to better adapt to a changing environment. If we are in an inward spiral and feel disoriented, we should ground ourselves in reality by observing our surroundings, and stop the perilous inward spiral. Understanding the constraints in the surroundings may also help us understand why some people make the decisions they do.

I will finish with some wise words from John Boyd (taken from The Essence of Winning and Losing):

Without analyses and synthesis, across a variety of domains or across a variety of competing/independent channels of information, we cannot evolve new repertoires to deal with unfamiliar phenomena or unforeseen change.

 Without OODA loops, we can neither sense, hence observe, thereby collect a variety of information for the above processes, nor decide as well as implement actions in accord with those processes… Without OODA loops embracing all the above and without the ability to get inside other OODA loops (or other environments), we will find it impossible to comprehend, shape, adapt to, and in turn be shaped by an unfolding, evolving reality that is uncertain, everchanging, unpredictable 

In case you missed it, my last post was OODA Loop at the Gemba:

OODA Loop at the Gemba:

[Image: John Boyd]

In today's post, I am looking at the OODA loop, the brainchild of Col. John Boyd, a highly influential American military strategist. OODA is an acronym for Observe, Orient, Decide, and Act. Boyd did not write any book detailing his ideas; however, he wrote several papers and gave lectures detailing them. Boyd was a fighter pilot with the US Air Force. He was famously dubbed "40-Second Boyd"; legend has it that he could defeat any pilot who took him on in less than 40 seconds.

Frans Osinga, in his excellent book "Science, Strategy and War", explained the OODA loop as:

OODA stands for observation, orientation, decision, action. Explained in brief, observation is sensing yourself and the world around you. The second element, orientation, is the complex set of filters of genetic heritage, cultural predispositions, personal experience, and knowledge. The third is decision, a review of alternative courses of action and the selection of the preferred course as a hypothesis to be tested. The final element is action, the testing of the decision selected by implementation.  The notion of the loop, the constant repetition of the OODA cycle, is the essential connection that is repeated again and again.  Put simply, Boyd advances the idea that success in war, conflict, competition even survival hinges upon the quality and tempo of the cognitive processes of leaders and their organizations.

The OODA loop is generally shown as the schematic below:

[Figure: the simple OODA loop]

John Boyd’s final version of the OODA loop is given below:

[Figure: John Boyd's final version of the OODA loop]

From Osinga:

(Boyd) was the first to observe that the common underlying mechanism involved tactics that distort the enemy’s perception of time. He identified a general category of activities to achieve this distortion, the ability to change the situation faster than the opponent could comprehend, which he called “operating inside the Observation– Orientation–Decision–Action (OODA) loop.”

Boyd wonderfully explains the idea of getting inside the opponent’s OODA loop in his paper, “Destruction and Creation.”

Destruction and Creation:

Boyd starts by explaining that we have conceptual models of the external world, of reality. We interact with reality, and we update these models based on our continuous interaction. He stated:

To comprehend and cope with our environment we develop mental patterns or concepts of meaning. The purpose of this paper is to sketch out how we destroy and create these patterns to permit us to both shape and be shaped by a changing environment. In this sense, the discussion also literally shows why we cannot avoid this kind of activity if we intend to survive on our own terms. The activity is dialectic in nature generating both disorder and order that emerges as a changing and expanding universe of mental concepts matched to a changing and expanding universe of observed reality.

Boyd said that we are in a continuous struggle to remove or overcome physical and social environmental obstacles. This means that we have to take actions and make decisions on an ongoing basis for our survival. We have to keep modifying our internal representation of reality based on new data. He called this destruction and creation, which he further detailed as analysis and synthesis: a reductive process of taking things apart, and a constructive process of assembling things together to gather meaning.

There are two ways in which we can develop and manipulate mental concepts to represent observed reality: We can start from a comprehensive whole and break it down to its particulars or we can start with the particulars and build towards a comprehensive whole.

Readers of this blog might see that the ideas of analysis and synthesis are very important in Systems Thinking. Boyd was an avid reader, and he was able to see similar ideas in various fields and bring them all together. His sources of inspiration ranged from Sun Tzu and Toyota to Kurt Gödel.

Boyd continued that the acts of analysis and synthesis require verification to ensure that the newly created mental representation is appropriate.

Recalling that we use concepts or mental patterns to represent reality, it follows that the unstructuring and restructuring just shown reveals a way of changing our perception of reality. Naturally, such a notion implies that the emerging pattern of ideas and interactions must be internally consistent and match-up with reality… Over and over again this cycle of Destruction and Creation is repeated until we demonstrate internal consistency and match-up with reality.

Boyd brilliantly brings in the ideas of the great logician, mathematician, and analytic philosopher Kurt Gödel. Gödel shook the world of mathematics and logic in 1931 with his two phenomenal Incompleteness Theorems. He proved that any consistent formal system expressive enough to describe arithmetic contains statements that cannot be proven within the logical structures of the system, and that such a system cannot demonstrate its own consistency. Gödel's ideas were so powerful that the great polymath von Neumann is said to have remarked, "it's all over!"

Boyd used ideas from Gödel, Heisenberg's uncertainty principle, and entropy to further explain his OODA loop. Boyd explained Gödel's ideas as:

"You cannot use a system's own workings to determine if a system is consistent or not… One cannot determine the character and nature of a system within itself. Moreover, attempts to do so will lead to confusion and disorder."

This was Boyd's great insight. One has to continuously stay in touch with the environment to maintain a consistent internal representation of reality. If the link to the environment is cut off, the internal representation becomes faulty, and the continuous destruction and creation of that representation is then based on faulty references.

“If I have an adversary out there, what I want to do is have the adversary fold back inside of himself where he cannot really consult the external environment he has to deal with, if I can do this then I can drive him to confusion and disorder, and bring him into paralysis.”

Boyd stated:

According to Gödel we cannot— in general—determine the consistency, hence the character or nature, of an abstract system within itself. According to Heisenberg and the Second Law of Thermodynamics any attempt to do so in the real world will expose uncertainty and generate disorder. Taken together, these three notions support the idea that any inward-oriented and continued effort to improve the match-up of concept with observed reality will only increase the degree of mismatch. Naturally, in this environment, uncertainty and disorder will increase as previously indicated by the Heisenberg Indeterminacy Principle and the Second Law of Thermodynamics, respectively. Put another way, we can expect unexplained and disturbing ambiguities, uncertainties, anomalies, or apparent inconsistencies to emerge more and more often. Furthermore, unless some kind of relief is available, we can expect confusion to increase until disorder approaches chaos— death.

Orient – the Most Important Step:


The most important step in the OODA loop is the second O – Orient. This is the step concerning our mental models and internal representation of the external world. This is where all the schemata reside.

Boyd wrote:

The second O, orientation—as the repository of our genetic heritage, cultural tradition, and previous experiences—is the most important part of the O-O-D-A loop since it shapes the way we observe, the way we decide, the way we act.

From Osinga:

Orientation is the schwerpunkt (center of gravity). It shapes the way we interact with the environment.

In this sense, Orientation shapes the character of present observation-orientation-decision-action loops – while these present loops shape the character of future orientation.

Chet Richards, a friend and colleague of Boyd, writes about orientation:

Orientation, whether we want it to or not, exerts a strong control over what we observe. To a great extent, a person hears, as Paul Simon wrote in “The Boxer,” what he wants to hear and disregards the rest. This tendency to confirm what we already believe is not just sloppy thinking but is built into our brains (Molenberghs, Halász, Mattingley, Vanman, and Cunnington, 2012) … Strategists call the tendency to observe data that confirm our current orientations “incestuous amplification”.

Final Words:

The OODA loop is a versatile framework to learn and understand. We already use the concept unconsciously. Knowledge of the OODA loop helps us prepare to face uncertainty in an ever-changing environment. In today’s world, we can also see how intentional misinformation can heavily disorient people and distort reality.

We should always stay close to the source, the gemba, to gather our data. We should keep updating our mental models rather than relying on old ones. We should not seek only data that corroborates our hypotheses. We should continuously update and improve our orientation. We should learn from a variety of fields.

We should allow local autonomy in our organization. This allows for better adaptation, since local agents are closer to the source. The failure to adapt to a fast-changing environment can also be explained by Murray Gell-Mann’s maladaptive schemata. From Osinga:

One of the most common reasons for the existence of maladaptive schemata is that they were once adaptive, but under conditions that no longer prevail. The environment has changed at a faster rate than the evolutionary process can accommodate.

In case you missed it, my last post was AQL/RQL/LTPD/OC Curve/Reliability and Confidence:

Clausewitz at the Gemba:

In today’s post, I will be looking at Clausewitz’s concept of “friction”. Carl von Clausewitz (1780-1831) was a Prussian general and military philosopher. Clausewitz is considered to be one of the best classical strategy thinkers and is well known for his unfinished work, “On War.” The book was published posthumously by his wife Marie von Brühl in 1832.

War is never a pleasant business, and it takes a terrible toll on people. The accumulated effect of factors such as danger, physical exertion, intelligence (or the lack of it), and the influence of environment and weather, all subject to chance and probability, is what distinguishes real war from war on paper. Friction, Clausewitz noted, was what separated the two. Friction, as the name implies, hinders the proper and smooth execution of strategy and clouds the rational thinking of agents. He wrote:

War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty.

Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war.

Friction is the only conception which, in a general way, corresponds to that which distinguishes real war from war on paper. The military machine, the army and all belonging to it, is in fact simple; and appears, on this account, easy to manage. But let us reflect that no part of it is in one piece, that it is composed entirely of individuals, each of which keeps up its own friction in all directions.

Clausewitz viewed friction as impeding our rational ability to make decisions. He cleverly stated, “the light of reason is refracted in a manner quite different from that which is normal in academic speculation… the ordinary man can never achieve a state of perfect unconcern in which his mind can work with normal flexibility.” In a tense situation, as is most often the case in combat, the “freshness” or usefulness of the available information decays quickly, and its reliability is also in question.

Friction is what happens when reality differs from your model. Although Clausewitz’s concept of friction contains other elements, the one I am interested in is the friction coming from ambiguous information. Uncertainty and information are related to each other; in fact, one is the absence of the other. The only way to reduce uncertainty is to have the required information that counters it. To quote Wikipedia, “uncertainty refers to epistemic situations involving imperfect or unknown information.” If we have full information, then we don’t have uncertainty. It’s a zero-sum game.
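This zero-sum relation between information and uncertainty has a precise counterpart in Shannon’s information theory, where entropy measures the information still missing to resolve a question. A minimal sketch, with made-up numbers (the supplier scenario is purely illustrative):

```python
# Entropy (in bits) as a measure of uncertainty: how much information is
# still needed to answer a question. Toy scenario with hypothetical numbers:
# which of four suppliers caused a defect?
import math

def entropy(probs):
    # Shannon entropy in bits; terms with p = 0 contribute nothing
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before any data: all four suppliers equally likely -> 2 bits of uncertainty
prior = [0.25, 0.25, 0.25, 0.25]

# One piece of relevant information rules out two suppliers -> 1 bit remains
posterior = [0.5, 0.5, 0.0, 0.0]

gained = entropy(prior) - entropy(posterior)
# gained == 1.0: the information acquired equals the uncertainty removed
```

Each bit of information acquired is exactly one bit of uncertainty removed, which is the zero-sum relationship stated above in quantitative form.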

We have two options to deal with the uncertainty due to informational friction:

  1. Reduce uncertainty by making useful information readily available to the required agents when and where it is needed.
  2. Come up with ways to tolerate uncertainty when we are not able to reduce it further.

As Moshe Rubinstein points out in his wonderful book, Tools for Thinking and Problem Solving, uncertainty is reduced only by the acquisition of information, and we need to ask three questions, in the order specified, when acquiring information.

  1. Is the information relevant? (is it current, and is the context applicable?)
  2. Is the information credible? (is it accurate?)
  3. Is the information worth the cost?
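Rubinstein’s order matters: a piece of information that fails an early question never reaches the later ones, so the cheap checks screen out candidates before the expensive cost judgment. The three questions can be read as an ordered filter; a minimal sketch, where the field names and budget are hypothetical:

```python
# Rubinstein's three questions as an ordered filter (field names and the
# budget threshold are hypothetical, for illustration only).

def worth_acquiring(item, budget):
    # 1. Is it relevant? (current, and the context applies)
    if not (item["current"] and item["context_applies"]):
        return False
    # 2. Is it credible? (accurate)
    if not item["accurate"]:
        return False
    # 3. Is it worth the cost?
    return item["cost"] <= budget

fresh_report = {"current": True, "context_applies": True,
                "accurate": True, "cost": 50}
stale_rumor = {"current": False, "context_applies": True,
               "accurate": True, "cost": 0}

# fresh_report passes all three questions; stale_rumor fails the first
# question, so its credibility and cost are never even considered.
```

The design point is short-circuiting: irrelevant information is rejected before we spend effort judging its accuracy or price.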

How should we proceed to minimize the friction?

  1. We should try to get the total picture, an understanding of the forest, before we get lost in the trees. This helps us realize where our epistemic boundaries might be, and where we need to improve our learning.
  2. We should have the courage to ask questions and cast doubts on our world views. Even with our belief system, we can ask whether it is relevant and credible. We should try to ask – what is wrong with this picture? What am I missing?
  3. We should always keep on learning. We should not shy away from “hard projects.” We should see the challenges as learning experiences.
  4. We should expect our plan to fail and be ready for it. We should understand what the “levers” are in our plan. What happens when we push on one lever versus pulling on another? We should use models with the understanding that they are not perfect, but that they help us understand things better. We should rely on heuristics and flexible rules of thumb, which bend more readily when things go wrong.
  5. We should reframe our understanding from a different perspective. We can try to draw things out, write about them, or even talk about them with our spouse or family. Different viewpoints should be welcomed. We should generate multiple analogies and stories to help tell our side of the story. These will only help further our understanding.
  6. When we make decisions under uncertainty and risk, each action can result in multiple outcomes; most of the time, these are unpredictable and can have large-scale consequences. We should engage in fast, safe-to-fail experiments and have strong feedback loops so we can change course and adapt as needed.
  7. We should have stable substructures when things fail. This allows us to go back to a previous “safe point” rather than go back all the way to the start.
  8. We should go to gemba to grasp the actual conditions and understand the context. Our ability to solve a problem is inversely proportional to the distance from the gemba.
  9. We should take time, as permissible, to detail our plan, but we should be ready to implement it fast. Plan like a tortoise and run like a hare.
  10. We should go to the top to take a wide perspective, and then come down to have boots on the ground. We should take time to reflect on what went wrong and what went right, and on what our impact was on ourselves and others. This is the spirit of Hansei in the Toyota Production System.
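Point 7 above, stable substructures, works like checkpointing: when a step fails, we roll back to the last safe point instead of restarting from scratch. A minimal sketch, where the step names and the simulated failure are hypothetical:

```python
# "Stable substructures" as checkpoints: each successful step becomes a safe
# point; a failing step rolls back to the most recent safe point rather than
# to the beginning. Step names and the failure are hypothetical.

def run_plan(steps, state):
    checkpoints = [dict(state)]              # the start is the first safe point
    for step in steps:
        try:
            state = step(state)
            checkpoints.append(dict(state))  # each success becomes a safe point
        except RuntimeError:
            state = dict(checkpoints[-1])    # roll back, don't restart
    return state

def add_parts(s):
    s = dict(s)
    s["parts"] = 10
    return s

def faulty_assembly(s):
    raise RuntimeError("machine jam")        # simulated failure

final = run_plan([add_parts, faulty_assembly], {"parts": 0})
# final == {"parts": 10}: the jam loses the failed step, not the earlier work
```

Without the checkpoint list, the failure would force a return to the initial state; with it, only the failed step is lost.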

Final Words:

Although not all of us are engaged in a war at the gemba, we can learn from Clausewitz about the friction from uncertainty, which impedes us on a daily basis. Clausewitz first used the term “friction” in a letter he wrote to his future wife, Marie von Brühl, in 1806. He described friction as the effect that reality has on ideas and intentions in war. Clausewitz was a man ahead of his time, and from his works we can see elements of systems thinking and complexity science. He wrote:

We propose to consider first the single elements of our subject, then each branch or part, and, last of all, the whole, in all its relations—therefore to advance from the simple to the complex. But it is necessary for us to commence with a glance at the nature of the whole, because it is particularly necessary that in the consideration of any of the parts the whole should be kept constantly in view. The parts can only be studied in the context of the whole, as a “gestalt.”

Clausewitz realized that each war is unique and thus what may have worked in the past may not work this time. He said:

Further, every war is rich in particular facts; while, at the same time, each is an unexplored sea, full of rocks, which the general may have a suspicion of, but which he has never seen with his eye, and round which, moreover, he must steer in the night. If a contrary wind also springs up, that is, if any great accidental event declares itself adverse to him, then the most consummate skill, presence of mind and energy, are required; whilst to those who only look on from a distance, all seems to proceed with the utmost ease.

Clausewitz encourages us to get out of our comfort zone and gain as much variety of experience as we can. The variety of states in the environment is always larger than the variety of states we can hold. He goes on to advise the following to reduce the impact of friction:

The knowledge of this friction is a chief part of that so often talked of, experience in war, which is required in a good general. Certainly, he is not the best general in whose mind it assumes the greatest dimensions, who is the most overawed by it (this includes that class of over-anxious generals, of whom there are so many amongst the experienced); but a general must be aware of it that he may overcome it, where that is possible; and that he may not expect a degree of precision in results which is impossible on account of this very friction. Besides, it can never be learnt theoretically; and if it could, there would still be wanting that experience of judgment which is called tact, and which is always more necessary in a field full of innumerable small and diversified objects, than in great and decisive cases, when one’s own judgment may be aided by consultation with others. Just as the man of the world, through tact of judgment which has become habit, speaks, acts, and moves only as suits the occasion, so the officer, experienced in war, will always, in great and small matters, at every pulsation of war as we may say, decide and determine suitably to the occasion. Through this experience and practice, the idea comes to his mind of itself, that so and so will not suit. And thus, he will not easily place himself in a position by which he is compromised, which, if it often occurs in war, shakes all the foundations of confidence, and becomes extremely dangerous.

US President Dwight Eisenhower said, “In preparing for battle I have always found that plans are useless, but planning is indispensable.” The act of planning helps us to conceptualize our future state. We should strive to minimize the internal friction, and we should be open to keep learning, experimenting, and adapting as needed to reach our future state. We should keep on keeping on:

“Perseverance in the chosen course is the essential counter-weight, provided that no compelling reasons intervene to the contrary. Moreover, there is hardly a worthwhile enterprise in war whose execution does not call for infinite effort, trouble, and privation; and as man under pressure tends to give in to physical and intellectual weakness, only great strength of will can lead to the objective. It is steadfastness that will earn the admiration of the world and of posterity.”

Always keep on learning…

In case you missed it, my last post was Exploring The Ashby Space: