The -isms of a Man Who Rejected -isms:

In today’s post, I am exploring one of the most fascinating aspects of Heinz von Foerster’s work: his complete rejection of philosophical labels and -isms. Von Foerster, the Austrian-American physicist and cybernetician, refused in his later years to be pinned down by any single philosophical position. This was not philosophical indecision but a carefully crafted stance that reflected his deepest insights about observation, responsibility, and the nature of knowledge itself.

Von Foerster held that he was an -ist only of the -isms he could laugh at. While there is no definitive record of this exact phrase, in my opinion it perfectly captures his approach to philosophical thinking. He would commit only to philosophical positions that he could maintain with lightness and humor, positions that did not take themselves too seriously. This prevented his thinking from becoming rigid or dogmatic. He treated thought as an ongoing exploration, not a fixed doctrine.

To understand why this matters, let me walk through the major -isms that von Foerster consistently stepped around, and show you how his alternative approach offers something far more powerful than any single philosophical position.

Objectivism – The View from Nowhere:

Objectivism claims there is a world “out there,” independent of us, that we can know through careful observation and measurement. It insists on a sharp separation between the observer and what they observe.

Von Foerster had no patience for this illusion. As he put it:

Objectivity is the delusion that observations could be made without an observer.

This was not just philosophical wordplay. Von Foerster understood something deeper about responsibility:

Objectivity is one of the great tricks to get rid of responsibility.

When we claim to simply observe “what is,” we avoid acknowledging our role in bringing forth what we see. This is particularly important in his work on self-organizing systems. As he demonstrated, “There are no such things as self-organizing systems!” What we observe is always a system in close contact with an environment, in a state of perpetual interaction. The observer and the observed emerge together.

This insight reverses the logic of classical science. Instead of trying to eliminate the observer, von Foerster insisted that the observer must be included in the description of the observing system.

Subjectivism – The Prison of the Self:

You might think that if objectivism is wrong, then subjectivism must be right. Subjectivism is the idea that reality is purely a personal interpretation. Von Foerster rejected this view as well.

Von Foerster explicitly refuted the notion that reality is solely the product of an individual’s imagination. He used what philosophers call a “reductio ad absurdum” argument to show the logical impossibility of pure subjectivism.

As he put it:

If I assume that I am the sole reality, it turns out that I am the imagination of somebody else, who in turn assumes that he is the sole reality.

This paradox is easily resolved by postulating the reality of the world in which we happily thrive.

He also addressed the idea of isolated experience directly. When people talk about being alone with their thoughts, von Foerster pointed out: “The man alone? He would just have to re-member that he is only alone when compared to others.” Even the concept of being “alone” requires the existence of others as a reference point.

Subjectivism treats knowledge as trapped inside individual minds. But von Foerster understood that knowledge emerges through interaction, not isolation. When discussing cognition, he clarified that subjectivism fundamentally misunderstood what knowing is. As he explained, your nervous activity is just your nervous activity and, alas, not mine. Knowledge and information cannot simply be “passed on” as commodities from one person to another because they are processes of individual nervous systems.

Instead, von Foerster showed us that reality appears as a consistent reference frame for at least two observers. This is crucial. Meaning does not reside in isolated subjects but arises between them, through their coordinated actions and mutual orientation.

This insight connects to what von Foerster called the fundamental structure of communication. Following Maturana’s theorem that “Anything said is said by an observer,” von Foerster added his corollary: “Anything said is said to an observer.” These two propositions establish what he called a nontrivial connection between three concepts: first, the observers; second, the language they use; and third, the society they form by the use of their language. He compared this to the chicken, egg, and rooster problem: You need all three in order to have all three.

Human consciousness, including self-awareness and self-reflection, emerges from this social foundation. As von Foerster explained: “Self-awareness and self-reflection arise in ‘languaging’, which is necessarily a social affair.” We are conscious, he argued, because “we ‘know with’ ourselves” precisely because we “‘know with’ others.” This awareness of mutual interdependence is “the root of conscience.” The “other” is what makes us a “self.”

Even the objects we perceive are not pre-existing entities but what von Foerster called “tokens for eigenbehaviors”. They are “indications of processes” that arise from our interactions. “In the process of observation, we interact with ourselves and with the world to produce stabilities that become the objects of our perception.”

Von Foerster distinguished between “the reality” (as confirmed by independent observations) and “a reality” (as constructed through correlations). He preferred the latter approach: “My sensation of touch in correlation with my visual sensation generate an experience that I may describe by ‘here is a table.'”

This is why subjectivism fails. There is no pure inner experience independent of interaction with others and environment. The subject and the world they know arise together through recursive processes of coordination. As von Foerster put it: “without language and outside language there are no objects, because objects only arise as consensual coordinations of actions in the recursion of consensual coordinations of actions that languaging is.”

Relativism – The Collapse of Commitment:

Many people assume that if you reject both objectivism and subjectivism, you must be a relativist. A relativist is someone who thinks all views are equally valid. Von Foerster avoided this trap too.

He did not believe all truths were equal. He believed we are responsible for the truths we construct. This led to his famous ethical imperative:

Act always so as to increase the number of choices.

This was not tolerance born from “anything goes” thinking. It was responsibility born from understanding that we are the ones drawing distinctions, and we must accept responsibility for what those distinctions allow and exclude.

Von Foerster’s approach to education illustrates this perfectly. He distinguished between “legitimate questions” (questions to which the answers are unknown) and “illegitimate questions” (questions to which the answers are already known). Some ways of questioning are simply more generative than others. A relativist might say all questions are equally valid. Von Foerster insisted that only legitimate questions open up new possibilities for learning and growth.

The Constructive Alternative – Cybernetic Constructivism:

So what was von Foerster’s alternative? If we must give it a label, we might call it cybernetic constructivism. But he would probably laugh at that label too.

His key insight was this: “The environment contains no information. The environment is as it is. Information is a cognitive function.”

This leads to a profound recognition. Meaning does not exist “out there” waiting to be discovered, nor is it trapped “in here” within individual minds. It emerges through the recursive process of observation itself.

As von Foerster put it: “If you want to see, learn how to act.”

This brings us to his most challenging statement: “I shall act always as if I were the creator of the world I perceive.”

This is not solipsism, the idea that only your mind exists. It is about responsibility. The world we bring forth is shaped by our choices, our distinctions, our attention, our participation with others.

The Ethics of Observation:

This is where von Foerster’s approach becomes deeply ethical. If there is no privileged position outside the system, then every observer is responsible for their constructions. There is no place to hide.

Von Foerster expressed this beautifully: “At any moment we are free to act toward the future we desire.”

You are the Copernican revolution. The observer is not the center of the universe, but every observation necessarily has them at its center. That realization is neither humbling nor arrogant; it is liberating. Because if you are responsible for your world, you are also free to change how you perceive and engage with it.

This is the deepest insight of von Foerster’s anti-philosophy. In taking responsibility for our constructions, we discover our freedom to construct differently. The future “will be as we wish and perceive it to be” not because we can impose our will on reality, but because future and perceiver arise together through the choices we make and the possibilities we keep open.

Communication as Dance:

Von Foerster understood communication differently too. As he explained:

“Language for me is an invitation to dance… When we are talking with each other, we are in dialogue and invent what we both wish the other would invent with me.”

We are not separate minds trying to transmit fixed meanings, but partners in an ongoing creative process. This view of communication aligns perfectly with his rejection of fixed philosophical positions. Every conversation is an opportunity to create something new together.

Final Words:

Here we encounter the deepest irony. In trying to understand von Foerster’s rejection of -isms, we risk creating the very thing he warned against: a fixed doctrine, a systematic position, an -ism of anti-isms.

But perhaps this is exactly the point. We are free to choose our -ism, and in choosing, we become responsible for what that choice enables and forecloses. Von Foerster would not exempt us from this responsibility, not even when engaging with his own work.

Von Foerster’s approach embodies what we might call epistemic humility. He understood that our knowledge is always partial, always constructed from a particular perspective, always open to revision. This humility does not lead to paralysis but to a productive pluralism. Multiple ways of knowing can coexist without one needing to eliminate the others. His ethics of observation, always acting to increase choices, becomes particularly relevant when our information systems often work to narrow them.

This is why von Foerster’s thinking remains provisional. He demonstrated that thought need not crystallize into fixed positions. We can remain responsive to what emerges in the very act of thinking. This provisionality is not philosophical indecision but intellectual courage. It is the willingness to stay with questions that matter more than the answers they might produce.

In our current moment of polarized certainties, von Foerster reminds us that we are not spectators of the world, but co-creators of it.

Von Foerster would likely laugh at being treated as the final word on anything. His laughter would remind us that the best thinking often begins where our certainties end. And if that insight threatens to become an -ism? Von Foerster might simply smile and remind us that we are only -ists of the -isms we can laugh at.

Stay curious and always keep on learning…

The Right Thing and the Right Reason:

In today’s post, I am exploring the notion of “doing the right thing.”

We encounter this expectation everywhere in workplaces, personal relationships, and civic life. The phrase appears in mission statements, performance reviews, and everyday conversations. At first glance, it feels simple and reassuring. Of course we should do the right thing.

In regulated industries, this mantra becomes even more pronounced. Every procedure, every record, and every audit echoes that expectation. It appears in training sessions, quality policies, and compliance frameworks.

I want to add an important layer: do the right thing for the right reason.

The distinction may seem subtle, yet it initiates a reflexive turn. It moves us from mechanical compliance to ethical responsibility.

A statement by itself carries no value. “Do the right thing” means nothing until someone makes it their own. The phrase appears to describe a fact, but it actually expresses a value judgment. Value enters only when a person acts from conviction, not from blind obligation. The second part, “for the right reason,” is where responsibility begins. It asks a crucial question about why I am doing this. That question transforms an empty slogan into a deliberate act grounded in personal values.

If I follow orders or check boxes without reflection, I might appear to do the right thing. But in truth, I have surrendered ownership. From the perspective of cybernetic constructivism, meaning is not handed down from the outside. It emerges within the observer. As Heinz von Foerster showed in his work on observing systems, we do not simply receive reality but construct it through our interactions and decisions.

When we speak of “the right thing,” the phrase suggests precision, as if a decision could fit reality without error. In practice, this rarely happens. Thought and reality belong to different domains. A decision formed in thought appears complete because ideas do not encounter resistance until they are acted out. The flaws surface only when they meet real conditions.

This is the illusion of completeness in the right thing, the comforting belief that something can be fully correct. It persists because thought gives us a sense of closure that reality cannot guarantee.

Here is where the phrase “for the right reason” matters. It does not make the decision perfect; it acknowledges that it never was. Adding this second part challenges the belief in absolute correctness and invites humility about what we can know. It says you cannot guarantee the outcome, but you can own the reasoning. That ownership gives the action its integrity. The emphasis shifts from claiming completeness to accepting responsibility. This matters because it prevents us from confusing the clarity of thought with the complexity of life.

I want to focus on this more with a question: When the time comes, can I do the right thing? This question seems simple, but it hides a deeper issue. What exactly is the right thing? We often talk as if the right thing exists “out there,” waiting for us to discover, a fixed fact like the boiling point of water. But this assumes that what appears complete in thought will remain complete in practice. That assumption is an illusion.

In many situations, the right thing is not given. It is what von Foerster calls an undecidable.

The Nature of the Undecidable:

Von Foerster introduced this term for questions that cannot be answered by logic, rules, or computation alone. An undecidable resists algorithmic resolution. Regulations provide structure and consistency, and they are essential. Yet they do not eliminate undecidables. They never will.

Undecidables exist because the variety of real-world situations far exceeds what any rulebook can anticipate. In cybernetics, variety means the number of possible states a system can take. The more possible situations, the greater the variety. And the world does not just throw edge cases at us. It quite often generates entirely new scenarios. Each innovation, each unique user context, and every unexpected failure mode creates conditions no standard procedure can fully capture.

No rulebook, whether corporate policy or government regulation, can provide ready-made answers to every question. Rules may reduce some complexity and provide crucial guidance, but they cannot close the gap between their finite scope and the indefinite creativity of reality. That gap is where undecidables live, and where human judgment becomes indispensable.
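To make this gap concrete, here is a small illustrative sketch in Python. The dimensions, labels, and counts are invented for illustration, not drawn from any real procedure; the point is only how quickly combinations of situational features outrun any finite rulebook.

```python
from itertools import product

# Hypothetical situational dimensions a procedure might need to address.
# Each added dimension multiplies the number of possible situations.
dimensions = {
    "device_state": ["nominal", "degraded", "failed"],
    "user_context": ["trained", "untrained", "emergency"],
    "environment":  ["clinic", "home", "transport"],
    "data_quality": ["complete", "partial", "missing"],
}

# Total variety: the number of distinct situations these dimensions generate.
situations = list(product(*dimensions.values()))
print(f"possible situations: {len(situations)}")  # 3^4 = 81

# Suppose a rulebook explicitly covers 30 of them.
covered = set(situations[:30])
uncovered = [s for s in situations if s not in covered]
print(f"situations left to judgment: {len(uncovered)}")  # 51
```

Adding a fifth three-valued dimension would raise the count to 243; the rulebook grows linearly while the situation space grows multiplicatively, which is where the undecidables live.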

Von Foerster put it clearly:

“Only those questions that are in principle undecidable, we can decide.”

This is not a logical contradiction. It is an ethical imperative. The undecidable is not an error to fix or a loophole to close. It is an invitation to take responsibility. And responsibility cannot be delegated to systems or rules.

Many people resist this truth. We want the comfort of certainty. We prefer to believe the right thing exists as a fixed point, like a law of physics. If that were true, we would not bear the weight of decision. But ethics begins where algorithmic certainty ends. When we say “Just tell me the rule,” we try to trade agency for comfort. And in doing so, we risk betraying the very principles we claim to uphold.

The uncomfortable insight is this: the right thing has validity only as something we decide and own.

A Practical Question:

In the medical device industry, when I encounter an undecidable, my first question is always:

“How does this help or hurt the end user?”

That question brings the undecidable into focus. Regulations cannot cover every nuance. They can only guide. The decision remains mine. The responsibility cannot be outsourced.

Doing the right thing for the right reason is not about perfection. It is not about moral grandstanding. It is about intentionality, the choice to act from internal commitment rather than external command. It is the courage to decide when certainty is impossible and when existing protocols do not apply.

Von Foerster understood this deeply. When he spoke of undecidables, he was not describing a flaw in logic or a failure of system design. He was describing the essence of ethical life: that there are decisions no one can make for us. This insight formed the heart of his second-order cybernetics, which places the observer and their responsibility at the center of any system.

The Ladder We Must Throw Away:

Here I must acknowledge an irony. In adding the phrase “for the right reason,” I am still using the word “right.” By doing so, I risk reintroducing the very assumption I wanted to question: that rightness exists as something fixed and pre-given. This reflects a pattern that runs throughout the article: language itself carries the assumptions we are trying to question, even as we use it to grasp or cope with the external world.

This is where Wittgenstein helps. In the Tractatus Logico-Philosophicus, he wrote that the propositions in his book were like a ladder. Once you have climbed it, you must throw it away. These propositions were tools, not eternal truths. They guide you to a vantage point, and then you move beyond them.

The phrase “do the right thing,” and even my expanded version, “do the right thing for the right reason,” works the same way. These are useful as orienting principles in regulated industries. They provide direction in moments of uncertainty. But if we cling to them as ultimate truths, we miss their purpose.

Like Wittgenstein’s ladder, their role is pragmatic and temporary. They guide us to a place where we can make responsible decisions. Once we understand that responsibility cannot be outsourced to a phrase or a rule, we can discard the ladder, not by abandoning the principle, but by letting go of the illusion that the phrase absolves us of thinking.

The deeper insight is this: the right thing does not exist as a given. It exists as something we must decide. And that decision, by its very nature, will always belong to us.

The next time you hear the phrase “do the right thing,” pause and ask:

What undecidable am I facing, and will I have the courage to decide it for the right reason, knowing that even the word “right” is only a ladder?

Final Words:

The tension between following rules and taking responsibility is not a flaw to fix. It is a fundamental condition of ethical life in complex systems. Von Foerster’s cybernetics teaches us that we cannot escape this tension by creating better rules or more comprehensive procedures. The variety of situations we face will always exceed the variety our systems can anticipate.

This does not diminish the value of regulations. They provide the backbone of responsible practice and create the conditions for ethical decisions. But they cannot substitute for judgment when the genuinely novel situation arises.

The courage to decide undecidables belongs to every professional who encounters the limits of the rulebook. When we recognize that meaning emerges within the observer, we are called to decide thoughtfully, with full awareness of our role in shaping the meaning of our actions.

This is neither comfortable nor easy. But it is the price of genuine ethical responsibility. The ladder remains useful until we no longer need it. The goal is to reach the place where we can make decisions worthy of the trust placed in us.

Always keep on learning…

If you enjoyed this post and find my work valuable, I would appreciate your support. You can explore more of my ideas in my latest book, Second Order Cybernetics, Essays for Silicon Valley, hard copy available at the Lulu Store.

A Good Enough Post:

In today’s post, I am exploring the notion that viability depends on our capacity for action, and that this capacity may not entirely rely on having a perfect grasp of “Truth.” This possibility, drawn from evolutionary theory, invites us to reconsider a deeply rooted assumption in human thought: that knowledge aims to reflect the world as it is. Perhaps organisms do not carry mirrors of an objective environment. Perhaps they generate workable patterns that allow action. If so, truth in the sense of full correspondence might be not only unnecessary for survival but impossible to achieve.

This shift from truth to adequacy might be more than a semantic difference. It suggests we could reconsider how perception, cognition, and action evolve under the pressure of complexity. Our nervous systems may not have emerged to catalog every detail of reality. They might have emerged to enable viable engagement. They filter, reduce, and transform. They make the unmanageable manageable. This economy of attention could be what allowed life to persist in an environment whose complexity always exceeds the capacity of any single organism.

The Evolutionary Logic of Selective Attention

The earliest organisms had comparatively simple structures. Their survival depended on detecting a few vital differences: light and dark, motion and stillness, hunger and satiation. These differences were not representations of reality in its full richness. They were pragmatic distinctions, selected by evolution because they mattered for survival.

As ecosystems diversified, so did the organisms within them. Greater complexity in the environment favored organisms with richer internal structures. These structures allowed them to absorb more variety and generate more flexible responses. But this expansion had limits. No organism could ever match the full complexity of its environment. Every adaptation remained selective.

Yet evolution’s relationship with cognitive economy appears more nuanced than simple efficiency maximization. Many organisms maintain seemingly “wasteful” capacities (elaborate plumage, complex social behaviors, or redundant sensory systems) that prove crucial during rare but catastrophic events. This apparent contradiction might reveal something deeper. Evolution does not eliminate selectivity; it shapes what gets selected and how. The peacock’s tail represents a different kind of cognitive economy, one that trades metabolic efficiency for reproductive advantage. Even redundancy involves choices about what to duplicate and what to ignore.

Here we see why the word “better” always seems contextual. An organism appears better only in relation to its ecological niche and temporal horizon. There may be no universal scale of improvement. Adequacy always appears local, contingent on the demands of the situation, and provisional across time scales.

The Law of Requisite Variety and Regulatory Challenges

This principle finds a formal expression in W. Ross Ashby’s Law of Requisite Variety: only variety can absorb variety. A regulator must have as much variety in its responses as exists in the disturbances it faces. If the environment can vary in ten ways and the organism can respond in only five, some disturbances will remain unchecked, threatening viability.

Ashby’s law applies specifically to regulatory systems maintaining homeostasis, but its insights extend to cognitive systems facing similar challenges. Both must manage variety mismatches between their internal organization and environmental complexity. Yet matching variety does not mean copying the environment. No finite system can track every detail. Instead, regulation depends on attenuation and amplification. Organisms attenuate the vast variety of the environment into a reduced set of distinctions. They amplify the significance of certain cues to prioritize action.

This does not seem to be a flaw in design. It might be a condition of survival. The key point is this: attenuation may not be about discovering truth but about achieving functional adequacy within specific contexts and time frames. And here is a critical implication – what works today may fail tomorrow. Adequacy is dynamic because the variety we face today may not be the variety we face tomorrow. If we are not able to adapt to new disturbances, viability collapses. Our current struggle to integrate artificial intelligence into the workplace illustrates this point. Many organizational models were built on assumptions of human exclusivity in cognitive labor. Those assumptions worked for decades. Today, they are brittle because the environment has changed. Ashby’s law prevails.
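The ten-versus-five example above can be sketched in a few lines. The disturbance and response labels here are hypothetical, and the one-to-one mapping is a simplification; the point is only that a regulator with less variety than its environment must leave some disturbances unabsorbed.

```python
# Disturbances the environment can produce (10 kinds) versus the
# responses a regulator can make (5 kinds). Each response neutralizes
# one specific disturbance; with fewer responses than disturbances,
# some disturbances necessarily go unabsorbed (Ashby's Law of
# Requisite Variety: only variety can absorb variety).
disturbances = {f"d{i}" for i in range(10)}

# Hypothetical mapping: which disturbance each response absorbs.
responses = {
    "r0": "d0", "r1": "d1", "r2": "d2", "r3": "d3", "r4": "d4",
}

absorbed = set(responses.values())
unchecked = disturbances - absorbed
print(f"unabsorbed disturbances: {sorted(unchecked)}")
# d5 through d9 remain unchecked, threatening viability
```

Real regulators close this gap not by enumerating every disturbance but by attenuating many disturbances into a few classes each response can handle, which is exactly the attenuation-and-amplification move described above.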

The Shortcut Analogy: Logarithms and Cognitive Compression

To appreciate the elegance and risk of attenuation, consider a good enough historical analogy. Before the age of electronic calculators, navigation and astronomy depended on logarithmic tables. Multiplying large numbers was time-consuming and error-prone. Logarithms offered a remarkable shortcut: turn multiplication into addition. By converting numbers into their logarithmic values, sailors could compute distances and bearings quickly, reducing the cognitive load of calculation.

Crucially, these tables were extremely accurate within their domain of application. Lives depended on precise calculations, and navigators understood both the power and limitations of their tools. They built in multiple redundancies and cross-checks. This compression did not deliver the full detail of multiplication, but it delivered enough precision for safe passage across oceans when used with appropriate awareness of its boundaries.
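The trick itself can be demonstrated in a few lines: converting to logarithms turns multiplication into addition, and rounding the logs to four decimals mimics the finite precision of a printed table. The specific numbers are arbitrary.

```python
import math

def table_log(x, places=4):
    """Simulate looking up a base-10 logarithm in a printed table,
    which lists values rounded to a fixed number of decimals."""
    return round(math.log10(x), places)

a, b = 3472.0, 851.0

# Multiplication via the table: add the logs, then take the antilog.
approx = 10 ** (table_log(a) + table_log(b))
exact = a * b

print(f"exact: {exact}")
print(f"table: {approx:.1f}")
print(f"relative error: {abs(approx - exact) / exact:.2e}")
# the error is tiny -- adequate for navigation -- but it is never guaranteed zero
```

The compression is lossy by design: four decimal places of the logarithm bound the achievable precision, which is why navigators cross-checked their results rather than trusting a single lookup.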

Our minds seem to prefer operating in a linear way. Sequential thinking appears natural, most likely because it is cognitively economical. It reduces overwhelming complexity to manageable sequences we can follow. Like logarithmic tables, our conceptual frameworks trade completeness for efficiency. They allow us to act without drowning in detail. But there is an important difference: logarithmic tables are mathematically precise within their defined limits, whereas human cognitive shortcuts are bias-prone, culturally shaped, and rarely come with warning labels. When we mistake our tools for the territory itself, the cost becomes invisible. Information is lost. Subtleties disappear. And when the environment changes, what once worked can become dangerous. This is the paradox: what enables us to cope also constrains what we can see. Our abstractions could be both our superpower and our vulnerability.

Pragmatism and Cybernetic Constructivism

This brings us to the philosophical dimension of the topic. Pragmatism, particularly as articulated by William James and John Dewey, treats knowledge as a tool for action rather than a mirror of reality. A belief is “true” not because it corresponds to some ultimate fact but because it proves useful in guiding behavior within a specific context. Truth is redefined as what works, but this “working” must be understood across multiple time scales and contexts. Adequacy is not fixed. It requires constant revision as the environment shifts.

This is not a license for arbitrary belief or wishful thinking. Pragmatic truth remains constrained by consequences. A bridge designed on faulty engineering principles will collapse regardless of the designer’s confidence. A medical treatment based on wishful thinking will fail regardless of the practitioner’s intentions. The pragmatic test is whether our frameworks enable effective action in the world as it actually responds to our interventions. Reality provides feedback, even if we cannot access it directly.

Cybernetic constructivism shares this orientation. Heinz von Foerster reminds us that “the environment contains no information”. What we call information arises in the interaction between an organism and its surroundings. The world does not impose meaning; meaning is enacted. Maturana and Varela describe this as structural coupling. Organisms and environments co-determine each other through ongoing interactions.

Seen in this light, our nervous system does not passively record inputs but brings forth distinctions through its own organization, maintaining coherence in continuous interaction with its surroundings. Knowing becomes an adaptive dance rather than a passive recording. The goal is not to represent an independent world but to maintain viability within a world that is partially brought forth by the act of knowing. This does not mean stability is irrelevant. Reliable patterns of interaction matter. Some regularities can be engaged in ways that allow prediction and engineering. Scientific methodology succeeds not because it removes simplification but because it manages it systematically, using feedback processes such as replication and peer review to adjust and refine adequacy over time and in a social realm.

The Double-Edged Sword: Superpower and Kryptonite

The ability to compress complexity seems to have made life possible. Yet this same ability becomes dangerous when compression becomes rigidity. When abstractions are treated as final truths, systems lose their capacity for adaptation. Stafford Beer captured this danger when he observed that ignorance becomes “the lethal attenuator”. When we lose track of what our simplifications exclude, adequacy transforms into vulnerability.

Let’s look at some examples. The use of algorithms in hiring often reduces the complexity of human potential to a few simplified metrics, which can perpetuate bias. Climate models, although highly advanced, still miss certain feedback loops and critical tipping points. Social media recommendation engines compress human interests into engagement-focused categories, which can push users toward more extreme views by filtering out moderating influences.

Heinz von Foerster reminded us that although the map may not be the territory, the map is all we have. Our ways of making sense are always partial and limited, yet they are the only tools we can use to navigate complexity. Recognizing this helps us remain aware of our cognitive blind spots.

In each case, the problem is not the use of shortcuts but forgetting their limits combined with insufficient feedback. The map is never the territory. When we mistake our ways of making sense for reality itself, fragility follows. What helps us stay viable can also make us blind.

Ethical Implications: What Do We Choose to Ignore?

If we accept that knowledge is constructed for adequacy, not truth, then the question of responsibility becomes unavoidable. Every act of attenuation involves a choice about what to include and what to ignore. These choices shape not only individual survival but collective futures.

In social systems, ignoring complexity can marginalize voices that do not fit dominant abstractions. In technological systems, it can produce biases that perpetuate injustice. The ethic of constructivism is not to abandon simplification (without it, we could not act) but to cultivate awareness of its costs and remain open to revision.

At the individual level, deliberate exposure to dissenting views, reflective journaling on hidden assumptions, and iterative sensemaking can help maintain cognitive flexibility.

We can restate Ashby’s law by saying that viability requires variety. A society that suppresses diversity of thought and perspective reduces its internal variety and becomes brittle in the face of unforeseen challenges. To design for resilience, we must design for plurality.
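Ashby’s law can be illustrated in its simple counting form: a regulator with R distinct responses facing D distinct disturbances cannot, even with a perfect strategy, hold outcomes to fewer than D/R distinct values (rounded up). The sketch below uses hypothetical numbers purely for illustration.

```python
import math

def min_outcome_variety(disturbances: int, responses: int) -> int:
    """Counting form of Ashby's law of requisite variety:
    the best achievable outcome variety is ceil(D / R)."""
    return math.ceil(disturbances / responses)

# A system facing 12 kinds of disturbance with only 3 kinds of response
# cannot do better than 4 distinct outcomes:
print(min_outcome_variety(12, 3))  # 4

# Increasing internal variety (more available responses) tightens control:
print(min_outcome_variety(12, 6))  # 2
```

The point of the restatement above is visible in the numbers: suppressing internal variety (fewer responses) directly loosens the grip a system has on the disturbances it faces.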

Final Words:

Survival does not seem to require perfect knowledge. It has required workable distinctions, compressed into forms that enable timely action. This logic of adequacy explains why our minds favor shortcuts, why linear thinking feels natural, and why abstraction is indispensable. Yet it also warns us that what we simplify to live by can, in time, limit what we live for.

The challenge, or more precisely the necessity, might be to balance economy with humility. To remember that our conceptual logarithms, like the tables once used by navigators, are tools for a journey, not the journey itself. They serve us best when we keep them provisional, open to correction, and sensitive to the richness they cannot capture.

Managing attenuation wisely is itself a complex adaptive challenge without simple solutions. It requires not just awareness of our limitations but active practices that surface hidden costs and maintain cognitive flexibility. It demands that we ask not whether our ways of making sense mirror reality, but whether they continue to support effective action in the conditions we now face, and whether we have ways to notice when they no longer do.

Engaging with complexity means getting better at being good enough, continuously. Our task is not to eliminate attenuation but to manage it wisely. And that begins with a question we often neglect. What do we choose to ignore, and how do we ensure that choice remains conscious, provisional, and responsive to feedback?

Always keep learning…

If you enjoyed this post and find my work valuable, I would appreciate your support. You can explore more of my ideas in my latest book, Second Order Cybernetics, Essays for Silicon Valley, hard copy available at the Lulu Store.

A Tale of a Thousand Models:

In today’s post, I am further exploring the notion of models and mental models. We often speak of mental models as though they are neat packages of knowledge stored somewhere in the mind. These models are typically treated as internal blueprints and as simplified representations of the world that help us navigate and make decisions. But what exactly do we mean when we call something a model? And are we always speaking about the same kind of thing?

The term model, in both technical and informal contexts, carries more ambiguity than we often acknowledge. In classical cybernetics, W. Ross Ashby gave the concept a central role. For Ashby, a model was a representation that could simulate the behavior of a system. A good regulator, he argued, must contain a model of the system it seeks to control. This model did not need to be a literal image or a complete mirror. It simply needed the right kind of functional correspondence: just enough structure to predict and act upon the system’s behavior.

Ashby’s definition is rigorous and functional. The model need not share the same physical form or medium as the system it regulates. What matters is not material resemblance but structural correspondence across selected variables. The model must preserve the relations and transformations that enable viable regulation. Ashby called this ‘isomorphism’. This isomorphism does not demand total replication. It requires that the model preserve only those relations necessary for viable control. This is the basic premise of First Order Cybernetics.

This isomorphic correspondence is what makes the model useful for regulation. The regulator can manipulate the model, run it forward, test interventions, explore possibilities, and trust that the results will map back to the actual system. The model becomes a kind of structural analogue: a way of capturing pattern without requiring material similarity.
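This can be caricatured in a few lines of code. The sketch below is entirely hypothetical (the names, the drift dynamics, and the candidate actions are my own illustrations, not Ashby’s formalism), but it shows the essential move: the regulator deliberates by running its internal model forward, and trusts the structural correspondence to carry the result back to the actual system.

```python
def system_step(state: float, action: float) -> float:
    """The 'real' system: its state drifts upward unless counteracted."""
    return state + 1.0 - action

def model_step(state: float, action: float) -> float:
    """The regulator's internal model: it shares no physical medium with
    the system, only a structural correspondence over the one variable
    that matters for regulation (the drift)."""
    return state + 1.0 - action

def regulate(state: float, target: float, candidate_actions) -> float:
    """Pick the action whose *modeled* outcome lands closest to the target.
    While deliberating, the regulator manipulates only the model."""
    return min(candidate_actions,
               key=lambda a: abs(model_step(state, a) - target))

state, target = 5.0, 3.0
for _ in range(10):
    action = regulate(state, target, [0.0, 1.0, 2.0, 3.0])
    state = system_step(state, action)

print(state)  # the regulator holds the system at the target
```

Note that the model here preserves only the relation needed for viable control; nothing about the system’s material form is replicated, which is exactly the sense of isomorphism at stake.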

When we look deeper, something about this view of models can feel distant. It risks separating the observer from the observed, the knower from the known. It tends toward a view of knowledge that is separated from lived experience. What does it mean for an organism to contain a model of its world, if that organism is not a computer but a living, breathing being?

This is where the Thousand Brains Hypothesis (TBH) offers a helpful contrast. Jeff Hawkins, in developing this hypothesis, suggests that intelligence arises not from a single unified model of the world, but from many partial models working in parallel. Here, however, Hawkins seems to use ‘model’ in a markedly different sense than Ashby’s isomorphic structures. For Hawkins, a cortical column’s model is not a representation that stands apart from experience but a learned pattern of prediction embedded within sensorimotor engagement itself.

Each cortical column builds what Hawkins calls a model of objects in the world, but this model is constituted by the column’s capacity to predict sensory sequences as the body moves through space. The column does not store a picture of a coffee cup. Instead, it develops expectations about what sensations will follow from particular movements when encountering cup-like patterns. Some of these may be visual, some tactile, while others may be of a different sense altogether. The model is not a static thing, but a dynamic process. It is a way of being attuned to specific sensorimotor regularities.
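A cartoon of this idea can be sketched in code. This is my own hypothetical illustration, not Hawkins’ actual model: the “column” below stores no picture of the cup, only learned expectations about which sensation follows which movement.

```python
class Column:
    """A cartoon cortical column: knowledge as sensorimotor expectation,
    not stored description. Purely illustrative."""

    def __init__(self):
        # (current_feature, movement) -> expected next feature
        self.expectations = {}

    def experience(self, feature: str, movement: str, next_feature: str):
        """Learning: adjust expectations through sensorimotor history."""
        self.expectations[(feature, movement)] = next_feature

    def predict(self, feature: str, movement: str):
        """Prediction: what sensation should follow this movement?"""
        return self.expectations.get((feature, movement))

# Engaging a cup-like object: moving down from the rim meets the handle,
# moving up from the rim meets empty air.
column = Column()
column.experience("rim", "move_down", "handle")
column.experience("rim", "move_up", "air")

print(column.predict("rim", "move_down"))  # handle
```

The “model” here is nothing over and above the pattern of expectations itself, which is the sense in which it is a dynamic process of attunement rather than a static thing.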

While Hawkins retains the term “model,” his usage stretches its meaning. These patterns may not be models in the traditional sense at all. When we say a cortical column builds a model or learns expectations, we may still be trapped in representational thinking. The cortical column does not store information about objects. It maintains patterns of connectivity shaped by experience. These patterns do not represent the world per se. Instead, they enact a way of being responsive to it. A column’s knowledge of a coffee cup is not a stored description, but a readiness to engage with cup-like affordances. This is the key nuance I would like to offer.

This view of modeling resonates with Heidegger’s phenomenological understanding of being-in-the-world. Heidegger once noted that a hammer is not first known through its shape or composition, but through its use. It becomes present to us as ready-to-hand, as something we know by doing. Similarly, a cortical column knows an object by interacting with it, not by storing a detached image of it. As Heinz von Foerster once said, if you desire to see, learn how to act.

In earlier reflections, I explored the limitations of treating mental models as internal representations. When we interact with a system or object, we are not retrieving stored pictures. Instead, we are drawing upon a history of lived engagement. Our orientation is not merely cognitive, but bodily and situated. The notion of a model here becomes something that reveals itself through action, not inspection.

The Thousand Brains Hypothesis reinforces this idea by showing how perception and prediction are distributed. A single cortical column may only know part of an object in a specific sensory dimension, but through movement and integration with other columns, it participates in a kind of collective intelligence. There is no master map but only partial perspectives constantly updating and coordinating with one another.

The columns are not comparing models. They are participating in a dynamic process of mutual constraint and coordination. This is what Maturana and Varela would recognize as structural coupling. Each column’s activity is shaped by its coupling with other columns, with the body, and with the environment. The result is a network of mutual specification rather than a collection of independent representations.

Intelligence, in this view, emerges not from the integration of discrete models but from the ongoing attunement of multiple sensorimotor streams. This attunement is guided not by accuracy but by viability. Viability is the organism’s capacity to maintain its structure and continue its pattern of living. Accuracy is often assumed to track viability directly, but it does not. The external world presents more complexity than any cognitive system can represent in full. The response, shaped by both constraint and energetic efficiency, is not to build exhaustive models but to maintain abstractions that are good enough. These are not symbolic summaries, but embodied dispositions formed through recurrent interaction.

This is not a flaw, but a feature of adaptive beings. Cognitive structures are not designed to capture the world exhaustively, but to filter it selectively. The principle of structural coupling rests on repetition. It rests on the organism’s ability to reinforce useful patterns over time. What endures are not accurate representations but habits of orientation that have proven viable. Cortical columns do not construct truthful depictions of the world. They cultivate ways of engaging that preserve continuity and coherence within the organism’s domain of living.

This stands in contrast to the classical view where the model is assumed to be singular, coherent, and representational. The model is not something we hold apart from the world, but something we become a part of through interaction with it*. This framing aligns with the constructivist view that organisms are informationally closed. An organism does not passively receive information from an objective world. It brings forth a world through its own structural coupling. What we call a model, then, is not a mirror of external reality but a structure of engagement, a dynamic fit between the organism and its environment.

The language of structure is important. Rather than thinking of models as things organisms have, we might think of them as patterns organisms are. A cortical column’s responsiveness to a coffee cup is not something it possesses but something it enacts. The pattern of connectivity is not a representation of the cup, but a way of being coupled to the cup’s affordances. Whether we call these models, structures of prediction, or patterns of skilled engagement, what unites them is that they are not static descriptions. They are emergent dispositions, formed through repeated interaction. Each term foregrounds a different aspect such as structure, process, or habit. However, they all point to intelligence as enacted rather than mirrored.

This is not to dismiss Ashby’s insight. His use of the term model was never about mirroring for its own sake. It was about enabling viable regulation and constructing just enough structure to explain and act. Perhaps it is more accurate to think of such models as habits of expectation. They are not representations but anticipations. They do not describe the world as it is but orient us toward what is likely to come. They are pragmatic, situated, and always in motion. Or perhaps the term model itself is too burdened. What we call a model may be better understood as a form of skilled attunement. It becomes a pattern of responsiveness that is cultivated through history, shaped by constraints, and sustained by viability. The cortical column does not model the coffee cup. It simply becomes responsive to it.

This reframing opens up deeper questions. If intelligence is not the construction of better representations but the cultivation of more viable engagements, what does this mean for artificial intelligence? Can machines learn to be responsive rather than simply predictive? Can they participate in the world, rather than map it?

The Thousand Brains Hypothesis, interpreted through the lens of structural coupling and lived engagement, suggests that intelligence emerges not from central models but from richly distributed interactions. It implies that robust intelligence does not require more accurate representations, but more diverse ways of being coupled to the world.

To model, in this deeper sense, is to engage. It is to live into a world that reveals itself not all at once, but gradually through action, adjustment, and care. Perhaps, the real power of what we call a model may not lie in what it represents, but in what it enables us to do. Or more accurately, in what it allows us to become.

Final Words:

This shift from models as internal representations to models as patterns of skilled engagement challenges deeply held assumptions about knowledge, cognition, and intelligence. It is not merely a technical redefinition. It is a philosophical turning. If cognition is not about mirroring the world but about maintaining a viable relation to it, then intelligence becomes a matter of fitting rather than mapping. It is not about what we store, but about how we respond. Even this post is not free of modeling. It draws distinctions, frames structures, and builds conceptual pathways. But it does so with an orientation toward viability, not toward finality. The second order reflexive nature of this inquiry (modeling the limits of models) underscores the point. Intelligence is not found in having the final answer, but in remaining open to reframing, recoupling, and reengaging as the world shifts around us.

This reframing also casts new light on the ambitions of artificial intelligence. If intelligence is not the construction of better representations but the cultivation of more viable engagements, then it becomes clear that AI systems, as currently conceived, may be fundamentally limited. The limitation is not merely technical. It is existential. Intelligence, in this deeper sense, emerges from embodied interaction, historical coupling, and recursive responsiveness to a world that matters. Machines that manipulate symbols or detect statistical regularities may approximate aspects of intelligent behavior, but they remain ungrounded in the affective, bodily, and experiential dynamics that make living cognition what it is. Responsiveness is not a product of prediction alone. It emerges from vulnerability, concern, and the need to maintain coherence amid complexity.

Without changes in their environment shaping how they persist, machines may simulate participation, but they do not truly engage. They act without inhabiting. They process without perspective. Perhaps this is one of the main reasons artificial intelligence may fall short of achieving sentience. It relies on static internal representations and lacks the embodied, experiential living necessary for understanding, concern, or care. Without lived coupling, there may be behavior, but not presence. There may be processing, but not perspective.

While navigating complexity, my hope is that this reframing offers both humility and hope. Humility, because it reminds us that our understanding is always partial and situated. Hope, because it suggests that intelligence is not a fixed capacity, but a living process that is co-created and transformed through our engagements with the world and with each other in a social realm. I will finish with an excellent quote from Di Paolo, Rohde and De Jaegher:

Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems are simply not in the business of accessing their world in order to build accurate pictures of it. They participate in the generation of meaning through their bodies and action often engaging in transformational and not merely informational interactions; they enact a world.

Always keep learning…

* Hat tip to Heinz von Foerster’s wonderful quote. Am I apart from the universe or am I a part of the universe?

On Probability…

In today’s post, I am exploring the nature of probability. Is probability an intrinsic feature of events that evolves over time, or is it something else entirely? My view is that probability is best understood as a measure of an observer’s uncertainty that can change as new information becomes available, rather than as a property that events themselves possess.

Probability is not an intrinsic property of events that evolves over time. It is a measure of an observer’s uncertainty that changes as the observer gains new information.

This insight becomes clear when we consider what happens before and after an event of interest occurs. You might assign a 35% probability that your favorite team will win their championship match in 2025 based on their roster, coaching staff, recent performance, and other factors. When your team does indeed win the championship in 2025, you no longer speak of a 35% chance afterward. You know they won, so your uncertainty about whether your team would capture the 2025 title is gone. The event itself has not changed. What has changed is simply your information about it.

This example reveals something fascinating. The event does not have a probability that flows through time. Your favorite team winning the 2025 championship does not possess an inherent “35% chance property” that somehow transforms into a “100% chance property” when they claim victory. Rather, probability expresses your epistemic state. It expresses what you know and do not know about the event. As your knowledge updates, so does the probability you assign.

Before the season, the probability of 35% captured your uncertainty given incomplete information about how this specific championship race would unfold. After they win, your uncertainty about whether your team won the 2025 championship disappears because you have complete information about this particular outcome. The players were competing and making decisions throughout the season, but your knowledge of the final result was incomplete and then became complete. Probability tracks this change in knowledge, not a change in the event itself.

Your favorite team winning the 2025 championship is a singular, unrepeatable event. This singularity principle applies to every event, whether it is the outcome of a coin toss or whether you miss a train. Even when we consider the 2026 championship, that represents a completely separate event requiring its own probability assessment. You might again assign some probability to your team winning in 2026, but this concerns a different season with different players, different opponents, and different circumstances. The fact that your team won in 2025 provides information that might influence your assessment of their 2026 chances, but each championship stands as a distinct event with its own associated uncertainty.

Different philosophical schools interpret probability in various ways. Frequentists focus on long-run patterns, while others emphasize physical propensities in systems. I adopt the Bayesian perspective here, which treats probability as quantifying an observer’s degree of belief about uncertain outcomes. This framework excels at handling partial information and belief updating as new evidence arrives.

The Bayesian approach formalizes how rational observers should revise their beliefs. You start with a prior probability based on available information. When new evidence arrives, Bayes’ theorem shows how to calculate an updated posterior probability, which then serves as the prior for the next update. Certainty represents probability at its extremes (belief of 1 or 0), but most real-world knowledge involves intermediate probabilities reflecting justified but incomplete information.
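The update rule can be made concrete in a few lines. The numbers below are hypothetical illustrations (the 35% championship prior from earlier in the post, plus an invented mid-season observation), not data about any real team.

```python
def bayes_update(prior: float, likelihood: float, likelihood_if_not: float) -> float:
    """Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E),
    where P(E) = P(E|H) P(H) + P(E|~H) P(~H)."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

prior = 0.35  # initial belief that the team wins the 2025 title

# Suppose a strong win streak is twice as likely for an eventual champion
# (invented likelihoods: P(streak|champion)=0.6, P(streak|not)=0.3):
posterior = bayes_update(prior, likelihood=0.60, likelihood_if_not=0.30)
print(round(posterior, 3))  # belief rises with supportive evidence

# Observing the final result collapses belief to certainty:
# P(win observed | they won) = 1, P(win observed | they lost) = 0.
certain = bayes_update(posterior, likelihood=1.0, likelihood_if_not=0.0)
print(certain)  # 1.0
```

Note that the posterior from one update serves as the prior for the next, exactly as described above, and that certainty (probability 1) is reached only when the evidence rules out the alternative entirely.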

Let us return to the championship example with this framework in mind. Your initial 35% probability assignment reflects partial knowledge about the 2025 season that remains open to revision. When your favorite team wins the championship, your belief updates to certainty: probability 1. This transition represents a shift in your epistemic state, not a change in some objective property of the championship outcome. The probability assigned to the event changes only because your information changes.

Your team winning the 2025 championship might influence how you assess their chances for future seasons, but each championship represents a separate event. The 2026 championship is not the same event as the 2025 championship because it involves different circumstances, different player development, different opponents, and different strategic decisions that create their own uncertainty. Your experience from the 2025 season provides information for assessing future championship races, but the probability you assign to the 2026 contest addresses a distinct event with its own epistemic challenges.

Once an event’s outcome becomes known, assigning forward-looking probabilities to that specific completed event loses predictive meaning. However, probabilities retain important roles in other contexts. We use explanatory probabilities to reason about hidden causes of observed effects, and counterfactual probabilities to explore alternative scenarios for learning and decision-making. These applications all involve managing uncertainty about things we do not fully know.

Some philosophers argue for objective chances embedded in physical reality, claiming that the world itself has genuine probabilistic features. Even these can be understood through a Bayesian lens as rational betting odds conditioned on our best current knowledge about physical laws and initial conditions. From this epistemic perspective, probability fundamentally reflects our relationship to knowledge and uncertainty, not immutable features of external events.

Understanding probability as observer-dependent rather than event-dependent has practical implications. It explains why different people can reasonably assign different probabilities to the same event because they possess different information. It clarifies why probabilities can seem to “change” as we learn more: our knowledge evolves while events themselves follow deterministic or genuinely random processes. Most importantly, it positions probability as a dynamic tool for rational reasoning under uncertainty rather than a mysterious property that events carry through time.

Finally, it is important to recognize that while our beliefs may remain probabilistic, our decisions in the real world must ultimately resolve into binary choices. We decide to carry an umbrella or not, to take the highway or not, to treat a patient or not. Practical action demands that we collapse our probabilistic beliefs into definitive commitments. This reinforces that probability serves as a bridge between uncertainty and action, not as a property that events carry through time.

Final Words:

This epistemic view of probability transforms how we think about uncertainty and prediction. Rather than searching for probabilities “out there” in the world, we recognize them as tools for managing our own knowledge and ignorance.

As Pierre-Simon Laplace eloquently put it: “Probability theory is nothing but common sense reduced to calculation.”

Once we embrace probability as a measure of what we know rather than what events are, we can use it more effectively as the rational tool it was always meant to be.

Always keep learning…

The Arbitrariness of Objectivism:

The readers of my blog might be aware that I appreciate the nuances of cybernetic constructivism. Cybernetic constructivism rejects the idea that we have access to an objective reality. It does not deny that there is an external reality independent of an observer; it holds only that we lack direct access to it. Moreover, the external world is more complex than we are. To stay viable, we construct a version of reality that is unique to our interpretative framework. This construction takes place in a social realm, which is also where error correction happens.

Heinz von Foerster, the Socrates of Cybernetics, formulated two imperatives that provide insight into this framework. The first is the ethical imperative that states “act so as to increase the number of choices.” The second is the aesthetical imperative that states “if you desire to see, learn how to act.” I welcome the reader to check out previous posts on these concepts. This worldview supports pluralism, the idea that there can be multiple valid versions of reality. Pluralism emerges primarily because the external world is more complex than our cognitive apparatus: we maintain viability by constructing particular versions of reality rather than accessing reality directly.

Common Mischaracterizations:

A primary criticism I encounter involves misrepresenting this worldview as relativism or solipsism. Critics suggest that acknowledging multiple perspectives means that anything goes, or that nothing is shared between observers. This represents a caricature rather than a substantive critique.

Precision is necessary here. Some forms of relativism claim that all views are equally valid, including contradictory ones. In that model, if claim A asserts “only A is valid,” then relativism must also treat that assertion as valid. It has no mechanism for comparison or critique. The result is a flattening of all claims into mere equivalence, where strength, coherence, or context carry no weight.

Solipsism advances an even more extreme position. It claims that only one’s own mind is knowable, denying shared reality altogether. It discards the very possibility of meaningful intersubjectivity. No systems thinker, and certainly no pluralist, takes this position seriously.

Pluralism as a Distinct Position:

Pluralism is neither relativism nor solipsism. It does not claim all views are valid. Rather, it asserts that no view is valid by default. Pluralism insists that perspectives must be made visible, situated in context, and evaluated through dialogue. It resists automatic authority, including authority derived from its own assertions.

Consider what objectivism accomplishes by contrast. It selects a single claim and declares that only this claim is valid while all others are not. But on what basis does it make this selection? Often, no external justification is offered. The grounding remains internal, context-bound, or inherited, yet it is presented as if it were neutral, universal, and self-evident.

This selection process reveals a potential arbitrariness of objectivist claims. The view appears arbitrary because its assumptions may remain hidden from examination. Without transparent justification for why one view should be privileged, objectivism risks the appearance of arbitrariness. What presents itself as necessity may simply be preference in disguise. From a pluralist standpoint, this represents concealment rather than clarity.

The Paradox of Objectivist Authority:

Paradoxically, this form of objectivism begins to mirror the very relativism it claims to oppose. Relativism declares that all claims are valid, including any particular claim A. Objectivism declares that only claim A is valid while offering no method to interrogate why this should be so. Each approach shuts down evaluation through different mechanisms. Relativism dissolves differences into sameness. Objectivism excludes all but one view from consideration at the outset.

This dynamic reveals what objectivism risks becoming, not solipsism in the strict philosophical sense, but functional solipsism. When a worldview refuses to acknowledge its own perspective and denies legitimacy to all others, it ceases to see the world. It sees only itself, reflected and reinforced. This represents the erasure of other ways of seeing under the illusion that one’s own interpretative lens is the world itself.

The Hidden Nature of Objectivist Claims:

The danger of objectivism lies in its method: selecting a single view, designating it as truth, and treating alternatives as error, noise, or confusion. It dresses up a personal, historical, and situated position as universal and eternal. This approach is not more objective than pluralism. It is simply better concealed.

Frameworks that prioritize ontology over epistemology tend to overlook the epistemic humility that characterizes pluralism. When we claim to know what reality is before examining how we come to know it, we bypass the very process of inquiry that might reveal the limitations and situatedness of our perspective. This ontological presumption becomes particularly problematic when it denies its own epistemological foundations.

Pluralism does not collapse into solipsism. Objectivism risks this collapse precisely when it denies that it operates from a particular perspective. The refusal to acknowledge one’s interpretative framework does not eliminate that framework. It merely renders it invisible to examination.

Pluralism is not weakness, indecision, or relativistic drift. It represents a disciplined humility and a refusal to collapse complexity into certainty prematurely. It does not reject standards but demands that they be made visible, questioned, and held accountable to the context in which they arise.

Pluralism increases the space for dialogue, choice, and possibility. It reminds us that what we do not question becomes invisible, not because it is true, but because it hides within the taken-for-granted assumptions of our frameworks.

In a world increasingly polarized between loud certainties and quiet disillusionment, pluralism offers something increasingly rare: the courage to remain open, to ask how we know what we claim to know, and to stay in conversation with perspectives we might otherwise reject.

Final Words:

Not everything is permissible under pluralism. But no single view should escape questioning. The cybernetic constructivist position maintains that our constructions of reality emerge from our particular biological, cognitive, and social constraints. These constructions prove viable not because they correspond to an objective reality we cannot access, but because they enable us to navigate the complexity we encounter.

I will finish with a quote from Heinz von Foerster:

Objectivity is the delusion that observations could be made without an observer.

The task before us is not to eliminate the observer but to acknowledge the observer’s role in every observation. This acknowledgment does not lead to relativism or solipsism. It leads to a more rigorous understanding of how knowledge emerges from the interaction between observer and observed within particular contexts and constraints.

Always keep learning…

The Ethics of Choice: Ackoff Meets von Foerster

In today’s post I am exploring the need for ethics in Systems Thinking using the ideas of Heinz von Foerster and Russell Ackoff. The two came from different traditions within systems thinking: Ackoff from operations research and organizational design, von Foerster from physics and second-order cybernetics. Yet, in their mature work, they both arrived at a strikingly similar ethical stance: that “systems” ought to be structured in ways that expand the capacity of their parts to choose, act, and develop.

Von Foerster’s ethical imperative is deceptively simple: “Act always so as to increase the number of choices”. It is easy to misread this as a general appeal to openness, ambiguity, or liberal tolerance. But that would miss its depth. For von Foerster, the notion of “choices” is rooted in constructivism. We are not passive recipients of a pre-given world. We are active participants in the construction of our realities. Therefore, every action we take contributes to shaping the world that others, too, will inhabit.

I have written about my corollary to Heinz von Foerster’s ethical imperative before: always opt for situations that preserve and expand future possibilities.

To increase the number of choices is not merely to keep options open. It is to take responsibility for the kind of world we are helping bring into being. It is to recognize that our models, narratives, and designs are not neutral. They create constraints or possibilities. The ethical dimension emerges from this constructivist insight. If we are the ones constructing meaning and order, then we are also responsible for ensuring that others can participate in that construction.

Russell Ackoff, coming from a different intellectual lineage, spoke in similar terms about purposeful systems. In his view, a social system, unlike a machine or an organism, is composed of parts that have purposes of their own. This is not just a descriptive claim. It is a normative one. To treat an enterprise as a social system is to treat its people as agents. That means enabling them to select both ends and means relevant to them. It means expanding the variety of behaviors available to the parts of the system. And it means refusing to reduce individuals to roles, procedures, or interchangeable units.

As Ackoff said: [1]

An enterprise conceptualized as a social system should serve the purposes of both its parts and the system of which it is a part. It should enable its parts and its containing systems to do things they could not otherwise do. They enable their parts to participate directly or indirectly in the selection of both ends and means that are relevant to them. This means that enterprises conceptualized as social systems increase the variety of both the means and ends available to their parts, and this, in turn, increases the variety of behavior available to them.

Ackoff does not describe freedom in abstract terms. Instead, he frames it in terms of viable behavior. If systems are to be purposeful and adaptive, they must support the ability of their parts to choose and act. This is not a luxury. It is an imperative in turbulent environments. Ackoff continues:

The parts of a completely democratic system must be capable of more than reactive or responsive behavior. They must be able to act. Active behavior is behavior for which no other event is either necessary or sufficient. Acts, therefore, are completely self-determined, the result of choice. Choice is essential for purposeful behavior. Therefore, if the parts of a system are to be treated as purposeful, they must be given the freedom to choose, to act.

This parallels von Foerster’s call to increase choices. It also deepens it. Ackoff is not only speaking of choice as a moral principle. He is showing that without choice, systems cannot act purposefully. They can only react. In complex systems, where change is constant, such reactivity is insufficient.

Though Ackoff and von Foerster rarely cited one another, their parallel conclusions suggest a convergence shaped by a shared moral sensitivity to the role of agency in system design.

Von Foerster’s imperative finds its most serious grounding in historical trauma. His insistence on the responsibility of the observer was not theoretical. He lived through the Nazi era when many claimed they “had no choice”. His ethical imperative arose in opposition to this very notion. The idea that one was simply “following orders” was, to him, a denial of personhood. To say “I had no choice” is not merely an evasion. It is a collapse of moral responsibility. It turns the observer into an automaton and ethics into compliance.

Ackoff, like von Foerster, saw how ethical collapse begins when systems are designed to remove agency under the guise of order. When systems are designed to remove or suppress choice, they not only become unethical but also incapable of long-term success. The suppression of choice results in stagnation, in the inability to deal with novelty, and in the eventual failure to match the variety of the environment.

As he explained:

Enterprises conceptualized and managed as social systems, and their parts, can respond to the unpredictable changes inherent in turbulent environments and can deal effectively with increasing complexity. They can expand the variety of their behavior to match or exceed the variety of the behavior of their environments because of the freedom of choice that pervades them. They are capable not only of rapid and effective passive adaptation to change but also of active adaptation. They can innovate by perceiving and exploiting opportunities for change that are internally, not externally, stimulated.

This ability to innovate from within is exactly what von Foerster meant by ethical action. It is not enough to survive. We must be able to imagine alternatives, to create futures. That can happen only when participants are seen as observers and constructors, not as passive components.

Ackoff takes this one step further by reminding us that systems have multiple levels of purpose.

The social-systemic view of an enterprise is based on considering three ‘levels’ of purpose: the purposes of the larger system of which an enterprise is a part, the purposes of the enterprise itself, and the purposes of its parts.

The ethical task is not to enforce alignment but to cultivate conditions where these levels support and enhance one another. That means making space for new forms of participation. It means resisting the urge to simplify or to eliminate tensions.

Both thinkers were concerned with the future. Ackoff warned:

Today, however, we frequently make decisions that reduce the range of choices that will be available to those who will occupy the future.

For example, future options are significantly reduced by destruction and pollution of our physical environment, extinction of species of plants and animals, and exhaustion of limited natural resources. War – perhaps the most destructive of human activities – removes some or all future options for many. We have no right to deprive future generations of the things they might need or desire, however much we may need or desire them.

Here again, von Foerster would agree. The responsibility of the observer extends through time. Ethics is primarily oriented toward the future. To act ethically is to preserve and enlarge the set of future choices, not just present ones.

This is the intersection between Ackoff and von Foerster. It is not primarily about designing for freedom, as Stafford Beer might have framed it, but about cultivating the ethical awareness that we are always shaping what freedom becomes. Ethical systems are not those that impose order from above. They are those that create the conditions for others to choose, to act, and to become.

To act ethically, then, is to act in a way that enlarges the scope of agency around us. It is to refuse the claim that “there was no other way.” It is to question not only the actions of individuals but also the design of systems that make those actions seem inevitable. Von Foerster challenges us to build systems that do not foreclose choice but rather multiply it. Ackoff challenges us to design organizations in which people can act with purpose, both their own and that of the larger system.

The convergence of these two thinkers gives us a powerful way to think about ethics in complexity. It is not about controlling outcomes. It is about enabling emergence. It is not about defending what is. It is about creating the conditions for what could be.

What is common between them is not method, but ethos. They both believed that how we think about systems shapes how we act within them. And how we act, in turn, shapes what becomes possible for others. In a world increasingly constrained by the consequences of past decisions, we must always opt for situations that preserve and expand future possibilities.

Final Words:

Heinz von Foerster knew too well the cost of systems that suppress choice. His ethical imperative was not a poetic suggestion but a moral demand born from lived experience. For him, the statement “I had no choice” was a warning sign. It was a marker of ethical blindness. To live ethically, he believed, was to remain aware that we are always constructing reality, whether we recognize it or not.

Ethics, then, is not a separate layer added to action. It is embedded in every decision, every design, every interpretation. By increasing the number of choices for others, we resist systems that close down alternatives and silence difference. We push back against the machinery of obedience. We make space for novelty, for learning, and for the dignity of self-determined action.

Von Foerster did not ask us to design perfect “systems”. He asked us to remain awake to our role within them. To be a responsible observer is to see how our ways of seeing shape what is possible. That is the ethical task he left us. Not to necessarily control the future, but to leave it open.

I will finish with a very wise quote from Ackoff:

The righter we do the wrong thing, the wronger we become.

Always keep learning…

[1] The Democratic Corporation, Russell L. Ackoff (1994)

When is a ‘System’?

In today’s post, I would like to explore the question, “When is a system?” and reflect on how cybernetics invites us to think differently about systems. This shift in phrasing may seem minor, but it opens up a deeper understanding of what we are truly doing when we speak of systems.

The Cybernetic Shift:

Cybernetics offers a different path. Rather than asking, “What is a system?”, it invites us to ask, “When is a system?” As a student of Cybernetics, I came across Herbert Brün’s question, “When is Cybernetics?”. He was challenging the obsession with an observer-free pursuit of knowledge. When we ask “What is…?” questions, we are engaging in reification. As Paul Pangaro notes:[1]

Let me show this by first asking the question, ‘What is a rock?’ The question as phrased and by its nature implies that rocks exist and that they can be known and defined. This existence stands on its own to such an extent that an answer can be given, ‘A rock is — dot dot dot’; and this description is given as independent of time, context, and observer. The act of providing an answer is to buy into the position that there is a reality that can be expressed in this independence.

Of course the reality is in one sense in the description, not any ‘object itself.’ We do invest in the description as a thing, an ‘objectification’ that exists on its own, which is what we call knowledge. The contribution of personal experience is lost or elided. What is left is the dead description, devoid of a maker and the context and purpose in which it is made.

This change in perspective alters everything. It reminds us that systems are not found in the world as pre-existing objects. They are drawn into being. They do not exist without a point of view, without a purpose, and without a participant. A system is not discovered; it is declared. It does not precede our involvement. Instead, it arises with it.

Consider a simple example: When is healthcare a system? For a hospital administrator, healthcare becomes a system when she tracks patient flow, bed occupancy, and discharge rates. For a public health researcher, healthcare becomes a system when he maps disease patterns, social determinants, and community interventions. For a patient with chronic illness, it becomes a system when they navigate insurance approvals, specialist referrals, and medication management. The same collection of clinics, professionals, and treatments becomes different systems depending on who is looking and why.

Beyond Fixed Definitions:

In this way, cybernetics is not about systems as fixed or definable things. It is about how we observe, how we construct, and how we participate in interrelated processes. As Paul Pangaro explained, the “What is …?” question leads us into traps. When we ask “What is a rock?” we imply that rocks exist independently and can be known and defined outside of time, context, and observer. This creates a “dead description, devoid of a maker and the context and purpose in which it is made.” The act of asking “What is…?” itself creates an investment in notions of absolute reality that cybernetics seeks to question.

Cybernetics is better understood as a way of thinking rather than a field of things. Herbert Brün’s insight, to substitute “When is…?” for “What is…?”, captures the essence of the cybernetic act: taking an apparent absolute and providing necessities for taking it as a relative. This shift makes the relativity of knowing explicit, relativity that exists as a function of ever different contexts: time, the observer, purpose. Cybernetics draws our attention to the fact that observation changes what is observed. Descriptions are never neutral. They arise from somewhere and from someone. Meaning does not reside in isolation. It arises through interaction.

The Moment of System-Drawing:

This is why the question “When is a system?” is important. It makes visible the choices we make when we describe a situation as systemic. It pushes us to be aware of our own cognitive blind spots and promotes epistemic humility. It reminds us that the context, including who is asking, when, and for what purpose, decisively shapes what we call “the system.”

As Herbert Brün emphasized:[1]

The by far most important, most significant context, overriding in power every other[,] even ever[-]so-blatantly[-]perceivable context, the context decisive in the beginning and in the end, in the speaker and in the receiver, the context which gives its meaning to a statement, the context in which a statement is most undebatably made, is that context which we call “The person who makes the statement.” And let the period after the quotation mark be legal. For to be quoted is not my statement but “The person who makes the statement” and the context he is, not I make.

Systems come into being when we draw boundaries. They begin to make sense when we ask certain questions. They become stable or unstable, depending on who is involved and what they are trying to do.

This insight was central to the work of C. West Churchman, who reminded us that the systems approach begins when one is open to see the world through another’s eyes. This does not mean agreement. It means recognizing that what we call “the system” already reflects a point of view. What seems essential to me may seem irrelevant to you. What I include, you may exclude.

We are recognizing the observer-dependent quality of systems, noting that different observers of the same phenomena might conceptualize them into different systems entirely. For one person, a transportation system may refer to trains, roads, and schedules: the physical infrastructure that moves people from point A to point B. For another, it may refer to access, fairness, and opportunity: who can get where, when, and at what cost. For yet another, it may mean emissions, energy use, and ecological impact. The “system” is not one thing. It is always many, depending on how one looks.

The Ethical Dimension:

This orientation opens an ethical space. Cybernetics, especially second-order cybernetics, teaches us that we do not stand outside the world we describe. We bring forth a world through our living, through our speaking, and through our caring. Werner Ulrich took this further by asking us to consider who gets excluded when a system is drawn. The question is not only “What is the system?” or “When is the system?” It is also “Who decides?” and “Who is left out?”

When a city planning department draws the boundaries of a “transportation system” around roads and parking meters, they may inadvertently exclude sidewalks, bike lanes, and public transit, effectively marginalizing pedestrians, cyclists, and those who cannot afford cars. When a hospital defines its “patient care system” in terms of clinical procedures and bed management, it might exclude the experiences of family members, community health workers, or the social determinants that brought patients there in the first place.

To declare a system is to draw a boundary. To draw a boundary is to make a choice. With that choice comes responsibility. Cybernetics is not simply a science of regulation or control. It is a reflection on participation and perspective. It is a reminder that the observer is always part of what is observed.

Final Words:

So when is a system?

A system is whenever someone chooses to see one. It is when relationships are noticed, when patterns are made meaningful, when intentions begin to shape perception. It is not a thing in the world. It is an event in understanding.

To speak of systems, then, is to accept the weight of that declaration. It is to notice that every system includes and excludes. It frames some possibilities and hides others. Cybernetics does not eliminate this fact. It simply asks us to be honest about it.

This awareness changes how we approach systems work. Instead of searching for the “right” system, we might ask: What system-drawing serves our purposes? Whose perspectives are we including or excluding? What becomes visible when we draw the boundaries here rather than there? How might our system-drawing empower or marginalize different groups?

We may never define a system in final terms. But we can choose to be thoughtful in how and when we draw them. We can remain attentive to the ethical and practical consequences of those drawings. And we can remember that every system boundary is a hypothesis about what matters, one that can be questioned, revised, and redrawn as our understanding deepens.

I will finish with a quote from West Churchman that provides further food for thought:

The problem of systems improvement is the problem of the ‘ethics of the whole system’.

Always keep learning…

[1] New Order from Old: The Rise of Second-Order Cybernetics and Implications for Machine Intelligence. A Play in 25 Turns – Paul Pangaro, 1988

Cybernetics of Kindness – 2

In today’s post, I want to explore what I have been thinking of as the Cybernetics of Kindness. In my recent reflections, I have been drawn to the quiet power of compassion and kindness, particularly in a world increasingly fascinated by toughness, dominance, and the mythology of machismo. I want to step back from all that noise, and spend some time examining what actually helps us hold together. What allows systems to remain viable. What allows people to remain human.

Ross Ashby, one of the early pioneers of Cybernetics, gave us the Law of Requisite Variety (LRV). LRV states that only variety can absorb variety. Variety, in this context, refers to the number of distinguishable states a system can occupy. A coin, for instance, has a variety of two: heads or tails. It can help resolve a binary choice. But if the number of options increases, say to six, a single coin is no longer sufficient. You need more variety, such as a six-sided die.
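As a loose illustration (the function and variable names below are my own, not Ashby’s), variety can be treated as a count of distinguishable states, and regulation as a matching problem between selector and options:

```python
import math

# Variety = number of distinguishable states a system can occupy.
# In information-theoretic terms it is often expressed as log2(states) bits.
coin_variety = 2   # heads or tails
die_variety = 6    # six faces

def can_resolve(selector_states: int, options: int) -> bool:
    """A selector can resolve a choice only if it has at least as many
    distinguishable states as there are options: only variety absorbs variety."""
    return selector_states >= options

binary_choice = can_resolve(coin_variety, 2)   # True: a coin settles a binary choice
six_way_choice = can_resolve(coin_variety, 6)  # False: a coin cannot settle a six-way one
die_choice = can_resolve(die_variety, 6)       # True: a die can
die_bits = math.log2(die_variety)              # roughly 2.58 bits of variety
```

This is only a toy rendering of the counting intuition; the law itself is about a regulator matching the variety of the disturbances it faces, which the next section makes precise.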

This idea anchors a fundamental principle in cybernetics: in order to regulate a system, the controller must match or exceed the complexity of the disturbances it encounters. Otherwise, essential variables, those tied to the survival of the system, start to drift beyond safe limits.

Ashby’s insight was later extended by Aulin-Ahmavaara, who formalized the dynamics of regulation as follows:

H(E) ≥ H(D) − H(A) + H(A|D) − B

Here:

H(E) is the entropy of the essential variables, representing the uncertainty we seek to minimize.

H(D) is the entropy of external disturbances, representing the variety the system must absorb.

H(A) is the entropy of the actions available to the controller.

H(A|D) represents the uncertainty in selecting the right action for a given disturbance, reflecting our ignorance, in a sense.

B is the buffering capacity, representing our passive resilience, such as slack or social safety nets.

Setting aside the formal nature of the equation, this inequality makes something quite clear. If we want to maintain low H(E), to keep our core variables stable and viable, we must either reduce external disturbances H(D), increase the range of available actions H(A), reduce the uncertainty in choosing the appropriate response H(A|D), or increase our buffer (B). When H(E) rises, we begin to lose grip on the things that matter most.
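The levers described above can be made concrete with a minimal sketch. The function and the illustrative numbers are my own; the code simply evaluates the right-hand side of the inequality to show how each term moves the floor on H(E):

```python
def essential_entropy_floor(h_d: float, h_a: float,
                            h_a_given_d: float, b: float) -> float:
    """Lower bound on H(E), the entropy of the essential variables:
    H(E) >= H(D) - H(A) + H(A|D) - B, clamped at zero since entropy
    cannot be negative."""
    return max(0.0, h_d - h_a + h_a_given_d - b)

# A disturbance-rich environment with a limited action repertoire:
baseline = essential_entropy_floor(h_d=4.0, h_a=2.0, h_a_given_d=1.0, b=0.5)      # 2.5

# Expanding the range of available actions (raising H(A)) lowers the floor ...
more_choices = essential_entropy_floor(h_d=4.0, h_a=3.0, h_a_given_d=1.0, b=0.5)  # 1.5

# ... and so does building buffering capacity B.
more_buffer = essential_entropy_floor(h_d=4.0, h_a=2.0, h_a_given_d=1.0, b=1.5)   # 1.5
```

The numbers are arbitrary; the point is structural: every unit of added action variety or buffering directly lowers the floor of uncertainty on the variables we most need to keep stable.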

So what does all of this have to do with compassion and kindness?

Kindness as a Variety Amplifier:

There is often a temptation to reach for control by enforcing uniformity through rules, rigid processes, standardization or exclusion. It can offer a sense of order, especially in the short term. But over time, such enforced uniformity reduces H(A), the range of meaningful action within the system. What emerges may appear efficient, but it is brittle. It lacks depth and cannot adapt when disturbances grow or shift. This brittleness becomes visible in bureaucracies that crumble under stress, in supply chains that falter when pushed, in institutions that sacrificed resilience for efficiency.

Systemically speaking, callousness acts as a suppressor of H(A). It narrows the range of potential responses, disconnects individuals, and isolates perspectives. And when we limit the possibilities available to others, we also limit the future options available to ourselves. The adjacent possible, the wellspring of creativity, regeneration, and learning, starts to shrink.

Compassion, in contrast, expands H(A). When we approach others with care, humility, and openness, we create space for more configurations of interaction. This means more ways to respond and more chances to adapt. This kind of engaged kindness also reduces H(A|D), the uncertainty in deciding what to do, because trust and mutual respect improve our collective sensemaking. In addition, compassionate action builds B. It contributes to buffering. A kind gesture, a moment of patience, a willingness to listen: these are not just social niceties. They accumulate into a resilient web of support that makes systems more robust.

Compassion is not soft in the sense of being weak. It is structural. It is a systemic resource that allows viable systems to emerge and sustain themselves without relying on dominance or top-down control.

When we encourage horizontal variety, diversity distributed across people, perspectives, and functions, we enable innovation and responsiveness.

In the Viable System Model (VSM), systems must manage variety along both horizontal and vertical dimensions. Horizontally, we encounter differences between teams, roles, or individuals. Vertically, we deal with differences between operational reality and strategic guidance. Compassion has a place in both. Horizontally, it enables coordination without coercion. Vertically, it allows for meaningful feedback from the front lines to reach decision-makers, and for leadership to guide with empathy and contextual awareness.

Rigid hierarchies may seem to reduce complexity, but they do so at the cost of resilience. They simplify often by silencing. Compassionate engagement, by contrast, helps absorb variety rather than suppressing it. It preserves individuality while allowing for coherence. It creates a connective tissue that allows people to remain distinct without becoming divided.

This is a subtle but important distinction in the VSM. Horizontal variety contributes to richness and adaptability without overloading the center. Vertical variety, meanwhile, requires a capacity for transduction, the ability to translate and make sense of signals across levels of the system. Here again, compassionate attention matters. It reduces the friction and distortion that often creep into communication. It allows transduction to occur more fluidly, because when people feel heard and valued, they are more likely to share what matters, and more likely to hear what is offered in return. Compassion, in this framing, enhances coherence.

A Reentry Perspective: Second-Order Responsibility:

In Spencer-Brown’s Laws of Form, the act of drawing a distinction is the basic move through which meaning arises. But once distinctions reenter their own space, the system becomes reflexive. It observes itself. This is the moment where second-order cybernetics begins, when the observer becomes part of the system.

From this perspective, callousness often begins when we treat people as problems to be solved, rather than as observers with their own valid distinctions. Callousness denies reentry. It insists on fixed categories. It treats systems as closed, and boundaries as final. This increases H(A|D) not only by generating fear or confusion, but by disabling our ability to learn from observing ourselves. It blinds us to emergent intelligence.

Compassion, in contrast, is a form of second-order responsibility. It allows reentry to take place with integrity. It treats others not as objects to be managed, but as co-observers. It creates space for us to learn from the distinctions others draw. It is, at its core, an epistemic stance, an ethics of perception.

Final Words:

Heinz von Foerster’s ethical imperative states: “Act always so as to increase the number of choices.” My corollary to this is: always opt for situations that preserve and expand future possibilities.

When we increase H(A), we are expanding our collective capacity to act. This is not just about having more tools; it is about having more meaningful responses under pressure. Compassionate leadership creates conditions where people are more likely to contribute, collaborate, and improvise. In a team where people feel psychologically safe, resilience emerges naturally. In a society where people are not afraid to speak up or to try something new, new pathways remain available. Kindness encourages shared authorship. It distributes ownership and allows us to carry forward together rather than collapse under the weight alone.

When we reduce H(A|D), we decrease collective uncertainty. When people are isolated, fearful, or in survival mode, they second-guess themselves. Even when the right response is available, it may go unrecognized or unused. Compassionate engagement, through listening, transparency, and acknowledgment, cuts through this fog.

When we build B, we create shared capacity to absorb the shocks that are always coming. Buffering is not about hoarding resources. It is about building slack and forgiveness into our relationships and institutions. It is the margin that allows recovery. Acts of kindness add this margin. They offer redundancy that may appear inefficient in the short term, but becomes critical when crises hit. You do not build the buffer when the blow arrives. You build it in advance, through everyday acts of care and connection.

And when we keep H(E) low, we protect what we cannot afford to lose. Essential variables like trust, legitimacy, health, and integrity are not self-sustaining. They require ongoing attention. Compassion helps anchor these values. It reduces volatility, grants time to recalibrate, and holds the space within which people and systems can breathe. We do not wait for collapse. We act now, in small, steady ways, to keep the core intact.

Compassion and kindness, in this light, are not optional. They are strategic capacities.

This is how we expand our range of action, instead of retreating into helplessness. It is how we align perception, rather than drown in confusion. It is how we absorb impact, instead of breaking under it. It is how we hold on to what matters, even when the terrain is shifting. It is how we remain in relationship with the future.

I will finish with a quote from Heinz von Foerster:

A is better off, when B is better off.

Always keep learning…

The Form of Decency

At a recent exhibition, I saw a sign that read: “Exit Only. No Re-Entry.” It read not just as a logistical instruction but as a metaphor. Around the same time, I came across a photo of a sign demanding that people speak the local dialect. What struck me was that the sign was written in English. These moments echoed something I have long been thinking about: the contradictions that arise when our distinctions fold back on themselves, what George Spencer-Brown called “reentry.”

I am a longtime admirer of Spencer-Brown’s Laws of Form, and in today’s post, I explore how his notion of reentry helps illuminate the paradoxes and blind spots in modern ideologies, especially the rise of xenophobia and extreme nationalism. These rigid ideologies depend on distinctions, us versus them, lawful versus unlawful, that appear neat but collapse under their own logic when viewed recursively. We pretend we are only exiting, drawing sharp lines, while ignoring the inevitability and necessity of reentry in our sensemaking.

Drawing Distinctions

Spencer-Brown opened his mathematical-philosophical treatise with a simple instruction: Draw a distinction. This simple act of marking a boundary between “this” and “that” forms the foundation of how we structure knowledge, meaning, and identity. We create categories and define what is “in” and what is “out.” This is how form arises through distinction.

In Laws of Form, he also introduced the notion of reentry: the act of folding a distinction back into itself. Simply put, this is a self-referential act. By doing this, the tidy separations we created begin to blur. This move, abstract as it sounds, has powerful consequences for how we think, live, and treat each other. Especially in a world torn by polarization, nationalism, and fear of the “other,” reentry reveals the paradoxes that rigid ideologies try to hide and points us toward a more humane way of navigating complexity.

The Pot and the Form

Let us use a simple example to understand the form better. Consider a pot of boiling water. Here, we can make three identifications:

  • Pot = the mark, or the distinction
  • Water inside the pot = what is indicated, the marked space, the inside
  • Outside the pot = the unmarked space, the outside

Together, all three constitute the form. The pot, as a boundary, plays the role of the mark in Spencer-Brown’s terms. It creates a distinction between what is inside and what is outside. The pot itself is not part of what is inside; it is what makes “inside” possible by drawing a boundary. The mark exists in a meta-position: it defines inside and outside but cannot be reduced to either. It is the operation of drawing the distinction. The pot allows us to interact with what is inside and allows what is inside to interact with the surroundings.

We can use the same example to introduce reentry. Imagine placing that pot inside another pot, creating a double boiler. The inner pot is held by the outer one. The boundary remains, but now it is nested and refers to something beyond itself. This is reentry: when a form does not just define something but begins to refer to its own act of defining. This becomes an act of second-order observation. In the double boiler metaphor, the inner pot (the reentered form) exists within the outer pot (the original distinction), creating a ‘system’ that is both distinct and self-contained.

Reentry challenges the simplicity of binary logic, revealing that ‘systems’ can be self-referential and dynamic. This concept is pivotal in understanding complex systems, where elements influence and are influenced by themselves.
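The self-referential character of reentry can be sketched computationally. A distinction that is fed its own value, such as a form asserting “I am not myself,” has no stable truth value; iterated in time it oscillates, which is one common way of reading Spencer-Brown’s “imaginary” state. A minimal sketch, with names of my own choosing:

```python
def reenter(state: bool) -> bool:
    """A form fed back into itself: the next value negates the current one."""
    return not state

state = True
trajectory = [state]
for _ in range(5):
    state = reenter(state)
    trajectory.append(state)

# No fixed point exists: reenter(x) == x has no boolean solution.
# Instead the reentered form oscillates in time:
# trajectory == [True, False, True, False, True, False]
```

The binary cannot settle; the system trades a static answer for a dynamic behavior. That trade is precisely what makes reentry a tool for destabilizing the neat categories discussed below.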

The Purpose of Reentry: Revealing Cognitive Blind Spots

We love binaries: true/false, us/them, lawful/unlawful. But reentry destabilizes these neat categories. Who defines what is “lawful”? The law itself. When the law governs the making of laws (as in constitutional law), we enter a recursive loop. What is legal becomes a matter of interpretation, not clarity. The binary collapses into ambiguity. Reentry shows us that binaries are useful simplifications, not absolute truths. Dogmatic ideas rely on such binaries, and reentry becomes an effective tool for challenging dogma.

Similarly, in language, terms like “normal” are defined by cultural norms, which are themselves shaped by collective perceptions of normality. This circularity demonstrates how meanings are not fixed but evolve through self-reference. Reentry is not merely a logical twist. It reveals something crucial about how we construct meaning.

When we draw a distinction between “lawful” and “unlawful,” we assume clarity. But as soon as we ask who defines the law and realize it is the law itself, we see that the boundary is recursive. It defines itself. This is not a flaw but a feature of complexity.

The Second-Order View: Observing Observation

This leads us to second-order thinking: the act of observing the act of observing. In logic, when a ‘system’ includes itself in its model, it can become unstable. However, it also comes to own its position: it acknowledges where it stands as an observer. Blind spots can be revealed, opening the door to creativity, paradox, and deeper understanding. Reentry is how we shift from first-order systems (clear categories, fixed forms) to second-order ones (reflexivity, contradiction, emergence). It is how we move from saying “we are right” to asking “how do we know?”

As the cybernetician Heinz von Foerster observed: “The observer must be included in the observed system.”

This represents the leap from first-order thinking (observing the world) to second-order thinking (observing how we observe). Reentry is the mechanism of that leap. Recognizing and thinking along the lines of reentry is deeply needed today because some of the most dangerous ideas we face rely on distinctions that collapse under their own logic.

Reentry and the Illogic of Xenophobia

Xenophobic ideologies often define “us” versus “them,” asserting superiority or purity. However, when these distinctions undergo reentry, when the criteria for inclusion are applied to the in-group, they often fail to hold consistently. Similar to the sign that demanded the use of the local dialect but was written in English, xenophobic logic contradicts itself when examined through reentry.

What does it mean to be a person from country “X”? Is it geography? Culture? Language? Legal status? Values? The more we examine these criteria, the fuzzier they become. Yet we use such labels as if they were clean boundaries, pots that perfectly contain identity. Reentry challenges this assumption by turning the form inward.

If being from country “X” means standing for freedom, justice, and decency, how can one uphold those values while treating outsiders with cruelty? If your culture preaches respect, how can you use that culture to justify disrespect? If your national identity is built on moral ideals, then those ideals must apply to how you treat everyone, not just those inside your imaginary boundaries.

Bigotry collapses under reentry. Its internal logic folds in on itself. The practice betrays the principle. The mirror reflects itself and reveals the contradiction. Racism, xenophobia, and nationalism, when examined through the lens of reentry, are not just morally wrong. They are logically incoherent.

The Ethical Need for Redundancy

In complex systems, one of the most powerful safeguards is redundancy. In engineering, redundancy prevents collapse. In ethics, it serves the same function.
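The engineering sense of redundancy can be made concrete with a classic pattern: duplicate a component and take a majority vote, so that no single failure decides the outcome. A minimal sketch (the function name and the sensor scenario are my own illustration, not a reference to any particular standard):

```python
from collections import Counter

def majority_vote(readings):
    """Triple-modular-redundancy style vote: trust the value that most
    of the replicated components agree on, so one faulty unit cannot
    bring down the whole system."""
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority: redundancy exhausted")
    return value

# Two healthy sensors outvote one faulty one.
print(majority_vote([21.5, 21.5, 99.0]))  # 21.5
```

Note that the safeguard is not clever logic but sheer surplus: the system survives because there is more of it than strictly necessary. That is the sense in which hope, kindness, and forgiveness act as redundancy below.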

Hope is redundancy in action, as are other humanistic notions such as kindness, compassion, and forgiveness. These are not luxuries; they are second-order buffers. They activate when logic stalls. They hold the ‘system’ together when paradox threatens to tear it apart. Reentry exposes the instability of our forms. Redundancy helps us live with that instability.

Ethical redundancy functions like the inner pot in a double boiler. It buffers the heat. It allows care to emerge where rigidity would cause harm. It creates space for ambiguity, reflection, and repair. This is why, in the face of bigotry and rigid ideologies, we must design for ethical reentry. We must build in second chances. We must speak gently even when the logic breaks.

Final Words

In a world obsessed with efficiency, clarity, and being right, reentry is a radical act. It turns the ‘system’ inward. It reveals our blind spots. It shows us where our ideals betray themselves. But reentry does more than expose contradictions; it opens pathways to wisdom. When we embrace reentry, we move from the arrogance of first-order certainty to the humility of second-order inquiry.

The rise of extreme nationalism and xenophobia reflects our collective failure to practice reentry. These ideologies thrive on the illusion of clear boundaries, pure identities, and simple answers. They collapse when subjected to their own logic, but only if we have the courage to apply that logic. Only if we are willing to let our mirrors reflect.

Reentry teaches us that our most cherished distinctions are provisional, our certainties are constructed, and our boundaries are more porous than we dare admit. This is not cause for despair but for hope. It means we can rebuild. We can redesign. We can choose compassion over cruelty, and in that act, we can stay human.

In the end, reentry invites us to remain human and to include kindness as a design principle, building ‘systems’ that can reflect on themselves without breaking. It asks us to hold our beliefs lightly enough that they do not harden into weapons, yet firmly enough that they can guide us toward justice. This is the form of decency: recursive and reflective.

Always keep learning…