On Diversity as a Cybernetic Necessity:


In today’s post, I want to explore an idea that often gets framed in moral terms but is actually a cybernetic imperative: the necessity of diversity for viable systems. Whether we are talking about societies, organizations, or even artificial intelligence systems, the principle remains consistent. A system that suppresses differences suppresses the very disturbances that give it life.

This insight comes from cybernetics, and it helps us understand why diversity matters beyond moral arguments.

The Cybernetic Case for Diversity:

A society’s resilience, and therefore its viability, emerges more from difference than from agreement. When I think about what makes communities sustainable over time, I keep returning to this basic insight from cybernetics: without variation, a system cannot absorb disturbance. This is, of course, a simpler rephrasing of Ashby’s Law of Requisite Variety. Without challenge, a system cannot correct itself. Without friction, a system cannot renew its distinctions.
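
To make Ashby’s point concrete, here is a toy regulator in Python. This is my own sketch, not Ashby’s formalism, and the scenario and numbers are purely illustrative: a regulator can absorb a disturbance only if it has a matching response, so when its repertoire is smaller than the variety of disturbances, some disturbances inevitably get through.

```python
import random

def absorbed_fraction(num_disturbances, num_responses, trials=10_000):
    """Fraction of disturbances a regulator can neutralize.

    A disturbance is absorbed only if the regulator has a matching
    response. With fewer responses than disturbance types, some
    disturbances always get through: only variety can absorb variety.
    """
    random.seed(0)  # deterministic, for illustration only
    hits = 0
    for _ in range(trials):
        disturbance = random.randrange(num_disturbances)
        if disturbance < num_responses:  # regulator has a matching response
            hits += 1
    return hits / trials

print(absorbed_fraction(10, 10))  # full variety: every disturbance absorbed
print(absorbed_fraction(10, 4))   # reduced variety: most disturbances slip through
```

The specific counts do not matter; the point is the ceiling. No cleverness in the regulator can compensate for a repertoire narrower than the disturbances it faces.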

This becomes clearer when we think about information, distinction, and correction. Every observer draws distinctions. Every distinction creates a horizon of what can be noticed next. Every act of understanding sets the conditions for future understanding. For this reason, no observer, no community, and no language can remain viable without exposure to other perspectives. A view from nowhere is an impossibility.

Difference is not an obstacle to communication. Difference is what makes communication meaningful. As Gregory Bateson described it, information is a difference that makes a difference.

A community that supports only one way of thinking, one way of speaking, or one way of being slowly loses the very conditions that allow it to remain viable. When everyone thinks alike, quality begins to decay. Ideas become smoother but thinner. Creativity does not disappear because people stop trying. Creativity disappears because nothing pushes back. Nothing resists. Nothing surprises.

I have written before about von Foerster’s ethical imperative to increase the number of choices. Here, I want to extend that thinking to show why diversity itself is a condition for viability, not merely a moral preference.

The Negotiable Space:

A society with many ways of speaking has many ways of seeing. It has many ways to reframe a problem, many ways to interpret events, many ways to challenge assumptions, and many ways to correct errors. It is able to sustain what I call a “negotiable space” in which ideas can be contested, sharpened, and sometimes abandoned.

This continual negotiation is what keeps concepts alive. It is what makes meaning robust. It is what makes a collective capable of navigating uncertainty.

The negotiable space is the environment in which language, ideas, and understanding evolve and error correction happens. It is created not by agreement but by the friction of difference. Human cognition is not viable in isolation. It is viable only when embedded in a world where every utterance is exposed to other minds – resisted, questioned, corrected, or refined.

I see this friction as the medium of viability. When someone challenges your idea, asks for clarification, or pushes back against an assertion, they are not merely disagreeing. They are sustaining the recursive loop that keeps understanding alive. Without friction, distinctions decay. Without challenge, knowledge becomes brittle.

A word is never alone. It survives only through the continual friction of conversation. It carries a lineage of previous uses and a horizon of possible future uses. It remains viable only because a social world holds it accountable.

When Homogeneity Replaces Diversity:

When we reduce the diversity of perspectives, the negotiable space begins to shrink and close. Without enough difference, language becomes flatter. Categories become rigid. Distinctions become dull. Error correction becomes weak. The collective loses the source of renewal that once made it resilient.

Attempts to homogenize societies have produced similar outcomes throughout history. They create environments that look orderly from the outside but are fragile from the inside. Homogeneity amplifies the illusion of stability while stripping away the mechanisms that produce actual stability. A system without variation becomes a system without resilience. It stops learning and being curious. It stops correcting its errors. Eventually it stops being able to sustain itself at all.

We see this pattern repeat across contexts. A social world in which every voice echoes the same pattern begins to collapse inward. Its range of distinctions shrinks. Its ability to adapt weakens. Its capacity to navigate uncertainty fades.

Recursion Requires Disturbance:

In cybernetics, stability is not the absence of disturbance. Stability is the capacity to absorb disturbance without collapse. This requires variation. It requires the presence of alternatives. It requires a dynamic interplay of perspectives.

A system that eliminates disturbance does not become more stable. It becomes brittle. Without contradiction, the recursive loop of understanding begins to stagnate. Without challenge, the distinctions that support cognition degrade. Without tension, the structures that produce meaning weaken.

Human cognition remains viable because its recursion is continually informed by a social world rich in disagreement. An individual does not refine understanding alone. Understanding is sharpened by exposure to other interpretations. These interpretations emerge from diverse backgrounds, diverse experiences, and diverse cognitive histories.

I now want to take this train of thought to large language models.

The Case of Large Language Models:

Large language models are often described as systems that learn from vast amounts of data. But what they learn is not raw experience. They learn from the residue of human meaning-making. They learn from language that has already passed through the recursive loops of human correction. They inherit the stability produced by these loops, but they do not participate in the loops themselves.

At least, not in the same way.

An artificial intelligence does not inhabit a social world where its utterances are corrected by others. It does not participate in the negotiable space through which language evolves. It does not receive feedback proportional to the scale of its output. It does not face the resistances that keep human cognition aligned with the world. This is an important distinction that leads to interesting outcomes.

A human remains viable because every use of language is exposed to correction. An AI remains unchallenged because its output overwhelms the capacity for correction to flow back.

The Collapse of the Negotiable Space:

A living language depends on a balance between output and correction. Human linguistic communities have historically generated meaning at a rate the community can digest. New terms emerge. Old terms fade. Misunderstandings provoke clarification. Disagreements produce refinement.

This equilibrium has been disrupted. The scale of machine-generated text now exceeds the capacity of human communities to critique it. The negotiable space, the space where meaning is contested and corrected, is now flooded. Variations in meaning that once signaled novelty are drowned in statistical smoothness. The system receives too much of its own output and too little balanced resistance.

A system that receives little correction cannot maintain the integrity of its distinctions. It will start to drift. It will begin to feed on its own unchallenged productions. Its range of distinctions therefore shrinks. The recursive loop that once sharpened meaning begins to flatten it.

At first the effects are subtle. Over time, the trajectory becomes clearer. A structure that cannot renew itself through grounded critique will drift toward diminishing returns. More scale will not resolve this. Faster generation will only accelerate the loss of the very conditions that once made the system appear intelligent.
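
This drift can be caricatured with a toy simulation. It is a sketch of my own, not a claim about any actual training pipeline: a generator repeatedly refits itself to its own output, and because generation favors high-probability regions, the rare tails are lost at every pass and the range of distinctions shrinks.

```python
import random
import statistics

def retrain_on_own_output(generations=20, sample_size=1000, keep=0.9):
    """Toy model of a generator that learns only from its own output.

    Each generation draws a sample from the current model, discards
    the improbable tails (generation favors statistical smoothness),
    and refits a normal distribution to what remains. With no external
    correction feeding back in, the spread collapses generation by
    generation.
    """
    random.seed(1)  # deterministic, for illustration only
    mu, sigma = 0.0, 1.0
    spreads = [sigma]
    for _ in range(generations):
        sample = sorted(random.gauss(mu, sigma) for _ in range(sample_size))
        cut = int(sample_size * (1 - keep) / 2)  # drop the rare, surprising values
        kept = sample[cut:sample_size - cut]
        mu = statistics.fmean(kept)
        sigma = statistics.pstdev(kept)
        spreads.append(sigma)
    return spreads

s = retrain_on_own_output()
print(f"spread of distinctions: {s[0]:.2f} -> {s[-1]:.4f}")
```

The specific numbers are irrelevant; the direction is the point. Without external resistance, each pass narrows the range that the next pass can learn from.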

Here, we see what I call the amplification of constraints in action: the model grows in output yet declines in viability. It is simultaneously expansive and fragile.

The Coming Burst:

All this seems to indicate that the AI bubble may burst in the near future. The ability of LLMs to be trained fast and to generate fast may become their downfall. Paradoxically, the better the large language model becomes, the faster this downfall may approach. Each improvement accelerates the collapse of the negotiable space. Each refinement increases the volume of uncorrected output flowing back into the system. Each new iteration tightens the closure that limits its future.

This is also a cautionary insight for societies that reject diversity and embrace homogeneity. Any system that narrows its space of variation, whether a community or a computational model, risks collapsing under the weight of its own uniformity.

The burst may come not because the models are weak, but because they are strong in the wrong direction. They refine themselves into a narrowing corridor. They amplify a recursion that cannot sustain itself. They accelerate toward diminishing returns.

The Lesson for Systems Design:

Human cognition has survived because it is recursive from the inside and embedded in a social realm. Artificial intelligence’s recursion is lifeless. This difference matters. A system that does not participate in the social negotiation that gives words their life cannot maintain the vitality of its distinctions. It cannot renew its closure through lived coordination with others. It can only repeat the patterns it has been given.

Large language models are unlikely to become artificial general intelligence by accelerating the very process that undermines their viability. They are not suited to replace the human capacity for negotiated meaning-making. Their true value lies in augmentation, not imitation. They support human thought. They do not replace the recursive, socially grounded, diversity-dependent mechanisms that make human thought viable.

Every viable system must remain open to disturbance. The observer must remain open to being surprised. The language community must remain open to contradiction. A system that avoids disturbance does not stabilize; it stagnates.

Final Words:

The warning is clear for both machines and societies: maintain openness, embrace difference, and preserve the friction that keeps life viable.

A system without diversity collapses.

A recursion without resistance decays.

A language without a negotiable space drifts into incoherence.

This is not merely about being open-minded or tolerant. It is about understanding the conditions that allow any system (biological, social, or computational) to remain viable over time. Diversity is a cybernetic requirement. Without it, we lose the capacity to correct ourselves, to adapt, and ultimately, to survive.

Always keep learning…


Discover more from Harish's Notebook - My notes... Lean, Cybernetics, Quality & Data Science.


5 thoughts on “On Diversity as a Cybernetic Necessity:”

  1. In the eighties I gave a lot of training in information system development. Sometimes I was asked, “what do you think of artificial intelligence?” (at that time the term wasn’t yet acronymized). I replied: “I think intelligence is an artificial concept. Next question”.

    It is easy to see that we don’t think in language. You have to learn a language, and thinking happens naturally. Before using language, we “speak” or “converse” through interacting, using our bodies, exchanging changes. Converting experiences into complex brain patterns. I call these pre-language metaphors “metaphor-in-use”.

    When we make a distinction, our distinctions need words, to paraphrase the Tao Te Ching. Our grandson, 18 months, is now making up his own words and language. When we are talking, he adds his sounds and words. We understand some of the relationships between what he utters and what he means. I call this “metaphor-espoused”.

    In a community (sic) we express a metaphor-in-use, using conceptual metaphors (https://simple.wikipedia.org/wiki/Conceptual_metaphor), which are consensually validated. This induces rules, a structure or grammar. As Weick noticed:

    Karl Weick’s “consensually validated grammar” refers to a system of shared rules and meanings within an organization that reduces ambiguity and allows for coordinated action. In this metaphor, “organizing” is like a language with its own grammar that guides interlocked behaviors and helps make sense of events. It is “consensually validated” because these rules emerge and are agreed upon through the collective, ongoing actions and interactions of members.

    Reversing “the map is not the territory, but the structure accounts for its usefulness” (https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation), the structure of the grammar is used to maintain a territory. In other words: a community structures itself through a grammar (shared rules and meaning) of its language.

    In every language, thought is a past time. Bold statement. I asked ChatGPT if this is the case, and it confirmed my observation. Through our language systems we tend to treat thought as a product (“idea”, “result”, …) and not as a process (“thinking”, “activity”, “state”, …).

    Our grammar makes us “think” that ideas and concepts are things. They are like objects that can be exchanged. As if there are things like “communication”, “trust” and “intelligence” that can be shared the way I share my house or my car. I can share my car, not my mind. I can only share my mind grammatically or metaphorically, if you see what I mean.

    Over time, we’re skipping the metaphorical use of terms and treating them literally. It’s not the words that fade out, but their metaphorical nature. In fact, we sometimes stress the metaphor by adding “literally”. In “I is an Other – The Secret Life of Metaphor”, James Geary shows “literally” is actually a metaphor too.

    And it even gets worse: if you want to belong to an organisation, a club, a group or team, you had better talk the talk. We often start with “definitions”, because we need to define ourselves. It also induces a double bind, as you run the risk of being expelled (please notice the use of “spelling”) when using another, different description.

    On the other hand – a trick I frequently use -, using the right words – according to the members – creates an illusion that you belong and that they think you know what you’re talking about.

    Intelligence is not a thing, nor a no-thing.

    So what the large language model (!) does is apply grammatical rules and conditional (Bayesian) statistics, the same kind we use to predict the correct letters in a word. You might have noticed that guessing the right word in a five- or six-letter Wordle (https://www.nytimes.com/games/wordle/index.html) is easier when you’ve got the first two letters than when you’ve got the last two.

    Every community has developed its own jargon, with the same conditions: detecting a pretender, cheater, outsider, … (I used to have an article about gossiping as the source both of meaning and of detecting imposters or frauds. We also use language to hear if someone belongs. In Dutch we literally use “behoren” (to hear) to “see” if you “belong”.)

    AI is delivering diminishing returns on intelligence. Not intellectual intelligence, but intelligence as in MI5 or MI6: spies, reconnaissance, research, … Besides being unable to spot unintelligent human beings, people also assume they’re less intelligent than a stupid machine.

    As an anecdote: our daughter is making a living screening AI-generated reports or slop work. ChatGPT writes unashamedly: “AI as work slop” describes low-quality, unhelpful, or generic content produced by artificial intelligence tools that appears polished but lacks substance.

    Keep up the good work.

    Summary insight

    • Indo-European languages (especially Western European) → treat thought as something already done — a product, not a process.
    • Many non-Indo-European languages (e.g., East Asian, Semitic, Uralic) → treat thought as a state or activity, without a past connotation.


  2. Hi Harish, I found your essay compelling and (stylistically) provocative. I wonder if the emphasis on friction between parts misses something deeper. In any viable system, internal variety expresses itself through context-sensitive coordination i.e., different parts respond differently to stimuli, and as conditions shift, so does which response dominates. Reinforcing and balancing loops don’t compete; they co-regulate. Diversity here isn’t conflict, it’s orchestration. Might this framing offer a more systemic view of viability?


    • Hi Shrikant,
      That is a good formulation. I like the notion of co-regulation.

      The idea of friction was used to imply the dynamic nature and a reference to a previous post called Get a Grip on it.

      -Harish

