The Two Dogmas of Complexity Science: How Our Best Tools Can Mislead Us

I borrow the term ‘dogma’ from W. V. Quine’s classic essay Two Dogmas of Empiricism, where he showed that unquestioned assumptions can quietly shape an entire field. Complexity science, too, rests on its own dogmas that deserve examination.

In today’s post, I want to explore what I see as two fundamental dogmas in how we think about complexity science. These dogmas are deeply embedded in our thinking, and they shape how we create tools, design interventions, and understand organizational life without our realizing it.

To explain these dogmas, let me use the Ashby Space chart by Max Boisot and Bill McKelvey. It appears clean, scientific, and objective. It is the kind of visualization that makes the science feel rigorous and mathematical.

This framework comes from Ross Ashby’s Law of Requisite Variety. It maps organizational viability across different complexity regimes. It seems to offer clear insights. Systems in the ordered regime operate through routine procedures. Those in the complex regime require learning and adaptation. Those in the chaotic regime lose coherence when environmental variety exceeds their response capacity.

The 45° diagonal represents Ashby’s famous law. Only variety can absorb variety. Systems above this line face more environmental complexity than they can handle. Systems below it have excess capacity for response. From a conventional perspective, an organization might assess its position by measuring environmental turbulence against internal response capabilities. It might conclude it needs to increase internal variety to match external complexity.
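Ashby’s law can be made concrete with a small sketch. The following is a minimal toy model, not anything from Ashby or Boisot and McKelvey: the disturbance names and the rule that only a matching response neutralizes a disturbance are invented purely for illustration.

```python
def variety(states):
    """Variety, in Ashby's sense: the number of distinguishable states."""
    return len(set(states))

# Hypothetical regulation game: each disturbance is neutralized only by
# its matching response; any other pairing produces a fault.
def outcome(disturbance, response):
    return "ok" if response == disturbance else "fault"

disturbances = ["d1", "d2", "d3", "d4"]  # environmental variety = 4

def best_outcomes(responses):
    """Outcomes when the regulator picks its best available response
    to each disturbance."""
    return ["ok" if any(outcome(d, r) == "ok" for r in responses) else "fault"
            for d in disturbances]

# A full repertoire absorbs all disturbance variety: outcome variety = 1.
full = best_outcomes(["d1", "d2", "d3", "d4"])
# A reduced repertoire lets variety leak through: outcome variety = 2.
reduced = best_outcomes(["d1", "d2"])
```

With four disturbances and only two responses, at least two distinct outcome states survive regulation. That is the quantitative content of “only variety can absorb variety.”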

It is worth noting that Ashby himself understood variety as observer-dependent. His cybernetic work emphasized that distinctions are made by observers, not discovered in objective reality. The challenge arises when we operationalize such insights into frameworks and tools. What began as a nuanced understanding of observer-enacted variety becomes translated into seemingly measurable coordinates. This transformation from process to representation exemplifies the dogmas I want to examine.

This transformation reveals two fundamental dogmas that have shaped complexity science.

The First Dogma: Ontological Complexity Realism

The chart treats “variety of stimuli” as if it were an objective quantity that exists independently in the environment. It waits to be measured and plotted on the Y-axis. This reflects what I call ontological complexity realism. This is the belief that complexity is an intrinsic property of systems that exists regardless of who observes them.

Here lies the fundamental problem. Variety does not exist “out there” in any objective sense. What counts as variety depends entirely on the distinctions made by the observer or system. The environment does not contain variety. Variety emerges through the interaction between system and environment, mediated by the system’s capacity for making distinctions.

Let me give you a concrete example from healthcare. Is an emergency room “complex”? For a patient’s family member, the ER appears chaotic and overwhelming. Multiple alarms sound. Staff rush between rooms. Medical terminology flies around that they cannot understand. Life-and-death decisions happen at bewildering speed.

For an experienced ER physician, the same environment reveals familiar patterns. They recognize the rhythm of triage protocols. They understand the meaning behind different alarm sounds. They know the standard procedures that guide most interventions. The complexity is not inherent in the ER itself. It emerges from the coupling between the medical environment and each observer’s capacity for clinical distinction-making.

But this observer-dependence extends equally to the horizontal axis. What counts as “variety of responses” depends entirely on the distinctions the observer can make about available actions. The same ER situation reveals entirely different response repertoires to different observers.

The family member might see only binary options. Panic or wait helplessly. The nurse sees a rich array of possible interventions. The attending physician distinguishes even more nuanced response possibilities. The hospital administrator observes yet another set of responses. None of these response varieties exists independently in the situation. Each emerges from the specific capacity of the observer to make distinctions about what constitutes meaningful action.
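The observer-dependence of variety can be sketched in code. The following toy model is an assumption-laden illustration: the event names and the observers’ categorization functions are invented, and “variety” is computed as the number of distinctions each observer enacts over the very same event stream.

```python
# The same stream of ER events, "seen" through different observers'
# distinction-making functions. Event names are hypothetical, not
# clinical terminology.
events = ["spo2_alarm", "iv_pump_alarm", "bed_exit_alarm",
          "spo2_alarm", "code_blue", "iv_pump_alarm"]

observers = {
    # The family member hears undifferentiated noise: one category.
    "family_member": lambda e: "alarming noise",
    # The nurse distinguishes device alarms from patient-state alarms.
    "nurse": lambda e: "device" if "pump" in e or "bed" in e else "patient",
    # The physician distinguishes every signal type.
    "physician": lambda e: e,
}

def enacted_variety(observer, stream):
    """Variety is not in the stream itself; it is the number of
    distinctions the observer's categorization enacts over it."""
    return len({observer(event) for event in stream})

for name, obs in observers.items():
    print(name, enacted_variety(obs, events))
```

The stream never changes, yet the family member enacts a variety of one, the nurse a variety of two, and the physician a variety of four. Nothing in the data dictates which count is “correct.”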

John Dewey understood this when he argued that organism and environment must be understood as parts of a single transaction rather than separate things that interact. Traditional thinking assumes we have an organism “here” and an environment “there.” Then we study how they interact. But Dewey argues this separation is itself an artificial division that obscures the primary reality. The ongoing transaction between organism and environment creates experience itself.

The key insight is that stimulus and response are not external to each other. They are “always inside a coordination and have their significance purely from the part played in maintaining or reconstituting the coordination”. The stimulus is not something that happens to the organism from outside. It is something “to be discovered,” something “to be made out.” It is “the motor response which assists in discovering and constituting the stimulus.”

As Dewey puts it, “The stimulus is that phase of the forming coordination which represents the conditions which have to be met in bringing it to a successful issue. The response is that phase of one and the same forming coordination which gives the key to meeting these conditions”.

This transactional view transforms how we understand knowledge. Instead of a mind representing an external world, we have knowing as a mode of transaction between organism and environment. Knowledge emerges from this transaction rather than copying something pre-existing. This is not purely subjective nor purely objective, but relational.

Applied to complexity science, Dewey’s approach reveals why Ashby Space fails. The chart treats “variety of stimuli” and “variety of responses” as if they were separate, measurable quantities. But these are artificial divisions of the ongoing transaction between system and environment. There is no variety “out there” waiting to be counted. There are no responses “in here” waiting to be catalogued. There is only the ongoing transaction through which system and environment mutually specify each other.

The Second Dogma: Epistemological Representationalism

The chart presents itself as a neutral representation of complexity regimes. This embodies what I call epistemological representationalism. This is the belief that our task is to discover and measure pre-existing complexity through better methods and tools.

This dogma assumes we can create objective maps of complexity that correspond to how the world really is. The clean boundaries between regimes suggest we are mapping objective territory. The precise diagonal line suggests objective measurement. The measurable axes suggest neutral observation rather than conceptual construction.

But the moment you try to actually use this framework, its claims about objectivity break down. Where exactly would you locate a specific organization on these coordinates? How would you measure “variety of stimuli” independently of the system’s own distinction-making processes?

The chart cannot answer these questions because it treats as measurable quantities what are actually dynamic processes of distinction-making. It tries to map what can only be enacted.

Humberto Maturana and Francisco Varela’s work on structural coupling reveals why this approach fails. Living systems do not represent an independent environment. They enact their world through their structure and history of coupling. As Maturana put it, “everything said is said by an observer to an observer.” The boundaries we draw around “systems” and “environments” are distinctions made by observers, not features of an objective world waiting to be mapped.

The Fundamental Contradiction: Mapping the Unmappable

Here lies the deeper issue that cuts to the heart of what we mean by complexity itself. The very notion that complexity can be mapped contradicts the fundamental nature of what it means for something to be complex.

If something is indeed complex, it resists reduction to mappable coordinates. Complexity implies emergence, unpredictability, context-sensitivity, and observer-dependence. These are not accidental features that better measurement tools might eventually overcome. They are defining characteristics of complexity itself.

Yet the frameworks prevalent in complexity science attempt to do precisely what complexity theory tells us should be impossible. They try to reduce emergent, context-dependent, observer-enacted phenomena to static, universal, objective coordinates. This creates a performative contradiction. We use the insights of complexity science to argue that phenomena are emergent and context-dependent. Then we immediately create tools that treat those same phenomena as mappable and context-independent.

The contradiction runs deeper still. If complexity truly emerges from the recursive coupling between observers and their domains of inquiry, then any attempt to create a universal map of complexity must necessarily fail. The observer drawing the map cannot step outside the epistemic coupling that generates the complexity in the first place.

Why These Dogmas Generate Persistent Puzzles

These two dogmas create persistent puzzles that are often ignored. The list below is by no means exhaustive.

The Expert-Novice Paradox: Why do experts and novices see different levels of complexity in the same system? If complexity emerges from epistemic coupling, then of course they enact different complexities. They have different capacities for distinction-making.

The Measurement Tool Problem: Why do different measurement tools reveal different complexities? If complexity is relational, then different tools necessarily enact different varieties by making different distinctions possible.

The Scaling Paradox: Why does complexity seem to change when we shift between levels of analysis? Different levels of observation necessarily enact different complexities.

The Intervention Prediction Failure: Why do interventions designed based on complexity mappings so often produce unexpected results? Because any intervention changes the observer-system relationship itself. This makes prediction inherently problematic.

These puzzles persist not because of inadequate methods. They persist because they are generated by the assumptions we bring to complexity science.

Beyond the Dogmas: Epistemic Coupling as Transaction

What if we abandoned these dogmas entirely? Instead of asking “How complex is this system?” we might ask this. “How does complexity emerge from the recursive interaction between this knowing system and its environment?”

This shifts focus from measuring pre-existing complexity to understanding epistemic coupling. The dynamic process through which systems and environments mutually specify each other through ongoing interaction. Complexity becomes not a property to be measured but a relationship to be understood.

This framework synthesizes insights from three traditions.

Dewey’s Transaction Theory: Instead of separate entities that interact, we have organism-environment as a unified field. The “stimuli” and “responses” in Ashby Space are abstractions from this ongoing transaction.

Maturana and Varela’s Structural Coupling: Living systems do not represent an environment but enact their world through their structure. The coupling between system and environment is the source of complexity.

Ashby’s Cybernetics: Before the Law of Requisite Variety can even apply, an observer must create variety through distinction-making. The law cannot operate on raw reality. It requires an observer to carve up the world into meaningful categories.

This reinterpretation transforms Ashby’s contribution from a focus on objective regulatory mechanisms to an emphasis on the active and constitutive role of the knowing system in shaping the very “variety” it then seeks to regulate. Rather than discovering pre-existing variety that must be matched, systems participate in enacting the complexity they face through their own distinction-making capacities.

The Chart as Tool, Not Map

This does not mean frameworks like Ashby Space are useless. But we need to understand them differently. Not as maps of objective complexity regimes but as tools for thinking about epistemic coupling processes.

Used this way, the framework serves as what Wittgenstein called a ladder. Something we climb up to reach a new perspective, then kick away once we no longer need it. It helps us think more clearly about complexity without pretending to be complexity itself.

Final Words: Complexity as Participation

The chart looked so clean and objective at first. But complexity is messier, more relational, and more participatory than any representation can capture. That is not a limitation to be overcome. It is the very nature of what we are trying to understand.

Understanding complexity as epistemic coupling opens different possibilities. For designing systems that can remain coherent while staying open to surprise. For cultivating capacities for distinction-making that can expand as we encounter new varieties. For taking responsibility for the complexities we participate in creating.

Heinz von Foerster understood this when he formulated his ethical imperative. “Act always so as to increase the number of choices”. If we are responsible for constructing our realities through our distinctions, then we are also responsible for ensuring that others can participate in that construction.

The challenge is not to model the world but to participate in it more wisely. That participation depends fundamentally on understanding that complexity emerges from epistemic coupling. The recursive interaction between knowing systems and their domains of inquiry. This makes us responsible not just for our actions but for the worlds those actions help bring forth.

I will finish with wise words from Quine:
No statement is immune to revision.

Stay curious and Always Keep on Learning…

The Truths of Complexity:

The Covid-19 pandemic has given me an opportunity to observe, meditate, and learn about complexity in action. In today’s post, I am looking at “truths” in complexity. Humans can change their environment faster than any other species. They are also able to maintain belief systems over time and act on them autonomously. These are good reasons to call all “human systems” complex systems.

The Theories of Truth:

Generally, there are three theories of truth in philosophy. They are as follows:

  1. Correspondence theory of truth – very simply put, this means that what you have internally in your mind corresponds one-to-one with the external world. A statement such as “the cat is on the mat” is true if there are truly a cat and a mat, and if that cat is on that mat. The main objection to this theory is that we do not have direct access to an objective reality. What we have is a sensemaking organ, our brain, which tries to make sense of the data provided by the various sensory organs. Over time, the brain generates stable correlations that allow it to abstract meaning from the filtered sensory data. The correspondence theory is viewed as a “static” picture of truth, and it fails to explain the dynamic and complex nature of reality.
  2. Coherence theory of truth – In this approach, a statement is true if it is coherent with a specified set of beliefs and propositions. Here the idea is more about fit and harmony with existing beliefs. The coherence theory is about consistency. An objection to this theory is that the subjective nature of a statement can “bend” to match existing strong belief systems. Perhaps a good example of this is the recent poll that found that the majority of Democrats fear that the worst is yet to come for the Covid-19 pandemic, while the majority of Republicans believe that the worst is over. Another criticism is that we can be inconsistent in our beliefs, as indicated by cognitive dissonance.
  3. Pragmatic theory of truth – The pragmatic theory of truth was put forth as an alternative to the static correspondence theory of truth. In this theory, the value of truth depends on the utility it brings. Pragmatic theories of truth have the effect of shifting attention away from what makes a statement true and toward what people mean or do in describing a statement as true. As one of the proponents of the pragmatic theory, William James, put it, true beliefs are useful and dependable in ways that false beliefs are not: ‘You can say of it then either that “it is useful because it is true” or that “it is true because it is useful”. Both these phrases mean exactly the same thing.’ One of my favorite explanations of the pragmatic theory comes from Richard Rorty, who viewed it as coping with reality rather than copying reality. One of the criticisms against the pragmatic theory of truth concerns how it explains truth in terms of utility. As John Capps notes, utility, long-term durability, and assertibility (etc.) should be viewed not as definitions but rather as criteria of truth, as yardsticks for distinguishing true beliefs from false ones.

Sensemaking Complexity:

From the discussion of truth, we can see that seeking truth is not an easy task, especially when we deal with the complexity of human systems. Our natural tendency is to find order pleasing and reassuring. We try to find order wherever we can, and we try our best to maintain order as long as we can. In this attempt, we often neglect the actual complexity we are dealing with. A common way to classify the complexity of a phenomenon is as ordered, complicated, or complex. We can say a square peg in a square hole is an ordered phenomenon. The correspondence theory of truth is quite apt here because we have a one-to-one relationship. We have a very good working knowledge of cause and effect. As complexity increases, we get to complicated phenomena, where there is still a reasonably good cause-and-effect relationship. A car can be viewed as a complicated phenomenon. The correspondence theory is still apt here. Once we add a human to the mix, we get to complexity. Imagine the driver of a car. Now imagine thousands of drivers all at once. The correspondence theory of truth falls apart fast here.

The main source of complexity in the example discussed above comes from humans. We are autonomous, and we are able to justify our own actions. We may go faster than the speed limit because we are already late for the appointment. We may overtake on the wrong side because the other driver is driving slowly. We assign meanings and we also assign purposes for others. We do not always realize that other humans also have the same power.

We have seen varying responses and behavior in this pandemic. We have seen the different justifications and hypotheses. We agree with some of them and strongly disagree with others depending on how they cohere with our own belief systems. The actual transmission of the virus is fairly constrained. It transmits mainly from person to person. The transmission occurs mainly through respiratory droplets. Every human interaction carries some risk of becoming infected if the other person is a carrier of the virus. However, the actual course of the pandemic has been complex.

Philosophical Insights to Sensemaking Complexity:

I will use the ideas of Friedrich Nietzsche and Willard V. O. Quine to further look at truth and how we come to know about truth. Nietzsche had a multidimensional view of truth. He viewed truth as:

A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.

He emphasized the abstract nature of truth. One comes to view abstractions and metaphors as stand-ins for reality, and eventually, falsely, equates them with reality.

Every word immediately becomes a concept, in as much as it is not intended to serve as a reminder of the unique and wholly individualized original experience to which it owes its birth, but must at the same time fit innumerable, more or less similar cases—which means, strictly speaking, never equal—in other words, a lot of unequal cases. Every concept originates through our equating what is unequal.

Nietzsche advised us against using a cause-effect, correspondence type viewpoint in sensemaking complexity:

It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically. 

As Maureen Finnigan notes in her wonderful essay, Nietzsche’s Perspective: Beyond Truth as an Ideal:

As truth is not objective, in like manner, it is not subjective. Since thinking is not wholly rational, disconnected from the body, or independent of the world, the subjective perception, or conception, of truth through the intellect alone is impossible. “The ‘pure spirit’ is pure stupidity: if we subtract the nervous system and the senses—the ‘mortal shroud’—then we miscalculate—that is all!” Inasmuch as the individual is not independent from the world, one can neither subjectively nor objectively explain the world as if detached, but must interpret the world from within. Subjective and objective, like True and apparent, soul and body, thinking thing and material thing, intellect and sense, noumena and phenomena, are dualities that Nietzsche aspires to overcome. Thus, although Nietzsche is not a rationalist, this does not mean he falls into the irrationalist camp. He does not abolish reason but instead situates it within life, as an instrument, not as an absolute.

With complexity, we should not look for correspondence but coherence. Correspondence forces categorization while coherence forces connections. This leads nicely to Quine’s Web of Belief idea. Quine’s idea is a holistic approach. We make meanings in a holistic fashion. When we observe a phenomenon, our sensory experience and the belief it generates do not stand alone in our entire belief system. Instead, Quine postulates that we make sense holistically with a web of belief. Every belief is connected to other beliefs like a web.

For example, we can say Experience1 (E1) led to Belief1 (B1), Experience2 (E2) led to Belief2 (B2), and so on. This has the correspondence nature we discussed earlier. This view prefers the ordered, static approach to sensemaking. However, in Quine’s view, it is more dynamic, interconnected, and complex. This has the coherence nature we discussed earlier. The schematic below, inspired by a lecture note from Bryan W. Van Norden, shows this in detail.

The idea of Web of Belief is clearly explained by Thomas Kelly:

Quine famously suggests that we can picture everything that we take to be true as constituting a single, seamless “web of belief.” The nodes of the web represent individual beliefs, and the connections between nodes represent the logical relations between beliefs. Although there are important epistemic differences among the beliefs in the web, these differences are matters of degree as opposed to kind. From the perspective of the epistemologist, the most important dimension along which beliefs can vary is their centrality within the web: the centrality of a belief corresponds to how fundamental it is to our overall view of the world, or how deeply implicated it is with the rest of what we think. The metaphor of the web of belief thus represents the relevant kind of fundamentality in spatial terms: the more a particular belief is implicated in our overall view of the world, the nearer it is to the center, while less fundamental beliefs are located nearer the periphery of the web. Experience first impinges upon the web at the periphery, but no belief within the web is wholly cut off from experience, inasmuch as even those beliefs at the very center stand in logical relations to beliefs nearer the periphery.

The idea of degrees rather than a concrete distinction between beliefs is very important to note here. Additionally, Quine proposes that when we encounter an experience contradicting our beliefs, we seek to restore consistency and coherence in the web by giving up beliefs located near the periphery rather than the ones near the center.
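Quine’s revision heuristic can be sketched as a toy model. The belief names, the numeric “centrality” scores, and the rule of dropping the least central implicated belief are all hypothetical simplifications of the web metaphor, not Quine’s own formalism.

```python
# Toy web of belief: each belief carries a centrality score,
# higher = nearer the center of the web. Names and scores are invented.
web = {
    "logic holds": 0.99,                # near the center
    "objects persist": 0.95,
    "this drug is safe": 0.40,
    "today's test was accurate": 0.10,  # periphery
}

def revise(web, contradicted_beliefs):
    """Restore coherence by giving up the most peripheral belief
    among those implicated in the contradiction."""
    implicated = [b for b in contradicted_beliefs if b in web]
    most_peripheral = min(implicated, key=lambda b: web[b])
    revised = {b: c for b, c in web.items() if b != most_peripheral}
    return revised, most_peripheral

# An experience contradicts the joint holding of these two beliefs;
# the peripheral one is sacrificed, the more central one survives.
new_web, dropped = revise(web, ["this drug is safe", "today's test was accurate"])
```

The central beliefs ("logic holds") are never touched, which mirrors Quine’s point that even logic is revisable in principle but is given up last because so much of the web depends on it.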

Final Words:

The dynamic nature of complexity is not just applicable to a pandemic but also to scientific paradigms. This is beautifully explained in the quote from Jacob Bronowski below:

“There is no permanence to scientific concepts because they are only our interpretations of natural phenomena … We merely make a temporary invention which covers that part of the world accessible to us at the moment”

Our beliefs shape our experiences as much as our experiences shape our beliefs, in a recursive manner. The web gets more complex as time goes on, with some nodes becoming more distinct and others getting hazier. We are prone to perpetual frustration if we try to apply a static framework to the dynamic, ever-changing domain of complexity. It gets more frustrating because patterns emerge continuously, providing an illusion of order. Static and rigid frameworks break because of their inflexibility in tackling the variety thrown upon them.

With this in mind, we should come to realize that we do not have a means to know the external world as-is. All we can know is how it appears to us based on our web of belief. The pragmatic tradition of truth advises us to keep going on our search for truth, and that this search is self-corrective. The correspondence theory fails us because the meaning we create is not independent of us, but very much a product of our web of belief. At the same time, if we don’t seek to understand others, coherence theory will fail us because we would lack the requisite variety needed to make sense of a complex phenomenon. I will finish with an excellent quote from Maureen Finnigan:

Human beings impose their own truth on life instead of seeking truth within life.

Stay safe and Always keep on learning… In case you missed it, my last post was Korzybski at the Gemba.