The Invisibility of Infrastructures:

What can be studied is always a relationship or an infinite regress of relationships. Never a “thing.” – Gregory Bateson

In today’s post, I am continuing to look at the infatuation with blindly pursuing efficiency, drawing on the thinking of the American sociologist Susan Leigh Star. Star’s concept of invisible infrastructure focuses on the human and social dimensions of ‘systems’ that are often overlooked or undervalued when their efficiency is evaluated. Her notion of “infrastructure” extends far beyond physical systems like roads or servers. In her framework, infrastructure is a sociotechnical web that includes both material and human elements, all deeply embedded in organizational and social practices. Her key insight was that when things work, what makes them work remains invisible to us.

She wrote [1]:
People commonly envision infrastructure as a system of substrates—railroad lines, pipes and plumbing, electrical power plants, and wires. It is by definition invisible, part of the background for other kinds of work. It is ready-to-hand. This image holds up well enough for many purposes—turn on the faucet for a drink of water and you use a vast infrastructure of plumbing and water regulation without usually thinking much about it.

For Star, infrastructure is a complex domain with several underlying attributes. She and her collaborators identified the following [1]:

Embeddedness: Infrastructure is fundamentally integrated within other structures, social arrangements, and technologies. These elements are so interconnected that it becomes difficult to separate the infrastructure from the social and organizational systems it supports.

Transparency: Infrastructure operates invisibly to support tasks without requiring rebuilding or reconfiguration. Expert users understand exactly what needs to be done, making the infrastructure transparent to them. For novices, however, the same infrastructure may appear opaque and challenging to navigate.

Reach and Scope: Infrastructure’s influence extends beyond specific tasks or locations, creating patterns that affect both spatial and temporal aspects of work. These broader impacts often remain subtle until explicitly examined.

Learned as Part of Membership: Users develop familiarity with infrastructure through ongoing participation in their communities. While newcomers might struggle initially, regular users develop an implicit understanding that allows them to work with the infrastructure naturally.

Links with Conventions of Practice: Infrastructure shapes community practices while simultaneously being shaped by them. The QWERTY keyboard exemplifies this relationship – its design constraints have influenced modern computing interfaces despite the original mechanical limitations no longer being relevant.

Embodiment of Standards: While infrastructure incorporates standardized practices, these standards vary across different communities and contexts. This variation reflects local adaptations and specific needs of different groups.

Built on an Installed Base: Infrastructure develops from existing systems, inheriting both their capabilities and their constraints. This inheritance affects how new capabilities can be implemented and integrated.

Becomes Visible Upon Breakdown: Infrastructure remains invisible during normal operation, becoming apparent only when it fails. This invisibility masks the complexity of interactions and dependencies until disruption occurs.

Fixed in Modular Increments: The notion that infrastructure can be fixed comprehensively or globally is problematic, as modifications must occur while maintaining existing operations.

Star highlighted that much of the labor that sustains infrastructure is hidden from view. This includes everyday tasks like troubleshooting, mentoring, and resolving problems that aren’t captured in traditional efficiency metrics. This “invisible labor” is essential for keeping systems running smoothly but is often unacknowledged until it breaks down. She noted that infrastructure is invisible when it works well, meaning people do not usually notice the networks or the labor involved unless something goes wrong. For example, employees who manage crises or adapt systems to unexpected challenges often go unnoticed, but when they are gone, the gaps they filled become painfully obvious.

Star illustrates this further through the example of nursing work in hospitals. When such work remains implicit, it becomes invisible – as one respondent noted, it is simply “thrown in with the price of the room.” However, once this work is made explicit and measurable, it becomes vulnerable to cost-cutting measures and efficiency metrics. This example demonstrates how the very act of making invisible work visible can threaten its existence, despite its crucial role in maintaining the infrastructure.

Human participants are what connect the elements of a network. The values or purposes come from the participants. Star noted that the infrastructure is embedded in social practices and human relationships. This means that the work of employees, how they interact with each other, share knowledge, or resolve conflicts, becomes part of the infrastructure itself. When organizations remove people to streamline operations, they erase these informal networks, which can undermine the functioning of the ‘system’.

Another key insight from Star was that the mindless pursuit of efficiencies can propagate deep social inequalities. Star emphasized that infrastructure is not neutral; it reflects power dynamics, values, and social structures. When efficiencies are pursued without recognizing the human labor behind them, inequities are perpetuated and the infrastructure becomes less adaptable and more vulnerable to failure. Most often, the layoffs that accompany the pursuit of efficiency fall on marginalized workers whose invisible labor keeps the infrastructure running.

From a cybernetic perspective, maintaining viability requires some redundancy, or spare capacity, to achieve requisite variety. External variety is always orders of magnitude higher than internal variety. Most of the time, the policies and procedures set in place by the higher-ups are too rigid to meet this external variety. In these situations, variety is managed by the employees at the levels where the external world actually throws it at the organization, and none of this is documented in any policy or procedure. This ignores how real-world messiness is tackled on a daily basis by employees. Cutting staff removes tacit knowledge and informal networks that are critical to keeping systems running, even if they are not formally acknowledged by management.
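To make the variety argument concrete, here is a minimal sketch in Python of Ashby’s Law of Requisite Variety, the cybernetic principle invoked above. The scenario, numbers, and names are my own illustration, not anything from Star: a rigid policy with only three sanctioned responses cannot absorb ten distinct disturbances, while frontline discretion can.

# Toy illustration of Ashby's Law of Requisite Variety: outcomes stay
# acceptable only if the regulator has at least as many distinct
# responses as the environment has distinct disturbances.

disturbances = list(range(10))                   # 10 distinct external situations
rigid_policy = {d: d % 3 for d in disturbances}  # only 3 sanctioned responses
frontline    = {d: d for d in disturbances}      # a tailored response for each

def failures(policy):
    # A situation counts as handled only when the response matches the disturbance.
    return sum(1 for d in disturbances if policy[d] != d)

print(failures(rigid_policy))  # 7 of 10 situations mishandled
print(failures(frontline))     # 0 -- only variety can absorb variety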

Efficiency assumes predictability, a luxury that organizations rarely have. Efficiency drives are tracked with quantifiable metrics such as productivity quotas, and these metrics tend to obscure the complexity of the underlying networks.

Final Words:

The tendency to view infrastructure as merely technical ‘systems’ that can be optimized through efficiency metrics fundamentally misunderstands how complex ‘systems’ actually work. The invisible elements – human relationships, tacit knowledge, informal networks, and social practices – are not inefficiencies to be eliminated but rather critical components that enable ‘systems’ to adapt and survive in an unpredictable world.

When organizations pursue efficiency without recognizing these invisible dimensions, they risk damaging the very mechanisms that make their systems resilient. The human capacity to adapt, solve problems, and maintain relationships forms an essential infrastructure layer that formal processes and metrics cannot capture. This “invisible infrastructure” provides the flexibility and intelligence needed to handle real-world complexity, and removing it impairs the infrastructure’s ability to self-regulate. The error correction of error correction – the second-order capacity to correct how we correct – lies within the tacit and social dimensions of a network. This is a key aspect of making networks viable in the sea of complexity. We need to start framing resilience and redundancy as infrastructure investments, not inefficiencies. We need to start valuing the invisible.

I will finish with a thought-provoking quote from Star and Bowker:

But what are these categories? Who makes them, and who may change them? When and why do they become visible? How do they spread?…Remarkably for such a central part of our lives, we stand for the most part in formal ignorance of the social and moral order created by these invisible, potent entities.

References:

[1] The Ethnography of Infrastructure, Susan Leigh Star. 1999

On the Presence of Complexity:

In today’s post, I am following up on the theme of complexity by drawing upon ideas from Derrida to further explore these concepts. I will start with a fundamental question regarding the basic premise: is complexity an inherent property of a situation, independent of the observer, or does it emerge through observation and purpose? In other words, is complexity a given phenomenon in the external world, or is it constructed?

This question might seem strange to some, while straightforward to others. Some might argue that this leads us down the path of solipsism, while others might contend that this approach is superior as it pushes us away from naive realism. In this article, we will examine the perspective where complexity manifests as an observer-dependent phenomenon, shaped by intention, purpose, and the limitations of presence. Through Derrida’s philosophical framework, we will explore how complexity emerges not as an absolute property, but as a relational phenomenon tied to observer intention and capability.

When we discuss observer-independent properties, we generally refer to physical properties of a situation that are ‘objective’. Consider the example of a termite hill. The material composition, number of tunnels, number of intersections, and other dimensional properties are indeed independent of the observer. However, I would posit that complexity is fundamentally different, and this difference can be demonstrated through three levels of analysis.

First, at the ontological level, complexity emerges as a second-order property. While first-order properties like mass, dimension, or quantity exist independently, complexity arises from the relationships between these properties. These relationships do not exist in isolation but are perceived and constructed through an observer’s cognitive framework. For instance, in our termite hill example, the mere presence of multiple tunnels does not inherently create complexity – it is the observer’s attempt to understand their interconnections, purpose, and evolutionary significance that generates the perception of complexity.

Second, at the epistemological level, complexity manifests through the limitations and capabilities of the observer. Consider two observers of the same termite hill: an entomologist and a child. The entomologist might find the structure’s organization relatively straightforward due to their understanding of termite behavior and construction patterns. The child, lacking this specialized knowledge, might perceive the same structure as overwhelmingly complex. This demonstrates that complexity is not merely about what is being observed, but about the relationship between the observer’s knowledge framework and the observed phenomenon.

Third, at the teleological level, complexity emerges through purpose and intention. When we declare something as ‘complex’, we are not making a purely objective observation. Instead, this declaration typically arises from a specific purpose or intention. This may be tied to the need to manage a situation, the desire to understand a situation, the need to solve a problem or the obligation to make decisions.

This three-tiered analysis demonstrates that the concept of complexity makes the most sense when an observer is involved. As Derrida notes in “Of Grammatology” [1], “There is no outside-text.” Similarly, there is no complexity outside of our purposeful engagement with situations. The very act of identifying complexity is embedded in our intentions and purposes. Complexity ‘emerges’ when we try to understand something, manage something, or achieve something. It is inextricably tied to our purposes and capabilities.

The next point to consider is how différance structures our understanding of complexity. When we identify something as complex, we explain it through emergence. This emergence is further explained through various properties, which in turn point to relationships that lead us back to emergence and complexity. This pattern mirrors Derrida’s différance, where meaning is constantly deferred through a chain of references.

As he notes in “Structure, Sign, and Play in the Discourse of the Human Sciences” [2]:

The center is not the center… the concept of centered structure is in fact the concept of a freeplay based on a fundamental ground, a freeplay constituted upon a fundamental immobility and a reassuring certitude, which is itself beyond the reach of the freeplay.

In his deconstructionist approach, Derrida critiqued the traditional metaphysical idea that meaning or reality is grounded in an immediate, fully present essence—something that can be directly perceived and understood without ambiguity. This notion of “presence” suggests that there is a fundamental truth or meaning that is self-evident and immediately accessible to the mind. However, Derrida challenges this assumption, arguing that meaning is never fully present or directly available. Complexity, in this view, is never completely “present.” It is always understood in relation to other concepts, each of which itself requires further explanation and definition. In this way, complexity can never be reduced to a simple, fixed presence; instead, it is always deferred, dependent on the play of differences and relationships between terms.

Derrida’s concept of différance provides crucial insights into how complexity ‘operates’. In “Margins of Philosophy” [3], he writes:

Différance is what makes the movement of signification possible only if each so-called ‘present’ element… is related to something other than itself, thereby keeping within itself the mark of its past element and already letting itself be vitiated by the mark of its relation to the future element.

Complexity is never purely present. It carries traces of past experiences, and it points toward future implications. In this perspective, complexity exists in a network of relationships. The concept of trace is particularly relevant to understanding complexity. In “Of Grammatology” [1], Derrida explains:

The presence-absence of the trace… carries in itself the problems of the letter and the spirit, of body and soul, and of all the problems whose primary affinity I have recalled.

This suggests that complexity is both present (in our observation) and absent (in its continual deferral). Complexity carries within itself marks of our purposes and intentions. It also contains traces of our past experiences and future expectations. This leads us to a nuanced understanding of complexity as a perspective of possibilities. This is further illuminated by Derrida’s critique of ‘presence’ in “Speech and Phenomena” [4]:

The presence of the perceived present can appear as such only inasmuch as it is continuously compounded with a nonpresence and nonperception, with primary memory and expectation.

Derrida explores the idea that our perception of the present moment is inherently tied to our understanding of time, memory, and anticipation. In other words, we cannot fully experience or recognize the “present” unless it is continuously linked to what is not present—our memory of the past and our expectations for the future. This further supports the view of complexity as being grounded in our capabilities and shaped by our purposes. It is influenced by our past experiences and directed toward future actions.

Final Thoughts

Much like Berkeley’s question of whether a tree falling in the forest will make a sound if there is no one to hear it, I propose that complexity requires an observer. Complexity measures are always knowledge and purpose-relative. This means that different purposes yield different complexities. What is complex to one observer may be merely complicated to another. No absolute measure can exist independent of purpose. Consider the example of a pandemic: there may be objective properties such as transmission rate, virus size, and type, but the notion of complexity makes sense only within the network of relationships, purposes, and meanings. Here, complexity emerges through various needs such as public health management, economic considerations, and social conditions.

As Derrida’s philosophy suggests, complexity exists not as a presence but as a network of relationships, purposes, and meanings. There is no ‘ground’ for complexity as a pure property independent of an observer. This view offers a more nuanced and practical approach to understanding and managing complex situations. This perspective changes how we approach complex challenges, suggesting that effective management requires understanding not just situations, but the purposes, capabilities, and contexts that make them complex in the first place.

Managing complexity within this framework requires understanding the specific purposes of the participants, their capabilities, contextual factors, and available resources. We should appreciate multiple perspectives and not fear provisional solutions. I invite readers to check out this post that goes deeper into Derrida’s deconstruction.

I will conclude with Derrida’s words:

There are things like reflecting pools, and images, an infinite reference from one to the other, but no longer a source, a spring. There is no longer any simple origin. For what is reflected is split in itself and not only as an addition to itself of its image. The reflection, the image, the double, splits what it doubles. The origin of the speculation becomes a difference. What can look at itself is not one; and the law of the addition of the origin to its representation, of the thing to its image, is that one plus one makes at least three.

(Simply put, the above passage suggests that representation or reflection always results in a gap because they are inherently split. This gap creates a difference between the original and its image. As a result, the traditional notion of a stable origin or source is undermined. Instead, meaning emerges through a play of differences. The idea that “one plus one makes at least three” indicates that when an origin is reflected or represented, a third element, the difference or gap between them, emerges. This reveals that neither the original nor its reflection is self-contained or stable.)

Always keep on learning…

[1] Of Grammatology, Derrida. 1967

[2] Structure, Sign, and Play in the Discourse of the Human Sciences, Derrida. 1970

[3] Margins of Philosophy, Derrida. 1972

[4] Speech and Phenomena, Derrida. 1967

The Patron Saint of Complexity:

In today’s post, I am looking at the notion of a patron saint of complexity. I have been asked why I am a fan of Ludwig Wittgenstein; in fact, I suspect today’s post might draw some responses about how overrated Wittgenstein is. The answer is simple: I have come to see Wittgenstein as philosophy’s patron saint of complexity, reminding us that all systems are fundamentally human constructions. While the world simply is, it is our minds that weave the intricate web of meanings and patterns we call complexity.

I am of the school that complexity is something that we, humans, attribute to the world around us. It is a form of perspective, a form of expression. As Heinz von Foerster, a distant relative of Wittgenstein and the Socrates of Cybernetics, said – the environment as we perceive it is our invention. Wittgenstein’s point is that our understanding of the world is something we construct socially, and it is unique to our ‘human’ understanding. He sought to use philosophy as a means of therapy to find our way around the world.

Complexity emerges not as an inherent property of a ‘system’ but through how an observer interacts with and frames it. Wittgenstein’s insights suggest that the ‘complexity’ of a situation depends on the observer’s language games and forms of life. This perspective aligns with several key ideas from his later work. I encourage the reader to explore these ideas here.

Language games emphasize that meaning arises from context and use within specific activities. Just as words mean different things in different contexts, a situation’s complexity depends on the framing and engagement of the observer. These meanings are tied to the practices and ‘forms of life’ of a community – our background, values, and experiences shape how we perceive and interpret complexity. Wittgenstein’s rejection of fixed structures supports the idea that ‘systems’, and therefore, complexity, are emergent and non-linear, defying reductionist interpretations. His shift to examining ordinary language and everyday practices focuses on the dynamics of interaction. There is no universal viewpoint – only perspectives grounded in specific contexts.

A Thought Experiment:
I invite the reader to engage in a thought experiment – Imagine a world without language. How would that impact the complexity around us?

Without language, much of our socially constructed complexity would disappear. ‘Systems’ like economics, politics, and science – built on linguistic frameworks – would dissolve, leaving only direct, lived experience. A ‘market’ as we understand it, with its web of transactions, expectations, and regulations, would reduce to immediate barter or interaction, lacking the social conceptual scaffolding of ‘value’ or ‘profit’.

Yet paradoxically, individual perception of complexity might increase because the interpretive burden would shift entirely to the individual. Every interaction or phenomenon would need to be understood in real-time, without the benefit of shared categories or explanations. Consider how a pre-linguistic human might experience a tree – they would see its shape, feel its bark, notice its movement in the wind, and understand functionally that it provides shelter and fruit. But they couldn’t categorize it within abstract concepts like ‘ecosystem’ or ‘life cycle’.

This suggests something interesting – Language does not just describe complexity, it also generates complexity. Through language, we create nested layers of abstraction, build shared conceptual frameworks, accumulate and transmit knowledge across generations.

Without language, the world would be both simpler and more ineffable – but not necessarily less complex. We wouldn’t experience this as “simplicity” because the very concept of “simple vs. complex” is itself a linguistic construct. Like a wolf in the forest, we would simply experience raw reality without the mediating layer of linguistic abstraction.

We can see that language is both a magnifier and a creator of complexity. It allows us to construct shared realities that vastly exceed the sum of our individual experiences. Without it, the world would likely feel simpler in its structure but more intricate in its immediacy. This reminds us that complexity is not just ‘out there’, but also deeply entangled with how we communicate and make sense of the world.

The world would continue in all its intricate interactions – weather patterns would still form, ecosystems would still function, quantum particles would still behave in their strange and mysterious ways. We just wouldn’t have the linguistic frameworks to model and discuss these phenomena. Perhaps this reveals our linguistic bias – the assumption that the world must be either ‘more complex’ or ‘more simple’. Without language, such distinctions wouldn’t exist. The world would just be.

I will finish with an apt quote from Wittgenstein:

The sense of the world must lie outside the world. In the world everything is as it is, and everything happens as it does happen: in it no value exists—and if it did exist, it would have no value.

Always Keep Learning…

The ‘Form’ of Complexity:

In today’s post, I am exploring complexity through the lens of George Spencer Brown’s “Laws of Form”. This philosophical and mathematical treatise explores the foundations of logic and mathematics via a unique symbolic system. Spencer Brown introduces a primary algebra based on a simple mark and the act of drawing a distinction. The mark itself is a fundamental concept that represents both the act of drawing a boundary and the boundary itself. I welcome the reader to explore the main concepts here and here.

Spencer Brown wrote the following in Laws of Form:

A universe comes into being when a space is severed or taken apart. The skin of a living organism cuts off an outside from an inside. So does the circumference of a circle in the plane. By tracing the way we represent such a severance, we can begin to reconstruct, with an accuracy and coverage that appear almost uncanny, the basic forms underlying linguistic, mathematical, physical, and biological science, and can begin to see how the familiar laws of our own experience follow inexorably from the original act of severance.

Imagine a blank sheet of paper, and now imagine drawing a line anywhere on it. Perhaps you drew a vertical line or a horizontal one. Perhaps you drew it near the left edge, or perhaps in the middle. No matter where the line was drawn, you have now created two sides that were not there before. Now select one side. The side you chose might be the left side, or perhaps the smaller of the two sides, or the larger. It could be the one on your dominant side or the one with a black speck on it. As you can see, there are numerous ways to define the distinction you just made. All of this depends on the observer.

[Figure: the form of the mark – Spencer Brown’s ‘cross’, a right-angled stroke that separates an inside (the marked state) from an outside (the unmarked state).]

The side that you chose is the marked state, and the side that was not chosen is called the unmarked state. The line is called the distinction. The curious thing about the line is that it contains the marked state and yet is not part of the content itself. Consider the name of an object: the name is a word that refers to the object yet is not the object itself. Like a fence or a wall around a property, it marks the boundary while not being the property itself; the property is what is contained inside the boundary. The boundary is neither part of the inside nor of the outside. It is what allows the observer to see the possibilities of the contained. The mark simultaneously separates and connects.
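Spencer Brown builds a ‘primary arithmetic’ on this mark, governed by two laws: the law of calling (a mark written beside itself condenses to a single mark) and the law of crossing (a mark nested inside a mark cancels to the unmarked state). Below is a minimal Python sketch; the encoding of a mark as a list of its contents, with marked as True and unmarked as False, is my own choice, not Spencer Brown’s notation.

def value_of(expression):
    # An expression (a sequence of marks side by side) is marked if any
    # of its marks evaluates to marked -- the law of calling.
    return any(cross(mark) for mark in expression)

def cross(mark):
    # A mark inverts the value of whatever it encloses -- the law of crossing.
    return not value_of(mark)

print(value_of([[]]))      # the simple mark: marked -> True
print(value_of([[[]]]))    # a mark crossed again cancels -> False
print(value_of([[], []]))  # a mark called again condenses -> True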

The reader might now be reminded of Gibson’s ‘affordances’. Affordances lie in the realm of the mark. They are not properties exclusive to the object or the subject. Affordances are action potentials identified by the subject or the person making the distinction. According to Gibson, affordances are opportunities for action that the environment offers to an organism, but these opportunities are defined in relation to the capabilities of the organism.

Let’s use the example of a door. The mark identifies action potentials such as the ability to provide an opening when the door handle is rotated, to hang a wreath, or to peek at the external world through a peephole. These action potentials are the various possibilities recognized by the observer, and they rely on the observer’s previous interactions. This points to an important idea in Cybernetics called ‘variety’. Variety refers to the number of distinct states identified by an observer of a ‘system’ constructed by that observer. Variety is also used as a measure of complexity.
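Since variety counts the states an observer distinguishes, the same door can carry different variety for different observers. A small sketch, with both observers and their state lists invented for illustration:

child_states     = {"open", "closed"}
locksmith_states = {"open", "closed", "ajar", "latched", "locked",
                    "deadbolted", "off-hinges"}

# Variety is simply the number of distinct states the observer identifies.
print(len(child_states))       # 2
print(len(locksmith_states))   # 7 -- same door, higher variety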

Spencer Brown said that the mark provides perfect continence. This means the mark perfectly contains what is inside without any leaks. It creates a boundary that separates the inside from the outside. From this perspective, what is inside the mark is internally coherent since it is perfectly contained by the mark. The observer can hold multiple distinctions within a mark. A door and a window are both framed openings for a building. The observer has distinguished between the two, yet they can be combined into a new grouping – framed openings for a building. A door is an internally coherent concept, as is a window. Both are internally coherent when taken as framed openings for a building. The concept of framed openings for a building is also an internally coherent concept.

In the example above, the reader can see the ‘nestedness’ of various marks. This brings up the next important idea: boundaries are recursive. What is contained inside a boundary or mark is self-contained, and it can contain further marks or be positioned inside a larger mark. We have been discussing the notion of internal coherence. Another way to look at it is through the idea of viability. Each mark, whether it contains other marks or sits inside a larger one, should be viable. When an observer draws a boundary around a whole, the whole should be a viable entity. This is also the basis for Stafford Beer’s Viable System Model. The VSM offers a framework to diagnose the viability of a given ‘system’. I welcome the reader to explore this further here.

The last concept I want to introduce is the ‘Markovian’ nature of complexity. We have seen that complexity refers to the action possibilities of a situation, reliant on the observer and on distinctions that are internally coherent. The various distinctions go together, yielding new possibilities while maintaining the internal coherence of the larger whole. The action possibilities of a situation are entirely based on the current state – the different possibilities made available and identified by the observer at a given time. In other words, future possibilities depend on the current state only; where we are right now determines where we can go next, not the states that preceded it. This can seem confusing, since where we are right now depended on our past actions. But if you think about it, our next set of actions is made possible through our current state only.
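A toy Markov chain makes the point runnable: the distribution over the next state is read off the current state alone, and the accumulated history plays no role in the sampling. The states and probabilities below are invented for illustration.

import random

transitions = {
    "explore": {"explore": 0.5, "exploit": 0.5},
    "exploit": {"explore": 0.2, "exploit": 0.8},
}

def step(state):
    # Sample the next state using only the current state's row.
    options = transitions[state]
    return random.choices(list(options), weights=list(options.values()))[0]

state, history = "explore", ["explore"]
for _ in range(10):
    state = step(state)        # 'history' is never consulted
    history.append(state)
print(history)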

Historical context and path dependency in many fields—from ecology to economics—seemingly suggest that past states fundamentally shape future potentials. While conventional wisdom argues that our trajectory is deeply rooted in historical conditions, this perspective oversimplifies the dynamic nature of complex ‘systems’. The current state is not merely a passive recipient of historical momentum, but an active generative point of emergence.

This means that every moment contains an infinite landscape of possibilities, yet these possibilities are simultaneously constrained and enabled by our present configuration. The past does not directly determine future states. Instead, it provides a contextual substrate from which current possibilities arise. Our current state is a complex compression of historical interactions, not a linear continuation of them.

In complex ‘systems’, the relationship between past and present is not deterministic but probabilistic. In this view, the current state acts as a filter, transforming historical conditions into immediate possibilities. These possibilities are not predetermined but emerge through the intricate interactions of the system’s current elements. The past provides context, but the present provides agency.

This understanding reveals a profound generative principle: potential is fundamentally a property of the present moment. While historical interactions create the conditions for current possibilities, these possibilities are activated and defined solely by the current state’s unique configuration. The past whispers, but the present speaks.

Moreover, this perspective invites a more dynamic understanding of complexity. Instead of viewing systems as predetermined trajectories, we can see them as constantly emerging landscapes of possibility, where each moment represents a unique point of potential transformation. The current state is not bound by historical determinism but is a creative threshold of becoming.

This approach does not negate the importance of historical context but reframes it. Historical interactions are not chains that bind future potential, but rather the rich, complex background from which new possibilities continuously emerge. The present moment is always more than the sum of its historical parts—it is a generative interface where past, present, and potential converge.

Final words:

This viewpoint invites us to see boundaries not as rigid divisions, but as dynamic interfaces of possibility. The concept of affordances and variety provides a rich framework for exploring how systems emerge, interact, and evolve. The true power of this perspective lies in its invitation to reimagine boundaries—not as limitations, but as generative spaces of potential. Whether in scientific inquiry, organizational design, or personal understanding, the act of drawing distinctions becomes a creative process of world-making.

I will finish with a wonderful quote from Spencer Brown:

Thus, we cannot escape the fact that the world we know is constructed in order to see itself. This is indeed amazing. Not so much in view of what it sees, although this may appear fantastic enough, but in respect of the fact that it can see at all. But in order to do so, evidently it must first cut itself up into at least one state which sees, and at least one other state which is seen.

Always keep learning.

The Truths of Complexity:

The Covid-19 pandemic has given me an opportunity to observe, meditate on, and learn about complexity in action. In today’s post, I am looking at “truths” in complexity. Humans can change their environment faster than any other species. They are also able to maintain belief systems over time and act on them autonomously. These are good reasons to call all “human systems” complex systems.

The Theories of Truth:

Generally, there are three theories of truth in philosophy. They are as follows:

  1. Correspondence theory of truth – very simply put, this means that what you have internally in your mind corresponds one-to-one with the external world. A statement such as “the cat is on the mat” is true if there are truly a cat and a mat, and if that cat is on that mat. The main objection to this theory is that we don’t have direct access to an objective reality. What we have is a sensemaking organ, our brain, that is trying to make sense of the data provided by the various sensory organs. Over time, the brain generates stable correlations, which allow it to abstract meanings from the filtered sensory information. The correspondence theory is viewed as a “static” picture of truth and fails to explain the dynamic and complex nature of reality.
  2. Coherence theory of truth – in this approach, a statement is true if it coheres with a specified set of beliefs and propositions. Here the idea is more about fit and harmony with existing beliefs. The coherence theory is about consistency. An objection to this theory is that the subjective nature of a statement can “bend” to match existing strong belief systems. Perhaps a good example of this is the recent poll that found that the majority of Democrats fear that the worst is yet to come in the Covid-19 pandemic, while the majority of Republicans believe that the worst is over. Another criticism is that we can be inconsistent in our beliefs, as indicated by cognitive dissonance.
  3. Pragmatic theory of truth – the pragmatic theory of truth was put forth as an alternative to the static correspondence theory of truth. In this theory, the value of truth depends on the utility it brings. Pragmatic theories of truth have the effect of shifting attention away from what makes a statement true and toward what people mean or do in describing a statement as true. As one of the proponents of the pragmatic theory, William James, put it, true beliefs are useful and dependable in ways that false beliefs are not: ‘You can say of it then either that “it is useful because it is true” or that “it is true because it is useful”. Both these phrases mean exactly the same thing.’ One of my favorite explanations of the pragmatic theory comes from Richard Rorty, who viewed it as coping with reality rather than copying reality. One criticism of the pragmatic theory of truth concerns how it explains truth in terms of utility. As John Capps notes, utility, long-term durability, and assertibility (etc.) should be viewed not as definitions but rather as criteria of truth, as yardsticks for distinguishing true beliefs from false ones.

Sensemaking Complexity:

From the discussion of truth, we can see that seeking truth is not an easy task, especially when we deal with the complexity of human systems. Our natural tendency is to find order pleasing and reassuring. We try to find order wherever we can, and we try our best to maintain it as long as we can. In this attempt, we often neglect the actual complexity we are dealing with. A common way to characterize the complexity of a phenomenon is as ordered, complicated, or complex. A square peg in a square hole is an ordered phenomenon. The correspondence theory of truth is quite apt here because we have a one-to-one relationship and a very good working knowledge of cause and effect. As complexity increases, we get to complicated phenomena, where there is still a reasonably good cause-and-effect relationship. A car can be viewed as a complicated phenomenon, and the correspondence theory is still apt. Once we add a human to the mix, we get to complexity. Imagine the driver of a car. Now imagine thousands of drivers all at once. The correspondence theory of truth falls apart fast here.

The main source of complexity in the example discussed above comes from humans. We are autonomous, and we are able to justify our own actions. We may go faster than the speed limit because we are already late for the appointment. We may overtake on the wrong side because the other driver is driving slowly. We assign meanings and we also assign purposes for others. We do not always realize that other humans also have the same power.

We have seen varying responses and behavior in this pandemic. We have seen the different justifications and hypotheses. We agree with some of them and strongly disagree with others depending on how they cohere with our own belief systems. The actual transmission of the virus is fairly constrained. It transmits mainly from person to person. The transmission occurs mainly through respiratory droplets. Every human interaction carries some risk of becoming infected if the other person is a carrier of the virus. However, the actual course of the pandemic has been complex.

Philosophical Insights to Sensemaking Complexity:

I will use the ideas of Friedrich Nietzsche and Willard V. O. Quine to further look at truth and how we come to know about it. Nietzsche had a multidimensional view of truth. He viewed truth as:

A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.

He emphasized the abstract nature of truth. One comes to view abstractions and metaphors as stand-ins for reality, and eventually falsely equates them with reality.

Every word immediately becomes a concept, in as much as it is not intended to serve as a reminder of the unique and wholly individualized original experience to which it owes its birth, but must at the same time fit innumerable, more or less similar cases—which means, strictly speaking, never equal—in other words, a lot of unequal cases. Every concept originates through our equating what is unequal.

Nietzsche advised us against using a cause-effect, correspondence type viewpoint in sensemaking complexity:

It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically. 

As Maureen Finnigan notes in her wonderful essay, Nietzsche’s Perspective: Beyond Truth as an Ideal:

As truth is not objective, in like manner, it is not subjective. Since thinking is not wholly rational, disconnected from the body, or independent of the world, the subjective perception, or conception, of truth through the intellect alone is impossible. “The ‘pure spirit’ is pure stupidity: if we subtract the nervous system and the senses—the ‘mortal shroud’—then we miscalculate—that is all!” Inasmuch as the individual is not independent from the world, one can neither subjectively nor objectively explain the world as if detached, but must interpret the world from within. Subjective and objective, like True and apparent, soul and body, thinking thing and material thing, intellect and sense, noumena and phenomena, are dualities that Nietzsche aspires to overcome. Thus, although Nietzsche is not a rationalist, this does not mean he falls into the irrationalist camp. He does not abolish reason but instead situates it within life, as an instrument, not as an absolute.

With complexity, we should not look for correspondence but coherence. Correspondence forces categorization, while coherence forces connections. This follows nicely into Quine’s Web of Belief idea. Quine’s idea is a holistic approach: we make meanings in a holistic fashion. When we observe a phenomenon, our sensory experience and the belief it generates do not stand alone in our entire belief system. Instead, Quine postulates that we make sense holistically with a web of belief. Every belief is connected to other beliefs like a web.

For example, we can say Experience 1 (E1) led to Belief 1 (B1), Experience 2 (E2) led to Belief 2 (B2), and so on. This has the correspondence nature we discussed earlier and prefers the ordered, static approach to sensemaking. In Quine’s view, however, it is more dynamic, interconnected, and complex – this has the coherence nature we discussed earlier. The schematic below, inspired by a lecture note from Bryan W. Van Norden, shows this in detail.

The idea of Web of Belief is clearly explained by Thomas Kelly:

Quine famously suggests that we can picture everything that we take to be true as constituting a single, seamless “web of belief.” The nodes of the web represent individual beliefs, and the connections between nodes represent the logical relations between beliefs. Although there are important epistemic differences among the beliefs in the web, these differences are matters of degree as opposed to kind. From the perspective of the epistemologist, the most important dimension along which beliefs can vary is their centrality within the web: the centrality of a belief corresponds to how fundamental it is to our overall view of the world, or how deeply implicated it is with the rest of what we think. The metaphor of the web of belief thus represents the relevant kind of fundamentality in spatial terms: the more a particular belief is implicated in our overall view of the world, the nearer it is to the center, while less fundamental beliefs are located nearer the periphery of the web. Experience first impinges upon the web at the periphery, but no belief within the web is wholly cut off from experience, inasmuch as even those beliefs at the very center stand in logical relations to beliefs nearer the periphery.

The idea of degrees, rather than a concrete distinction between beliefs, is very important to note here. Additionally, Quine proposes that when we encounter an experience contradicting our beliefs, we seek to restore consistency and coherence in the web by giving up beliefs located near the periphery rather than those near the center.
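A toy rendering of this periphery-first revision, with the beliefs and their centrality scores entirely invented for illustration:

web = {
    "logic holds":             0.99,  # near the center
    "physics is law-governed": 0.90,
    "this thermometer works":  0.30,
    "it is 20C outside":       0.10,  # near the periphery
}

def revise(web, contradicted):
    # Restore coherence by dropping the contradicted belief
    # with the lowest centrality.
    victim = min(contradicted, key=web.get)
    return victim, {b: c for b, c in web.items() if b != victim}

# The thermometer reads 35C: something in this subset has to give.
victim, web = revise(web, ["physics is law-governed",
                           "this thermometer works",
                           "it is 20C outside"])
print(victim)  # 'it is 20C outside' -- the peripheral belief goes first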

Final Words:

The dynamic nature of complexity is not just applicable to a pandemic but also to scientific paradigms. This is beautifully explained in the quote from Jacob Bronowski below:

“There is no permanence to scientific concepts because they are only our interpretations of natural phenomena … We merely make a temporary invention which covers that part of the world accessible to us at the moment”

Our beliefs shape our experiences as much as our experiences shape our beliefs, in a recursive manner. The web gets more complex as time goes on; some of the nodes become more distinct while others get hazier. We are prone to perpetual frustration if we try to apply a static framework to the dynamic, ever-changing domain of complexity. It gets more frustrating because patterns emerge continuously, providing an illusion of order. Static and rigid frameworks break because of their inflexibility in tackling the variety thrown upon them.

With this in mind, we should come to realize that we do not have a means to know the external world as-is. All we can know is how it appears to us based on our web of belief. The pragmatic tradition of truth advises us to keep going on our search for truth, and that this search is self-corrective. The correspondence theory fails us because the meaning we create is not independent of us, but very much a product of our web of belief. At the same time, if we don’t seek to understand others, coherence theory will fail us because we would lack the requisite variety needed to make sense of a complex phenomenon. I will finish with an excellent quote from Maureen Finnigan:

Human beings impose their own truth on life instead of seeking truth within life.

Stay safe and Always keep on learning… In case you missed it, my last post was Korzybski at the Gemba:

The Free Energy Principle at the Gemba:


In today’s post, I am looking at the Free Energy Principle (FEP) by the British neuroscientist, Karl Friston. The FEP basically states that in order to resist the natural tendency to disorder, adaptive agents must minimize surprise. A good example to explain this is to say successful fish typically find themselves surrounded by water, and very atypically find themselves out of water, since being out of water for an extended time will lead to a breakdown of homoeostatic (autopoietic) relations.[1]

Here the free energy refers to an information-theoretic construct:

Because the distribution of ‘surprising’ events is in general unknown and unknowable, organisms must instead minimize a tractable proxy, which according to the FEP turns out to be ‘free energy’. Free energy in this context is an information-theoretic construct that (i) provides an upper bound on the extent to which sensory data is atypical (‘surprising’) and (ii) can be evaluated by an organism, because it depends eventually only on sensory input and an internal model of the environmental causes of sensory input.[1]
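In the standard variational formulation (my gloss in symbols, not a quotation from the review), the free energy $F$ bounds surprise from above because a KL divergence is never negative:

F \;=\; D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big] \;-\; \ln p(o) \;\ge\; -\ln p(o)

Here $q(s)$ is the organism’s internal model of the hidden causes $s$, $p(s \mid o)$ the true posterior given sensory input $o$, and $-\ln p(o)$ the surprise. Minimizing $F$ by adjusting $q$ tightens the bound (perception); minimizing it by changing $o$ itself is action.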

In FEP, our brains are viewed as predictive engines, or Bayesian inference engines. This idea builds on predictive coding/processing that goes back to the German physician and physicist Hermann von Helmholtz in the 1800s. The main idea is that we have a hierarchical structure in our brain that tries to predict what is going to happen based on the sensory data previously received. As the philosopher Andy Clark explains, our brain is not a cognitive couch potato waiting for sensory input to make sense of what is going on; it is actively predicting what is going to happen next. This is why minimizing surprise is important.

For example, when we lift a closed container, we predict that it is going to have a certain weight based on our previous experiences and the visual signal of the container. We are surprised if the container is light and can be lifted easily. We have similar experiences when we miss a step on a staircase. From a mathematical standpoint, we can say that when our internal model matches the sensory input, we are not surprised. This is captured by the KL divergence in information theory: the lower the divergence, the better the fit between the model and the sensory input, and the lower the surprise.

The hierarchical model is top-down. Predictions flow down the hierarchy, while sensory data flows up. If the model matches the sensory data, nothing goes up the chain. However, when there is a significant difference between the top-down prediction and the bottom-up incoming sensory data, the difference is passed up the chain. One of my favorite examples is to imagine that you are in the shower with the radio playing. You can faintly hear the radio over the shower. When your favorite song comes on, you feel like you can hear it better than when an unfamiliar song is played. This is because your brain can better predict what is going to happen, and the prediction helps smooth out the incoming auditory signals. The British neuroscientist Anil Seth has a great quote regarding the predictive processing idea: “perception is controlled hallucination.”
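A minimal numeric sketch of the divergence just mentioned, with the distributions invented for illustration: the closer the brain’s predicted distribution is to the actual sensory one, the smaller the KL divergence, and the smaller the surprise.

import numpy as np

def kl_divergence(p, q):
    # D_KL(p || q) in nats; assumes p and q are strictly positive and sum to 1.
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sum(p * np.log(p / q)))

actual     = [0.70, 0.20, 0.10]  # what the senses deliver
good_model = [0.65, 0.25, 0.10]  # close prediction -> little surprise
poor_model = [0.10, 0.10, 0.80]  # bad prediction   -> large surprise

print(kl_divergence(actual, good_model))  # ~0.007 nats
print(kl_divergence(actual, poor_model))  # ~1.29 nats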

Andy Clark explains this further:

Perception itself is a kind of controlled hallucination… [T]he sensory information here acts as feedback on your expectations. It allows you to often correct them and to refine them.

(T)o perceive the world is to successfully predict our own sensory states. The brain uses stored knowledge about the structure of the world and the probabilities of one state or event following another to generate a prediction of what the current state is likely to be, given the previous one and this body of knowledge. Mismatches between the prediction and the received signal generate error signals that nuance the prediction or (in more extreme cases) drive learning and plasticity.

Predictive coding models suggest that what emerges first is the general gist (including the general affective feel) of the scene, with the details becoming progressively filled in as the brain uses that larger context — time and task allowing — to generate finer and finer predictions of detail. There is a very real sense in which we properly perceive the forest before the trees.

What we perceive (or think we perceive) is heavily determined by what we know, and what we know (or think we know) is constantly conditioned on what we perceive (or think we perceive).

(T)he task of the perceiving brain is to account for (to accommodate or ‘explain away’) the incoming or ‘driving’ sensory signal by means of a matching top-down prediction. The better the match, the less prediction error then propagates up the hierarchy. The higher level guesses are thus acting as priors for the lower level processing, in the fashion (as remarked earlier) of so-called ‘empirical Bayes’.

The question of what happens when the prediction does not match is best explained by Friston:

“The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimize free energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system’s state and structure encode an implicit and probabilistic model of the environment.”

Our brains are continuously sampling the data coming in and making predictions. When there is a mismatch between the prediction and the data, we have three options.

  • Update our model to match the incoming data.
  • Attempt to change the environment so that the model matches the environment. Try resampling the data coming in.
  • Ignore and do nothing.

Option 3 will not always yield positive results. Option 1 is a learning process in which we update our internal models based on new evidence. Option 2 shows strong confidence in our internal model and in our ability to change the environment; alternatively, there may be something wrong with the incoming data, and we have to resample before proceeding.
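A toy sketch of this perceive-or-act choice, entirely illustrative: a scalar ‘model’ and a scalar stream of ‘data’, where the prediction error either updates the model (option 1, perception) or changes what is sampled (option 2, action).

def minimize_surprise(model, data, can_act, rate=0.5):
    error = data - model
    if can_act:
        data = data - rate * error    # action: change the environment/sampling
    else:
        model = model + rate * error  # perception: update the internal model
    return model, data

model, data = 1.0, 5.0
for _ in range(10):
    model, data = minimize_surprise(model, data, can_act=False)
print(round(model, 3))  # the model has converged toward the data (~5.0)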

The ideas from FEP can also further our understanding of our ability to balance maintaining the status quo (exploit) against going outside our comfort zones (explore). To paraphrase the English polymath Spencer Brown, the first act of cognition is to differentiate (the act of distinction). We start by differentiating: me/everything else. We experience and “bring forth” the world around us by constructing it inside our minds. This construction has to be a simpler version, given the very high complexity of the world around us. We only care about correlations that matter to us in our local environment, since these matter most for our survival and sustenance. This leads to a tension. We want to look for things that confirm our hypotheses and maintain the status quo: a short-term vision. However, this doesn’t help our sustenance in the long run. We also need to explore, to look for things that we don’t know about: the long-term vision that helps us prepare to adapt to the ever-changing environment. There is a balance between the two.

The idea of FEP can go from “I model the world” to “we model the world” to “we model ourselves modelling the world.” As part of a larger human system, we can cocreate a shared model of our environment and collaborate to minimize the free energy leading to our sustenance as a society.

Final Words:

FEP is a fascinating field, and I welcome readers to check out the works of Karl Friston, Andy Clark, and others. I will finish with the insight from Friston that minimizing free energy is also a way of recognizing one’s existence.

Avoiding surprises means that one has to model and anticipate a changing and itinerant world. This implies that the models used to quantify surprise must themselves embody itinerant wandering through sensory states (because they have been selected by exposure to an inconstant world): Under the free-energy principle, the agent will become an optimal (if approximate) model of its environment. This is because, mathematically, surprise is also the negative log-evidence for the model entailed by the agent. This means minimizing surprise maximizes the evidence for the agent (model). Put simply, the agent becomes a model of the environment in which it is immersed. This is exactly consistent with the Good Regulator theorem of Conant and Ashby (1970). This theorem, which is central to cybernetics, states that “every Good Regulator of a system must be a model of that system.” .. Like adaptive fitness, the free-energy formulation is not a mechanism or magic recipe for life; it is just a characterization of biological systems that exist. In fact, adaptive fitness and (negative) free energy are considered by some to be the same thing.

Always keep on learning…

In case you missed it, my last post was The Whole is ________ than the sum of its parts:

[1] The free energy principle for action and perception: A mathematical review. Christopher L. Buckley, Chang Sub Kim, Simon McGregor, Anil K. Seth (2017)

Clausewitz at the Gemba:


In today’s post, I will be looking at Clausewitz’s concept of “friction”. Carl von Clausewitz (1780-1831) was a Prussian general and military philosopher. Clausewitz is considered to be one of the best classical strategy thinkers and is well known for his unfinished work, “On War.” The book was published posthumously by his wife Marie von Brühl in 1832.

War is never a pleasant business, and it takes a terrible toll on people. The accumulated effect of factors such as danger, physical exertion, intelligence (or the lack thereof), and the influence of environment and weather, all subject to chance and probability, is what distinguishes real war from war on paper. Friction, Clausewitz noted, was what separated war in reality from war on paper. Friction, as the name implies, hinders the proper and smooth execution of strategy and clouds the rational thinking of agents. He wrote:

War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty.

Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war.

Friction is the only conception which, in a general way, corresponds to that which distinguishes real war from war on paper. The military machine, the army and all belonging to it, is in fact simple; and appears, on this account, easy to manage. But let us reflect that no part of it is in one piece, that it is composed entirely of individuals, each of which keeps up its own friction in all directions.

Clausewitz viewed friction as impeding our rational ability to make decisions. He cleverly stated, “the light of reason is refracted in a manner quite different from that which is normal in academic speculation… the ordinary man can never achieve a state of perfect unconcern in which his mind can work with normal flexibility.” In a tense situation, as is most often the case in combat, the “freshness,” or usefulness, of the available information decays quickly, and its reliability is also in question.

Friction is what happens when reality differs from your model. Although Clausewitz’s concept of friction contains other elements, the one I am interested in is the friction coming from ambiguous information. Uncertainty and information are related: each is, in effect, the absence of the other. The only way to reduce uncertainty is to acquire the information that counters it. To quote Wikipedia, “Uncertainty refers to epistemic situations involving imperfect or unknown information.” If we have full information, we have no uncertainty; the two trade off exactly.
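Information theory makes this trade-off quantitative: uncertainty can be measured as Shannon entropy, and every bit of relevant information removes exactly one bit of it. A minimal sketch in Python; the scouting example is my own illustration, not Clausewitz’s:

    import math

    def entropy(probs):
        """Shannon entropy in bits: the uncertainty held in a distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Eight equally likely enemy positions: 3 bits of uncertainty.
    prior = [1 / 8] * 8
    print(entropy(prior))  # 3.0

    # A report rules out six of them; two equally likely remain: 1 bit.
    posterior = [1 / 2, 1 / 2]
    print(entropy(posterior))  # 1.0

    # The report therefore carried 3 - 1 = 2 bits of information.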

We have two options to deal with the uncertainty due to informational friction:

  1. Reduce uncertainty by making useful information readily available to the agents who need it, when and where they need it.
  2. Come up with ways to tolerate uncertainty when we are not able to reduce it further.

As Moshe Rubinstein points out in his wonderful book, Tools for Thinking and Problem Solving, uncertainty is reduced only by the acquisition of information, and we should ask three questions, in this order, when acquiring it (a sketch of this ordering follows the list):

  1. Is the information relevant? (is it current, and is the context applicable?)
  2. Is the information credible? (is it accurate?)
  3. Is the information worth the cost?
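The ordering matters because it short-circuits: irrelevant information never gets vetted for credibility, and incredible information never gets priced. A tiny sketch; the Info type and its field names are hypothetical, my own gloss on Rubinstein’s questions:

    from dataclasses import dataclass

    @dataclass
    class Info:
        relevant: bool    # current, and context applicable?
        credible: bool    # accurate?
        worth_cost: bool  # value exceeds the cost of acquiring it?

    def worth_acquiring(info: Info) -> bool:
        """Rubinstein's three questions, asked in order; Python's `and`
        stops at the first 'no', so later checks are never paid for."""
        return info.relevant and info.credible and info.worth_cost

    print(worth_acquiring(Info(relevant=True, credible=True, worth_cost=False)))  # False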

How should we proceed to minimize the friction?

  1. We should try to get the total picture, an understanding of the forest before we get lost in the trees. This helps us realize where our epistemic boundaries might be, and where we need to improve our learning.
  2. We should have the courage to ask questions and cast doubts on our world views. Even with our belief system, we can ask whether it is relevant and credible. We should try to ask – what is wrong with this picture? What am I missing?
  3. We should always keep on learning. We should not shy away from “hard projects.” We should see the challenges as learning experiences.
  4. We should be prepared for our plan to fail. We should understand what the “levers” in our plan are: what happens when we push on one lever versus pulling on another? We should use models with the understanding that they are not perfect, but that they help us understand things better. We should rely on heuristics and flexible rules of thumb, which bend rather than break when things go wrong.
  5. We should reframe our understanding from a different perspective. We can try drawing things out, writing about them, or even talking about them with our spouse or family. Different viewpoints should be welcomed. We should generate multiple analogies and stories to help tell our side of the story. These will only further our understanding.
  6. When we make decisions under uncertainty and risk, each action can result in multiple outcomes; most of the time these are unpredictable and can have large-scale consequences. We should engage in fast, safe-to-fail experiments and have strong feedback loops so we can change course and adapt as needed (see the sketch after this list).
  7. We should have stable substructures for when things fail. These allow us to fall back to a previous “safe point” rather than go all the way back to the start.
  8. We should go to gemba to grasp the actual conditions and understand the context. Our ability to solve a problem is inversely proportional to the distance from the gemba.
  9. We should take time, as permissible, to detail out our plan, but we should be ready to implement it fast. Plan like a tortoise and run like a hare.
  10. We should go to the top to take a wide perspective, and then come down to have boots on the ground. We should take time to reflect on what went wrong, what went right, and what our impact was on ourselves and others. This is the spirit of Hansei in the Toyota Production System.
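Items 6 and 7 have a direct software analogue: probe with small experiments, and keep a checkpoint so a failed probe rolls back to the last safe point instead of to the start. A minimal sketch; the function names and the dictionary state are my own illustration, not a prescribed method:

    import copy

    def safe_to_fail(state, experiments):
        """Run small probes against a state; on failure, roll back to the
        last checkpoint (stable substructure) and carry on with the rest."""
        checkpoint = copy.deepcopy(state)
        for experiment in experiments:
            try:
                state = experiment(state)          # fast, small-scale probe
                checkpoint = copy.deepcopy(state)  # new safe point on success
            except Exception:
                state = copy.deepcopy(checkpoint)  # fall back, adapt, continue
        return state

    def improve(s):
        """A probe that succeeds: doubles the output."""
        return {**s, "output": s["output"] * 2}

    def risky(s):
        """A probe that fails fast instead of derailing the whole plan."""
        raise RuntimeError("probe failed")

    print(safe_to_fail({"output": 1}, [improve, risky, improve]))  # {'output': 4}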

Final Words:

Although not all of us are engaged in a war at the gemba, we can learn from Clausewitz about the friction from uncertainty, which impedes us on a daily basis. Clausewitz first used the term “friction” in a letter he wrote to his future wife, Marie von Brühl, in 1806. He described friction as the effect that reality has on ideas and intentions in war. Clausewitz was a man ahead of his time, and from his works we can see elements of systems thinking and complexity science.

We propose to consider first the single elements of our subject, then each branch or part, and, last of all, the whole, in all its relations—therefore to advance from the simple to the complex. But it is necessary for us to commence with a glance at the nature of the whole, because it is particularly necessary that in the consideration of any of the parts the whole should be kept constantly in view. The parts can only be studied in the context of the whole, as a “gestalt.”

Clausewitz realized that each war is unique and thus what may have worked in the past may not work this time. He said:

Further, every war is rich in particular facts; while, at the same time, each is an unexplored sea, full of rocks, which the general may have a suspicion of, but which he has never seen with his eye, and round which, moreover, he must steer in the night. If a contrary wind also springs up, that is, if any great accidental event declares itself adverse to him, then the most consummate skill, presence of mind and energy, are required; whilst to those who only look on from a distance, all seems to proceed with the utmost ease.

Clausewitz encourages us to get out of our comfort zone and gain as much variety of experience as we can. The variety of states in the environment is always larger than the variety of states we can hold. He goes on to advise the following to reduce the impact of friction:

The knowledge of this friction is a chief part of that so often talked of, experience in war, which is required in a good general. Certainly, he is not the best general in whose mind it assumes the greatest dimensions, who is the most overawed by it (this includes that class of over-anxious generals, of whom there are so many amongst the experienced); but a general must be aware of it that he may overcome it, where that is possible; and that he may not expect a degree of precision in results which is impossible on account of this very friction. Besides, it can never be learnt theoretically; and if it could, there would still be wanting that experience of judgment which is called tact, and which is always more necessary in a field full of innumerable small and diversified objects, than in great and decisive cases, when one’s own judgment may be aided by consultation with others. Just as the man of the world, through tact of judgment which has become habit, speaks, acts, and moves only as suits the occasion, so the officer, experienced in war, will always, in great and small matters, at every pulsation of war as we may say, decide and determine suitably to the occasion. Through this experience and practice, the idea comes to his mind of itself, that so and so will not suit. And thus, he will not easily place himself in a position by which he is compromised, which, if it often occurs in war, shakes all the foundations of confidence, and becomes extremely dangerous.

US President Dwight Eisenhower said, “In preparing for battle I have always found that plans are useless, but planning is indispensable.” The act of planning helps us conceptualize our future state. We should strive to minimize internal friction, and we should stay open to learning, experimenting, and adapting as needed to reach that future state. We should keep on keeping on:

“Perseverance in the chosen course is the essential counter-weight, provided that no compelling reasons intervene to the contrary. Moreover, there is hardly a worthwhile enterprise in war whose execution does not call for infinite effort, trouble, and privation; and as man under pressure tends to give in to physical and intellectual weakness, only great strength of will can lead to the objective. It is steadfastness that will earn the admiration of the world and of posterity.”

Always keep on learning…

In case you missed it, my last post was Exploring The Ashby Space:

Solving a Lean Problem versus a Six Sigma Problem:


I must confess upfront that the title of this post is misleading. Like the spoon boy in the movie The Matrix, I will say: there is no Lean problem, nor a Six Sigma problem. These problems are our mental constructs of a perceived phenomenon. A problem statement is a model of the actual phenomenon that we believe is the problem. The problem statement is never the problem! It is a representation of the problem. We form the problem statement from our vantage point, with our mental models and biases, so the constructed statement is incomplete and sometimes incorrect. We do not always ask for the problem statement to be reframed from the stakeholder’s viewpoint. A problem statement is an abstraction based on our understanding, and its usefulness lies in the abstraction: a good abstraction omits unwanted details, while a poor abstraction retains them or, worse, omits valid ones. Our own cognitive background hinders our ability to frame the true nature of the problem. To give an analogy, a problem statement is like choosing a slice of cake. The slice represents the cake, but you picked the slice you wanted, you left a large portion of the cake on the table, and nobody else wants your slice once you have taken a bite out of it.

When we have to solve a problem, it puts tremendous cognitive stress on us. Our first instinct is to use what we know and what we feel comfortable with. Both Lean and Six Sigma offer structured frameworks that we feel might suit the purpose. However, depending on what type of “problem” we are trying to solve, these frameworks may lack the variety needed to “solve” it. I have used the quotation marks on purpose. For example, Six Sigma relies on a strong cause-effect relationship and is quite useful for addressing a simple or complicated problem. A simple problem is one where the cause-effect relationship is obvious, whereas a complicated problem may require an expert’s perspective and experience to analyze and understand the cause-effect relationship. However, when you are dealing with a complex problem, which is non-linear and where the cause-effect relationship is not entirely evident, a hard-structured framework like Six Sigma can actually cause more harm than benefit. All human-centered “systems” are complex systems. In fact, some might say that such systems do not even exist. To quote Peter Checkland, “In a certain sense, human activity systems do not exist, only perceptions of them exist, perceptions which are associated with specific worldviews.”

We all want and ask for simple solutions. However, simple solutions do not work for complex problems: the solution must match the variety of the problem being resolved. This can be confusing, since complex problems may have some ordered aspects that give an illusion of simplicity. Complex problems do not stay static. They evolve with time, and we should not assume that the problem we are addressing still has the characteristics it had when it was first identified.
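The demand that solutions “match the variety” of the problem is Ashby’s Law of Requisite Variety. In its standard entropy form (my addition here; the post itself does not spell out the formula):

    H(O) \;\ge\; H(D) - H(R)

where H(D) is the variety of disturbances the problem can produce, H(R) is the variety of responses our framework can deploy, and H(O) is the residual variety of outcomes. The only way to shrink the spread of outcomes is to increase the variety of the regulator: only variety can absorb variety.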

How should one proceed from here to tackle complex problems?

  • Take time to understand the context. In the complex domain, context is key. We need to take our time and exercise due diligence to understand it. We should slow down and feel our way through the landscape, and be willing to break our existing frameworks and create new ones.
  • Embrace diversity. Complex problems require multidisciplinary solutions. We need multiple perspectives and worldviews to improve our overall comprehension of the problem. This also calls for challenging our assumptions; we should make our assumptions and agendas as explicit as possible. The different perspectives allow us to synthesize a better understanding.
  • Similar to the second suggestion, learn from fields of study different from our own. Learn philosophy. Other fields give us additional variety that might come in handy.
  • Understand that our version of the problem statement is incomplete, but it can still be useful: it helps us understand the problem better.
  • There is no one right answer to complex problems. Most solutions are only good enough for now. What worked yesterday may not work today, since complex problems are dynamic.
  • Gain consensus and use scaffolding while working on the problem structure. Scaffolds are temporary structures that are removed once the actual construction is complete. Gaining consensus early on helps align everybody.
  • Go to the source to gain a truer understanding. Genchi Genbutsu.
  • Have the stakeholders reframe the problem statement in their own words, and look for contradictions. Allow for further synthesis to resolve the contradictions; the tension they produce sometimes leads us to improve and refine our mental models.
  • Aim for the common good, and don’t pursue personal gain while tackling complex problems.
  • Establish communication lines and pay attention to feedback. Allow for local context while interpreting any new information.

Final Words:

I have written similar posts before. I invite the reader to check them out:

Lean, Six Sigma, Theory of Constraints and the Mountain

Herd Structures in ‘The Walking Dead’ – CAS Lessons

A successful framework relies on feedback-induced iteration and a keenness to learn. Iteration is imperative because the problem structure itself is often incomplete and inadequate. We should resist the urge to solve “a Six Sigma problem” or “a Lean problem.” I will finish with a great paraphrased quote from the systems thinker Michael Jackson (not the famous singer):

To deal with a significant problem, you have to analyze and structure it. This means, analyzing and structuring the problem itself, not the system that will solve it. Too often we push the problem into the background because we are in a hurry to proceed to a solution. If you read most texts thoughtfully, you will see that almost everything is about the solution; almost nothing is about the problem.

Always keep on learning…

In case you missed it, my last post was Maurice Merleau-Ponty’s Lean Lessons: