A Study of “Organizational Closure” and Autopoiesis:


In today’s post, I am looking at the phrase “Organizational Closure” and the concept of autopoiesis. But before that, I would like to start with another phrase, “Information Tight”. Both phrases are of great importance in the field of Cybernetics. I first came across “Information Tight” in Ross Ashby’s book, “An Introduction to Cybernetics”. Ross Ashby was one of the pioneers of Cybernetics. Ashby said: [1]

Cybernetics might, in fact, be defined as the study of systems that are open to energy but closed to information and control— systems that are “information‐tight”.

This statement can be confusing at first when you look at it from the perspective of Thermodynamics. Ashby is defining “information tight” as being closed to information and control. The cybernetician Bernard Scott views this as: [2]

…an organism does not receive “information” as something transmitted to it, rather, as a circularly organized system it interprets perturbations as being informative.

Here the “tightness” refers to the circular causality of the internal structure of a system. This concept was later developed as “Organizational Closure” by the Chilean biologists Humberto Maturana and Francisco Varela. [3] They were trying to answer two questions:

  • What is the organization of the living?
  • What takes place in the phenomenon of perception?

In answering these two questions, they came up with the concept of Autopoiesis. Auto – referring to self, and poiesis – referring to creation or generation. Autopoiesis means self-generation. Escher’s “Drawing Hands” is a good visualization of this concept. We exist in the continuous production of ourselves.

Escher

As the British organizational theorist John Mingers put it: [4]

Maturana and Varela developed the concept of autopoiesis in order to explain the essential characteristics of living as opposed to nonliving systems. In brief, a living system such as a cell has an autopoietic organization, that is, it is “self-producing.” It consists of processes of production which generate its components. These components themselves participate in the processes of production in a continual recursive re-creation of self. Autopoietic systems produce themselves and only themselves.

John H Little provides further explanation: [5]

Autopoietic systems are self-organizing in that they produce and change their own structures, but they also produce their own components… The system’s production of components is entirely internal and does not depend on an input-output relation with the system environment.

Two important principles underlying autopoiesis are “structural determinism” and “organizational closure.” To understand these principles, it is first necessary to understand the difference between “structure” and “organization” as Maturana uses these terms. “Organization” refers to the relations between components which give a system its identity. If the organization of a system changes, its identity changes. “Structure” refers to the actual components and relations between components that make up a particular example of a type of system.

Conceptually, we may understand the distinction between organization and structure by considering a simple mechanical device, such as a pencil. We generally have little difficulty recognizing a machine which is organized as a “pencil,” despite the fact that a pencil may be structurally built in a variety of ways and of a variety of materials. One organizational type, therefore, may be manifested by any number of different structural arrangements.

Marjatta Maula provides additional information on “organization” and “structure,” two important concepts in autopoiesis.

In autopoiesis theory, the concepts ‘organization’ and ‘structure’ of a system have a specific meaning. ‘Organization’ refers to an idea (such as an idea of airplane or a company in general). ‘Structure’ refers to the actual embodiment of the idea (such as a specific airplane or a specific company). Thus, ‘organization’ is abstract but ‘structure’ is concrete (Mingers, 1997). Over time an autopoietic system may change its components and structure but maintain its ‘organization.’ In this case, the system sustains its identity. If a system’s ‘organization’ changes, it loses its current identity (von Krogh & Roos, 1995). [6]

The most important idea that Maturana and Varela put forward was that an autopoietic system does not take in information from its environment, and an external agent cannot control an autopoietic system. Autopoietic systems are organizationally (or operationally) closed. That is to say, the behavior of the system is not specified or controlled by its environment but entirely by its own structure, which specifies how the system will behave under all circumstances. It is as a consequence of this closure that living systems cannot have “inputs” or “outputs” – nor can they receive or produce information – in any sense in which these would have independent, objective reality outside the system. Put in another way, since the system determines its own behavior, there can be no “instructive interactions” by means of which something outside the system determines its behavior. A system’s responses are always determined by its structure, although they may be triggered by an environmental event. [7]

Although organizationally closed, a system is not disconnected from its environment, but in fact in constant interaction with it. Maturana and Varela (1987) call this ongoing process “structural coupling” (p. 75). System and environment (which will include other systems) act as mutual sources of perturbation for one another, triggering changes of state in one another. Over time, provided there are no destructive interactions between the system and the medium in which it realizes itself (i.e., its environment), the system will appear to an observer to adapt to its environment. What is in fact happening, though, is a process of structural “drift” occurring as the system responds to successive perturbations in the environment according to its structure at each moment. [7]

In other words, the idea of an organism as an information-processing agent is a misunderstanding. This might appear strange at first, but the more you look at it, the more sense it makes. Think about a classroom where a teacher is giving a lecture and the same “information” reaches every student. What type and amount of “information” is taken in, however, depends on each individual student. Maturana explains that the teacher makes the selection (in the form of the lecture), but the teacher cannot make the student accept the “information” in its entirety. A loose analogy is a person pushing a button on a vending machine. The internal structure of the machine determines how to react. If the machine does not have a closed structure inside, it cannot react. The pressing of the button is a perturbation, and the vending machine reacts based on its internal structure at that point in time. If the vending machine is out of order, or if something is blocking the item, the machine will not dispense even if the external agent “desired” the machine to react in a specific way.
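To make the vending machine analogy concrete, here is a minimal Python sketch (my own illustration, not anything from Maturana and Varela) of a structure-determined system. The class and its states are hypothetical; the point is that the button press only triggers a response, while the machine’s internal structure at that moment determines what the response will be.

```python
# Illustrative sketch: the environment triggers, the structure determines.

class VendingMachine:
    def __init__(self, stock, jammed=False, powered=True):
        self.stock = stock        # internal structure: items available
        self.jammed = jammed      # internal structure: mechanical fault
        self.powered = powered    # internal structure: out of order?

    def press_button(self, item):
        """A button press is a perturbation; the response is structure-determined."""
        if not self.powered:
            return "no response"              # out of order: the trigger does nothing
        if self.jammed:
            return "hums, dispenses nothing"  # something is blocking the item
        if self.stock.get(item, 0) > 0:
            self.stock[item] -= 1
            return f"dispenses {item}"
        return "returns coin"

machine = VendingMachine(stock={"soda": 1})
print(machine.press_button("soda"))  # dispenses soda
print(machine.press_button("soda"))  # returns coin: same trigger, changed structure
```

The same push produces different outcomes depending solely on the machine’s state at that instant, which is the sense in which the environment can only trigger, never instruct.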

According to Maturana, all systems consisting of components are structure-determined, which is to say that the actual changes within the system depend on the structure itself at that particular instant. Any change in such a system must be a structural change. If this is the case, then an environmental action cannot determine its own effect on a system. Changes, or perturbations, in the environment can only trigger structural change or compensation. “It is the structure that determines both what the compensation will be and even what in the environment can or cannot act as a trigger” (Mingers, 1995, p. 30).

It is the internal structure of the system at any point in time that determines:

  1. all possible structural changes within the system that maintain the current organization, as well as those that do not, and
  2. all possible states of the environment that could trigger changes of state and whether such changes would maintain or destroy the current organization (Mingers, 1995, p. 30).[5]

As we understand the idea of autopoiesis, we start to realize that it has serious implications. Our abstract concept of a process is shown below:[5]

INPUT -> PROCESS -> OUTPUT

In light of autopoiesis, we can see that this abstraction does not make sense: an autopoietic system cannot accept inputs. Yet we treat information and knowledge as commodities that can be easily coded, stored, and transferred. Autopoiesis demands a new paradigm. As Little continues: [5]

An organizationally closed system is one in which all possible states of activity always lead to or generate further activity within itself… Organizationally closed systems do not have external inputs that change their organization, nor do they have outputs in terms of their organization. Autopoietic systems are organizationally closed and do not have inputs and outputs in terms of their organization. They may appear to have them, but that description only pertains to an observer who can see both the system and its environment, and is a mischaracterization of the system. The idea of organizational closure, however, does not imply that such systems have no interactions with their environment. Although their organization is closed, they still interact with their environment through their structure, which is open.

John Mingers provides further insight: [4]

Consider the idea that the environment does not determine, but only triggers neuronal activity. Another way of saying this is that the structure of the nervous system at a particular time determines both what can trigger it and what the outcome will be. At most, the environment can select between alternatives that the structure allows. This is really an obvious situation of which we tend to lose sight. By analogy, consider the humming computer on my desk. Many interactions, e.g., tapping the monitor and drawing on the unit, have no effect. Even pressing keys depends on the program recognizing them, and pressing the same key will have quite different effects depending on the computer’s current state. We say, “I’ll just save this file,” and do so with the appropriate keys as though these actions in themselves bring it about. In reality the success (or lack of it) depends entirely on our hard-earned structural coupling with the machine and its software in a wider domain, as learning a new system reminds us only too well.

Another counterintuitive idea, which further elaborates the autopoietic system’s autonomous nature and its “independence” from the external agent, was put forth by the German sociologist Niklas Luhmann:

The memory function never relates to facts of the outer world . . . but only to the states of the system itself. In other words, a system can only remember itself.

An obvious question at this point is: if a system is so independent of its environment, how does it come to be so well adjusted, and how do systems come to develop such similar structures? [4]

The answer lies in Maturana’s concept of structural coupling. An autopoietic organization is realized in a particular structure. In general, this structure will be plastic, i.e., changeable, but the changes that it undergoes all maintain autopoiesis so long as the entity persists. (If it suffers an interaction which does not maintain autopoiesis, then it dies.) While such a system exists in an environment which supplies it with necessities for survival, then it will have a structure suitable for that environment or autopoiesis will not continue. The system will be structurally coupled to its medium. This, however, is always a contingent matter and the particular structure that develops is determined by the system. More generally, such a system may become structurally coupled with other systems – the behavior of one becomes a trigger for the other, and vice versa.
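As a toy numerical sketch of this mutual triggering (my own construction, not from Mingers), consider two systems that each compensate for perturbations strictly according to their own rule. To an observer they appear to adapt to one another, though each only ever responds according to its own structure:

```python
# A toy sketch of structural coupling: two systems perturb one another,
# and each change is determined by the perturbed system's own rule.

def respond(own_state, perturbation):
    """Structure-determined compensation: drift a quarter of the way toward the trigger."""
    return own_state + 0.25 * (perturbation - own_state)

a, b = 0.0, 10.0
for step in range(10):
    a = respond(a, perturbation=b)  # b's behavior triggers a change in a
    b = respond(b, perturbation=a)  # a's behavior triggers a change in b
    print(f"step {step}: a={a:.2f}, b={b:.2f}")

# Over time a and b converge: an observer would say they have "adapted"
# to each other, yet neither was instructed by the other.
```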

Maturana and Varela did not extend the concept of autopoiesis to a larger level such as a society or an organization. Several others took this idea and went further. [8]

Using the tenets of autopoietic theory, Zeleny (2005) interprets organizations as networks of interactions, reactions and processes identified by their organization (network of rules of coordination) and differentiated by their structure (specific spatio-temporal manifestations of applying the rules of coordination under specific conditions or contexts). Following these definitions, Zeleny argues that the only way to make organizational change effective is to change the rules of behavior (the organization) first and then change processes, routines, and procedures (the structure). He explains that it is the system of the rules of coordination, rather than the processes themselves, that defines the nature of recurrent execution of coordinated action (recurrence being the necessary condition for learning to occur). He states: ‘Organization drives the structure, structure follows organization, and the observer imputes function’.

Espejo, Schumann, Schwaninger, and Bilello (1996) adopt similar terminology, but instead of organization they refer to an organization’s identity as the element that defines any organization, explaining that it is the relationships between the participants that create the distinct identity for the network or the group. Organization is then defined as ‘a closed network of relationships with an identity of its own’. While organizations may share the same kind of identity, they are distinguished by their structures. People’s relationships form routines, involving roles, procedures, and uses of resources that constitute stable forms of interaction. These allow the integrated use and operation of the organization’s resources. The emergent routines and mechanisms of interaction then constitute the organization’s structure. Hence, just like any autopoietic entity, organizations as social phenomena are characterized by both an organization (or identity) and a structure. The rules of interaction established by the organization and the execution of the rules exhibited by the structure form a recursive bond.

Final Words:

I highly encourage readers to pursue a deeper understanding of autopoiesis. It is an important concept, and one that requires a shift in your thinking.

I will finish off with an example of an autopoietic system that is not living: the von Neumann probe. Von Neumann probes are named after John von Neumann, one of the most prolific polymaths of the last century. A von Neumann probe is an ingenious solution for fast space exploration: a spacecraft loaded with an algorithm for self-replication. When it reaches a suitable celestial body, it mines the required raw materials and builds a copy of itself, complete with the algorithm for self-replication. The new spacecraft then proceeds to explore space in a different direction. The self-replication process continues with every copy, growing exponentially. You may like this post about John von Neumann.

Always keep on learning…

In case you missed it, my last post was The Illegitimate Sensei:

[1] An Introduction to Cybernetics – Ross Ashby

[2] Second-order cybernetics: an historical introduction – Bernard Scott

[3] Autopoiesis and Cognition: The Realization of the Living – Francisco Varela and Humberto Maturana

[4] The Cognitive Theories of Maturana and Varela – John Mingers

[5] Maturana, Luhmann, and Self-Referential Government – John H Little

[6] Organizations as Learning Systems – Marjatta Maula

[7] Implications of The Theory Of Autopoiesis For The Discipline And Practice Of Information Systems – Ian Beeson

The Cybernetic View of Quality Control:


My last post was a review of Mark Graban’s wonderful book, Measures of Success. After reading Graban’s book, I started rereading Walter Shewhart’s books, Statistical Method from the Viewpoint of Quality Control (edited by Dr. Deming) and Economic Control of Quality of Manufactured Product. Both are excellent books for any Quality professional. One of the themes that stood out for me while reading the two books was the concept of Cybernetics. Today’s post is the result of studying Shewhart’s books along with articles on cybernetics by Paul Pangaro.

The term “cybernetics” has its origins in the Greek word κυβερνήτης, meaning “steersman”; cybernetics is generally translated as “the art of steering”. Norbert Wiener, the great American mathematician, made the term famous with his 1948 book, Cybernetics: Or Control and Communication in the Animal and the Machine. Wiener adapted the Greek word to evoke the rich interaction of goals, predictions, actions, feedback, and response in systems of all kinds.

Loosely put, cybernetics is about having a goal and a self-correcting system that adjusts to perturbations in the environment so that the system can keep moving towards the goal. This is referred to as “First Order Cybernetics”. An example, remaining true to the Greek origin of the word, is a ship sailing towards a destination. When there are perturbations in the form of wind, the steersman adjusts the path accordingly and maintains the course. Another common example is a thermostat. The thermostat is able to maintain the required temperature inside the house by adjusting according to the external temperature. The thermostat “kicks on” when a specified temperature limit is tripped and cools or heats the house. An important concept in cybernetics is Ross Ashby’s “law of requisite variety”, which states that only variety can absorb variety. If the wind is extreme, the steersman may not be able to steer the ship properly; in other words, the steersman lacks the requisite variety to handle or absorb the external variety. The main mechanism of cybernetics is the closed feedback loop that helps the steersman adjust accordingly to maintain the course. This is the art of the regulation loop – compare, act, and sense.
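As a rough illustration of such a first-order loop, here is a minimal Python sketch of a thermostat. The numbers and the on/off rule are assumptions for the example, not a model of a real controller; the point is the compare-act-sense cycle absorbing the environment’s perturbations.

```python
# A minimal first-order cybernetic loop (illustrative only).

import random

goal = 21.0          # the goal: desired temperature (deg C)
temperature = 17.0   # current room temperature

for step in range(12):
    # Compare: error between the goal and the sensed state
    error = goal - temperature

    # Act: simple on/off control
    if error > 0.5:
        action, effect = "heat", +1.0
    elif error < -0.5:
        action, effect = "cool", -1.0
    else:
        action, effect = "idle", 0.0

    # Sense: the environment perturbs the room; the loop absorbs the variety
    perturbation = random.uniform(-0.4, 0.4)
    temperature += effect + perturbation

    print(f"step {step:2d}: {temperature:5.2f} C, action={action}")
```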

Warren McCulloch, the American cybernetician, explained cybernetics as follows:

Narrowly defined it (cybernetics) is but the art of the helmsman, to hold a course by swinging the rudder so as to offset any deviation from that course. For this the helmsman must be so informed of the consequences of his previous acts that he corrects them – communication engineers call this ‘negative feedback’ – for the output of the helmsman decreases the input to the helmsman. The intrinsic governance of nervous activity, our reflexes, and our appetites exemplify this process. In all of them, as in the steering of the ship, what must return is not energy but information. Hence, in an extended sense, cybernetics may be said to include the timeliest applications of the quantitative theory of information.

Walter Shewhart’s ideas on statistical control work well with these cybernetic ideas. Shewhart purposefully used the term “control” for his field, and control, as explained above, is a key concept in cybernetics. Shewhart defined control as:

A phenomenon is said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within the given limits.

Shewhart expanded further:

The idea of control involves action for the purpose of achieving a desired end. Control in this sense involves both action and a specified end.

…We should keep in mind that the state of statistical control is something presumably to be desired, something to which one may hope to attain; in other words, it is an ideal goal.

Shewhart’s view of control aligns very well with the teleological aspects of cybernetics. From here, Shewhart develops his famous Shewhart cycle as a means to maintain statistical control. Shewhart wrote:

Three steps in quality control. Three senses of statistical control. Broadly speaking, there are three steps in a quality control process: the specification of what is wanted, the production of things to satisfy the specification, and the inspection of things produced to see if they satisfy the specification.

The three steps (making a hypothesis, carrying out an experiment, and testing the hypothesis) constitute a dynamic scientific process of acquiring knowledge. From this viewpoint, it is better to show them as forming a sort of spiral gradually approaching a circular path, which would represent the idealized case where no evidence found in the testing of the hypothesis indicates a need for changing the hypothesis. Mass production viewed in this way constitutes a continuing and self-corrective method for making the most efficient use of raw and fabricated materials.

The Shewhart cycle as he proposed is shown below:

Shewhart cycle1

One of the criteria Shewhart set for his model was that it should be as simple as possible and adaptable in a continuing and self-corrective operation of control. The idea of self-correction, as part of maintaining the course, is a key point of cybernetics.

The brilliance of Shewhart was in providing guidance on when we should react and when we should not react to the variations in the data. He stated that a necessary and sufficient condition for statistical control is to have a constant system of chance causes… It is necessary that differences in the qualities of a number of pieces of a product appear to be consistent with the assumption that they arose from a constant system of chance causes… If a cause system is not constant, we shall say that an assignable cause is present.

Shewhart continued:

My own experience has been that in the early stages of any attempt at control of a quality characteristic, assignable causes are always present even though the production operation has been repeated under presumably the same essential conditions. As these assignable causes are found and eliminated, the variation in quality gradually approaches a state of statistical control as indicated by the statistics of successive samples falling within their control limits, except in rare instances.

We are engaging in a continuing, self-corrective operation designed for the purpose of attaining a state of statistical control.

The successful quality control engineer, like the successful research worker, is not a pure reason machine but instead is a biological unit reacting to and acting upon an everchanging environment.
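As a rough code illustration of Shewhart’s guidance on when to react (my own sketch, using the common XmR individuals-chart convention rather than formulas quoted from Shewhart’s books), the snippet below computes natural process limits from made-up measurements and flags points outside them as candidates for an assignable cause:

```python
# Sketch of Shewhart-style limits on an individuals chart (illustrative).

def control_limits(data):
    """Centerline and natural process limits from the individual values."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    # 2.66 converts the average moving range into 3-sigma-equivalent limits
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.9, 13.5, 10.0]
center, lcl, ucl = control_limits(measurements)

for i, x in enumerate(measurements):
    verdict = "assignable cause?" if not (lcl <= x <= ucl) else "chance variation"
    print(f"point {i}: {x:5.1f}  [{verdict}]")
print(f"centerline={center:.2f}, limits=({lcl:.2f}, {ucl:.2f})")
```

Points inside the limits are treated as the working of a constant system of chance causes; only a point outside the limits justifies hunting for an assignable cause.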

James Wilk defined cybernetics as:

Cybernetics is the study of justified intervention.

This is an apt definition when we look at quality control, as viewed by Shewhart. We have three options when it comes to quality control (a rough sketch in code follows the list):

  1. If we have an unpredictable system, then we work to eliminate the causes of signals, with the aim of creating a predictable system.
  2. If we have a predictable system that is not always capable of meeting the target, then we work to improve the system in a systematic way, aiming to create a new system whose results now fluctuate around a better average.
  3. When the range of predictable performance is always better than the target, then there is less of a need for improvement. We could, however, choose to change the target and then continue improving in a systematic way.

Source: Measures of Success (Mark Graban, 2019)
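A hypothetical sketch of that three-way decision in Python (the predicates and the wording of the actions are my own, not Graban’s):

```python
# Toy dispatch of the three quality-control options listed above.

def next_action(predictable: bool, always_meets_target: bool) -> str:
    if not predictable:
        # Option 1: eliminate the causes of signals first
        return "work to create a predictable system"
    if not always_meets_target:
        # Option 2: improve the system itself, shifting its average
        return "improve the system systematically toward a better average"
    # Option 3: predictable performance already beats the target
    return "leave it alone, or raise the target and keep improving"

print(next_action(predictable=False, always_meets_target=False))
print(next_action(predictable=True,  always_meets_target=False))
print(next_action(predictable=True,  always_meets_target=True))
```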

Final Words:

Shewhart wrote “Statistical Method from the Viewpoint of Quality Control” in 1939, nine years before Wiener’s Cybernetics book. The use of statistical control allows us to have a conversation with a process. The process tells us what the limits are, and as long as the data points fall randomly within the two limits, we can assume that whatever we are seeing is due to chance, or natural variation. The data should be random and without any order. When we see some manner of order, such as a trend or a point outside the limits, we should look for an assignable cause; the data points are not necessarily due to chance anymore. As we keep plotting, we should improve our process and recalculate the limits.

I will finish off with Dr. Deming’s enhancement of Shewhart’s cycle, taken from a presentation by Clifford L. Norman. This was part of the evolution of the PDSA (Plan-Do-Study-Act) cycle, which later became famous as the PDCA (Plan-Do-Check-Act) cycle. Deming’s version showed only three steps, with a decision point after step 3.

Shewhart cycle2

The updated cycle has lots of nuggets in it, such as experimenting on a small scale and reflecting on what we learned.

Always keep on learning…

In case you missed it, my last entry was My Recent Tweets:

Note: The updated Shewhart cycle was added to the post after a discussion with Benjamin Taylor (Syscoi.com).

Exploring The Ashby Space:


Today’s post is a follow-up to an earlier post, Solving a Lean Problem versus a Six Sigma Problem:

In today’s post, I am looking at “The Ashby Space.” The post is based on the works of Ross Ashby, Max Boisot, Bill McKelvey, and Karl Weick. Ross Ashby was a prominent cybernetician famous for his “Law of Requisite Variety,” which can be stated as “only variety can destroy/absorb variety.” Ashby defined variety as the number of distinguishable states of a system. Stafford Beer used variety as a measure of complexity: the more variety a system has, the more complex it is. An important point to grasp is that the number of distinguishable states, and thus the variety, depends upon the ability of the observer; variety is, in this regard, observer-dependent.

Max Boisot and Bill McKelvey expanded upon the Law of Requisite Variety and stated that only complexity can destroy complexity; in other words, only internal complexity can destroy external complexity. If a system’s regulator does not have the requisite variety to match the variety of its environment, the system will not be able to adapt and survive. Ashby explained this using the example of a fencer:

If a fencer faces an opponent who has various modes of attack available, the fencer must be provided with at least an equal number of modes of defense if the outcome is to have the single value: attack parried.

Boisot and McKelvey restated Ashby’s law as follows: the range of responses that a living system must be able to marshal in its attempt to adapt to the world must match the range of situations – threats and opportunities – that it confronts. They explained this further using a graphical depiction they termed “the Ashby Space.” The Ashby Space has two axes: the horizontal axis represents the Variety of Responses, and the vertical axis represents the Variety of Stimuli. Ashby’s law is represented by the 45˚ diagonal line, where the stimuli variety matches the response variety. To adapt and survive, we should be on the diagonal line or below it. If we are above the diagonal line, the external variety surpasses the internal variety available, and we perish. Using Ashby’s fencer example, the fencer is able to defend against the opponent only if his defense variety matches or exceeds the opponent’s offense variety. This is shown below (a toy code sketch follows the figure).

Ashby1
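A toy Python sketch of this requisite-variety check, with made-up sets of fencing moves (the move names are my own, not Ashby’s):

```python
# Variety here is simply the count of distinguishable states.

attacks  = {"lunge", "feint", "cut", "flick"}           # opponent's variety: 4
defenses = {"parry4", "parry6", "distance", "counter"}  # fencer's variety: 4

V_stimuli, V_responses = len(attacks), len(defenses)

if V_responses >= V_stimuli:
    print("On or below the diagonal: every attack can, in principle, be parried.")
else:
    print("Above the diagonal: some attacks cannot be absorbed; the fencer perishes.")
```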

Boisot and McKelvey also depicted the Ordered, Complex, and Chaotic regimes in the Ashby Space. In the ordered regime, the cause-effect relationships are distinguishable, and the variety is generally low. The complex regime has a higher variety of stimuli present and requires a higher variety of responses; the cause-effect relationships are non-linear and may make sense only in hindsight. The chaotic regime has the most variety of stimuli. This is depicted in the schematic below. Although the three regimes may appear equally sized in the schematic, this is just for representational purposes.

Ashby2

The next idea that we will explore in the Ashby Space is the Adaptive Frontier. Ashby proposed a strong need for reducing the amount of variety coming from the external environment; he viewed this as the role of regulation. Ashby pointed out that the amount of regulation that can be achieved is limited by the amount of information that can be transmitted and processed by the system. This idea is depicted by the Adaptive Frontier curve. Any variety that lies outside this curve is outside the “adaptation budget” of the system: the system has neither the resources nor the capacity to process all the incoming variety, nor to allocate resources to choose appropriate responses. The adaptive frontier is shown in the schematic below as the red dotted curve.

Ashby3

Combining all the ideas above, the Ashby Space can be depicted as below.

Ashby Space

Boisot and McKelvey detail three types of response that a living system might pursue in the presence of external stimuli. Consider the schematic below, where the agent is located at point “Q” in the Ashby Space, facing stimuli variety X.

  1. The Behaviorist – This is also referred to as the “headless chicken response”. When presented with the stimuli variety, X, the agent will pursue the headless chicken response of trying to match the high variety in a haphazard fashion and soon finds himself outside the adaptive frontier and perishes. The agent fails to filter out any unwanted stimuli and fails to process meaningful information out of the incoming data.
  2. The Routinizer – The routinizer interprets the incoming stimuli as “seen it all before.” They filter out too much of the incoming data and fail to recognize patterns, or miscategorize them. The routinizer uses the schema they already have, and their success lies in how well that schema matches the real-world variety-reducing regularities confronting the agent.
  3. The Strategist – An intelligent agent has to correctly interpret the data first, and extract valid information about relevant regularities from the incoming stimuli. The agent then has to use existing schema and match against existing patterns. If the patterns do not match, the agent will have to develop new patterns. As you go up in the Ashby space, the complexity increases, and as you go down, the complexity decreases. The schemas should have the required complexity to match the incoming stimuli. The agent should also be aware of the adaptive frontier and stay within the resource budget constraints. The strategist will try to filter out noise, use/develop appropriate schemas and generate effectively complex responses.

Ashby4

Final Words:

The Ashby Space is a great representation to keep in mind while coping with complexity. The ability of a system to discern what is meaningful and what is noise depends on the system’s past experiences, world views, biases, and what it construes as morals and values. Boisot and McKelvey note that:

Not everything in a living system’s environment is relevant or meaningful for it, however. If it is not to waste its energy responding to every will-o’-the-wisp, a system must distinguish schema based on meaningful information (signals about real-world regularities judged important) from noise (meaningless signals). Note that what constitutes information or noise for a system is partly a function of the organism’s own expectations, judgments, and sensory abilities about what is important —as well as of its motivations— and hence, of its models of the world. Valid and timely representations (schema) economize on the organism’s scarce energy resources.

This also points to the role of sensemaking. As Karl Weick notes, “an increase in complexity can increase perceived uncertainty… Complexity affects what people notice and ignore… The variety in a firm’s repertory of beliefs should affect the amount of time it spends consciously struggling to make sense. The greater the variety of beliefs in a repertoire, the more fully should any situation be seen, the more solutions identified, and the more likely it should be that someone knows a great deal about what is happening.”

The models or representations we construct to represent a phenomenon do not have to be as complex as the phenomenon itself, just as the usefulness of a map lies in its abstraction. If the map were as complex as the city it represented, it would become an exact replica of the city, with all the roads, buildings, and so on. The system, however, should have the requisite variety: it should be able to filter out unwanted variety and amplify meaningful variety. The agent must wait for “meaningful” patterns to emerge, and keep learning.

The agent must also be careful not to claim victory or “Mission Accomplished” when dealing with complexity. Some portion of the stimuli variety may be met with the existing schema as part of routinizing; however, this does not mean that the requisite variety has been achieved. A broken clock tells the correct time twice a day, but that does not mean the clock is functional.

I will finish off with a great insight from Max Boisot:

Note that we do not necessarily require an exact match between the complexity of the environment and the complexity of the system. After all, the complexity of the environment might turn out to be either irrelevant to the survival of the system or amenable to important simplifications. Here, the distinction between complexity as subjectively experienced and complexity as objectively given is useful. For it is only where complexity is in fact refractory to cognitive efforts at interpretation and structuring that it will resist simplification and have to be dealt with on its own terms. In short, only where complexity and variety cannot be meaningfully reduced do they have to be absorbed. So an interesting way of reformulating the issue that we shall be dealing with in this article is to ask whether the increase in complexity that confronts firms today has not, in effect, become irreducible or “algorithmically incompressible”? And if it has, what are the implications for the way that firms strategize?

Always keep on learning…

In case you missed it, my last post was Nietzsche’s Overman at the Gemba:

I welcome the reader to read further upon the ideas of Ross Ashby. Some of the references I used are:

  1. An Introduction to Cybernetics, Ross Ashby (1957)
  2. Requisite variety and its implications for the control of complex systems, Cybernetica 1:2, p. 83-99, Ross Ashby (1958)
  3. Complexity and Organization–Environment Relations: Revisiting Ashby’s Law of Requisite Variety, Max Boisot and Bill McKelvey (2011)
  4. Knowledge, Organization, and Management. Building on the Work of Max Boisot, Edited by John Child and Martin Ihrig (2013)
  5. Connectivity, Extremes, and Adaptation: A Power-Law Perspective of Organizational Effectiveness, Max Boisot and Bill McKelvey (2011)
  6. Counter-Terrorism as Neighborhood Watch: A Socio/Computational Approach for Getting Patterns from Dots, Max Boisot and Bill McKelvey (2004)
  7. Sensemaking in Organizations (Foundations for Organizational Science), Karl Weick (1995)