Consistency over Completeness:

Today’s post is a follow-up of sorts to my earlier post – The Truth about True Models. In that post, I talked about Dr. Donald Hoffman’s idea of the Fitness-Beats-Truth or FBT Theorem. Loosely put, the idea behind the FBT Theorem is that we have evolved not to have “true” perceptions of reality. We survived because we had “fitness”-based models, not because we had “true” models. In today’s post, I am continuing this idea using ideas from Heinz von Foerster, one of my Cybernetics heroes.

Heinz von Foerster came up with “the postulate of epistemic homeostasis”. This postulate states:

The nervous system as a whole is organized in such a way (organizes itself in such a way) that it computes a stable reality.

It is important to note here that we are speaking about computing “a” reality and not “the” reality. Our nervous system is informationally closed (to follow up from the previous post). This means that we do not have direct access to the reality outside. All we have is what we can perceive through our perceptual framework. The famous philosopher Immanuel Kant referred to this as the noumena (the reality that we don’t have direct access to) and the phenomena (the perceived representation of the external reality). All we can do is compute a reality based on our interpretive framework. This is just one version of reality, and each of us computes a reality that is unique to us.

The other concept to make note of is the “stable” part of the stable reality. In Gödelian* terms, our nervous system cares more about consistency than completeness. When we encounter a phenomenon, our nervous system looks at stable correlations from the past and present, and computes a sensation that confirms the perceived representation of the phenomenon. Von Foerster gives the example of a table. We can see the table, touch it, and maybe bang on it. With each of these confirmations and correlations between the different sensory inputs, the table becomes more and more a “table” to us.

*Kurt Gödel, one of the famous logicians of the last century, came up with the idea that any formal system able to do elementary arithmetic cannot be both complete and consistent; it is either incomplete or inconsistent.

From the cybernetics standpoint, we are talking about an observer and the observed. The interaction between the observer and the observed is an act of computing a reality. The first step in computing a reality is making distinctions. If there are no distinctions, everything about the observed will be uniform, and no information can be processed by the observer. Thus, the first step is to make distinctions. The distinctions refer to the variety of the observed. The more distinctions there are, the more variety the observed has. From a second order cybernetics standpoint, the variety of the observed depends upon the variety of the observer. This goes back to the unique stable reality computation point from earlier. Each one of us is unique in how we perceive things. This is our variety as the observer. The observed, that which is external to us, always has more potential variety than we do. We cut down or attenuate this high variety by choosing certain attributes that interest us. Once the distinctions are made, we find relations between these distinctions to make sense of it all. This corresponds to the confirmations and correlations that we noted above in the example of a table.
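The notion of variety and its attenuation can be made concrete with a small sketch (a toy illustration of mine, loosely after Ashby, not from the post): variety is counted as the number of distinct states the observer can distinguish, and attending to only a few attributes attenuates it.

```python
# Toy illustration: variety as the number of distinct states an
# observer can distinguish, and attenuation as the drop in variety
# when the observer attends to only some attributes of the observed.

observations = [
    {"color": "brown", "material": "wood",  "legs": 4, "height_cm": 75},
    {"color": "brown", "material": "wood",  "legs": 4, "height_cm": 74},
    {"color": "white", "material": "metal", "legs": 4, "height_cm": 75},
    {"color": "brown", "material": "metal", "legs": 3, "height_cm": 90},
]

def variety(obs, attributes):
    """Number of distinct states distinguishable via the chosen attributes."""
    return len({tuple(o[a] for a in attributes) for o in obs})

full = variety(observations, ["color", "material", "legs", "height_cm"])
attenuated = variety(observations, ["color"])  # observer attends to color only

print(full)        # 4 -- all four observations are distinguishable
print(attenuated)  # 2 -- only "brown" vs "white" remain distinguishable
```

The observed offers more potential variety than the observer takes up; which attributes are attended to (and hence which distinctions survive) is the observer’s choice.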

We are able to survive in our environment because we are able to continuously compute a stable reality. The stability comes from the recursive computations of what is being observed. For example, let’s go back to the example of the table. Our eyes receive the sensory input of the image of the table. This is the first computation. This sensory image then goes up the “neurochain,” where it is computed again. This happens again and again as the input gets “decoded” at each level, until it gets satisfactorily decoded by our nervous system. The final result is a computation of a computation of a computation, and so on. The stability is achieved from this recursion.
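The stabilizing effect of recursion can be illustrated with a toy computation (a sketch of mine in the spirit of von Foerster’s “eigenforms,” not his formalism): applying the same operation to its own output, over and over, settles into a stable value regardless of the starting point.

```python
# Toy sketch: a "computation of a computation of a computation ..."
# converging to a stable value -- an eigenform of the operation.
import math

x = 1.0                  # arbitrary initial "sensation"
for _ in range(100):
    x = math.cos(x)      # the same computation applied to its own result

# The iteration converges to the fixed point x = cos(x) ~ 0.739,
# whatever the starting value: stability achieved through recursion.
print(round(x, 3))  # 0.739
```

The stable value belongs to the recursive process, not to the input: different starting “sensations” are drawn to the same result, which is one way to picture how a closed nervous system can compute something stable.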

The idea of consistency over completeness is quite fascinating. It stems mainly from the inability of our nervous system to have a true representation of reality. There is a common belief that we live with uncertainty, yet our nervous system strives to provide us a stable version of reality, one that is devoid of uncertainties. We are able to think about this only from a second order standpoint. We are able to ponder our cognitive blind spots because we are able to do second order cybernetics. We are able to think about thinking. We are able to put ourselves into the observed. Second order cybernetics is the study of observing systems, where the observers themselves are part of the observed system.

I will leave the reader with a final thought: the act of observing oneself is also a computation of “a” stable reality.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Wittgenstein and Autopoiesis:

Cybernetic Explanation, Purpose and AI:

In today’s post, I am following the theme of cybernetic explanation that I talked about in my last post – The Monkey’s Prose – Cybernetic Explanation. I recently listened to the talks given as part of the Tenth International Conference on Complex Systems. I really enjoyed the keynote speech by the Herbert A. Simon Award winner, Melanie Mitchell. She told the story of a project by one of her students in which an AI was able to recognize, with good accuracy, whether or not there was an animal in a picture. Her student dug deep into the AI’s model. The AI is taught to identify a characteristic by being shown a large dataset (in this case, pictures with and without animals). The AI is shown which pictures have an animal and which do not, and it comes up with an algorithm based on this large dataset. The correct answers reinforce the algorithm, and the wrong answers tweak the algorithm, with weights assigned to the “incorrectness.” This is very much like how we learn. What Mitchell’s student found was that the AI was assigning probabilities based on whether the background was blurry or not. When the background is blurry, it is more likely that there is an animal in the picture. In other words, the AI is not looking for an animal; it is just looking to see whether the background is blurry. Depending upon the statistical probability, the AI will answer that there is or there is not an animal in the picture.
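The “shortcut” that the student uncovered can be sketched with a deliberately tiny example (my own toy, not Mitchell’s actual model): when a background feature perfectly co-occurs with the label in the training data, even a simple learner keys on that feature and fires on it when no animal is present at all.

```python
# Toy "shortcut learning" sketch: a perceptron trained on data in
# which background blur always co-occurs with the "animal" label.
# Feature vector: (background_blur, animal_present); label: animal.
train = [
    ((1.0, 1.0), 1),  # blurry background, animal    -> "animal"
    ((1.0, 1.0), 1),
    ((0.0, 0.0), 0),  # sharp background, no animal  -> "no animal"
    ((0.0, 0.0), 0),
]

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                      # simple perceptron updates
    for (x1, x2), y in train:
        pred = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = y - pred                   # wrong answers tweak the weights
        w[0] += err * x1
        w[1] += err * x2
        b += err

def predict(x1, x2):
    return 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0

# A blurry picture with NO animal is still classified as "animal":
print(predict(1.0, 0.0))  # 1 -- blur alone triggers the "animal" answer
```

Because blur and animal never come apart in the training data, the learner has no way to tell which feature “matters”; the shortcut only shows up when the model is probed with inputs outside the training correlation, which is essentially what Mitchell’s student did.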

We, humans, assign the meaning to the AI’s output and believe that the AI is able to differentiate whether there is an animal in the picture or not. In actuality, the AI is merely using statistical probabilities based on whether the background is blurry. We cannot help but assign meanings to things. We say that nature has a purpose, or that evolution has a purpose. We assign causality to phenomena. It is interesting to think about whether it truly matters that the AI is not really identifying the animal in the picture. The outcome still has the appearance that the AI is able to tell whether or not there is an animal in the picture. We are able to bring in more concepts than the AI can. Mitchell discusses the difference between concepts and perceptual categories. What the AI constructs are perceptual categories that are limited in nature, whereas what we construct are concepts that may be linked to other concepts. The example that Mitchell provided was that of a bridge. For us, a bridge can mean many things based on the linguistic application. We can say that a person is able to “bridge the gap” or that our nose has a bridge. The capacity of AI, at this time at least, is to stick to the bridge being a perceptual category based on the context of the data it has. We can talk in metaphors that the AI cannot understand. A bridge can be a concept or an actual physical thing for us. A simple task such as telling whether there is an animal in a picture carries no risk. However, as we up the ante to a task such as autonomous driving, we can no longer rely on the appearance that the AI is able to carry out the task. This is demonstrated in the morality and ethics debate with regard to AI, and how it should carry out probability calculations in the event of a hazard. This involves questions such as the ones in the trolley problem.

This also leads to another idea that has the cybernetic explanation embedded in it. This is the idea of “do no harm”. The requirement is not specifically to do good deeds, but to not do things that will cause harm to others. As the English philosopher, John Stuart Mill put it:

That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.

 This is also what Isaac Asimov referred to as the first of the three laws of robotics in his 1942 short story, Runaround:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The other two laws refer back to the first law:

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The idea of cybernetic explanation gives us another perspective on purpose and meaning. Our natural disposition is to assign meaning and purpose, as I indicated earlier. We tend to believe that Truth is out there or that there is an objective reality. As the great Cybernetician Heinz von Foerster put it – “The environment contains no information; the environment is as it is.” Truth, or descriptions of reality, are our creations, made with our vocabulary. And most importantly, there are other beings describing realities with their vocabularies as well. I will finish with some wise words from Friedrich Nietzsche.

“It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically.”

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was The Monkey’s Prose – Cybernetic Explanation:

Complexity – Only When You Realize You Are Blind, Can You See:

In today’s post, I am looking at the idea of complexity from a second order Cybernetics standpoint. The phrase “only when you realize you are blind, can you see”, is a paraphrase of a statement from the great Heinz von Foerster. I have talked about von Foerster in many of my posts, and he is one of my heroes in Cybernetics. There is no one universally accepted definition for complexity. Haridimos Tsoukas and Mary Jo Hatch wrote a very insightful paper called “Complex Thinking, Complex Practice”. In the paper, they try to address how to explain complexity. They refer to the works of John Casti and C. H. Waddington to further their ideas:

Waddington notes that complexity has something to do with the number of components of a system as well as with the number of ways in which they can be related… Casti defines complexity as being ‘directly proportional to the length of the shortest possible description of [a system]’.

Casti’s views on complexity are particularly interesting because complexity is not viewed as being intrinsic to the phenomenon. This is a common idea in Cybernetics, mainly second order cybernetics. There are two ‘classifications’ of cybernetics – first order and second order. As von Foerster explained it, first order cybernetics is the study of observed systems, where the basic assumption is that the system is objectively knowable. Second order cybernetics is the study of observing systems, where the basic assumption is that the observer is included in the act of observing, and thus the observer is part of the observed system. This leads to second order thinking such as understanding understanding or observing observing. It is interesting because, as I am typing, Microsoft Word is telling me that “understanding understanding” is syntactically incorrect. This obviously would be a first order viewpoint. Second order cybernetics is a meta discipline, one that generates wisdom.
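Casti’s “length of the shortest possible description” has a rough computational flavor to it. The sketch below (my illustration; compressed size is only a crude stand-in for description length, and the incompressible string is a hypothetical stand-in for patternless behavior) shows how patterned behavior admits a short description while irregular behavior does not.

```python
# Rough proxy for "length of the shortest possible description":
# the compressed size of a record of the system's behavior.
import hashlib
import zlib

regular = b"ab" * 512          # highly patterned behavior, 1024 bytes
# a fixed, effectively patternless string of the same length
# (SHA-256 digests used only as a deterministic source of noise)
irregular = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

print(len(zlib.compress(regular)))    # short description: low complexity
print(len(zlib.compress(irregular)))  # long description: high complexity
```

The two strings are the same length, yet one compresses to a handful of bytes and the other barely compresses at all; on Casti’s account, only the second would count as complex.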

When we take the observer into consideration, we realize that complexity is in the eyes of the beholder. Complexity is observer-dependent; that is, it depends upon how the system is described and interpreted. If the observer is able to make more varying distinctions in their description, we can say that the phenomenon or the system is being interpreted as complex. In their paper, Tsoukas and Hatch bring up the role of language in describing, and thus interpreting, complexity. They note that:

Chaos and complexity are metaphors that posit new connections, draw our attention to new phenomena, and help us see what we could not see before (Rorty).

This is quite interesting. When we learn the language of complexity, we are able to understand complexity better, and we become better at describing it, in a reflexive manner.

What complexity science has done is to draw our attention to certain features of systems’ behaviors which were hitherto unremarked, such as non-linearity, scale-dependence, recursiveness, sensitivity to initial conditions, emergence (etc.)

From this standpoint, we can say that complexity lies in the interactions we have with the system, and depending on our perspectives (vantage point) and the interaction we can come away with a different interpretation for complexity.

Heinz von Foerster remarked that complexity is not in the world but rather in the language we use to describe the world. Paraphrasing von Foerster, cognition is the computation of descriptions of reality. Managing complexity then becomes a cognitive task. How well you can interact or manage interactions depends on how effective your description is and how well it aligns with others’ descriptions. The complexity of a system lies in the description of that system, which rests entirely on the observer/sensemaker. The idea that complexity is in the eyes of the beholder points out the importance of second order cybernetics/thinking. The world is as it is; it gets meaning only when we assign meaning to it through how we describe and interpret it. To put it differently, “the logic of the world is the logic of the descriptions of the world” (Heinz von Foerster).

The idea of complexity not being intrinsic to a system is also echoed by one of the pioneers of cybernetics, Ross Ashby. He noted – “a system’s complexity is purely relative to a given observer; I reject the attempt to measure an absolute, or intrinsic, complexity; but this acceptance of complexity as something in the eye of the beholder is, in my opinion, the only workable way of measuring complexity”.

The ideas of second order cybernetics emphasize the importance of observers. The “system” is a mental construct by an observer to make sense of a phenomenon. The observer, based on their needs, draws boundaries to separate a “system” from its environment. This allows the observer to understand the system in the context of its environment. At the same time, the observer has to understand that there are other observers in the same social realm who may draw different boundaries and arrive at different understandings based on their own needs, biases, perspectives, etc.

A phenomenon can have multiple identities or meanings depending on who is describing the phenomenon. Let’s use the Covid-19 pandemic as an example. For some people, this has become a problem of economics rather than a healthcare problem, while for others it has become a problem of freedom or ethics. If we are to attempt to tackle the complexity of such an issue, the worst thing we can do is to attempt first order thinking – the idea that the phenomenon can be observed objectively. Issues requiring a second order approach get worse with the application of first order methodologies. The danger in this is that we can fall prey to our own narrative being the whole Truth.

As the pragmatic philosopher Richard Rorty points out:

The world does not speak. Only we do. The world can, once we have programmed ourselves with a language, cause us to hold beliefs. But it cannot propose a language for us to speak. Only other human beings can do that.

If we are to understand the complexity of a phenomenon, we need to start by realizing that our version of complexity is only one of many. Our ability to understand complexity depends on our ability to describe it. We lack the ability to completely describe a phenomenon. The different descriptions that come about from the different participants may be contradictory and can point out apparent paradoxes in our social realm.

If we are to tackle complexity, we need coherence among multiple interpretations. As Karl Weick points out, we need to complicate ourselves. By generating and accommodating multiple inequivalent descriptions, practitioners will increase the complexity of their understanding and, therefore, will be more likely to match the complexity of the situation they attempt to manage. In complexity, coherence – the idea of connecting ideas together – is important, since it helps to create a clearer picture and helps us avoid blind spots. This co-constructed description is itself an emergent phenomenon.

In second order Cybernetics, there are two statements that might shed more light on everything we have discussed so far:

Anything said is said BY an observer. (Maturana)

Anything said is said TO an observer. (von Foerster)

A lot can be said between these two statements. The first points out the importance of the observer, and the second points out that there are other observers, and that we co-construct our social reality.

Our descriptions are abstractions since we are limited by our languages. All our biases, fears, misunderstandings, ignorance etc. lie hidden in the “systems” we construct. We get into trouble when we assume that these abstractions are real things. This is the first order approach, where we are not aware that we do not see due to our cognitive blind spots. When we realize that all we have are abstractions, we get to the second order approach. We include ourselves in our observation and we start looking at how we make these abstractions. We also become aware of other autonomous participants of our social reality engaging in similar constructions of narratives. As we seek their understanding, we become aware of our cognitive blind spots. We realize that everything is on a spectrum, and our thinking of “either/or” is actually a false dichotomy.

At this point, Heinz von Foerster would say that we start to see when we realize that we are blind.

Please maintain social distance and wear masks. Stay safe and Always keep on learning…

In case you missed it, my last post was Causality and Purpose in Systems:

Copernican Revolution – Systems Thinking:

In today’s post, I am looking at the “Copernican Revolution,” a phrase used by the great German philosopher Immanuel Kant. Immanuel Kant is one of the greatest names in philosophy. I am an Engineer by profession, and I started learning philosophy after I left school. As an Engineer, I am trained to think about causality in nature – if I do this, then that happens. This is often viewed as the mechanistic view of nature, and it is reliant on empiricism. Empiricism is the idea that knowledge comes from experience. In contrast, at the other end of the knowledge spectrum lies rationalism. Rationalism is the idea that knowledge comes from reason (internal). An empiricist can quickly fall into the trap of induction, where you believe that there is uniformity in nature. For example, if I clapped my hands twenty times, and the light flickered each time, I might (falsely) conclude that the next time I clap my hands the light will flicker. My mind created a causal connection between my hand clapping and the light flickering.

David Hume, another great philosopher, challenged this and identified this approach as the problem of induction. He suggested that we, humans, are creatures of habit, and that we assign causality to things based on repeated experience. His view was that causality is assigned by us simply by habit. His famous example of challenging whether the sun will rise tomorrow exemplifies this:

That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise.

Hume came up with two main categories for human reason, often called Hume’s fork:

  1. Matters of fact – this represents knowledge that we gain from experience (synthetic), and this happens after the fact of experience (denoted a posteriori). An example is – the ball is heavy. Thinking alone cannot provide the knowledge that the ball is heavy. One has to interact with the ball to learn that the ball is heavy.
  2. Relation of ideas – this represents knowledge that does not rely on experience. This knowledge can be obtained simply through reason (analytic). This was identified as a priori or from before. For example – all bachelors are unmarried. No experience is needed for this knowledge. The meaning of unmarried is predicated in the term “bachelor”.

All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, relations of ideas, and matters of fact. Of the first kind are the sciences of Geometry, Algebra, and Arithmetic … [which are] discoverable by the mere operation of thought … Matters of fact, which are the second object of human reason, are not ascertained in the same manner; nor is our evidence of their truth, however great, of a like nature with the foregoing.

Hume’s fork stipulates that all necessary truths are analytic; the meaning is predicated in the statement. Similarly, knowledge regarding matters of fact is contingent on the experience gained from the interaction. This leads to further ideas such as – there is a separation between the external world and the knowledge about the world. The knowledge about the world can come only from the world, through empiricism. One can view this as the human mind revolving around the world.

Immanuel Kant challenged the idea of Hume’s fork and came up with the idea of a priori synthetic knowledge. Kant proposed that we, humans, are bestowed with a framework for reasoning that is a priori and yet synthetic. Kant synthesized ideas from rationalism and empiricism, and added a third tine to Hume’s fork. Kant famously stated – “That all our knowledge begins with experience there can be no doubt.” Kant clarified that it does not follow that knowledge arises out of experience. What we come to know is based on our mental faculties. The mind plays an important role in our knowledge of the world. The synthetic a priori propositions say something about the world, and at the same time they say something about our mind.

How the world appears to us depends on how we experience it, and thus our knowledge of the external world is dependent on the structure of our mind. This idea is often described as a pair of spectacles that we are born with. We see the world through this pair of spectacles, and we cannot take them off. What we see forms our knowledge of the world, but it is dependent on the pair of spectacles that is a part of us. Kant’s great idea is that our knowledge conforms not to the nature of the world, but to the nature of our internal faculties. To paraphrase Heinz von Foerster, we do not see the world as it is; it is as we see it.

Nicolaus Copernicus, the Polish astronomer, came up with a heliocentric view of the world. The prevalent idea at the time was that the celestial bodies, including the sun, revolved around the earth. Copernicus challenged this and showed that the earth actually revolves around the sun. Kant, in a similar fashion, suggested that human minds do not revolve around the world, with meanings coming into our minds. Instead, the world revolves around our minds, and we assign meanings to the objects in the world. This is explained wonderfully by Julie E. Maybee:

Naïve science assumes that our knowledge revolves around what the world is like, but, Hume’s criticism argued, this view entails that we cannot then have knowledge of scientific causes through reason. We can reestablish a connection between reason and knowledge, however, Kant suggested, if we say—not that knowledge revolves around what the world is like—but that knowledge revolves around what we are like. For the purposes of our knowledge, Kant said, we do not revolve around the world—the world revolves around us. Because we are rational creatures, we share a cognitive structure with one another that regularizes our experiences of the world. This intersubjectively shared structure of rationality—and not the world itself—grounds our knowledge.

Systems:

We have assumed that the knowledge of the world, our cognition, conforms to the world. Kant proposes that all we have access to is the phenomena, and not the actual world. What we are learning is dependent on us. We use an as-if model to generate meaning based on our interaction with the external world. In this viewpoint, systems are not real things in the world. Systems are concepts that we construct; they are as-if models that we use to make sense of the phenomena. What we view as systems are constructions we make, and they depend on our need for understanding.

Alan Stewart uses a similar idea to explain his views on constructivism:

The fundamental premise of constructivism is that we humans are self-regulating organisms who live from the inside out. As a philosophical counterpoint to naive realism, constructivism suggests that we are proactive co-creators of the reality to which we respond. Underlying this concept is that perception is an active process in which we ‘bring forth distinctions’. It is our idiosyncratic distinctions which form the structure of the world(s) which each of us inhabits.

I will finish with a great lesson from Alan Watts:

“Everything in the world is gloriously meaningless.”

To further elaborate, I will add that all meaning comes from us. In a Humean sense, we are creatures of habit in that we cannot stop assigning meaning. In a Kantian sense, we are law-makers, not law-discoverers.

From a Systems Thinking perspective, we have unique perspectives, and we assign meanings based on them. We construct “systems” as if the different parts work together to have a purpose and a meaning, both of which are assigned by us. The meaning comes from the inside out, not the other way around. To further this idea, as a human collective, we co-create an emergent phenomenal world. In this respect, “reality” is multidimensional, and each one of us has a version that is unique to us.

Stay safe and Always keep on learning…

In case you missed it, my last post was Hegel, Dialectics and POSIWID:

Newton’s Eye/Bodkin Experiment and the Principle of Undifferentiated Coding:


I work in the field of ophthalmic medical devices. I recently came across one of Sir Isaac Newton’s sets of notes at the Newton Project. In the notes, one particular experiment stood out to me. Newton pushed against his eyeball using a bodkin (a blunt needle) and recorded the optical sensations produced by the pressure on the eye. The schematic below, drawn by Newton himself, depicts the experiment. He noted:

[Newton’s schematic of the eye/bodkin experiment]

I took a bodkin gh and put it between my eye & the bone as near to the backside of my eye as I could: and pressing my eye with the end of it (soe as to make the curvature a, bcdef in my eye) there appeared several white dark & colored circles r, s, t, &c. Which circles were plainest when I continued to rub my eye with the point of the bodkin, but if I held my eye & the bodkin still, though I continued to press my eye with it yet the circles would grow faint & often disappear until I renewed them by moving my eye or the bodkin.

He went on to note that there were different colors and types of sensations depending on whether he was in a dark room or a well-lit room. I enjoyed reading through his notes because of my profession, and also because it was an opportunity to peek inside the mind of a genius such as Newton. The experiment reminded me of another great idea in Cybernetics called ‘the principle of undifferentiated coding’. This idea was proposed by another brilliant mind and one of my heroes, Heinz von Foerster. Von Foerster said:

The response of a nerve cell does not encode the physical nature of the agents that caused its response. Encoded is only ‘how much’ at this point in my body, but not what.

The brain does not perceive light, sound, heat, touch, taste or smell. It receives only neuronal impulses from sensory organs. Thus, the brain does not “see light,” “hear sounds,” etc.; it can perceive only “this much stimulation at this point on my body.” The practical consequence is that all perceptions, let alone “thoughts,” are deductions from sensory stimuli. They cannot be otherwise. All observations are therefore partly the function of the observer. This situation renders complete objectivity impossible in principle.

Ernst von Glasersfeld, the proponent of Radical Constructivism stated:

In other words, the phenomenological characteristics of our experiential world – color, texture, sounds, tastes and smells – are the result of our own computations based on co-occurrence patterns of signals that differ only with regard to their point of origin in the living system’s nervous network.

Cognition is an autonomous activity of the observer. The state of agitation of a nerve cell codifies only the intensity, not the nature, of its cause. What is understood or constructed is unique to the observer. This goes against the idea that if we provide information to a person, he or she will understand what is being provided. Von Foerster would say that the hearer, not the utterer, determines what is being said. In Newton’s experiment, the sensations were not caused by the eye seeing light, but by the physical interaction with the eye. This idea is further explored by Humberto Maturana and Francisco Varela with the idea of autopoiesis. As autopoietic beings, we are all organizationally closed, and any information generated is an autonomous activity of our cognitive apparatus.

Bernard Scott expands this idea further:

Von Foerster begins his epistemology, in traditional manner, by asking, “How do we know?” The answers he provides – and the further questions he raises – have consequences for the other great question of epistemology, “What may be known?”

there is no difference between the type of signal transmitted from eye to brain or from ear to brain. This raises the question of how it is we come to experience a world that is differentiated, that has “qualia”, sights, sounds, smells. The answer is that our experience is the product of a process of computation: encodings or “representations” are interpreted as being meaningful or conveying information in the context of the actions that give rise to them. What differentiates sight from hearing is the proprioceptive information that locates the source of the signal and places it in a particular action context.

Another key aspect to add to this is the idea of circularity, where the output is fed back into the cognitive apparatus. We continue to learn based on what we already know. Thus, we can say that learning is a recursive activity. What we learn now furthers our learning tomorrow. There is nothing static about knowledge and learning. The great French philosopher Montesquieu said, “If triangles made a god, they would give him three sides.” The properties of the world (seen and unseen) are dependent on the constructor/observer. The construction/observation is ongoing and reflexive. Montesquieu also said, “You have to study a great deal to know a little.” In other words, the more you learn, the more you realize how little you know. Or simply put, “the more you know, the less you know.”

I will finish with a wonderful von Foerster story from Maturana.

Maturana tells of a time when Heinz von Foerster and the famous anthropologist Margaret Mead went to visit Russia. While there, they went to visit a museum. Mead was using a walking stick at the time. At the entrance they learned that she could not carry her walking stick inside. Mead decided that she would not go in, since she could not walk long without using the walking stick. Von Foerster convinced her to go with him. He suggested that he would hide the stick in his clothing, and once inside he would give the stick back to her. His thinking was as follows:

In this country, whether by perfection or by design, people do not commit mistakes; therefore, any guard that sees us inside with the walking stick will be forced to admit that we were granted a special permit, because otherwise we would not be inside with it.

As the story goes, they were able to visit the museum without any problems. Maturana concluded:

Heinz, by not asking beyond the entrance whether they could or could not carry a walking stick, behaved as if he considered that through his interactions with the guards he could either interact with the protection system of the museum as a whole, or with its components as independent entities, and as if he had chosen the latter. He, thus, revealed that he understood that the guards realized through their properties two non-intersecting phenomenal domains, and that they could do this without contradiction because they operated only on neighborhood relations. This allowed Heinz and Margaret Mead to move through the museum carrying what a meta-observer would have called an invisible forbidden walking stick.

Stay safe and Always keep on learning…

In case you missed it, my last post was The System in the Box:

The Map at the Gemba:


This is available as part of a book offering that is free for community members of Cyb3rSynLabs. Please check here (https://www.cyb3rsynlabs.com/c/books/) for Second Order Cybernetics Essays for Silicon Valley. The e-book version is available here (https://www.cyb3rsyn.com/products/soc-book)

 Stay safe and Always keep on learning…

In case you missed it, my last post was The Cybernetics of Respect for People: