In today’s post, I am taking a deeper look at the ideas of Cybernetics in relation to Ross Ashby, one of the pioneers of Cybernetics.
In particular, I am looking at one of the Ashby aphorisms:
When a machine breaks, it changes its mind.
This is a very interesting observation from a Cybernetics standpoint. Ashby defined a machine as follows:
It is a collection of parts which (a) alter in time, and (b) which interact with one another in some determinate and known manner.
A designer designs the machine for a specific environment. This means that the designer has encoded a model of the environment into the machine so that when certain perturbations are encountered, the machine reacts in a certain manner. The variety that is expected to be “thrown” at the machine is anticipated by the designer, and appropriate responses are encoded into the parts or the circuitry of the machine. The external variety is successfully attenuated by the information the machine conveys in terms of affordances and signs. For example, a vending machine has signs on it, along with pushable buttons, that convey information to the user.
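To make this concrete, here is a minimal sketch of a designed machine as a fixed mapping from anticipated perturbations to encoded responses. The button names and responses are hypothetical, chosen purely for illustration, not taken from any real vending machine:

```python
# A "designed machine" as a frozen lookup from anticipated perturbations
# (inputs) to responses encoded by the designer. All names are invented.
VENDING_RESPONSES = {
    "button_cola": "dispense cola",
    "button_water": "dispense water",
    "coin_inserted": "update credit",
}

def react(perturbation: str) -> str:
    # The machine can only respond to the variety its designer anticipated.
    # Anything outside the encoded model gets no meaningful response.
    return VENDING_RESPONSES.get(perturbation, "no encoded response")

print(react("button_cola"))     # an anticipated perturbation
print(react("kicked_by_user"))  # unanticipated: the machine cannot adapt
```

The point is that the mapping is frozen at design time: a perturbation the designer never anticipated receives no meaningful response.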
Ashby viewed this as the machine being successfully adapted to its environment. Ashby spoke of adaptation as being in a state of equilibrium. He referred to the stable state of equilibrium as “normal” equilibrium.
Normal equilibrium has some special properties which we must notice. Firstly, the system tends to the configuration C; so, if it is disturbed slightly from C, it will automatically develop internal actions or tendencies bringing it back to C. In other words, it opposes any disturbance from C. Further, if we disturb it in various ways, it will develop different tendencies with different disturbances, the tendencies being always adjusted to the disturbances so as to oppose them.
it must be noted that an equilibrium configuration is a property of the organization… The equilibrium states of a machine are defined by the organization only.
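Ashby’s description of normal equilibrium can be illustrated with a toy simulation. The restoring dynamics (dx/dt = -k·(x − C)) and all constants below are my own illustrative choices, not Ashby’s; the sketch only shows a system developing internal actions that oppose disturbances from its configuration C:

```python
# A toy simulation of "normal equilibrium": a system whose internal
# dynamics oppose any disturbance from its equilibrium configuration C.
C = 10.0   # equilibrium configuration (illustrative value)
k = 0.5    # strength of the restoring tendency (illustrative value)
dt = 0.1   # time step

def simulate(x0: float, steps: int = 200) -> float:
    x = x0
    for _ in range(steps):
        x += -k * (x - C) * dt  # internal action opposing the disturbance
    return x

# Disturb the system in either direction; it returns toward C both ways,
# with the tendency always adjusted to oppose the disturbance.
print(round(simulate(C + 5.0), 3))
print(round(simulate(C - 5.0), 3))
```

Whatever the direction of the disturbance, the tendency that develops opposes it, which is exactly the property Ashby highlights.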
From this point on, Ashby explains what the “break” means with regards to the machine.
Let us imagine a machine has “broken.” The first observation is that no matter how chaotic the result, it is, by our definition, still a machine. But it is a different machine. A break is a change of organization.
The specific organization entails what the machine can do when it is perturbed. The machine has only its initial information to deal with perturbations. When a new scenario arises, it cannot deal with it because it cannot generate new information (unlike humans). The difference with us humans is that we can generate new information as needed to deal with the new perturbation. Sometimes, this can take the form of the basic fight-or-flight response. The reaction is indeed an effort to regain equilibrium. As Ashby put it:
The drive to equilibrium forces the emergence of intelligence.
Information is described as the reduction in uncertainty. When the environment is dynamic and constantly changing, we can say that there is a usefulness quotient tied to the freshness of the information on hand. This is something like the “best by” date on a carton of milk. As Ashby put it – Any system that achieves appropriate selection (to a degree better than chance) does so as a consequence of information received. From a second order Cybernetics standpoint, information is generated by the autopoietic being. It is not something that can be transmitted as a physical commodity from one person to another. We should work on improving our ability to generate new information as needed when new perturbations arise. This provides us with the requisite variety to deal with the new variety that is thrown at us. What worked in the past, or what worked at another organization, may not be meaningful against the new perturbations. The generation of new information requires updating the model of the environment to some degree. This updating corresponds to isomorphism, the idea that there is a one-to-one correspondence between the various states of the model and the environment. The better this correspondence, the better the model.
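The textbook way to make “reduction in uncertainty” concrete is Shannon’s entropy measure. The probabilities below are an invented toy example, not anything from Ashby:

```python
import math

# Uncertainty about an outcome measured as entropy in bits; receiving
# information that rules out possibilities reduces that entropy.
def entropy_bits(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Before: four equally likely states of the environment -> 2 bits of uncertainty.
before = entropy_bits([0.25, 0.25, 0.25, 0.25])

# After a message rules out two of the states: two equally likely states -> 1 bit.
after = entropy_bits([0.5, 0.5])

print(before - after)  # information received: 1.0 bit
```

A message that halves the number of equally likely states reduces the uncertainty by exactly one bit; that one bit is the information received.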
Another aspect of the statement that the machine changes its mind is that the “mind” is embodied in the physical body. There is a famous debate in philosophy about how far the mind is separate from the body – is the mind embodied in the body, or is it separate? One view is that the mind is part of the body as much as the body is part of the mind, and that there is no use trying to separate the two. Ashby may be giving a gentle nod to the idea that the mind should not be separated from the body. When a machine breaks, it changes its mind!
Ashby’s approach of tying adaptation/intelligence to the idea of stable equilibrium is unique. I will finish off with his explanation regarding this:
Finally, there is one point of fundamental importance which must be grasped. It is that stable equilibrium is necessary for existence, and that systems in unstable equilibrium inevitably destroy themselves. Consequently, if we find that a system persists, in spite of the usual small disturbances which affect every physical body, then we may draw the conclusion with absolute certainty that the system must be in stable equilibrium. This may sound dogmatic, but I can see no escape from this deduction.
Please maintain social distance and wear masks. Stay safe and Always keep on learning… In case you missed it, my last post was Cybernetics Ideas from a Thermostat:
The thermostat is a simple device that is often used to describe the basic ideas of Cybernetics. Cybernetics is the art of steering. Simply put, a goal is identified, and the “system” acts to get closer to the goal. In the example of the thermostat, the user specifies the setpoint for the thermostat such that when the temperature goes below the setpoint, the thermostat kicks on the furnace and stops when the internal temperature of the house meets the desired temperature. In a similar fashion, when the temperature goes above a setpoint, the thermostat kicks on the air conditioner to bring down the internal temperature. The thermostat acts as a medium for achieving a constant temperature inside the house. This is also the idea of homeostasis. In order to achieve what it does, the thermostat needs to have a closed loop. It needs to read the internal temperature at specified frequencies, and act as needed depending upon this information. If it were an open loop, no information would be fed back into the system, and thus no homeostasis would be achieved. An example of an open loop is a campfire without anyone to manage it. The fire continues to burn until it goes out.
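The closed loop versus open loop distinction can be sketched roughly as follows; all numbers are invented for illustration:

```python
# Closed loop: the temperature is read back each cycle and the action
# depends on that feedback, so the system homes in on the setpoint.
def closed_loop(temp: float, setpoint: float, steps: int = 50) -> float:
    for _ in range(steps):
        if temp < setpoint:
            temp += 1.0   # furnace on: temperature rises
        elif temp > setpoint:
            temp -= 1.0   # air conditioner on: temperature falls
        # at the setpoint: no action needed
    return temp

# Open loop: no information is fed back; the fire simply burns until
# the fuel runs out.
def open_loop(fuel: float) -> float:
    while fuel > 0:
        fuel -= 1.0
    return fuel

print(closed_loop(temp=60.0, setpoint=70.0))  # homeostasis: reaches 70.0
print(open_loop(fuel=5.0))                    # burns out: 0.0
```

Note that this sketch switches on a single setpoint; with finer temperature increments it would chatter around that point, which is the problem the deadband (discussed below) is designed to avoid.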
Ernst von Glasersfeld, the father of radical constructivism, talked about these ideas in his short paper, Reflections on Cybernetics (2000):
The good old thermostat, the favorite example in the early literature of cybernetics, is still a useful explanatory tool. In it a temperature is set as the goal-state the user desires for the room. The thermostat knows nothing of the room or of desirable temperatures. It is designed to eliminate any discrepancy between a set reference value and the feedback it receives from its sensory organ, namely the value indicated by its thermometer. If the sensed value is too low, it switches on the heater, if it is too high, it switches on the cooling system. Employing Gordon Pask’s clever distinction (Pask, 1969, p.23–24): from the user’s point of view, the thermostat has a purpose for, i.e. to maintain a desired temperature, whereas the purpose in the device is to eliminate a difference.
The idea that the thermostat’s purpose is simply to eliminate a difference is most important here. I have written about this here.
Von Glasersfeld continues:
This example may also help to clarify a second cybernetic feature that is rarely stressed. Imagine a thermostat that has an extremely sensitive thermometer. If it senses a temperature that is a fraction below the reference value, it switches on the heater. The moment the temperature begins to rise above the reference, it switches on the cooling system –and thus it enters into an interminable oscillation. This would hardly be desirable. Therefore, it is important to design the device so that it has an area of inaction around the reference value where neither the one nor the other response is triggered. In other words, rather than a single switching point, there have to be two, with some space for equilibrium in between.
Homeostasis does not refer to a fine line that must be maintained; it is often a band or a range. The wider the band, the easier it is to maintain homeostasis. It is more efficient to define the “stable conditions” as lying between a range of values. A good example of this is a bicycle lane. It is difficult, if not impossible, to ride a bicycle along a perfectly straight line. However, it is easy to ride a bicycle within a somewhat wider lane. With the thermostat, this region is sometimes referred to as a “deadband.” This is the range of temperature within which the thermostat does not act (stays OFF). Below the lower limit, the thermostat will kick on the furnace, and above the upper limit, the thermostat will kick on the air conditioner.
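A minimal sketch of the deadband behavior described above. The two limits are illustrative values, not from any real thermostat:

```python
# Rather than a single switching point, the thermostat has a lower and an
# upper limit with a region of inaction (the deadband) between them.
LOWER, UPPER = 68.0, 72.0  # hypothetical deadband in degrees F

def thermostat_action(temp: float) -> str:
    if temp < LOWER:
        return "furnace ON"
    if temp > UPPER:
        return "air conditioner ON"
    return "OFF"  # inside the deadband: the thermostat does not act

print(thermostat_action(65.0))  # furnace ON
print(thermostat_action(70.0))  # OFF
print(thermostat_action(75.0))  # air conditioner ON
```

The space between the two switching points is exactly von Glasersfeld’s “area of inaction around the reference value,” and it is what prevents the interminable oscillation he warns about.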
Another important lesson from a thermostat is that if you want to change the room temperature, there is no point in moving the thermostat value to an extreme setpoint. Let’s say that you want to cool the room down. It is of no use if you put the thermostat value at 40 degrees F (4.44 degrees C). The house will not get colder faster with this approach. The thermostat controls the temperature inside the house, but not the speed with which it achieves this.
To be economically efficient, the thermostat setting must be aligned with the external temperature. For example, in colder weather conditions, the heat setpoint should be reduced (for example, 67 degrees F or 19.4 degrees C), and similarly, during warmer weather conditions, the cool setpoint should be raised. Even though the thermostat is the regulator, the user determines how this regulation is achieved. The thermostat as a regulator must also follow the Good Regulator Theorem: every good regulator must be a model of the system that it regulates. The model of how to keep the internal temperature constant (within the deadband) is programmed into the thermostat. It also follows the Law of Requisite Variety. The thermostat must have the requisite variety to adjust the internal temperature based on the external perturbations. The thermostat must be able to differentiate the states of “below the setpoint temperature” and “above the setpoint temperature” to achieve the requisite variety and maintain the internal temperature. Both the Good Regulator Theorem and the Law of Requisite Variety are of utmost importance in Cybernetics, and they are both contributions of one of the pioneers of Cybernetics, Ross Ashby.
I will finish this with some great aphorisms from Ross Ashby:
The drive to equilibrium forces the emergence of intelligence.
That the brain matches its environment is no more surprising than the matching of the two ends of a broken stick.
Every piece of wisdom is the worst folly in the opposite environment. Change the environment to its opposite and every piece of wisdom becomes the worst of folly.
The rule for decision is: Use what you know to narrow the field as far as possible: after that, do as you please.
Any system that achieves appropriate selection (to a degree better than chance) does so as a consequence of information received.
In today’s post, I am using the ideas of the great American pragmatist philosopher, Richard Rorty. Rorty’s most famous work is Contingency, Irony, and Solidarity. Rorty, as a pragmatist, is an anti-essentialist. This basically means that, for him, there is no intrinsic essence to a phenomenon. Take, for example, the idea of “Truth”. The general notion of Truth is that it can be found independent of human cognition. Rorty points out that this idea is not at all useful.
Rorty states:
Truth cannot be out there – cannot exist independently of the human mind – because sentences cannot so exist, or be out there. The world is out there, but descriptions of the world are not. Only descriptions of the world can be true or false. The world on its own – unaided by the describing activities of human beings – cannot.
The suggestion that truth, as well as the world, is out there is a legacy of an age in which the world was seen as the creation of a being who had a language of his own.
A key idea that Rorty brings up is the contingency of language. We may see language as this wonderful thing that enables us to communicate. Rorty describes language as contingent. This means that language is something we invented rather than discovered, and that language is really a tool we use to describe what is around us and our ideas. It is contingent because it is historically and geographically based. It is also contingent because we are engaged in language games, and meaning is an emergent phenomenon of our language games. This idea of language games is inspired by Ludwig Wittgenstein. If we see language as contingent, then we can prepare ourselves not to fall prey to the idea that truth is out there in the world, something that we can find. When we realize that language is contingent, we stop believing in dogmas and doctrines stipulated to us. We stop asking questions such as “What is it to be a human being?” Instead, we ask, “What is it to inhabit a twenty-first-century democratic society?”
The idea of contingency slowly reveals to us that sentences are not the important unit; we should focus on vocabularies. Rorty explains that vocabularies allow us to describe and re-describe the world. It is a holistic notion. When the notion of a “description of the world” is moved from the level of criterion-governed sentences within language games to language games as wholes, games which we do not choose between by reference to criteria, the idea that the world decides which descriptions are true can no longer be given a clear sense. It becomes hard to think that that vocabulary is somehow already out there in the world, waiting for us to discover it. Languages are made rather than found, and truth is a property of linguistic entities (sentences).
As a pragmatist, Rorty’s view is that language, and in turn vocabulary, is a tool that is useful in a particular context. It does not have an intrinsic nature on its own because it is contingent on us, the users. Rorty wonderfully explains this as – the fact that Newton’s vocabulary lets us predict the world more easily than Aristotle’s does not mean that the world speaks Newtonian.
Another idea that Rorty proposes is that of the final vocabulary. Rorty says that we all have final vocabularies. All human beings carry about a set of words which they employ to justify their actions, their beliefs, and their lives. These are the words in which we formulate praise for our friends and contempt for our enemies, our long-term projects, our deepest self-doubts and our highest hopes… It is “final” in the sense that if doubt is cast on the worth of these words, their user has no noncircular argumentative recourse. Those words are as far as he can go with language; beyond them there is only helpless passivity or a resort to force. A small part of a final vocabulary is made up of thin, flexible, and ubiquitous terms such as “true,” “good,” “right,” and “beautiful.” The larger part contains thicker, more rigid, and more parochial terms, for example, “Christ,” “England,” “professional standards,” “decency,” “kindness,” “the Revolution,” “the Church,” “progressive,” “rigorous,” “creative.” The more parochial terms do most of the work.
Let’s look at what we have discussed so far through the lens of systems thinking. Pragmatism is not foreign to systems thinking. The pioneer of the soft systems approach, C. West Churchman, was a pragmatist. He advised us that the systems approach begins when we view the world through the eyes of another. The general commonsense view of systems is that they are real, and that everyone sees the “system” objectively, which helps to address the problem. The “system” can be drawn and described accurately, and it can be optimized to achieve maximum performance. This is the “hard systems” approach, which utilizes a mechanistic view. However, as we start applying the pragmatist ideas we have looked at, we start to challenge this. “Systems” are not real entities but mental constructs made by an observer to aid in understanding a phenomenon of interest. “Systems” are no longer a necessity, but are contingent on the observer constructing them. When one says that the “healthcare system” is broken, we no longer look at the sentence in isolation; rather, we start looking at the vocabularies. The idea of contingency brings the non-objective nature of reality to the fore. How one sees or experiences something depends on one’s contingencies and one’s final vocabulary. From this standpoint, a system has nothing in it that the observer does not put into it. The intrinsic nature of a system is actually the set of properties assigned by the observer, contingent on his or her final vocabulary.
Similar ideas are present in Cybernetics and Systems Thinking:
We exist in language using language for our explanations. – Humberto Maturana
The environment as we perceive it is our invention. – Heinz von Foerster
If the contingency of language is an issue, how does one do systems thinking? Here I will introduce another idea from Rorty. This is the idea of an “ironist”. Rorty said:
I shall define an “ironist” as someone who fulfills three conditions: (1) She has radical and continuing doubts about the final vocabulary she currently uses, because she has been impressed by other vocabularies, vocabularies taken as final by people or books she has encountered; (2) she realizes that argument phrased in her present vocabulary can neither underwrite nor dissolve these doubts; (3) insofar as she philosophizes about her situation, she does not think that her vocabulary is closer to reality than others, that it is in touch with a power not herself. Ironists who are inclined to philosophize see the choice between vocabularies as made neither within a neutral and universal metavocabulary nor by an attempt to fight one’s way past appearances to the real, but simply by playing the new off against the old.
Rorty adds:
The ironist spends her time worrying about the possibility that she has been initiated into the wrong tribe, taught to play the wrong language game. She worries that the process of socialization which turned her into a human being by giving her a language may have given her the wrong language, and so turned her into the wrong kind of human being. But she cannot give a criterion of wrongness. So, the more she is driven to articulate her situation in philosophical terms, the more she reminds herself of her rootlessness by constantly using terms like “Weltanschauung,” “perspective,” “dialectic,” “conceptual framework,” “historical epoch,” “language game,” “redescription,” “vocabulary,” and “irony.”
From a second order Cybernetics standpoint, the idea of an ironist is self-referential. The observer is aware of their final vocabulary. Moreover, they are aware that their final vocabulary is perhaps incomplete or incorrect. They are historicists in the sense that they understand their language is contingent on the time, place, and society they were born into. They are also aware that others do not share their vocabulary. From this standpoint, what they can do is seek understanding and ask leading questions that expose the contingencies of the vocabularies in play. They understand that truth is a function of agreement within language games. They don’t look at sentences in isolation, but at vocabularies in a holistic fashion. They realize that ideas are dynamic and do not have a fixed essence, because vocabularies themselves are dynamic. They are open to changing their vocabularies without the fear of going against ideas they once held on to. They understand in a pragmatist sense that all models are wrong, and that the practical question is how wrong they have to be to not be useful (George Box).
I will finish with a quote from Friedrich Nietzsche:
“Truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.”
In today’s post, I am following the theme of cybernetic explanation that I talked about in my last post – The Monkey’s Prose – Cybernetic Explanation. I recently listened to the talks given as part of the Tenth International Conference on Complex Systems. I really enjoyed the keynote speech by the Herbert A. Simon Award winner, Melanie Mitchell. She told the story of a project in which her student’s AI was able to recognize, with good accuracy, whether or not there was an animal in a picture. Her student dug deep into the AI’s model. The AI is taught to identify a characteristic by being shown a large dataset (in this case, pictures with and without animals). The AI is shown which pictures have an animal and which do not. The AI comes up with an algorithm based on the large dataset. The correct answers reinforce the algorithm, and the wrong answers tweak the algorithm as needed, with weights assigned to the “incorrectness”. This is very much like how we learn. What Mitchell’s student found was that the AI was assigning probabilities based on whether the background is blurry or not. When the background is blurry, it is more likely that there is an animal in the picture. In other words, the AI is not looking for an animal; it is just looking to see whether the background is blurry. Depending upon the statistical probability, the AI will answer that there is or is not an animal in the picture.
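We do not know the actual architecture of the student’s model, but the shortcut it learned can be caricatured with a toy “classifier” that never looks for an animal at all. The images, the sharpness proxy, and the threshold below are all my own inventions for illustration:

```python
# A caricature of the shortcut: instead of recognizing animals, the model
# effectively checks whether the background is blurry. Here "blurriness" is
# proxied by the average absolute difference between adjacent pixel values.
def sharpness(image):
    diffs = [abs(row[i + 1] - row[i]) for row in image for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def predict_animal(image, threshold=20.0):
    # Low sharpness (blurry background) -> predict "animal present".
    return sharpness(image) < threshold

blurry = [[100, 102, 101, 103], [101, 100, 102, 101]]  # smooth gradients
sharp = [[0, 255, 0, 255], [255, 0, 255, 0]]           # high local contrast

print(predict_animal(blurry))  # True: "there is an animal"
print(predict_animal(sharp))   # False: "no animal"
```

Such a classifier can score well on a dataset where animal photos happen to have blurred backgrounds while never representing the concept “animal” at all.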
We, humans, assign the meaning to the AI’s output, and believe that the AI is able to differentiate whether there is an animal in the picture or not. In actuality, the AI is merely using statistical probabilities of whether the background is blurry or not. We cannot help but assign meanings to things. We say that nature has a purpose, or that evolution has a purpose. We assign causality to phenomena. It is interesting to think about whether it truly matters that the AI is not really identifying the animal in the picture. The outcome still has the appearance that the AI is able to tell whether or not there is an animal in the picture. We are able to bring in more concepts than the AI can. Mitchell discusses the difference between concepts and perceptual categories. What the AI is doing is constructing perceptual categories that are limited in nature, whereas what we construct are concepts that may be linked to other concepts. The example that Mitchell provided was that of a bridge. For us, a bridge can mean many things based on the linguistic application. We can say that a person is able to “bridge the gap” or that our nose has a bridge. The capacity of AI, at this time at least, is to stick to the bridge being a perceptual category based on the context of the data it has. We can talk in metaphors that the AI cannot understand. A bridge can be a concept or an actual physical thing for us. A simple task, such as asking whether there is an animal in the picture, carries no risk. However, as we up the ante to a task such as autonomous driving, we can no longer rely on the appearance that the AI is able to carry out the task. This is demonstrated in the morality and ethics debates with regard to AI, and how it should carry out probability calculations in the event of a hazard. This involves questions such as the ones in the trolley problem.
This also leads to another idea that has the cybernetic explanation embedded in it. This is the idea of “do no harm”. The requirement is not specifically to do good deeds, but to not do things that will cause harm to others. As the English philosopher John Stuart Mill put it:
That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.
This is also what Isaac Asimov referred to as the first of the three laws of robotics in his 1942 short story, Runaround:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The other two laws are defined with reference to the first:
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The idea of cybernetic explanation gives us another perspective on purpose and meaning. Our natural disposition is to assign meaning and purpose, as I indicated earlier. We tend to believe that Truth is out there, or that there is an objective reality. As the great Cybernetician Heinz von Foerster put it – “The environment contains no information; the environment is as it is”. Truth, or any description of reality, is our creation with our vocabulary. And most importantly, there are other beings describing realities with their vocabularies as well. I will finish with some wise words from Friedrich Nietzsche.
“It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically.”
In today’s post, I am looking at the idea of complexity from a second order Cybernetics standpoint. The phrase “only when you realize you are blind, can you see” is a paraphrase of a statement from the great Heinz von Foerster. I have talked about von Foerster in many of my posts, and he is one of my heroes in Cybernetics. There is no one universally accepted definition of complexity. Haridimos Tsoukas and Mary Jo Hatch wrote a very insightful paper called “Complex Thinking, Complex Practice”. In the paper, they try to address how to explain complexity. They refer to the works of John Casti and C. H. Waddington to further their ideas:
Waddington notes that complexity has something to do with the number of components of a system as well as with the number of ways in which they can be related… Casti defines complexity as being ‘directly proportional to the length of the shortest possible description of [a system]’.
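Casti’s “length of the shortest possible description” echoes Kolmogorov complexity, which is uncomputable in general; a crude but practical stand-in is compressed size. The strings below are my own invented examples:

```python
import random
import zlib

# Compressed size as a rough proxy for the length of the shortest
# possible description of a string.
def description_length(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8")))

random.seed(0)
regular = "ab" * 500  # highly regular: a short description suffices
irregular = "".join(random.choice("abcdefgh") for _ in range(1000))

# The regular string compresses far more than the irregular one of the
# same length, mirroring Casti's proportionality.
print(description_length(regular) < description_length(irregular))
```

The regular string admits a very short description (“ab repeated 500 times”) and compresses accordingly; the irregular string does not, so by Casti’s measure it counts as more complex.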
Casti’s view of complexity is particularly interesting because complexity is not viewed as being intrinsic to the phenomenon. This is a common idea in Cybernetics, mainly in second order cybernetics. There are two “classifications” of cybernetics – first order and second order cybernetics. As von Foerster explained it, first order cybernetics is the study of observed systems, where the basic assumption is that the system is objectively knowable. Second order cybernetics is the study of observing systems, where the basic assumption is that the observer is included in the act of observing, and thus the observer is part of the observed system. This leads to second order thinking such as understanding understanding, or observing observing. It is interesting because, as I am typing, Microsoft Word is telling me that “understanding understanding” is syntactically incorrect. That, obviously, would be a first order viewpoint. Second order cybernetics is a meta-discipline, and one that generates wisdom.
When we take the observer into consideration, we realize that complexity is in the eye of the beholder. Complexity is observer-dependent; that is, it depends upon how the system is described and interpreted. If the observer is able to make more varied distinctions in their description, we can say that the phenomenon or the system is being interpreted as complex. In their paper, Tsoukas and Hatch bring up the role of language in describing, and thus interpreting, complexity. They note that:
Chaos and complexity are metaphors that posit new connections, draw our attention to new phenomena, and help us see what we could not see before (Rorty).
This is quite interesting. When we learn the language of complexity, we are able to understand complexity better, and we become better at describing it, in a reflexive manner.
What complexity science has done is to draw our attention to certain features of systems’ behaviors which were hitherto unremarked, such as non-linearity, scale-dependence, recursiveness, sensitivity to initial conditions, emergence (etc.)
From this standpoint, we can say that complexity lies in the interactions we have with the system, and depending on our perspectives (vantage point) and the interaction we can come away with a different interpretation for complexity.
Heinz von Foerster remarked that complexity is not in the world but rather in the language we use to describe the world. Paraphrasing von Foerster, cognition is the computation of descriptions of reality. Managing complexity then becomes a cognitive task. How well you can interact or manage interactions depends on how effective your description is and how well it aligns with others’ descriptions. The complexity of a system lies in the description of that system, which rests entirely on the observer/sensemaker. The idea that complexity is in the eye of the beholder points out the importance of second order cybernetics/thinking. The world is as it is; it gets meaning only when we assign meaning to it through how we describe and interpret it. To put it differently, “the logic of the world is the logic of the descriptions of the world” (Heinz von Foerster).
The idea of complexity not being intrinsic to a system is also echoed by one of the pioneers of cybernetics, Ross Ashby. He noted – “a system’s complexity is purely relative to a given observer; I reject the attempt to measure an absolute, or intrinsic, complexity; but this acceptance of complexity as something in the eye of the beholder is, in my opinion, the only workable way of measuring complexity”.
The ideas of second order cybernetics emphasize the importance of observers. The “system” is a mental construct made by an observer to make sense of a phenomenon. The observer, based on their needs, draws boundaries to separate a “system” from its environment. This allows the observer to understand the system in the context of its environment. At the same time, the observer has to understand that there are other observers in the same social realm who may draw different boundaries and come away with different understandings based on their own needs, biases, perspectives, etc.
A phenomenon can have multiple identities or meanings depending on who is describing it. Let’s use the Covid 19 pandemic as an example. For some people, this has become a problem of economics rather than a healthcare problem, while for others it has become a problem of freedom or ethics. If we are to attempt to tackle the complexity of such an issue, the worst thing we can do is to apply first order thinking – the idea that the phenomenon can be observed objectively. Issues requiring a second order approach are made worse by the application of first order methodologies. The danger in this is that we can fall prey to treating our own narrative as the whole Truth.
As the pragmatic philosopher Richard Rorty points out:
The world does not speak. Only we do. The world can, once we have programmed ourselves with a language, cause us to hold beliefs. But it cannot propose a language for us to speak. Only other human beings can do that.
If we are to understand the complexity of a phenomenon, we need to start by realizing that our version of complexity is only one of many. Our ability to understand complexity depends on our ability to describe it. We lack the ability to completely describe a phenomenon. The different descriptions that come from different participants may be contradictory and can point out apparent paradoxes in our social realm.
If we are to tackle complexity, we need coherence of multiple interpretations. As Karl Weick points out, we need to complicate ourselves. By generating and accommodating multiple inequivalent descriptions, practitioners will increase the complexity of their understanding and, therefore, will be more likely to match the complexity of the situation they attempt to manage. In complexity, coherence – the idea of connecting ideas together – is important since it helps create a clearer picture and helps us avoid blind spots. This co-constructed description is itself an emergent phenomenon.
In second order Cybernetics, there are two statements that might shed more light on everything we have discussed so far:
Anything said is said BY an observer. (Maturana)
Anything said is said TO an observer. (von Foerster)
A lot can be said between these two statements. The first points out the importance of the observer, and the second points out that there are other observers, and that we co-construct our social reality.
Our descriptions are abstractions since we are limited by our languages. All our biases, fears, misunderstandings, ignorance etc. lie hidden in the “systems” we construct. We get into trouble when we assume that these abstractions are real things. This is the first order approach, where we are not aware that we do not see due to our cognitive blind spots. When we realize that all we have are abstractions, we get to the second order approach. We include ourselves in our observation and we start looking at how we make these abstractions. We also become aware of other autonomous participants of our social reality engaging in similar constructions of narratives. As we seek their understanding, we become aware of our cognitive blind spots. We realize that everything is on a spectrum, and our thinking of “either/or” is actually a false dichotomy.
At this point, Heinz von Foerster would say that we start to see when we realize that we are blind.
Please maintain social distance and wear masks. Stay safe and Always keep on learning…
In today’s post, I am looking at “Copernican Revolution”, a phrase used by the great German philosopher, Immanuel Kant. Immanuel Kant is one of the greatest names in philosophy. I am an Engineer by profession, and I started learning philosophy after I left school. As an Engineer, I am trained to think about causality in nature – if I do this, then that happens. This is often viewed as the mechanistic view of nature, and it is reliant on empiricism. Empiricism is the idea that knowledge comes from experience. In contrast, at the other end of the knowledge spectrum lies rationalism. Rationalism is the idea that knowledge comes from reason (internal). An empiricist can quickly fall into the trap of induction, where you believe that there is uniformity in nature. For example, if I clapped my hands twenty times, and the light flickered each time, I can then (falsely) conclude that the next time I clap my hands the light will flicker. My mind created a causal connection between my hand clapping and the light flickering.
David Hume, another great philosopher, challenged this and identified this approach as the problem of induction. He suggested that we, humans, are creatures of habit, and that we assign causality to things based on repeated experience. His view was that causality is assigned by us simply by habit. His famous example of challenging whether the sun will rise tomorrow exemplifies this:
That the sun will not rise tomorrow is no less intelligible a proposition, and implies no more contradiction, than the affirmation, that it will rise.
Hume came up with two main categories for human reason, often called Hume’s fork:
Matters of fact – this represents knowledge that we gain from experience (synthetic), and this happens after the fact of experience (denoted by a posteriori). An example is – the ball is heavy. Thinking cannot provide the knowledge that the ball is heavy. One has to interact with the ball to learn that the ball is heavy.
Relations of ideas – this represents knowledge that does not rely on experience. This knowledge can be obtained simply through reason (analytic). This was identified as a priori or from before. For example – all bachelors are unmarried. No experience is needed for this knowledge. The meaning of unmarried is predicated in the term “bachelor”.
All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, relations of ideas, and matters of fact. Of the first kind are the sciences of Geometry, Algebra, and Arithmetic … [which are] discoverable by the mere operation of thought … Matters of fact, which are the second object of human reason, are not ascertained in the same manner; nor is our evidence of their truth, however great, of a like nature with the foregoing.
Hume’s fork stipulates that all necessary truths are analytical; the meaning is predicated in the statement. Similarly, knowledge regarding matters of fact is contingent on the experience gained from interaction. This leads to further ideas such as – there is a separation between the external world and the knowledge about the world. The knowledge about the world would come only from the world through empiricism. One can view this as the human mind revolving around the world.
Immanuel Kant challenged the idea of Hume’s fork and came up with the idea of a priori synthetic knowledge. Kant proposed that we, humans, are bestowed with a framework for reasoning that is a priori and yet synthetic. Kant synthesized ideas from rationalism and empiricism, and added a third tine to Hume’s fork. Kant famously stated – That all our knowledge begins with experience there can be no doubt. Kant clarified that it does not follow that knowledge arises out of experience. What we come to know is based on our mental faculty. The mind plays an important role in our knowledge of the world. The synthetic a priori propositions say something about the world, and yet at the same time they say something about our mind.
How the world is to us depends on how we experience it, and thus the knowledge of the external world is dependent on the structure of our mind. This idea is often described as a pair of spectacles that we are born with. We see the world through this pair of spectacles that we cannot take off. What we see forms our knowledge of the world, but it is dependent on the pair of spectacles that is a part of us. Kant’s great idea is that our knowledge of the world conforms not to the nature of the world, but to the nature of our internal faculties. To paraphrase Heinz von Foerster, we do not see the world as is; it is as we see it.
Nicolaus Copernicus, the Polish astronomer, came up with a heliocentric view of the world. The prevalent idea at the time was that the celestial bodies, including the sun, revolved around the earth. Copernicus challenged this, and showed that the earth actually revolves around the sun. Kant, in a similar fashion, suggested that the human mind does not revolve around the world with the meanings coming into our minds. Instead, the world revolves around our minds, and we assign meanings to the objects in the world. This is explained wonderfully by Julie E. Maybee:
Naïve science assumes that our knowledge revolves around what the world is like, but, Hume’s criticism argued, this view entails that we cannot then have knowledge of scientific causes through reason. We can reestablish a connection between reason and knowledge, however, Kant suggested, if we say—not that knowledge revolves around what the world is like—but that knowledge revolves around what we are like. For the purposes of our knowledge, Kant said, we do not revolve around the world—the world revolves around us. Because we are rational creatures, we share a cognitive structure with one another that regularizes our experiences of the world. This intersubjectively shared structure of rationality—and not the world itself—grounds our knowledge.
Systems:
We have assumed that the knowledge of the world, our cognition, conforms to the world. Kant proposes that all we have access to is the phenomena, and not the actual world. What we are learning is dependent on us. We use an as-if model to generate meaning based on our interaction with the external world. In this viewpoint, the systems are not real things in the world. The systems are concepts that we construct, and they are as-if models that we use to make sense of the phenomena. What we view as systems are the constructions we make and depends on our need for understanding.
Alan Stewart uses a similar idea to explain his views on constructivism:
The fundamental premise of constructivism is that we humans are self-regulating organisms who live from the inside out. As a philosophical counterpoint to naive realism, constructivism suggests that we are proactive co-creators of the reality to which we respond. Underlying this concept is that perception is an active process in which we ‘bring forth distinctions’. It is our idiosyncratic distinctions which form the structure of the world(s) which each of us inhabits.
I will finish with a great lesson from Alan Watts:
“Everything in the world is gloriously meaningless.”
To further elaborate, I will add that all meaning comes from us. In a Humean sense, we are creatures of habit in that we cannot stop assigning meaning. In a Kantian sense, we are law-makers, not law-discoverers.
From a Systems Thinking perspective, we have unique perspectives and we assign meanings based on them. We construct “systems” “as-if” the different parts work together in a way to have a purpose and a meaning, both of which are assigned by us. The meaning comes inside out, not the other way around. To further this idea, as a human collective, we co-create an emergent phenomenal world. In this aspect, “reality” is multidimensional, and each one of us has a version that is unique to us.
The Covid 19 pandemic has given me an opportunity to observe, meditate and learn about complexity in action. In today’s post, I am looking at “truths” in complexity. Humans can change their environment faster than any other species. They are also able to maintain belief systems over time and to act on them autonomously. These are good reasons to call all “human systems” complex systems.
The Theories of Truth:
Generally, there are three theories of truth in philosophy. They are as follows:
Correspondence theory of truth – very simply put, this means that what you have internally in your mind corresponds one-to-one with the external world. A statement you might make such as – “the cat is on the mat” is true, if there are truly a cat and a mat, and if that cat is on that mat. The main objection to this theory is that we don’t have access to an objective reality. What we have is a sensemaking organ, our brain, that is trying to make sense based on the data provided by the various sensory organs. The brain over time generates stable correlations, which allow it to abstract meanings from the filtered sensory data. The correspondence theory is viewed as a “static” picture of truth, and fails to explain the dynamic and complex nature of reality.
Coherence theory of truth – In this approach, a statement is true if it is coherent with a specified set of beliefs and propositions. Here the idea is more about a fit and harmony with existing beliefs. The coherence theory is about consistency. An objection to this theory is that the subjective nature of a statement can “bend” to match existing strong belief systems. Perhaps a good example of this is the recent poll that found that the majority of Democrats fear that the worst is yet to come for the Covid 19 pandemic, while the majority of Republicans believe that the worst is over. Another criticism against this is that we can be inconsistent in our beliefs, as indicated by cognitive dissonance.
Pragmatic theory of truth – The pragmatic theory of truth was put forth as an alternative to the static correspondence theory of truth. In this theory, the value of truth is dependent on the utility it brings. Pragmatic theories of truth have the effect of shifting attention away from what makes a statement true and toward what people mean or do in describing a statement as true. As one of the proponents of the pragmatic theory, William James, put it – True beliefs are useful and dependable in ways that false beliefs are not: ‘You can say of it then either that “it is useful because it is true” or that “it is true because it is useful”. Both these phrases mean exactly the same thing.’ One of my favorite explanations of the pragmatic theory comes from Richard Rorty, who viewed it as coping with reality, rather than copying reality. One of the criticisms against the pragmatic theory of truth is that it explains truth in terms of utility. As John Capps notes, utility, long-term durability, and assertibility (etc.) should be viewed not as definitions but rather as criteria of truth, as yardsticks for distinguishing true beliefs from false ones.
Sensemaking Complexity:
From the discussion of truth, we can see that seeking truth is not an easy task, especially when we deal with the complexity of human systems. Our natural tendency is to find order pleasing and reassuring. We try to find order in all we can, and we try our best to maintain order as long as we can. In this attempt, we often neglect the actual complexity we are dealing with. A common way to classify the complexity of a phenomenon is as ordered, complicated, or complex. We can say a square peg in a square hole is an ordered phenomenon. The correspondence theory of truth is quite apt here because we have a one-to-one relationship. We have a very good working knowledge of cause and effect. As complexity increases, we get to complicated phenomena, where there is still a reasonably good cause-and-effect relationship. A car can be viewed as a complicated phenomenon. The correspondence theory is still apt here. Once we add a human to the mix, we get to complexity. Imagine the driver of a car. Now imagine thousands of drivers all at once. The correspondence theory of truth falls apart fast here.
The main source of complexity in the example discussed above comes from humans. We are autonomous, and we are able to justify our own actions. We may go faster than the speed limit because we are already late for the appointment. We may overtake on the wrong side because the other driver is driving slowly. We assign meanings and we also assign purposes for others. We do not always realize that other humans also have the same power.
We have seen varying responses and behavior in this pandemic. We have seen the different justifications and hypotheses. We agree with some of them and strongly disagree with others depending on how they cohere with our own belief systems. The actual transmission of the virus is fairly constrained. It transmits mainly from person to person. The transmission occurs mainly through respiratory droplets. Every human interaction carries some risk of becoming infected if the other person is a carrier of the virus. However, the actual course of the pandemic has been complex.
Philosophical Insights to Sensemaking Complexity:
I will use the ideas of Friedrich Nietzsche and Willard V. O. Quine to further look at truth and how we come to know about truth. Nietzsche had a multidimensional view of truth. He viewed truth as:
A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.
He emphasized the abstract nature of truth. One comes to view the abstractions/metaphors as stand-ins for reality, and eventually falsely equates them with reality.
Every word immediately becomes a concept, in as much as it is not intended to serve as a reminder of the unique and wholly individualized original experience to which it owes its birth, but must at the same time fit innumerable, more or less similar cases—which means, strictly speaking, never equal—in other words, a lot of unequal cases. Every concept originates through our equating what is unequal.
Nietzsche advised us against using a cause-effect, correspondence type viewpoint in sensemaking complexity:
It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically.
As Maureen Finnigan notes in her wonderful essay, Nietzsche’s Perspective: Beyond Truth as an Ideal:
As truth is not objective, in like manner, it is not subjective. Since thinking is not wholly rational, disconnected from the body, or independent of the world, the subjective perception, or conception, of truth through the intellect alone is impossible. “The ‘pure spirit’ is pure stupidity: if we subtract the nervous system and the senses—the ‘mortal shroud’—then we miscalculate—that is all!” Inasmuch as the individual is not independent from the world, one can neither subjectively nor objectively explain the world as if detached, but must interpret the world from within. Subjective and objective, like True and apparent, soul and body, thinking thing and material thing, intellect and sense, noumena and phenomena, are dualities that Nietzsche aspires to overcome. Thus, although Nietzsche is not a rationalist, this does not mean he falls into the irrationalist camp. He does not abolish reason but instead situates it within life, as an instrument, not as an absolute.
With complexity, we should not look for correspondence but coherence. Correspondence forces categorization while coherence forces connections. This follows nicely into Quine’s Web of Belief idea. Quine’s idea is a holistic approach. We make meanings in a holistic fashion. When we observe a phenomenon, our sensory experience and the belief it generates do not stand alone in our entire belief system. Instead, Quine postulates that we make sense holistically with a web of belief. Every belief is connected to other beliefs like a web.
For example, we can say Experience1 (E1) led to Belief1 (B1), and Experience2 (E2) led to Belief2 (B2), etc. This has the correspondence nature we discussed earlier. This view prefers the ordered, static approach to sensemaking. However, in Quine’s view, it is more dynamic, interconnected and complex. This has the coherence nature we discussed earlier. The schematic below, inspired by a lecture note from Bryan W. Van Norden, shows this in detail.
The idea of Web of Belief is clearly explained by Thomas Kelly:
Quine famously suggests that we can picture everything that we take to be true as constituting a single, seamless “web of belief.” The nodes of the web represent individual beliefs, and the connections between nodes represent the logical relations between beliefs. Although there are important epistemic differences among the beliefs in the web, these differences are matters of degree as opposed to kind. From the perspective of the epistemologist, the most important dimension along which beliefs can vary is their centrality within the web: the centrality of a belief corresponds to how fundamental it is to our overall view of the world, or how deeply implicated it is with the rest of what we think. The metaphor of the web of belief thus represents the relevant kind of fundamentality in spatial terms: the more a particular belief is implicated in our overall view of the world, the nearer it is to the center, while less fundamental beliefs are located nearer the periphery of the web. Experience first impinges upon the web at the periphery, but no belief within the web is wholly cut off from experience, inasmuch as even those beliefs at the very center stand in logical relations to beliefs nearer the periphery.
The idea of degrees rather than a concrete distinction between beliefs is very important to note here. Additionally, Quine proposes that when we encounter an experience contradicting our belief, we seek to restore consistency/coherence in the web by giving up beliefs that are located near the periphery rather than the ones near the center.
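Kelly’s description above is essentially a graph with a notion of centrality and a revision rule, so it can be sketched in code. The sketch below is my own illustrative toy model, not anything Quine wrote: the class name, the belief strings, and the use of degree (number of connections) as a crude stand-in for centrality are all assumptions made for the example.

```python
# A toy model of Quine's "web of belief": beliefs are nodes, logical
# connections are edges, and centrality is approximated by degree.
# When experience contradicts a set of beliefs, coherence is restored
# by giving up the most peripheral (least connected) belief first.
from collections import defaultdict

class WebOfBelief:
    def __init__(self):
        self.edges = defaultdict(set)  # belief -> set of connected beliefs

    def connect(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def centrality(self, belief):
        # Degree centrality: how deeply implicated a belief is in the web.
        return len(self.edges[belief])

    def revise(self, contradicted):
        """Drop the most peripheral of the contradicted beliefs,
        preserving the central ones, and return the dropped belief."""
        to_drop = min(contradicted, key=self.centrality)
        for neighbor in self.edges.pop(to_drop, set()):
            self.edges[neighbor].discard(to_drop)
        return to_drop

web = WebOfBelief()
web.connect("logic is reliable", "causes precede effects")
web.connect("logic is reliable", "the sun rises daily")
web.connect("causes precede effects", "the sun rises daily")
web.connect("the sun rises daily", "tomorrow will be sunny")

# Experience contradicts two beliefs at once; the web sacrifices the
# peripheral prediction rather than the central regularity.
dropped = web.revise({"the sun rises daily", "tomorrow will be sunny"})
print(dropped)  # -> tomorrow will be sunny
```

The revision rule is deliberately simplistic; the point is only to make Quine’s image concrete: the belief with fewer connections is the cheaper one to give up, which is why central beliefs feel almost immune to refutation.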
Final Words:
The dynamic nature of complexity is not just applicable to a pandemic but also to scientific paradigms. This is beautifully explained in the quote from Jacob Bronowski below:
“There is no permanence to scientific concepts because they are only our interpretations of natural phenomena … We merely make a temporary invention which covers that part of the world accessible to us at the moment”
Our beliefs shape our experience as much as our experiences shape our beliefs, in a recursive manner. The web gets more complex as time goes on, where some of the nodes become more distinct and some others get hazier. We are prone to getting perpetually frustrated if we try to apply a static framework to the dynamic, ever-changing domain of complexity. It gets more frustrating because patterns emerge on a continuous basis, providing an illusion of order. The static and rigid frameworks break because of their rigidity and inflexibility to tackle the variety thrown upon them.
With this in mind, we should come to realize that we do not have a means to know the external world as-is. All we can know is how it appears to us based on our web of belief. The pragmatic tradition of truth advises us to keep going on our search for truth, and that this search is self-corrective. The correspondence theory fails us because the meaning we create is not independent of us, but very much a product of our web of belief. At the same time, if we don’t seek to understand others, coherence theory will fail us because we would lack the requisite variety needed to make sense of a complex phenomenon. I will finish with an excellent quote from Maureen Finnigan:
Human beings impose their own truth on life instead of seeking truth within life.
Stay safe and Always keep on learning… In case you missed it, my last post was Korzybski at the Gemba: