Being-In-the-Ohno-Circle:

In today’s post, I am looking at the Ohno Circle in light of Heidegger’s ideas. I will try to stay away from Heidegger’s neologisms and will only scratch the surface of his deep insights. One of the best explanations of the Ohno Circle comes from one of Ohno’s students, Teruyuki Minoura, past President and CEO of Toyota Motor Manufacturing North America, Inc., who experienced it first-hand. Minoura noted:

Mr. Ohno often would draw a circle on the floor in the middle of a bottleneck area, and he would make us stand in that circle all day long and watch the process. He wanted us to watch and ask “why?” over and over.

You may have heard about the five “why’s” in TPS. Mr. Ohno felt that if we stood in that circle, watching and asking “why?”, better ideas would come to us. He realized that new thoughts and new technologies do not come out of the blue, they come from a true understanding of the process.

In my case, I thought it was strange when he asked me to go into the circle. But what could I say? I was a freshman and he was the big boss and a member of the board of directors! So, I went into the circle and began to watch the process. During the first hour, I began to understand the process. After two hours, I began to see the problems. After the third and fourth hours, I was starting to ask “why?” Finally, I found the root cause and started to think about countermeasures.

With the countermeasures in place, I reported back to Mr. Ohno what I had observed and the problems I saw and the countermeasures I put in place as well as the reasons for the countermeasures. Mr. Ohno would just say, “Is that so?” and nothing more. He never gave us answers. Most of the time he wouldn’t even tell us if what we did was good or bad. Now I realize what Mr. Ohno was trying to do. He was trying to make us think deeply — and think for ourselves.

I truly appreciate Minoura’s explanation. Certain aspects of it resonated with me. First, standing in the circle is not a quick activity; Minoura described it as an “all day long” activity. The intent is not simply to identify wastes but to gain as full an understanding of the process as possible. Minoura described it almost in phases:

  • During the first hour, I began to understand the process.
  • After two hours, I began to see the problems.
  • After the third and fourth hours, I was starting to ask “why?”

From a Heideggerian viewpoint, every “thing” is in relation to another “thing”. There is a realm of totality, and the meaning of an object comes from this interrelationship. We use a hammer in order to drive a nail into a piece of wood, which is used in order to build a cabinet, which is used in order to… and so on. Heidegger pushed back on the subject-object distinction put forth by René Descartes. Much of science is based on this distinction, pretending to be able to separate the subject, the scientist, from an object, the “thing” at hand. In actuality, we engage with things without the realization that we are engaging with them. When we drive a car, we cannot possibly pay attention to every little action we take. We go with the flow. There is a Zen-like aspect to this in that we do not say that we are pushing down on the gas pedal or that we are rotating the steering wheel to go left. We simply do the needed action by being part of the flow that has emerged around us. We do this by being a part of the environment around us. This includes other drivers in their cars, the objects lying on the road, the animals that may try to cross the road, etc. This is not about being careless when we are driving. Instead, we are engaging in an embodied activity where the car is part of our extended body, and we are immersed in our environment.

From this standpoint, when we are on the floor, we should not try to “look” for waste without understanding how we are immersed in the gemba. We are not going there to fix issues. Our role is to understand how things are in relation to each other on the floor. We are not rushing in to find problems. We are standing there to understand how the operator is interacting with the artifacts available to them. How are the materials coming in and out of the assembly station? How is the operator engaging with the artifacts and the materials? Are they stopping and looking at their equipment every step of the way? Is the equipment flowing with the operator as an extension? Coming back to the driving example, if we have to search for the gas or brake pedal every time, we will not be driving safely. Just as we know where the appropriate pedal is without looking, and how much to press on it, the operator should be able to engage with the equipment or the artifact. The equipment or the artifact should not just be present there; it should be available to them, ready to use.

One of the deep insights that Heidegger had was that we do not really understand something until that “something” breaks down and the relationships it depends on are exposed. When we are engaging with it fully, we do not always know where the breaking points are. We understand the limitations only when that “something” starts to behave in a fashion that makes its presence conspicuous to us. If the equipment is working well, we do not really notice it. We start to notice things when they are not working the way they should be. To take this thought further with the Ohno Circle: if we do not understand how the process should be working, we cannot even get to the numerous possibilities that are present to make the process work even better. When we care about the operator, the process, the product, etc., we start to realize the many possibilities of running the operation. In some of these possibilities, the operations may be more ergonomic for the operator, or the product quality may be improved further. We cannot even begin to get to these possibilities unless we are able to understand how things work together. These possibilities will then make us realize where things are not working together. In other words, unless we deeply understand the current state, we cannot even begin to imagine an ideal state. This requires us to go back to the gemba as often as possible, to understand the variations of material, operators, etc. Perhaps we can interview the operator, or try to build the part ourselves on the floor. The more we engage, the better our understanding becomes.

I will finish with a great Ohno story from Minoura that explains this further:

I want to relay one of Mr. Ohno’s stories here. This is a lesson from a Kaizen attempt on kanban collection. Let me explain the background of this story. Many of you know that Toyota uses what we call kanban cards to keep track of parts and components. Most of them are small pieces of paper which contain all the information related to a particular part. When a worker begins to use a part from a box, he or she takes the kanban out and puts it in a kanban collection post. The conveyance group comes around to pick the kanbans up and take them to the kanban room for processing. They normally drive a tow motor. In order to pick up kanbans they have to stop the tow motor, get off, pick up the kanbans, then get back on the tow motor and head for the next collection area.

Now, as you know, TPS (Toyota Production System) despises waste. Stopping the tow motor, getting off and getting back on the vehicle is a waste of the team member’s time and motion. So, one group went ahead and figured out a Kaizen for kanban collection. The Kaizen was to eliminate the wasted motion and time by making it possible for the kanban collector to gather kanban cards without getting down from the vehicle. They proudly presented this Kaizen to Mr. Ohno.

To their surprise, Mr. Ohno got real angry when he heard the presentation. The group couldn’t understand why he was not pleased, because their Kaizen had eliminated the number one sin in the Toyota Production System: waste. So Mr. Ohno explained: He told them that if they were to implement this Kaizen, the tow motor drivers would be on the vehicle all the time. They would be twisting the accelerator grip for a couple of hours straight. That is not good for the driver’s wrist. Also, Mr. Ohno pointed out that getting off the vehicle and walking a few steps and getting back on provided exercise of different muscles that were not used by driving the tow motor. That would be beneficial for the kanban collector’s well-being.

Mr. Ohno was looking at a bigger picture. He placed the ergonomic well-being of the worker before the short-term goal of efficiency. This happened almost 30 years ago. It was many years before the concept of ergonomics became a household word.

Stay safe and always keep on learning… In case you missed it, my last post was Representations of Reality in Constructivism:

Representations of Reality in Constructivism:

This is available as part of a book offering that is free for community members of Cyb3rSynLabs. Please check here (https://www.cyb3rsynlabs.com/c/books/) for Second Order Cybernetics Essays for Silicon Valley. The e-book version is available here (https://www.cyb3rsyn.com/products/soc-book)


In case you missed it, my last post was Cybernetics of Kindness:

Cybernetics of Kindness:

In today’s post, I am looking at the Socrates of Cybernetics, Heinz von Foerster’s ethical imperative:

“Always act so as to increase the number of choices.”

I see this as the recursive humanist commandment. This is very much applicable to ethics and how we should treat each other. Von Foerster said the following about ethics:

Whenever we speak about something that has to do with ethics, the other is involved. If I live alone in the jungle or in the desert, the problem of ethics does not exist. It only comes to exist through our being together. Only our togetherness, our being together, gives rise to the question, How do I behave toward the other so that we can really always be one?

Von Foerster’s views align with those of constructivism, the idea that we construct our knowledge about our reality. We construct our knowledge to “re-cognize” a reality through the intercorrelation of the activities of the various sense organs. It is through these computed correlations that we recognize a reality. No findings exist independently of observers. Observing systems can only correlate their sense experiences with themselves and each other.

Paul Pangaro reminded me that von Foerster did not mean “options” or “possibilities”. Von Foerster specifically chose the word “choices”. By choices, he meant those selections among options that you might “actually take” depending on who “you are” right now. Here choices narrow down to the few that apply most to who you are in this moment and in this context, down to a decision that makes you who you are. As von Foerster said, “Don’t make the decision, let the decision make you.” You and the choice you take are indistinguishable.

Since we are the ones doing the construction, we are also ultimately responsible for what we construct. No one should take this away from us. Ernst von Glasersfeld, the father of radical constructivism, explained this well:

The moment you begin to think that you are the author of your knowledge, you have to consider that you are responsible for it. You are responsible for what you are thinking, because it’s you who’s doing the thinking and you are responsible for what you have put together because it’s you who’s putting it all together. It’s a disagreeable idea and it has serious consequences, because it makes you truly responsible for everything you do. You can no longer say “well, that’s how the world is”, or “sono così”; you know, that’s not good enough.

Cybernetics is about communication and control in the animal and machine, as Norbert Wiener viewed it. When we view control in terms of von Foerster’s ethical imperative, interesting thoughts come about. Control is about reducing the number of choices so that only certain pre-selected activities are available for the one being controlled. For example, a steersman has to control their ship such that it maintains a specific course, and here the ship’s “available options” to move are drastically reduced. When we use this view of control and apply it to human beings, we should do so in light of von Foerster’s ethical imperative.

Von Foerster also said – A is better off when B is better off. This also provides further clarity on the recursiveness. If I am to make sure that I act so as to increase the number of choices for B, then B in turn does the same. How I act impacts how others (re)act, which in turn impacts how I act back… on and on. This might remind the reader of the golden rule – Treat others as you would like others to treat you. However, the golden rule misses the point about constructivism and the ongoing interaction that leads to the construction of a social reality. I see this as part of a social contract. As Jean-Jacques Rousseau noted, man is born free, and everywhere he is in chains. The social contract comes about from the ongoing interactions and the contexts we are in with our fellow human beings as part of being in a society or social groups. This also means that it is dynamic and contingent in nature. What was “good” before may not be “good” today. This requires an ongoing framing and reframing through interactions.

John Boyd, the father of the OODA loop, shed more light on this:

Studies of human behavior reveal that the actions we undertake as individuals are closely related to survival, more importantly, survival on our own terms. Naturally, such a notion implies that we should be able to act relatively free or independent of any debilitating external influences — otherwise that very survival might be in jeopardy. In viewing the instinct for survival in this manner we imply that a basic aim or goal, as individuals, is to improve our capacity for independent action. The degree to which we cooperate, or compete, with others is driven by the need to satisfy this basic goal. If we believe that it is not possible to satisfy it alone, without help from others, history shows us that we will agree to constraints upon our independent action — in order to collectively pool skills and talents in the form of nations, corporations, labor unions, mafias, etc — so that obstacles standing in the way of the basic goal can either be removed or overcome. On the other hand, if the group cannot or does not attempt to overcome obstacles deemed important to many (or possibly any) of its individual members, the group must risk losing these alienated members. Under these circumstances, the alienated members may dissolve their relationship and remain independent, form a group of their own, or join another collective body in order to improve their capacity for independent action.

In a similar fashion, Dirk Baecker also noted the following:

Control means to establish causality ensured by communication. Control consists in reducing degrees of freedom in the self-selection of events. This is why the notion of “conditionality” is certainly one of the most important notions in the field of systems theory. Conditionality exists as soon as we introduce a distinction which separates subsets of possibilities and an observer who is forced to choose, yet who can only choose depending on the “product space” he is able to see. If we assume observers on both sides of the control relationship, we end up with subsets of possibilities selecting each other and thereby experiencing, and solving, the problem of “double contingency” so much cherished by sociologists. In other words, communication is needed to entice observers into a self-selection and into the reduction of degrees of freedom that goes with it. This means there must be a certain gain in the reduction of degrees of freedom, which for instance may be a greater certainty in the expectation of specific things happening or not happening.

Ultimately, this is all about what we value for ourselves and for the society we are part of. Our personal freedom makes sense only in light of others’ personal freedoms. That is the context – in relation to another human being, one who may be less fortunate than us. Making the world easier for those less fortunate than us makes the world better for every one of us. I will finish with a great quote from one of my favorite science fiction characters, Doctor Who:

“Human progress isn’t measured by industry. It’s measured by the value you place on a life. An unimportant life. A life without privilege. The boy who died on the river, that boy’s value is your value. That’s what defines an age. That’s what defines a species.”

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning…

In case you missed it, my last post was The Constraint of Custom:

The Constraint of Custom:

I have written a lot about the problem of induction before. This was explained very well by the great Scottish philosopher, David Hume. Hume looked at the basis of beliefs that we hold such as:

  1. The sun will rise tomorrow; or
  2. If I drop this ball, it will fall to the ground

Hume noted that we have no guarantee of uniformity in nature. In other words, it is not rational to believe that what has happened in the past will happen again in the future. Just because we have seen the sun rise every single day of our lives, it does not guarantee that it will rise again tomorrow. We are using our experience of the sun rising to believe that it will rise again tomorrow. Even though this might be irrational, Hume does not deny that we may see the belief in the sun rising as a sensible proposition. He notes:

None but a fool or madman will ever pretend to dispute the authority of experience, or to reject that great guide of human life.

It’s just that we cannot use logic to back this proposition up. We cannot conclude that the future is going to resemble the past, no matter how many examples of the past we have. We cannot simply rely on experience of the past because the only experience we have is of the past, and not of the future. Hume noted that to propose that the next future event will resemble the past because our most recent “future event” (the last experienced event) resembled the past is circular:

All our experimental conclusions proceed upon the supposition that the future will be conformable to the past. To endeavor, therefore, the proof of this last supposition by probable arguments, or arguments regarding existence, must be evidently going in a circle, and taking that for granted, which is the very point in question.

Hume concluded that we fall prey to the problem of induction because we are creatures of habits:

For wherever the repetition of any act or operation produces a propensity to renew the same act or operation, without being impelled by any reasoning or process of the understanding, we always say, that this propensity is the effect of Custom. By employing this word, we pretend not to have given the ultimate reason of such a propensity. We only point out a principle of human nature, which is universally acknowledged, and which is well known by its effects.

In other words, it is our human nature to identify and seek patterns and use them to make predictions about the future. This is just how we are wired. We do this unconsciously. Our brains are prediction engines. We cannot help but do this. I will go further with this idea by utilizing a brilliant example from the wonderful American philosopher Charles Sanders Peirce. In 1868, Peirce wrote about an experiment to reveal the blind spot in the retina:

Does the reader know of the blind spot on the retina? Take a number of this journal, turn over the cover so as to expose the white paper, lay it sideways upon the table before which you must sit, and put two cents upon it, one near the left-hand edge, and the other to the right. Put your left hand over your left eye, and with the right eye look steadily at the left-hand cent. Then, with your right hand, move the right-hand cent (which is now plainly seen) towards the left hand. When it comes to a place near the middle of the page it will disappear—you cannot see it without turning your eye. Bring it nearer to the other cent, or carry it further away, and it will reappear; but at that particular spot it cannot be seen. Thus, it appears that there is a blind spot nearly in the middle of the retina; and this is confirmed by anatomy. It follows that the space we immediately see (when one eye is closed) is not, as we had imagined, a continuous oval, but is a ring, the filling up of which must be the work of the intellect. What more striking example could be desired of the impossibility of distinguishing intellectual results from intuitional data, by mere contemplation?

I highly encourage the reader to try this out, if they have not heard of this experiment. In fact, I welcome the reader to draw a line and then place the coin on the line. Doing so, the reader will see that the coin vanishes while the line still remains visible in the periphery. This means that even though our eye “sees” a ring, the brain actually fills it in and makes us see a “whole” picture. To add to this wonderful capability of our interpretative framework, the image that falls on our retina is actually upside-down. Yet, our brain makes it “right-side up”. This would mean that newborn babies may actually see the world upside down and with voids, but at some point, the interpretative framework changes to correct this so that we see the world “correctly”.

How does our brain know to do this? The answer is that it was evolutionarily beneficial for our ancestors to do this, just like our custom of looking for patterns. This is what Lila Gatlin would refer to as a D1 constraint: a context-free constraint, to use Alicia Juarrero’s term, that was evolutionarily passed down from generation to generation and that acts in any situation.

To go past this constraint, we have to use second order thinking. In other words, we have to think about thinking; we have to learn about learning; we have to look at understanding understanding. I welcome the reader to look at the posts I have written on this matter. I will finish with two quotes to further meditate on this:

Only when you realize you are blind, can you see. (Paraphrasing Heinz von Foerster)

The quieter you become, the more you can hear. – Ram Dass


In case you missed it, my last post was The Cybernetics of “Here & Now” and “There & Then”:

The Cybernetics of “Here & Now” and “There & Then”:


In case you missed it, my last post was The Cybernetics of Bayesian Epistemology:

The Cybernetics of Bayesian Epistemology:

I have had some good conversations recently about epistemology. Today’s post is influenced by those conversations. In today’s post, I am looking at Bayesian epistemology, something that I am very influenced by. As the readers of my blog may know, I am a student of Cybernetics. One of the main starting points in Cybernetics is that we are informationally closed. This means that information cannot enter into us from outside. This may be evident to any teachers among my readers. You are not able to open up a student’s brain, pour information in as a commodity, and then seal it back up afterwards. What happens instead is that the teacher perturbs the student, and the student in turn generates meaning out of the perturbation. This would also mean that all knowledge is personal. This is something that was taught by Michael Polanyi.

How we know something is based on what we already know. The obvious question at this juncture is: what about the first knowledge? Ross Ashby, one of the pioneers of Cybernetics, wrote that there are two main forms of regulation. One is the gene pattern, something that was developed over generations through the evolutionary process. An example of this is the impulse of a baby to grab or to breastfeed without any training. The second is the ability to learn. The ability to learn amplifies the chance of survival of the organism. In our species, this allows us to literally reach for the celestial bodies.

If one accepts that we are informationally closed, then one has to also accept that we do not have direct access to the external reality. What we have access to is what we make sense of from experiencing the external perturbations. Cybernetics aligns with constructivism, the philosophy that we construct a reality from our experience. Heinz von Foerster, one of my favorite Cyberneticians, postulated that our nervous system as a whole is organized in such a way (organizes itself in such a way) that it computes a stable reality. All we have is what we can perceive through our perceptual framework. The famous philosopher Immanuel Kant referred to this as the noumena (the reality that we don’t have direct access to) and the phenomena (the perceived representation of the external reality). We compute a reality based on our interpretive framework. This is just a version of the reality, and each one of us computes a reality that is unique to each one of us. The stability comes from repeated interactions with the external reality, as well as from interactions with others. We do not exist in isolation from others. The more interactions we have, the more chances we have to “calibrate” our realities against each other.

With this framework, one does not start from ontology, instead one starts from epistemology. Epistemology deals with the theory of knowledge and ontology deals with being (what is out there). What I can talk about is what I know about rather than what is really out there.

Bayesian epistemology is based on induction. Induction is a process of reasoning where one makes a generalization from a series of observations. For example, if all the swans you have seen so far in your life are white, then induction would direct you to generalize that all swans are white. Induction assumes the uniformity of nature, as the famous Scottish philosopher David Hume put it. This means that you assume that the future will resemble the past. Hume pointed out that induction is faulty because no matter how many observations one makes, one cannot assume that the future will resemble the past. We seek patterns in the world, and we make generalizations from them. Hume pointed out that we do this out of habit. While many people have tried to solve the problem of induction, nobody has really solved it.

All of this discussion lays the background for Bayesian epistemology. I will not go into the math of Bayesian statistics in this post; I will provide a general explanation instead. Bayesian epistemology puts forth that probability is not a characteristic of a phenomenon but a statement about our epistemology. The probabilities we assign are not for THE reality but for the constructed reality. It is a statement about OUR uncertainty, and not about the uncertainty associated with the phenomenon itself. The Bayesian approach requires that we start with what we know. We start by stating our prior belief, and based on the evidence presented, we modify our belief. The updated belief is termed the “posterior” in Bayesian terms. Today’s posterior becomes tomorrow’s prior because “what we know now” is the posterior.
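As a rough sketch of this prior-to-posterior loop (the two coin hypotheses, their probabilities, and the flip sequence are invented purely for illustration), here is how a belief might be updated flip by flip, with each posterior serving as the next prior:

```python
def update(prior_fair, heads):
    """One Bayesian update of the belief that a coin is fair."""
    p_fair = 0.5                           # likelihood of this flip if the coin is fair
    p_biased = 0.8 if heads else 0.2       # likelihood if the coin is biased toward heads
    evidence = prior_fair * p_fair + (1 - prior_fair) * p_biased
    return prior_fair * p_fair / evidence  # posterior = prior * likelihood / evidence

belief = 0.5  # prior: no preference between "fair" and "biased"
for flip_is_heads in [True, True, False, True, True]:
    belief = update(belief, flip_is_heads)  # today's posterior is tomorrow's prior
    print(f"P(fair) = {belief:.3f}")
```

Each pass through the loop feeds the previous posterior back in as the prior, which is exactly the circularity discussed later in this post.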

Another important thing to keep in mind is that one does not assign 0% or 100% to one’s belief. Even if you see a coin come up heads 10,000 times in a row, you should not assume that the coin is double-headed. That would be jumping into the pit of the problem of induction. We can keep updating our prior based on evidence without ever reaching 100%.
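One way to see this is through a standard Beta-Binomial conjugate update (the uniform Beta(1, 1) prior here is an illustrative assumption, not a recommendation): even after 10,000 straight heads, the posterior expectation of heads remains strictly below 1.

```python
def posterior_mean(heads, tails, alpha=1.0, beta=1.0):
    """Posterior mean of P(heads) under a Beta(alpha, beta) prior."""
    return (alpha + heads) / (alpha + beta + heads + tails)

print(posterior_mean(10, 0))      # belief climbs after 10 straight heads
print(posterior_mean(10_000, 0))  # very close to, but never exactly, 1.0
```

The belief approaches certainty asymptotically, which mirrors the point above: the evidence can shrink our uncertainty without ever eliminating it.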

I will write more on this topic. I wanted to start off with an introductory post and follow up with additional discussions. I will finish with some appealing points of Bayesian epistemology.

Bayesian epistemology is self-correcting – Bayesian statistics tends to cut down your overconfidence or underconfidence. The new evidence presented over several iterations corrects your overreach or underreach in confidence.

Bayesian epistemology is observer dependent and context sensitive – As noted above, probability in Bayesian epistemology is a statement of the observer’s belief. The framework is entirely dependent on the observer and the context around sensemaking. You do not remove the observer from the observation. In this regard, the Bayesian framework is hermeneutical. We bring our biases to the equation, and we put our money where our mouth is by assigning a probability value to it.

Circularity – There is an aspect of circularity in the Bayesian framework in that today’s posterior becomes tomorrow’s prior, as noted before.

Second Order Nature – The Bayesian framework requires that you be open to changing your beliefs. It requires you to challenge your assumptions and remain open to correcting your belief system. There is an aspect of error correction in this. You realize that you have cognitive blind spots. Knowing this allows us to improve our sensemaking ability. We try to be “less wrong” rather than “more right”.

Conditionality – The Bayesian framework utilizes conditional probability. You see that phenomena or events do not exist in isolation. They are connected to each other and therefore require us to look at the holistic viewpoint.

Coherence not Correspondence – The use of priors forces us to use what we know. To use Willard Van Orman Quine’s phrase, we have a “web of belief”. Our priors must make sense with all the other beliefs we already have in place. This supports the coherence theory of truth instead of the realist’s favorite correspondence theory of truth. I welcome the reader to pursue this further.

Consistency not completeness – The idea of consistency over completeness is quite fascinating. This is mainly due to the limitation of our nervous system in having a true representation of reality. It is commonly said that we live with uncertainty, yet our nervous system strives to provide us a stable version of reality, one that is devoid of uncertainties. We are able to think about this only from a second-order standpoint. We are able to ponder our cognitive blind spots because we are able to do second-order cybernetics. We are able to think about thinking. We are able to put ourselves into the observed.
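The conditionality point above can be made concrete with a short numeric sketch. The numbers here (a condition present in 1% of a population, a test with 99% sensitivity and 95% specificity) are made-up illustrative assumptions; the point is only that the events “has condition” and “tests positive” are connected, and Bayes’ rule combines them:

```python
p_condition = 0.01           # prior: base rate in the population
p_pos_given_cond = 0.99      # sensitivity: P(positive | condition)
p_pos_given_no_cond = 0.05   # false-positive rate: 1 - specificity

# Total probability of a positive test across both possibilities
p_positive = (p_condition * p_pos_given_cond
              + (1 - p_condition) * p_pos_given_no_cond)

# Bayes' rule: the posterior belief given the evidence of a positive test
p_cond_given_pos = p_condition * p_pos_given_cond / p_positive

print(f"P(condition | positive test) = {p_cond_given_pos:.3f}")
```

Even with a highly accurate test, the posterior stays modest because the events do not exist in isolation: the low base rate conditions the meaning of a positive result.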

I will finish with an excellent quote from Albert Einstein:

“As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality”.


In case you missed it, my last post was Error Correction of Error Correction:

Error Correction of Error Correction:


In case you missed it, my last post was The Open Concept of Systems:

The Open Concept of Systems:

In today’s post, I am looking at the famous American philosopher Morris Weitz’s Closed and Open Concepts. Weitz studied aesthetics, the branch of philosophy interested in beauty and taste. He looked at the deceptively simple question of “how do you define art?” This might seem easy to answer at first, but as we try, we soon find that it is not. This might remind you of Socrates and the Socratic method of asking questions. Socrates would ask questions such as “what is virtue?” For any answer he got, he would find a contradiction that would push the other person further and further into a corner. Weitz came out against this approach and said that “what is art?” is itself the wrong question. Instead, he said that you should ask “what sort of concept is art?” The general tendency amongst theorists is to use strict definitions about the essence of something. Weitz called this approach a “closed concept”. Weitz said:

If necessary and sufficient conditions for the application of a concept can be stated, the concept is a closed one. But this can happen only in logic or mathematics where concepts are constructed and completely defined. It cannot occur with empirically-descriptive and normative concepts unless we arbitrarily close them by stipulating the ranges of their uses.

In this fashion, Weitz noted that – Art, as the logic of the concept shows, has no set of necessary and sufficient properties, hence a theory of it is logically impossible and not merely factually difficult.

To contrast the closed concept with the open concept, Weitz stated:

A concept is open if its conditions of application are emendable and corrigible; i.e., if a situation or case can be imagined or secured which would call for some sort of decision on our part to extend the use of the concept to cover this, or to close the concept and invent a new one to deal with the new case and its new property.

Weitz had strong words for the theorists of aesthetics who wanted to confine the subject to a box:

Aesthetic theory is a logically vain attempt to define what cannot be defined, to state the necessary and sufficient properties of that which has no necessary and sufficient properties, to conceive the concept of art as closed when its very use reveals and demands its openness.

Weitz was a fan of Wittgenstein and seems to have been influenced by his treatment of the question “what is a game?” In his posthumous book, Philosophical Investigations, Wittgenstein discussed how a concept such as a game can be defined. There are so many different games that you can identify a game when you engage in one, yet it is very hard to define a game in a closed-concept sense. You know that chess and soccer (football) are games, but they are very different. Similarly, skating and polo are games, again of a very different nature. They have family resemblances! Wittgenstein’s main point is that the meaning of a word is in its use. Weitz noted:

In his new work, Philosophical Investigations, Wittgenstein raises as an illustrative question, What is a game? The traditional philosophical, theoretical answer would be in terms of some exhaustive set of properties common to all games. To this Wittgenstein says, let us consider what we call “games”: “I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all?—Don’t say: ‘there must be something common, or they would not be called “games”’ but look and see whether there is anything common to all.—For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. …” Card games are like board games in some respects but not in others. Not all games are amusing, nor is there always winning or losing or competition. Some games resemble others in some respects—that is all. What we find are no necessary and sufficient properties, only “a complicated network of similarities overlapping and crisscrossing,” such that we can say of games that they form a family with family resemblances and no common trait. If one asks what a game is, we pick out sample games, describe these, and add, “This and similar things are called ‘games.’” This is all we need to say and indeed all any of us knows about games. Knowing what a game is, is not knowing some real definition or theory but being able to recognize and explain games and to decide which among imaginary and new examples would or would not be called “games.”
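Wittgenstein's family-resemblance idea can be put in a few lines of code. This is only an illustrative sketch with features and games of my own choosing: no single feature is shared by all the games, yet every game overlaps with at least one other.

```python
# Family resemblance: pairwise overlaps, but no common trait.
games = {
    "chess":     {"board", "competition", "winning"},
    "soccer":    {"ball", "competition", "winning", "physical"},
    "solitaire": {"cards", "board"},
    "catch":     {"ball", "physical"},
}

# No feature is common to all games: no necessary-and-sufficient property.
common = set.intersection(*games.values())
assert common == set()

# Yet each game shares at least one feature with some other game:
# "a complicated network of similarities overlapping and crisscrossing".
for name, features in games.items():
    assert any(features & other
               for other_name, other in games.items() if other_name != name)
```

The point of the sketch is structural: membership in the family is secured by overlapping similarities, not by a single defining trait.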

In other words, a “game” is an open concept. How you define a game depends on how you, as the observer, view the actual functioning of the concept. Weitz does note that it is possible to “close” an “open” concept in certain cases. The example he gives is that of “tragedy” and “Greek tragedy”. Tragedy is an open concept, whereas Greek tragedy is a closed concept. He notes:

Of course, there are legitimate and serviceable closed concepts in art. But these are always those whose boundaries of conditions have been drawn for a special purpose. Consider the difference, for example, between “tragedy” and “Greek tragedy.” The first is open and must remain so to allow for the possibility of new conditions, e.g., a play in which the hero is not noble or fallen or in which there is no hero but other elements that are like those of plays we already call “tragedy.” The second is closed. The plays it can be applied to, the conditions under which it can be correctly used are all in, once the boundary, “Greek,” is drawn. Here the critic can work out a theory or real definition in which he lists the common properties at least of the extant Greek tragedies.

Systems:

I was fascinated by the idea of open and closed concepts. I think this has use in Systems Thinking. Often, systems are depicted as real entities in the world that one can change or fix. This, to me, is the use of a closed concept in systems thinking. Systems, similar to art, should be viewed as an open concept. A system is entirely dependent upon who does the observing. If we have three observers, then there are at least three systems of the same phenomenon. To paraphrase Dominik Jarczewski, the question of whether something is a system is not a factual problem; it is a decision problem. How you define your system is entirely contingent upon your worldview, your biases and your experiential realities. The knowledge of what a system is, is not theoretical but practical. You can replace the word “art” in the previous section with “system”, and no meaning will be lost.

Peter Checkland, the eminent Systems Thinker, sheds more light on this. He noted that there will be an observer who gives an account of the world, or part of it, in systems terms: the principle which makes them coherent entities; the means and mechanism by which they tend to maintain their integrity; their boundaries, inputs, outputs, and components; and their structure. Finally, their behavior may be described in terms of inputs and outputs via state descriptions.

If you are trying to understand a system, you must not view it as a closed concept. You must view it as an open concept, and this means that you have to try to understand where the other person is coming from, and how the system is constructed by that person; in other words, how the functioning of the coherent whole affects that person. It is easy to fall into the mindset that systems can be viewed as closed concepts, where the purpose, the whole, and so on are definable and understandable by everybody. You might be tempted to say that the whole is more important than the parts, as if your whole is accepted by everybody. You might think that holism is the way to do systems thinking, and that reductionism is a terrible idea. When you embrace systems as an open concept, you realize that holism can be as bad as reductionism and reductionism can be as good as holism. All you have are abstractions. Even the holism you look at is a form of reductionism.

I will finish with a food-for-thought idea from Weitz, suggesting that systems thinking is a meta-discipline (replacing “art” with “system”):

If I may paraphrase Wittgenstein, we must not ask, What is the nature of any system x?, or even, according to the semanticist, What does “x” mean?, a transformation that leads to the disastrous interpretation of “system” as a name for some specifiable class of objects; but rather, What is the use or employment of “x”? What does “x” do in the language? This, I take it, is the initial question, the begin-all if not the end-all of any philosophical problem and solution. Thus, … our first problem is the elucidation of the actual employment of the concept of a system, to give a logical description of the actual functioning of the concept, including a description of the conditions under which we correctly use it or its correlates.

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning…

In case you missed it, my last post was Direct and Indirect Constraints:

[1] Art by Annie Jose

Direct and Indirect Constraints:

In today’s post, I am following on the theme of Lila Gatlin’s work on constraints and tying it in with cybernetics. Please refer to my previous posts here and here for additional background. As I discussed in the last post, Lila Gatlin used the analogy of language to explain the emergence of complexity in evolution. She postulated that less complex organisms such as invertebrates focused on D1 constraints to ensure that the genetic material is passed on accurately over generations, while vertebrates maintained a constant level of D1 constraints and utilized D2 constraints to introduce novelty, leading to the complexification of the species. Gatlin noted that this is similar to Shannon’s second theorem, which points out that if a message is encoded properly, then it can be sent over a noisy medium in a reliable manner. As Jeremy Campbell notes:

In Shannon’s theory, the essence of successful communication is that the message must be properly encoded before it is sent, so that it arrives at its destination just as it left the transmitter, intact and free from errors caused by the randomizing effects of noise. This means that a certain amount of redundancy must be built into the message at the source… In Gatlin’s new kind of natural selection, “second-theorem selection,” fitness is defined in terms very different and abstract than in classical theory of evolution. Fitness here is not a matter of strong bodies and prolific reproduction, but of genetic information coded according to Shannon’s principles.

The codes that made possible the so-called higher organisms, Gatlin suggests, were redundant enough to ensure transmission along the channel from DNA to protein without error, yet at the same time they possessed an entropy, in Shannon’s sense of “amount of potential information,” high enough to generate a large variety of possible messages.
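The idea that properly encoded redundancy protects a message against noise can be seen in a toy repetition code. This is a minimal sketch of the principle, not Shannon's actual construction; real codes achieve reliability far more efficiently.

```python
def encode(bits, r=3):
    # Repetition code: send each bit r times. The extra copies are
    # pure redundancy, added at the source before transmission.
    return [b for b in bits for _ in range(r)]

def decode(received, r=3):
    # Majority vote within each block of r copies corrects up to
    # (r - 1) // 2 flipped bits per block.
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]

message = [1, 0, 1, 1, 0]
sent = encode(message)        # 15 channel symbols carry 5 message bits
noisy = sent[:]
for i in (1, 5, 12):          # noise flips one copy in three of the blocks
    noisy[i] ^= 1

assert decode(noisy) == message  # the redundancy lets the receiver recover
```

The trade-off Gatlin borrows from Shannon is visible here: the channel carries three times as many symbols (less "variety" per symbol), and in exchange the message arrives intact.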

Gatlin viewed that complexity arose from the ability to introduce more variety while maintaining accuracy in an optimal mix, similar to human language, where new ideas constantly emerge while the underlying grammar, syntax, etc. are maintained. As Campbell continues:

In the course of evolution, certain living organisms acquired DNA messages which were coded in this optimum way, giving them a highly successful balance between variety and accuracy, a property also displayed by human languages. These winning creatures were the vertebrates, immensely innovative and versatile forms of life, whose arrival led to a speeding-up of evolution.

As Campbell puts it, vertebrates were agents of novelty. They were able to revolutionize their anatomy and body chemistry. They were able to evolve more rapidly and adapt to their surroundings. The first known vertebrate is a bottom-dwelling fish that lived over 350 million years ago. It had a heavy external skeleton that anchored it to the floor of the water body. These fish evolved such that some of the spiny parts of the skeleton grew into fins. They also developed skulls with openings for sense organs such as eyes, nose, and ears. Later on, some of them developed limbs from the bony supports of fins, leading to the rise of amphibians.

What kind of error-correcting redundancy did the DNA of these evolutionary prize winners, the vertebrates, possess? It had to give them the freedom to be creative, to become something markedly different, for their emergence was made possible not merely by changes in the shape of a common skeleton, but rather by developing whole new parts and organs of the body. Yet this redundancy also had to provide them with the constraints needed to keep their genetic messages undistorted.

Gatlin defined the first type of redundancy, one that allows deviation from equiprobability, as the ‘D1 constraint’. This is also referred to as a ‘governing constraint’. The second type of redundancy, one that allows deviation from independence, was termed by Gatlin the ‘D2 constraint’, and this is also referred to as an ‘enabling constraint’. Gatlin’s speculation was that vertebrates were able to use both D1 and D2 constraints to increase their complexification, ultimately leading to highly cognitive beings such as our species, Homo sapiens.
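A rough computational sketch of these two quantities, following the descriptions above: D1 as the shortfall of the single-symbol entropy from equiprobability, and D2 as the further drop when symbol-to-symbol dependence (here, digram structure) is taken into account. The function names and the example sequence are my own; Gatlin's actual formulation in her work may differ in detail.

```python
from collections import Counter
from math import log2

def entropy(counts):
    # Shannon entropy (bits per symbol) of an empirical distribution.
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def gatlin_constraints(seq, alphabet):
    # D1: divergence from equiprobability. A biased symbol
    # distribution falls short of the maximum log2(len(alphabet)).
    h1 = entropy(Counter(seq))
    d1 = log2(len(alphabet)) - h1
    # D2: divergence from independence, estimated as the drop from the
    # single-symbol entropy to the conditional entropy of digrams.
    pairs = Counter(zip(seq, seq[1:]))
    total = sum(pairs.values())
    firsts = Counter(a for a, _ in pairs.elements())
    h_cond = -sum(c / total * log2(c / firsts[a])
                  for (a, _), c in pairs.items())
    d2 = h1 - h_cond
    return d1, d2

# A perfectly alternating sequence: symbols are equiprobable (D1 = 0)
# but each symbol fully determines the next (maximal D2).
d1, d2 = gatlin_constraints("ABABABAB", "AB")
```

For this extreme example, D1 comes out to 0 and D2 to a full bit: all of the sequence's redundancy lives in the dependence between neighboring symbols rather than in symbol frequencies.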

One of the pioneers of cybernetics, Ross Ashby, also looked at a similar question. He was examining the biological learning mechanisms of “advanced” organisms. Ashby identified that for less complex organisms, the main source of regulation is their gene pattern. For Ashby, regulation is linked to viability or survival. He noted that less complex organisms can rely on their gene pattern alone to continue to survive in their environment; they are adapted because their conditions have been constant over many generations. In other words, an organism such as a hunting wasp can hunt and survive based purely on its genetic information. It does not need to learn to adapt; it can adapt with what it has. Ashby referred to this as direct regulation. With direct regulation, there is a limit to the adaptation. If the regularities of the environment change, the hunting wasp will not be able to survive, because it relies on those regularities for its survival. Ashby contrasted this with indirect regulation, through which one is able to amplify adaptation. Indirect regulation is the learning mechanism that allows the organism to adapt. A great example of this is a kitten. As Ashby notes:

This (indirect regulation) is the learning mechanism. Its peculiarity is that the gene-pattern delegates part of its control over the organism to the environment. Thus, it does not specify in detail how a kitten shall catch a mouse, but provides a learning mechanism and a tendency to play, so that it is the mouse which teaches the kitten the finer points of how to catch mice.

The learning mechanism in its gene pattern does not directly teach the kitten to hunt mice. Rather, chasing mice and interacting with them teaches the kitten how to catch them. As Ashby notes, the gene pattern is supplemented by the information supplied by the environment; part of the regulation is delegated to the environment.

In the same way the gene-pattern, when it determines the growth of a learning animal, expends part of its resources in forming a brain that is adapted not only by details in the gene-pattern but also by details in the environment. The environment acts as the dictionary. While the hunting wasp, as it attacks its prey, is guided in detail by its genetic inheritance, the kitten is taught how to catch mice by the mice themselves. Thus, in the learning organism the information that comes to it by the gene-pattern is much supplemented by information supplied by the environment; so, the total adaptation possible, after learning, can exceed the quantity transmitted directly through the gene-pattern.

Ashby further notes:

As a channel of communication, it has a definite, finite capacity, Q say. If this capacity is used directly, then, by the law of requisite variety, the amount of regulation that the organism can use as defense against the environment cannot exceed Q. To this limit, the non-learning organisms must conform. If, however, the regulation is done indirectly, then the quantity Q, used appropriately, may enable the organism to achieve, against its environment, an amount of regulation much greater than Q. Thus, the learning organisms are no longer restricted by the limit.
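The law of requisite variety that Ashby invokes here can be put in a small numeric sketch (my own illustration, under the simplifying assumption of equally likely disturbances): a regulator with R distinct responses, facing D distinct disturbances, cannot reduce the variety of outcomes below D / R; in bits, the residual uncertainty is at least log2(D) - log2(R).

```python
from math import ceil, log2

def min_outcome_variety(disturbances, regulator_actions):
    # Best case: each regulator action neutralizes a distinct group of
    # disturbances, so at least ceil(D / R) distinct outcomes remain.
    return ceil(disturbances / regulator_actions)

D, R = 8, 2                      # 8 disturbances, a 2-action regulator
residual = min_outcome_variety(D, R)
assert residual == 4             # variety is only halved, not removed

# In log terms: H(outcome) >= log2(D) - log2(R), i.e. "only variety
# can destroy variety" -- regulation is bounded by the regulator's
# own variety (Ashby's capacity Q, in the passage above).
assert log2(residual) >= log2(D) - log2(R)
```

Direct regulation is stuck at this bound; Ashby's point is that indirect regulation escapes it by recruiting additional variety from the environment itself.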

As I look at Ashby’s ideas, I cannot help but see similarities between the D1/D2 constraints and direct/indirect regulation, respectively. Indirect regulation, similar to enabling constraints, helps the organism adapt to its environment by connecting things together. Indirect regulation has a second-order nature to it, such as learning how to learn. It works by being open to possibilities when interacting with the environment. It brings novelty into the situation. Similar to governing constraints, direct regulation focuses only on the accuracy of the ‘message’; nothing additional is possible, and no amplification can occur. Direct regulation is hardwired, whereas indirect regulation is enabling. Direct regulation is context-free, whereas indirect regulation is context-sensitive. What the hunting wasp does is entirely reliant on its gene pattern, no matter the situation, whereas what a kitten does is entirely dependent on the context of the situation.

Final Words:

Cybernetics can be looked at as the study of possibilities, especially why, out of all the possibilities, only certain outcomes occur. There are strong undercurrents of information theory in cybernetics. For example, in information theory, entropy is a measure of how many messages might have been sent, but were not. In other words, if there are many possible messages available and only one message is selected, the selection eliminates a great deal of uncertainty; this represents a high-information scenario. Indirect regulation allows us to look at the different possibilities and adapt as needed. Additionally, indirect regulation allows retaining the successes and failures and the lessons learned from them.
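The "messages that might have been sent, but were not" view can be made concrete with a sketch, assuming equally likely messages: picking one message out of N alternatives resolves log2(N) bits of uncertainty, so richer message sets carry more information per selection.

```python
from math import log2

def information_bits(n_possible_messages):
    # Information gained by selecting one message from n equally
    # likely alternatives: the uncertainty that selection eliminates.
    return log2(n_possible_messages)

assert information_bits(2) == 1.0     # a coin flip's worth
assert information_bits(256) == 8.0   # one byte's worth

# The more messages that *could* have been sent but were not,
# the more informative the one that was.
assert information_bits(256) > information_bits(2)
```

This is the simplest (equiprobable) case of Shannon entropy; the entropy formula generalizes it to unequal message probabilities.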

I will finish with a great lesson from Ashby to explain the idea of the indirect regulation:

If a child wanted to discover the meanings of English words, and his father had only ten minutes available for instruction, the father would have two possible modes of action. One is to use the ten minutes in telling the child the meanings of as many words as can be described in that time. Clearly there is a limit to the number of words that can be so explained. This is the direct method. The indirect method is for the father to spend the ten minutes showing the child how to use a dictionary. At the end of the ten minutes the child is, in one sense, no better off; for not a single word has been added to his vocabulary. Nevertheless, the second method has a fundamental advantage; for in the future the number of words that the child can understand is no longer bounded by the limit imposed by the ten minutes. The reason is that if the information about meanings has to come through the father directly, it is limited to ten-minutes’ worth; in the indirect method the information comes partly through the father and partly through another channel (the dictionary) that the father’s ten-minute act has made available.

Please maintain social distance, wear masks and take vaccination, if able. Stay safe and always keep on learning…

In case you missed it, my last post was D1 and D2 Constraints: