A Tale of a Thousand Models:

In today’s post, I am further exploring the notion of models and mental models. We often speak of mental models as though they are neat packages of knowledge stored somewhere in the mind. These models are typically treated as internal blueprints and as simplified representations of the world that help us navigate and make decisions. But what exactly do we mean when we call something a model? And are we always speaking about the same kind of thing?

The term model, in both technical and informal contexts, carries more ambiguity than we often acknowledge. In classical cybernetics, W. Ross Ashby gave the concept a central role. For Ashby, a model was a representation that could simulate the behavior of a system. A good regulator, he argued, must contain a model of the system it seeks to control. This model did not need to be a literal image or a complete mirror. It simply needed to have the right kind of functional correspondence with just enough structure to predict and act upon.

Ashby’s definition is rigorous and functional. The model need not share the same physical form or medium as the system it regulates. What matters is not material resemblance but structural correspondence across selected variables. The model must preserve the relations and transformations that enable viable regulation. Ashby called this ‘isomorphism’. This isomorphism does not demand total replication. It requires that the model preserve only those relations necessary for viable control. This is the basic premise of First Order Cybernetics.

This isomorphic correspondence is what makes the model useful for regulation. The regulator can manipulate the model, run it forward, test interventions, explore possibilities, and trust that the results will map back to the actual system. The model becomes a kind of structural analogue: a way of capturing pattern without requiring material similarity.
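
To make this concrete, here is a minimal sketch in Python of a regulator that contains a simplified model of the system it controls. The names (ThermalModel, choose_action) and the toy thermal dynamics are my own illustrative assumptions, not Ashby's formalism; the point is only that the regulator tests interventions on its model rather than on the world, and keeps just enough structure to act.

```python
# A minimal sketch of Ashby's idea that a good regulator contains a model of
# the system it regulates. ThermalModel, choose_action, and the toy dynamics
# are hypothetical illustrations, not Ashby's own formalism.

class ThermalModel:
    """A simplified stand-in for the regulated system: only the variables and
    transformations needed for control (how temperature responds to heating)."""
    def __init__(self, heat_gain=0.8, leak_rate=0.1):
        self.heat_gain = heat_gain
        self.leak_rate = leak_rate

    def predict(self, temperature, heater_power, outside_temp):
        """Run the model forward one step: what would happen if we acted this way?"""
        return (temperature
                + self.heat_gain * heater_power
                - self.leak_rate * (temperature - outside_temp))


def choose_action(model, temperature, outside_temp, target, powers=(0.0, 0.5, 1.0)):
    """The regulator tests interventions on the model, not on the world,
    and picks the one whose predicted outcome best matches the goal."""
    return min(powers,
               key=lambda p: abs(model.predict(temperature, p, outside_temp) - target))


model = ThermalModel()
print(choose_action(model, temperature=17.0, outside_temp=5.0, target=21.0))
```

Nothing in the sketch resembles the room itself; only the relations needed for viable regulation are preserved.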

When we look deeper, something about this view of models can feel distant. It risks separating the observer from the observed, the knower from the known. It tends toward a view of knowledge that is separated from lived experience. What does it mean for an organism to contain a model of its world, if that organism is not a computer but a living, breathing being?

This is where the Thousand Brains Hypothesis (TBH) offers a helpful contrast. Jeff Hawkins, in developing this hypothesis, suggests that intelligence arises not from a single unified model of the world, but from many partial models working in parallel. Here, however, Hawkins seems to use ‘model’ in a markedly different sense than Ashby’s isomorphic structures. For Hawkins, a cortical column’s model is not a representation that stands apart from experience but a learned pattern of prediction embedded within sensorimotor engagement itself.

Each cortical column builds what Hawkins calls a model of objects in the world, but this model is constituted by the column’s capacity to predict sensory sequences as the body moves through space. The column does not store a picture of a coffee cup. Instead, it develops expectations about what sensations will follow from particular movements when encountering cup-like patterns. Some of these may be visual, some tactile, while others may be of a different sense altogether. The model is not a static thing, but a dynamic process. It is a way of being attuned to specific sensorimotor regularities.
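
A rough sketch may help here. In the toy Python illustration below (the class name and the simple counting rule are my own simplifying assumptions, far cruder than what Hawkins proposes), a column's "model" is nothing more than accumulated expectations about which sensation tends to follow which movement; no picture of the cup is stored anywhere.

```python
# A minimal sketch of a cortical column as a sensorimotor predictor, in the
# spirit of the Thousand Brains Hypothesis. The class and its counting-based
# learning rule are hypothetical illustrations: the column stores no image of
# the cup, only expectations about which sensation tends to follow which movement.

from collections import defaultdict, Counter

class ColumnModel:
    def __init__(self):
        # (current sensation, movement) -> counts of sensations that followed
        self.expectations = defaultdict(Counter)

    def learn(self, sensation, movement, next_sensation):
        """Reinforce the sensorimotor regularity actually experienced."""
        self.expectations[(sensation, movement)][next_sensation] += 1

    def predict(self, sensation, movement):
        """What does the column expect to sense next, given how the body moves?"""
        counts = self.expectations[(sensation, movement)]
        return counts.most_common(1)[0][0] if counts else None


column = ColumnModel()
column.learn("rim", "slide_down", "handle")
column.learn("rim", "slide_down", "handle")
column.learn("rim", "slide_down", "curved_side")
print(column.predict("rim", "slide_down"))  # -> "handle": an expectation, not an image
```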

While Hawkins retains the term “model,” his usage stretches its meaning. These patterns may not be models in the traditional sense at all. When we say a cortical column builds a model or learns expectations, we may still be trapped in representational thinking. The cortical column does not store information about objects. It maintains patterns of connectivity shaped by experience. These patterns do not represent the world per se. Instead, they enact a way of being responsive to it. A column’s knowledge of a coffee cup is not a stored description, but a readiness to engage with cup-like affordances. This is the key nuance I would like to offer.

This view of modeling resonates with Heidegger’s phenomenological understanding of being-in-the-world. Heidegger once noted that a hammer is not first known through its shape or composition, but through its use. It becomes present to us as ready-to-hand, as something we know by doing. Similarly, a cortical column knows an object by interacting with it, not by storing a detached image of it. As Heinz von Foerster once said, if you want to see, learn how to act.

In earlier reflections, I explored the limitations of treating mental models as internal representations. When we interact with a system or object, we are not retrieving stored pictures. Instead, we are drawing upon a history of lived engagement. Our orientation is not merely cognitive, but bodily and situated. The notion of a model here becomes something that reveals itself through action, not inspection.

The Thousand Brains Hypothesis reinforces this idea by showing how perception and prediction are distributed. A single cortical column may only know part of an object in a specific sensory dimension, but through movement and integration with other columns, it participates in a kind of collective intelligence. There is no master map but only partial perspectives constantly updating and coordinating with one another. The columns are not comparing models. They are participating in a dynamic process of mutual constraint and coordination.

This is what Maturana and Varela would recognize as structural coupling. Each column’s activity is shaped by its coupling with other columns, with the body, and with the environment. The result is a network of mutual specification rather than a collection of independent representations.
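
To make the idea of mutual constraint a little more tangible, here is a deliberately simple Python sketch in which each "column" holds only a partial set of hypotheses drawn from its own sensory stream, and the columns narrow one another down until a shared interpretation remains. The example data and the bare intersection step are illustrative assumptions; the Thousand Brains proposal describes a richer, ongoing voting process.

```python
# A toy sketch of distributed coordination among partial models: each "column"
# entertains its own hypotheses from its limited sensory stream, and the columns
# constrain one another by intersecting those sets. The data and the single
# intersection step are hypothetical simplifications of the voting process.

def coordinate(column_hypotheses):
    """Mutual constraint: keep only interpretations compatible with every column."""
    return set.intersection(*column_hypotheses)

touch_column  = {"coffee cup", "bowl", "vase"}     # feels a curved, hollow surface
vision_column = {"coffee cup", "mug", "stapler"}   # sees a small handled object
motor_column  = {"coffee cup", "mug", "bowl"}      # graspable with one hand

print(coordinate([touch_column, vision_column, motor_column]))  # -> {'coffee cup'}
```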

Intelligence, in this view, emerges not from the integration of discrete models but from the ongoing attunement of multiple sensorimotor streams. This attunement is guided not by accuracy but by viability. Viability is the organism’s capacity to maintain its structure and continue its pattern of living. It is a common misunderstanding that viability requires ever-greater accuracy. The external world presents more complexity than any cognitive system can represent in full. The response, shaped by both constraint and energetic efficiency, is not to build exhaustive models but to maintain abstractions that are good enough. These are not symbolic summaries, but embodied dispositions formed through recurrent interaction.

This is not a flaw, but a feature of adaptive beings. Cognitive structures are not designed to capture the world exhaustively, but to filter it selectively. The principle of structural coupling rests on repetition. It rests on the organism’s ability to reinforce useful patterns over time. What endures are not accurate representations but habits of orientation that have proven viable. Cortical columns do not construct truthful depictions of the world. They cultivate ways of engaging that preserve continuity and coherence within the organism’s domain of living.

This stands in contrast to the classical view where the model is assumed to be singular, coherent, and representational. The model is not something we hold apart from the world, but something we become a part of through interaction with it*. This framing aligns with the constructivist view that organisms are informationally closed. An organism does not passively receive information from an objective world. It brings forth a world through its own structural coupling. What we call a model, then, is not a mirror of external reality but a structure of engagement, a dynamic fit between the organism and its environment.

The language of structure is important. Rather than thinking of models as things organisms have, we might think of them as patterns organisms are. A cortical column’s responsiveness to a coffee cup is not something it possesses but something it enacts. The pattern of connectivity is not a representation of the cup, but a way of being coupled to the cup’s affordances. Whether we call these models, structures of prediction, or patterns of skilled engagement, what unites them is that they are not static descriptions. They are emergent dispositions, formed through repeated interaction. Each term foregrounds a different aspect such as structure, process, or habit. However, they all point to intelligence as enacted rather than mirrored.

This is not to dismiss Ashby’s insight. His use of the term model was never about mirroring for its own sake. It was about enabling viable regulation and constructing just enough structure to explain and act. Perhaps it is more accurate to think of such models as habits of expectation. They are not representations but anticipations. They do not describe the world as it is but orient us toward what is likely to come. They are pragmatic, situated, and always in motion. Or perhaps the term model itself is too burdened. What we call a model may be better understood as a form of skilled attunement. It becomes a pattern of responsiveness that is cultivated through history, shaped by constraints, and sustained by viability. The cortical column does not model the coffee cup. It simply becomes responsive to it.

This reframing opens up deeper questions. If intelligence is not the construction of better representations but the cultivation of more viable engagements, what does this mean for artificial intelligence? Can machines learn to be responsive rather than simply predictive? Can they participate in the world, rather than map it?

The Thousand Brains Hypothesis, interpreted through the lens of structural coupling and lived engagement, suggests that intelligence emerges not from central models but from richly distributed interactions. It implies that robust intelligence does not require more accurate representations, but more diverse ways of being coupled to the world.

To model, in this deeper sense, is to engage. It is to live into a world that reveals itself not all at once, but gradually through action, adjustment, and care. Perhaps the real power of what we call a model lies not in what it represents, but in what it enables us to do. Or, more accurately, in what it allows us to become.

Final Words:

This shift from models as internal representations to models as patterns of skilled engagement challenges deeply held assumptions about knowledge, cognition, and intelligence. It is not merely a technical redefinition. It is a philosophical turning. If cognition is not about mirroring the world but about maintaining a viable relation to it, then intelligence becomes a matter of fitting rather than mapping. It is not about what we store, but about how we respond. Even this post is not free of modeling. It draws distinctions, frames structures, and builds conceptual pathways. But it does so with an orientation toward viability, not toward finality. The second order reflexive nature of this inquiry (modeling the limits of models) underscores the point. Intelligence is not found in having the final answer, but in remaining open to reframing, recoupling, and reengaging as the world shifts around us.

This reframing also casts new light on the ambitions of artificial intelligence. If intelligence is not the construction of better representations but the cultivation of more viable engagements, then it becomes clear that AI systems, as currently conceived, may be fundamentally limited. The limitation is not merely technical. It is existential. Intelligence, in this deeper sense, emerges from embodied interaction, historical coupling, and recursive responsiveness to a world that matters. Machines that manipulate symbols or detect statistical regularities may approximate aspects of intelligent behavior, but they remain ungrounded in the affective, bodily, and experiential dynamics that make living cognition what it is. Responsiveness is not a product of prediction alone. It emerges from vulnerability, concern, and the need to maintain coherence amid complexity.

Without changes in their environment shaping how they persist, machines may simulate participation, but they do not truly engage. They act without inhabiting. They process without perspective. Perhaps this is one of the main reasons artificial intelligence may fall short of achieving sentience. It relies on static internal representations and lacks the embodied, experiential living necessary for understanding, concern, or care. Without lived coupling, there may be behavior, but not presence. There may be processing, but not perspective.

As we navigate complexity, my hope is that this reframing offers both humility and hope. Humility, because it reminds us that our understanding is always partial and situated. Hope, because it suggests that intelligence is not a fixed capacity, but a living process that is co-created and transformed through our engagements with the world and with each other in the social realm. I will finish with an excellent quote from Di Paolo, Rohde, and De Jaegher:

Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems are simply not in the business of accessing their world in order to build accurate pictures of it. They participate in the generation of meaning through their bodies and action often engaging in transformational and not merely informational interactions; they enact a world.

Always keep learning…

* Hat tip to Heinz von Foerster’s wonderful quote. Am I apart from the universe or am I a part of the universe?



7 thoughts on “A Tale of a Thousand Models:”

  1. “Am I apart from my model or am I a part of my model?” And, “a model is not the domain; but the structure of the model accounts for its usefulness”.

    Years ago, at a conference about models at the Delft Design Department, there was an exhibition of many models, like the structure of the health system in New York. At the end of a lecture, I got the opportunity to ask a question.

    I asked: “Every model has a legend. The model is to the legend as reality is to…?” The professor was taken by surprise and, being a lecturer, said it was a good question and gave a lengthy answer, basically saying: “I don’t know”. At the very end, however, he gave a metaphorical expression that contained the answer. I should look it up.

    In my opinion the answer is the model maker. The uses of the model account for the model. Like a ship is a model of the sea. It took me a long time to understand that a ship is also a model of the sea, because the structure of a ship accounts for its usefulness. A ship on a river, a coaster, a shipping vessel, a row boat… in each case the use of the ship in a certain domain “determines” its shape. In fact, the words “shape” and “schip” have the same source: to make.

    In the same way, any model models the use of the user using the model. This also explains why many ICT projects fail: because they’re built by somebody who is working from another domain for somebody who has other uses of the model – mainly control – than the using user. In my career as an information analyst, I preferred users building their own “prototype” first and then redoing it.


  2. There’s another interesting aspect of models. They come in the shape of beliefs, as models shape perceiving and perceiving shapes (thousands of) models. You are (also) a model of your domain and the structure accounts for the usefulness. That’s how many models came to be based on a two-by-two matrix: as above, so below; and as forward, so backwards (or left and right).

    One cannot perceive without applying a model, modelled by one’s perceptions. This seems to me to be in accordance with von Foerster’s perceiving-as-action-learning (I’m a big fan of action learning!) and Maturana’s “model” of a being being “informationally closed”. A being is neither open nor closed; it’s “hidden” behind a “hide”, or skin, or (cell) wall.

    (By the way, in my MBA thesis on information system design, I showed that information doesn’t exist or, better, depends on the perceiver of a situation. It’s a process of informing, asking questions. And when data are confusing you, it might be called information. I also didn’t (and still don’t) believe in communication.

    Also: the word “in-form-ation” seems to be a metaphor about how shape or form “inform”. )


  3. To stretch the metaphor of a model even further, perceptual organs are models of an organism’s domain too. We need eyes to see the colour of food, but bats don’t. And bees see polarised light…

    And any brain models the models of the perceptual organs, against each other and the internal sense “organs”, like stimuli from internal organs and tissues, proprioception (positions of body parts) and the sense of the spatial orientation of a body. You’ve got three or four times more neurons working from the back of your head to your eyes than the other way around. Brains produce “expectations of the future”. I would even go so far as to state that every organ organically models both itself and the conditions it’s working on. That’s why we can speak of a heart or liver or guts as “having a mind of their own”.

    You could say, senses “make sense” and organs are like-minded. (By the way, I also like the use of “mind” in “mind the gap”).

    It has been shown that the “independent” states of an organ or organism under a Markov blanket model the environment one is dependent on. Yes, (in)dependence is a paradox too.


  4. A really nice post. I’m wondering if you’ve read about Ruth Millikan’s unitrackers from her book Beyond Concepts. Based on my (amateur) understanding of unitrackers and what you’ve posted here, I’d say unitracker = cortical minicolumn = model = pattern recognition/prediction unit. Cortical column = set of associated unitrackers.

    I would also like to point out there is a representation of an active (spiking) model/unitracker, but that representation is generated first in L5 cells of the column and then copied to the thalamus, making the representation available for tracking by higher order unitrackers/models. The L5 cells and thalamic cells are acting as semantic pointers, as per Chris Eliasmith.



  5. I find a beautiful harmony between the constructivist view that we bring forth the world and the Buddha’s doctrine of dependent origination (pratityasamutpada). Both emphasize that reality doesn’t arise in isolation. It is relational. It is conditional. It is co-generated, i.e., reality isn’t passively received but actively constructed.

    “When this exists, that comes to be; when this ceases, that ceases.”

    Both seem to question the common view that things have independent existence. Both point out the fluid emergent nature of experience. And, both seem to point to the root cause of suffering (or chronic frustration) as our tendency to cling to fixed views of reality.

    I think Francisco Varela saw this commonality. He explored it with the Dalai Lama in the “Mind and Life” dialogues. Together with Evan Thompson and Eleanor Rosch, Varela wrote “The Embodied Mind” around the themes of your post.

    One significant influence of mine, J. Krishnamurti, had a constant refrain “The world is me and I am the world,” which sort of rhymes with von Foerster’s “Am I apart from the universe or am I a part of the universe?”

