Oh, humans. We're so great, so smart, with our huge brains and world-dominating intellectual power. We can see things nobody has seen before, and manipulate things that no other animal can. We're just amazing, aren't we? Except we can't even see what's in front of us right now. No, really:

The "robe"

Take a look around. What are you holding in your hands? A keyboard and mouse? A phone? A tablet? Are you wearing a T-shirt? A bathrobe? Just your skin? None of these actually exist. In the real world, there is no "phone" or "robe". All of it is in our minds. They are cognitive constructs that help us make sense of the world. You don't want to be looking at complex organized cotton fibers wound into threads and layers; you want to see "robe". Because a "robe" occupies just a tiny part of your cognition, and it manifests itself as "dry", "warm", "soft". It has mass, durability, size. No, not the robe, but the "robe" - the imaginary construct in your mind that is a proxy for the real-world thing.

The actual things that might constitute the robe are a lot of water molecules passing in and out of it, some of your skin cells and body hairs, yeast, bacteria, fungi, and of course the cotton. All of those are also just proxies for complex networks of molecules arranged in different patterns, which in turn are self-organized groups of hadrons that temporarily hold this shape, while others are actively being devoured by bacteria and some atoms decay.

There's a lot going on there.

The "robe" is a high-level user interface designed to help a biological organism navigate reality without needing to solve quantum field equations every time it wants to get comfy.

Human modelling

The Interface Theory of Perception illustrates this with gadgets, but I like a more relatable example that anyone can feel and understand without being a hardware engineer.

This ability isn't something that humans are the best at. Quite the contrary - we've outgrown some of our need for it. Where it's really fascinating is in much smaller brains, like those of a jumping spider or a wolf spider. Still, we create quite complex models of everyday things, models that include:

  • Affordance (function) - what's the interaction with it? Can you drink out of it? Can you wear it? Can you step on it? This allows us to create a search index in our minds, and whenever we need to make a decision, we quickly come up with the correct thing.
  • Prototype - a bottle of beer is like any other bottle of beer of the same brand. They're indistinguishable by our model. The bottle of beer is a prototype that we use to substitute the actual object - a particular single bottle of beer.
  • Permanence and Identity - is the object still there or has it split or disappeared? Does it do that? What happens when you change a tyre on your car? Is it the same car?
  • Position - the money in my wallet. Even when I move the wallet, the money stays tied to it.

... while smaller-brained animals can't afford all of that.
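To make the contrast concrete, here's a toy sketch in Python (all names hypothetical; an illustration, not a claim about cognition) of such an object model, with affordances acting as the search index described above:

```python
from __future__ import annotations
from dataclasses import dataclass

# A toy sketch (names hypothetical) of the object model described above:
# the prototype stands in for any individual instance, affordances act
# as a search index, and position ties an object to its container.
@dataclass
class ObjectModel:
    prototype: str                   # "bottle of beer" - any one will do
    affordances: set[str]            # what can you do with it?
    persists: bool = True            # object permanence
    contained_in: str | None = None  # positional binding ("in my wallet")

mind = [
    ObjectModel("bottle of beer", {"drink"}),
    ObjectModel("robe", {"wear"}),
    ObjectModel("money", {"pay"}, contained_in="wallet"),
]

def find_for(action: str) -> list[str]:
    """The affordance 'search index': given a need, fetch the right thing."""
    return [o.prototype for o in mind if action in o.affordances]

print(find_for("wear"))  # -> ['robe']
```

A spider's model, by contrast, would keep little more than the position and a one-bit affordance.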

Non-human

You probably learned, just like me, that tool use is the trait that makes humans special. Then the claim shifted to "tool manufacture". Do you see this as, specifically, the ability to understand the functional traits of an object - its affordance? That gives you a clue that not all animals have this detailed a model of the world.

Spiders such as Portia or Hyllus are fascinating because they're solitary animals that don't rely on chemical signals from a hive, and their lives depend on navigating 3D space. But evolution couldn't afford them a brain bigger than a grain of sand. Pathetic. Or is it?

What they do with that grain of sand is truly fascinating. They manage to create barebones but expansive models of the world around them. They don't care that something is a stick or a leaf, but they remember the path. Where we build complex, colorful models of objects, with prototypes in our minds, knowing how they feel, what color they are, how they behave when dropped, a spider will just know its position in space. It knows an object's affordance for it (place to walk, place to hide, prey, predator). And it has object permanence.

Beyond objects

According to anthropologists like Yuval Harari, the success of humans on the planet came from developments beyond the modelling of objects: we moved to a framework where an object no longer needed to exist at all for us to build a mental model of it - pure abstraction.

We started abstracting away everything, from credit to organizations - abstractions that work as proxies. For example, right now you can buy a pig without any pig existing at all, or moving anywhere. It's a right to a promise of a pig, somewhere. Imagine that.

We have modelled and categorized everything there is to model and categorize. Even as I write this, before Zola turns it into an HTML page, there is an ample definition of ways to categorize things (taxonomies). We've modelled and classified behavior, character, reactions, societies, economies, etc.

Where it goes wrong

The reality is that each of these models is just a proxy for the real world. They are simplified, reduced.

Whenever they misbehave, and we see an outcome our model didn't predict, a lot of us question reality instead of the model. This doesn't happen for simple things, like a cup not shattering when dropped - no, for that we adjust the model: the cup is harder than I thought. But when it comes to complex models that took a lot of effort to build, it is reality that gets abolished when it disagrees with the model. New models are built on top of the existing one, and facts are ignored if they prevent the new models from being built. It's like the sunk cost fallacy, where the cost is having built the complex model and defended it.

Look at history. Geocentrism was the dominant model for centuries, through the middle ages. But geocentrism didn't describe observations very well. It was only somewhat accurate, and only after additional models - epicycles - were bolted on to explain the discrepancies. Even then, better instruments produced observations that didn't agree with the model, and eventually the heliocentric model, once refined, offered far more accuracy.

The only correct way to develop and use models is to test them, and when observations contradict the model, to question the model rather than ignore the discrepancies.

Unfortunately, that's not what we do. I've seen social media propagate shame around acknowledging being wrong, and many people just won't ever do it. Facts get ignored, cognitive dissonance rises, and models and abstractions turn into dogma at the base of cults.

Incentive for honesty

In a healthy intellectual model, admitting to a discrepancy is an "Update". It’s how the model gets better. But in our current social environment, we have inverted the incentives. We have created a culture where the "Cost of Update" is too high. Admitting you were wrong isn't seen as a step toward truth. It's seen as a moral failure. Consequently, people double down on their "Epicycles" - their sarcasm, their diversions, and their outright lies, to protect a model that was broken from the start.

In Philip Tetlock's book "Superforecasting", a superforecaster is described as someone who doesn't deal in certainties, and who constantly folds new information into their existing models of reality, updating them and making them more accurate. They win because they remain grounded in reality. If you're doing anything other than this, you are living in a fantasy.
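That update discipline can be sketched as elementary Bayesian updating - a hypothetical toy in Python, not anything from Tetlock's book. Start with a confident prior, and let each contradicting observation revise it instead of defending it:

```python
def update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """One Bayesian update: posterior belief after a single observation."""
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

belief = 0.9  # strong initial confidence that "my model is right"
# Three observations the model explains poorly:
# P(observation | model right) = 0.2, P(observation | model wrong) = 0.8
for _ in range(3):
    belief = update(belief, 0.2, 0.8)
    print(round(belief, 3))  # belief drops: 0.692, then 0.36, then 0.123
```

The dogmatic alternative - leaving `belief` at 0.9 and explaining each observation away - is exactly the epicycle move.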

We are living in the "Idiocracy" trial scene. We weigh arguments not by their predictive power or their alignment with reality, but by their delivery. If you speak with nuance, if you concede a point to find a deeper truth, or if you refuse to use sarcasm as a primary language, you are "talking like a girl" - you are mislabeled as dishonest.

We have reached a point where the User Interface of our social discourse is so bloated with ego and dogma that we can no longer see the hardware of reality beneath it. Like the geocentrists of old, we will continue to add epicycles of sarcasm and "gotchas" to our broken models until the cost of maintaining the lie becomes higher than the cost of admitting we were wrong. Until then, the plants will keep dying, and we'll keep wondering why the electrolytes aren't working.