Total cognitive space

The way we architect what we do influences the total space of possibilities that we can reach in that activity. It is critical to understand this when dealing with our expectations regarding AI, organizational changes and, in general, the things we do and love.


For a while now, the idea has been growing on me that artificial intelligence systems (AIs for short) won't have a consciousness like ours, no matter the technological advancement, unless they are embodied in physical vessels that are a perfect simulation of our own bodies. And perhaps not even then.

Not that it matters a lot, I guess. But as I have been mulling this over every once in a while, some interesting concepts have emerged and I feel the need to write about them, as I believe they may help us think about organizational models and how they face the challenges we all have to tackle in modern times.

There is much research, and widespread commercial interest, in getting AIs to predict human behaviour in a given scenario. What will we buy this spring season? When will we switch telephone companies? Which products do we prefer, and how can we discover new things that we like? The search for artificial systems that can discern the right clues to our behaviour, and more broadly systems that can discern why things work so we can improve our response to them, is an important one, and it pays to step back and look at the problem from a broad perspective. The thing is, when we build systems meant to understand a part of something, they need to be able to represent that part within what their total addressable cognitive space can hold. And too often we are trying to solve problems with systems whose cognitive spaces are inadequate for the task at hand.

What is this cognitive space that I am talking about? Let me explain it with some examples. Take for instance an image of one pixel, where the pixel may be black or white. That image knows no more than two possible realities. A human observer may have a reaction to an image that is just black or white, and start assigning meaning to it, each one according to their own experiences, their emotions, reflections, etc. But the image itself cannot represent anything other than just black or just white.

One white pixel, one black pixel. Not much to say.

If we make the image a bit more complex, for example 16 pixels by 16 pixels, where each pixel can be one of 4 colours, we have an image that can represent more things. If each pixel has the capacity to represent many more colours, the image keeps growing its representational space.

Left: a 16x16 pixel image with 4 possible colours per pixel. Right: same sized image but with more colours per pixel; it can aspire to represent Diego Velázquez now.
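If you want to put rough numbers on the two images above, a quick back-of-the-envelope count does the trick. The pixel sizes and colour depths below are just illustrative assumptions, not the properties of any particular image format:

```python
# Counting how many distinct "realities" each image can represent.
one_pixel_bw = 2 ** 1                        # a single black-or-white pixel
small_4_colours = 4 ** (16 * 16)             # 16x16 pixels, 4 colours per pixel
small_true_colour = (2 ** 24) ** (16 * 16)   # same size, 24-bit colour per pixel

print(one_pixel_bw)                          # 2 possible images
print(len(str(small_4_colours)))             # a number with about 155 digits
print(len(str(small_true_colour)))           # a number with about 1850 digits
```

Two states for the lonely pixel; numbers with hundreds or thousands of digits once the canvas and the palette grow. That explosion of representable states is the whole point of what follows.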

This example is as obvious as it gets, but the same concept applies to basically every being, every institution, every activity that you do in life. Everything that exists does so in a context. The relationship to that context greatly explains the success of the thing, entity or being that we consider. And that relationship depends on the cognitive space that the thing, entity or being can represent.

In the case of AIs, most of you know at least the basics of how these systems work. They have a model that they use to process a collection of inputs and provide a collection of outputs. For example, a model designed and trained to assess consumer credit risk contains information about certain variables that the designer / modeller thinks will be useful to explain the risk scenario, such as the person's salary, whether they have accumulated previous debt, whether they have a history of repayment, etc. The cognitive space of the model is constrained to the variables it has, to the depth of the data within those variables that it has been fed with, and so on. A system with a mission to recommend songs may have information about which other songs you like, your age, who your friends are and what they like, etc. A system that coordinates traffic lights in a city may have a model that takes into account the state of traffic, the flow in each street, past predictions, past results, weather information, etc. It is not as easy to picture as the case of an image that has only two possible states, two possible representations, but the principle is the same. The traffic system has a total number of variables and combinations within them that shapes what I am calling its cognitive space: the number of situations that it can represent. If a giant monster comes from space, lands in the city and disrupts traffic, the model cannot be aware of that circumstance because it does not know that it is even possible. It is not within its cognitive space. It will select the best representation it has for the reality it can see from its inputs, cheerfully ignoring the fact that a beast from outer space is sitting across three blocks downtown.
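A tiny sketch may make the traffic example more concrete. Everything in it is made up for illustration (the feature names, the values, the monster); it is not the model of any real system, just a way of showing that a fixed set of variables is a fixed cognitive space:

```python
from itertools import product

# A toy traffic model whose entire cognitive space is the set of
# combinations of the variables it was designed around.
FEATURES = {
    "flow": ["low", "medium", "high"],
    "weather": ["dry", "rain", "snow"],
    "time_of_day": ["night", "off_peak", "rush_hour"],
}

# Every situation this model can ever represent is one of these tuples.
cognitive_space = list(product(*FEATURES.values()))
print(len(cognitive_space))  # 27 representable situations, and not one more

def represent(observation: dict) -> tuple:
    """Project whatever is happening onto the nearest representable state.
    Anything outside the schema is silently dropped."""
    return tuple(
        observation[name] if observation.get(name) in values else values[0]
        for name, values in FEATURES.items()
    )

# The monster is not a variable the model knows about, so it simply vanishes.
print(represent({"flow": "low", "weather": "dry", "monster_downtown": True}))
# -> ('low', 'dry', 'night')
```

The defaulting to the first known value is arbitrary; the point is that an out-of-schema event cannot even raise an error, because the model has no dimension along which to notice it.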

Our own cognitive space

What about us? Our cognitive space is also shaped and bound by a number of factors. We still don't know much about what consciousness is, where it comes from, how our sense of being arises from our existence and so on. There are many models of mind and theories of consciousness, an exciting field of research that I try to follow. From the various theories that circulate I see some common traits. Apparently, it has to do with the integration of the various sensory inputs. It has to do with trying to make sense of what is going on around us, choosing which inputs are relevant to us and to our goals. It has to do with reconciling what is around us with what is inside us: our past experience, what we know, what we remember, what we learned and what we love. It has to do with how we can connect these things together in our brains. So it has to do with the sensory input and the way we can then process that input. Moreover, some of the processing centres in our brains, when exposed to those inputs, react in certain ways, releasing certain neurotransmitters which make us feel something. Emotions are also a byproduct of the processing of everything that is going on around us, or so it might seem from the very basic knowledge we are gathering about how our brains work. And emotions are one more variable to take into account when we consider the total number of situations (external & internal) that we can represent in our cognitive space (oh, and they basically direct our behaviour, but that's a topic for a different article).

Long distance connections in our brain. For more amazing visuals, and better yet, research on how our brains are internally connected, check out the Human Connectome Project; image by Anastasia Yendiki, Ph.D., Viviana Siless, Ph.D., MGH/Harvard, Boston Adolescent Neuroimaging of Depression and Anxiety (BANDA)

Our brains have around 84-100 billion neurons (that is, 84,000-100,000 million), and around 100 trillion synapses (connections among neurons). We don't really know how the different types of neurons work, and beyond the mere number of neurons and connections there may be many more variables at play when it comes to understanding how many different combinations of activation patterns this yields. The possible combinations of all these variables are the tools we have to represent in our mental model whatever is out there, and whatever is inside of us too. They define the total addressable cognitive space of humans. So whatever our consciousness really is, whatever our feeling of self is, it seems it is somehow shaped by how our brains are built, the connections we have, the neurons, how they interact with the world, etc. The total cognitive space that we can handle has an upper limit. Not exactly an upper limit, but rather a given N-dimensional shape: it covers a space defined by all the dimensions that influence it, and it has boundaries across all those dimensions. It does not necessarily cover all that there is. It covers what it can understand. The things that are not representable within this cognitive space will be experienced in a way that makes sense to the person, but may have nothing to do with reality (like the traffic model that cannot know that some Godzilla from Alpha Centauri is having some intergalactic fun a few blocks away from the mayor's house). This total cognitive space is what is responsible for our adaptation capabilities. We can react to our context only as much as our cognitive space allows us to understand it. The same happens to every other system and activity.
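Just to get a feel for the scale, here is a deliberately naive estimate. Treating each synapse as a simple on/off switch is a gross simplification of real biology; the numbers are only meant to convey orders of magnitude:

```python
import math

synapses = 1e14                                  # ~100 trillion synapses
log10_configurations = synapses * math.log10(2)  # each treated as binary on/off

# Roughly 10^(3x10^13) configurations: the digit count of that number
# itself has fourteen digits, dwarfing the image examples from before.
print(f"about 10^{log10_configurations:.3g} configurations")
```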

Your AIs have a different "mental map"

I hope you get the point of what I mean by total cognitive space. Now, why did I go to the trouble of trying to explain this idea? Let's go back to the beginning of this text: AIs won't have a consciousness like ours, I said. No matter how advanced our AIs become, they will not be just like us in terms of what they are made of. They don't have our senses, and if we build artificial sensory inputs for these systems (cameras to see, artificial touch sensors, artificial noses, etc.), they would express reality in terms of signals different from those we process. The most similar being we could come up with would perhaps have a total cognitive space with a large overlap with ours, but that's all. The most probable outcome, though, is that such systems would have a cognitive space that overlaps somewhat with ours but with large areas of existence that we would not be able to understand. All we can hope for is to identify that overlap properly and to try to project into it the things we need in order to deal with that other entity. It brings to mind the allegory of Plato's cave. We cannot see reality itself, but rather the projections of whatever the absolute reality is into what our cognitive space can hold. Those AIs would experience a different projection.

This opens up several avenues for discussion and reflection.

First of all, it should remind us of what to expect from the systems we build. Or rather, what not to expect. Can those systems understand us? Most probably not, because they exist in a different cognitive space and therefore their "understanding" of situations is different. On a more practical note, if you're building, say, recommender systems for humans, don't forget that your model cannot encode emotion, and it will therefore be very incomplete. You can find workarounds that may enhance the cognitive space of the model, but it is inherently impossible for the model to "understand" an emotion, hence it cannot work with it properly. This should also help us understand, from a broad perspective, why modelling a situation requires diverse data sources. The more data sources you have (that is, the more ways you have to look at a situation), the broader the cognitive space of the model and the better its representational capabilities will be. Not only more points of view: designing and architecting systems with meaningful interconnections between all the sensory inputs (which include internal states), with the possibility of interplay between their different parts, is essential. Every new connection between two different elements is a new dimension that expands the cognitive space, as the sketch below illustrates.
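As a toy illustration of that last point, consider the classic XOR situation: an outcome that depends on how two inputs combine, not on either one alone. The sketch below (assuming NumPy and scikit-learn, nothing from the article itself) shows a linear model that cannot represent the pattern until the interaction between the two inputs is added as a new dimension:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Four situations described by two separate "data sources", x1 and x2.
# The outcome depends on how the two sources combine (XOR), not on either alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A linear model that only sees each source separately cannot represent this.
flat = LogisticRegression(C=1e6).fit(X, y)
print("without interaction:", flat.score(X, y))       # stuck well below 1.0

# Adding the connection x1*x2 as an extra dimension makes it representable.
X_rich = np.column_stack([X, X[:, 0] * X[:, 1]])
rich = LogisticRegression(C=1e6).fit(X_rich, y)
print("with interaction:   ", rich.score(X_rich, y))  # 1.0
```

The numbers themselves are not the point; the point is that a new interconnection between existing inputs literally adds a dimension, and with it, situations that were previously outside the model's cognitive space.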

It happens in the best families

It is obvious that this doesn't happen only between humans and AIs. It happens across different human cultures, for example. It happens when someone has a neurological condition that alters the total compound of neurons, connections, emotions, experience, etc., and this yields a comprehension of the world that is different from that of any "healthy" brain. If we understand that this total cognitive space is not just bound or shaped by the raw ingredients, but also by how they are cooked at a given point in time (in the case of our minds, for example, it includes what we already know, how we feel about that, our beliefs, etc.), then we can see that the difference between cognitive spaces and their states is also at the root of endless human conflicts. To me, it is very important to take all of this into account when we consider interactions between entities. The possible interaction takes place in the overlap of the cognitive spaces. What is beyond that intersection, that overlap, is something that won't be recognizable to each of the entities at play. And thus it is hopeless and illusory to try to obtain proper interaction results out of that non-overlapping space.

It also means that other entities with a cognitive space different from ours are immensely precious. They have a view of things that we cannot see. Anyone seeking to explore and understand whatever is out there, the changes we experience and how to act better should have an interest in finding the proper overlap with those other cognitive spaces. This is where I see the greatest potential in, say, the artificial intelligences of the future: having AIs that overlap with us enough to try to explain what they see, but with capabilities different from our own, and therefore able to create a shared existence that goes beyond what our biological nature allows for. Trying to build systems that mimic our own functioning, hoping to create synthetic creatures that can pass as humans, seems to me a bit boring. There are lots of people to get to know and learn from; why would I want to replicate that? Better to build things that bring something new.

Considering broader implications

This very simple idea has, from my point of view, lots of implications for how we organize our activities. Since the cognitive space is what allows entities to adapt to their context, why don't we consider reshaping it in, for example, the companies we build? Too many companies try to push forward not just with their current products and services, but also with their existing internal organizations (interconnections, politics, governance, dynamics, etc.). As the context changes, they would do well to try to reshape their cognitive space. The current interest in having a more diverse workforce, for example, is an excellent practice towards this end. Promoting diversity in the workforce brings different experiences, backgrounds and emotions into the company and effectively expands its cognitive space. There are more things to do, however: diversity in work dynamics, diversity in processes. It may be counterproductive for optimization, but I think it's way past time to stop thinking only about optimizing and to work instead on building adaptive capacity. That is a topic for a different article, though. The point is, it will help the company scale in its impact and in its understanding of its context.

It applies also to society, to governments, to education. It's a simple idea with universal applicability. In future articles I'd like to expand this idea more deeply into how organizations are designed and the implications of that. Why the decentralized economy is a good idea for humanity. How the arts expand their cognitive space and how this, in turn, expands humanity's.

Anyway, that was just this simple idea. I'll be glad to know what you think about it, feel free to reach out to me with your feedback. Thanks for reading this far.