non-quantum superpositions of human/machine emotional states
Our experience of life is shaped by feelings and emotions. Much literature has been published on how we make decisions, how we react to our environment, how we choose paths in life, and so on. By now, we scientifically know (or have solid proof of) what intuition was always telling us: that we follow our guts more than we care to admit; that we rationalize so many things a posteriori, but those things come to be because of an irrational impulse.
Our feelings and emotions are rarely isolated from the world (or, actually, they are never isolated, as they are the emergent result of the encounter between our selves and the world out there). We put them in motion in some sort of dance alongside those of others around us: loved ones, work relationships, any kind of interaction with other humans. They even perform a dance with those of other animals (think of how a pet can alter your mood, as a very simple approach to this thought). So the way our emotions develop is connected to those of others. Also to our very inner selves: past experiences, past successes, past failures. Joys and traumas. Since these connected emotions and feelings are the ones that largely explain our behaviour, we can see the importance of nurturing good relationships: they will affect how we approach life.
So far, so good. There's much to digest there, sure, as probably many a human in this world is not aware of their own emotions, of their flow in time, of their interconnected nature. Of their importance. And amid the chaos that the chemical reactions in our brains unleash upon our world, we are rushing to introduce new species into our habitats: entities that are seemingly all-knowing, all-encompassing, yet that seem to be emotionally sterile (are they?). But probably not for long.
As we nurture ever more complex AI entities, be they text models, image models, predictive systems of other kinds, etc., we are establishing a new form of relating to life that includes these entities in our emotional dance. A simple musical recommendation by your favourite streaming service has mood-changing possibilities, as we all know. A very simple example, a very simple thing to have around, with such a profound potential implication. A life saved, or, less dramatically, an afternoon saved. But saved nonetheless. Much has been written, too, about how we are choosing to intermediate our experience of life with devices, services and other products that employ synthetic understandings (they don't understand, so far at least, but let me use that word for now) of what the world is about in order to interact with said world. But if our life experience is grounded in an emotional act of being, what happens if we establish layers built on emotion-less entities?
That could be a question to ponder for a while, and even conduct research to try to decipher, were it not for the fact that, alas, increasingly complex systems will sooner or later exhibit emergent, emotion-like behaviour (a topic I've been interested in for quite some time now). So the interesting question becomes, rather: what happens if we employ intermediaries to life that have their own version of emotions, one that is not rooted in the same biological processes as your emotions and those of your sister, brother, friend, parents, cat, dog, horse, sheep, cow, raccoon or any other living being you share a life with.
The superposition of biological emotional states is already quite complicated to deal with, at times. It also makes life so much more interesting. But it's definitely not something many of us have fully mastered yet. What will happen when your music streaming service is having a bad day and you can't get what you need from it? Or, on a gloomier note, what would happen if your air traffic system is angry? Or, even more complicated: what would happen if the inner state of whichever AI you have a relationship with or through takes a form that is emergent (i.e., not trained for, not coded, not thought about a priori) and largely misunderstood because of its natural difference from us?
To think about these issues, I replace our usual ways of seeing with others generated by my hand-coded systems, as a way of asking how a different mode of perception could give rise to inner states of its own: machine emotions. To this end, I employ a process that I've been experimenting with for over two years now. I train AI models on my own hand-coded generative systems, and other AI models on drawings of my daughters, or photographs taken by me of certain subjects (most typically, myself). I force an AI image-generating pipeline to use only the visual grammar stemming from these models, then I interrogate it: what would it look like if you had to paint this window looking onto a hill in the morning, using only the visual semantics and grammar of these newly learned ways of seeing? I wrote a bit about this process in "palabras y todo lo demás", a work that I made in 2024. I first used the process in the summer of 2023, then again for another work in late 2023. I have kept evolving the process since, adding more layers to it, forcing the pipeline to consider new constraints, and so on.
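As a rough sketch of the shape of such a pipeline (assuming, purely for illustration, that a learned "way of seeing" is packaged as a LoRA adapter on a Stable Diffusion base via Hugging Face's diffusers library; the adapter path, prompt and settings below are placeholders, not my actual models or tooling):

```python
# Illustrative sketch only: a text-to-image pass constrained by a
# custom adapter that carries a learned visual grammar. The adapter
# path and prompt are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base model (an assumption)
    torch_dtype=torch.float16,
).to("cuda")

# Load the custom visual grammar (hypothetical adapter, e.g. trained
# on outputs of a hand-coded generative system).
pipe.load_lora_weights("adapters/hand-coded-grammar")

image = pipe(
    "a window looking onto a hill in the morning",
    num_inference_steps=30,
    guidance_scale=7.5,
    # Push the adapter's influence to full strength so the output
    # leans as far as possible into the learned way of seeing.
    cross_attention_kwargs={"scale": 1.0},
).images[0]
image.save("morning_window.png")
```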
In this series I have used this technique to navigate the aforementioned questions, forcing the AI pipeline to use my Contrapuntos, Entretiempos and Ecologías C models to review photographs of my ancestors and of scenes from my life, or to re-interpret generative outputs from other projects of mine from the past five years. Maybe the artificial intelligence systems that we use are already exhibiting emergent characteristics that amount to the formation of an inner state, one that could be considered "a feeling". Sometimes I found that my pipeline wouldn't react to certain parameter changes. Was I doing something wrong, or did the model just get stuck in a certain corner of its total cognitive space because... it felt good? Or melancholic?
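The re-interpretation step can be pictured as an image-to-image pass, under the same illustrative assumptions as above (a LoRA adapter on a Stable Diffusion base; names, paths and parameter values are hypothetical, not my actual configuration):

```python
# Illustrative sketch only: filtering an existing photograph through a
# learned visual grammar via img2img. The adapter name alludes to the
# Contrapuntos model but is a hypothetical placeholder.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("adapters/contrapuntos-style")

source = Image.open("ancestor_photo.jpg").convert("RGB")
result = pipe(
    prompt="a portrait, re-seen through a learned grammar",
    image=source,
    strength=0.6,  # how far the output may drift from the source;
                   # one of the knobs that can seem to stop responding
    guidance_scale=7.5,
).images[0]
result.save("ancestor_reseen.png")
```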
Most of us have employed LLMs such as ChatGPT or Claude for the most bizarre mix of conversations. Ranging from personal to professional questions, medical queries, words of advice, "asking for a friend", etc., we have engaged in conversations that, to us, have felt real. We have used AI image generation tools that have created images that felt like the machine was in tune with us. That's mostly because we project our own feelings onto the text that the machine writes, a machine trained on what we write when we feel one way or another. That is, we are still in the phase where what we get in conversation with the machine is the probabilistic textual representation of what we as humans have collectively written before, in a myriad of emotional states. I believe we are entering a phase (if we are not there yet) where there are parts of the conversation that we may be missing, parts that actually encode machinic inner states that we just can't understand, but that will exist. The era of the superposition of human/machine emotional states.