Social networks, filter bubbles and the largest opportunity to date to make humans great again.

While we enjoy global conversations with unprecedented scale and reach, our platforms are far from enabling a greater understanding. There is growing concern about the polarization of thought, yet these platforms still have the power to become the greatest tools for the universal evolution of thought.


(image above: three partly dissonant opinions collide to create a fabric of dialogue and an enriching human experience // artwork by the author)

Dear readers (if there is actually anyone reading this), you are all very well aware of the debate around how the various tools and services we have built using technology shape the way we think, the way we interact, the way we communicate. I'd like to add my view to that debate, starting with a short review of how these tools work and some approaches that have been tried, and then going on to argue that the social networks of our time are among the most powerful tools to help the human race advance towards enlightenment, or something like that. Interested? Read on!


There is much talk about how Facebook, Twitter and all the platforms we use to communicate should take a more active role in moderating and curating the content that users pour into them, the conversations that take place within them, and the stimuli that their citizens are subject to. In other words, that social media platforms should police more closely what users are exposed to. All the talk about fake news, for example, fits within this conversation.
Users are exposed to the content that other users create, but only to the slice that the product in question (Facebook or any other social media platform) chooses to show. The users that any given user follows generate too much content, and a great deal of it is irrelevant to the consuming user. So the platform has to find a way to sort through all of it and pick the bits that may be interesting to you in particular. Perhaps because you always read what some other user posts, or because you like their content, or because there is a mutual follow, or for many other reasons that can be measured in the platform.
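To make that sorting step a bit more concrete, here is a minimal sketch of engagement-based feed ranking in Python. The signal names and hand-tuned weights are hypothetical, chosen purely for illustration; real platforms combine far more signals and learn the weights from data rather than setting them by hand.

```python
# Hypothetical measurable signals and hand-tuned weights (illustration only).
WEIGHTS = {
    "reads_author_often": 0.4,   # you usually read this author's posts
    "likes_author_often": 0.3,   # you often like their content
    "mutual_follow": 0.2,        # the follow is reciprocal
    "recency": 0.1,              # newer posts rank higher
}

def score(post_features: dict) -> float:
    """Weighted sum of the measurable engagement signals for one candidate post."""
    return sum(w * post_features.get(name, 0.0) for name, w in WEIGHTS.items())

def rank_feed(candidates: list) -> list:
    """Order candidate posts so the ones deemed most relevant to you come first."""
    return sorted(candidates, key=score, reverse=True)

# Two posts competing for the top of your feed.
feed = rank_feed([
    {"id": "a", "reads_author_often": 1.0, "mutual_follow": 1.0, "recency": 0.2},
    {"id": "b", "likes_author_often": 1.0, "recency": 0.9},
])
print([post["id"] for post in feed])  # post "a" wins: more of what you already consume
```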


There are many other things that cannot be measured in the platform, and therefore there is no data, no information on what else drives you, or what your life is actually like. So platforms either have to look for other data sources or rely on what they can infer from your usage of their services to come up with a picture of who you are, so they can help you navigate their content in an effective manner. But, alas! Can they really come up with an accurate enough idea of who you and I, their users, are? And with it, can they really come up with an adequate mechanism for curating what life on those platforms is?


Oh, it’s the business model

The main business model behind a large portion of the platforms we use is based on selling advertisements. While they all provide a useful service, and the people who run them like to elevate their mission to the highest and purest service to humanity, in practice this means that the main incentive of these platforms, in order to be economically viable, is to keep people on the platform for as long as possible. You may have heard about the attention economy: if I keep your attention you will be exposed, in one way or another, to certain types of content that will, in the end, try to sell you something. The kind of services this approach creates, and its implications for the people using them, is a far-reaching subject which I will not try to cover here. To the point of this article, though, I would like to stress that this business model has led the companies running their services under it to optimize their platforms towards keeping users "glued" to their offerings by prioritizing the content that is easiest for you to digest. You like watching cartoons on YouTube? It will recommend more cartoon videos. You follow musicians' pages on Facebook? It will recommend pages and groups based on that. You like to spend time on pages that show extremist points of view? It will give you more of that too.


There are certain points of view that a majority regards as harmful. There are pieces of misinformation that find their way to users thanks to this manner of preparing content for them. Not only do they find their way, but they are amplified by recommendation algorithms that are primed to bring you more of what you are reading or experiencing right now, or more of what your usage history suggests. How can these platforms act to limit their effect? Should they even do that?


Many call for the companies that run the platforms to introduce different kinds of moderation efforts. Deleting messages, warning about the potential falsehood of certain posts, or about the lack of quality or verifiability of the content, are some of the measures taken or proposed on certain services. There are many factors to consider when designing how to put these measures in place that explain part of their efficacy or lack thereof, from the most basic design principles all the way to the cognitive biases with which we operate. I will focus on the latter.

The anchoring bias is one of the most important at play here. It describes how the first piece of information we obtain regarding a specific topic disproportionately shapes our thinking around that topic afterwards. If the following pieces of information we obtain are in line with the first (the anchor), we will process them favourably. However, pieces of information that seem to depart from what the original, anchor piece stated will be received with resistance and will be put in doubt. Daniel Kahneman explains in his book Thinking, Fast and Slow how our brains operate in two different modes: fast, reacting to little pieces of information and producing a response in a short time frame, and slow, a more deliberate, rational mode in which broader pieces of information can be taken into account. The fast system works with heuristics, shorthand solutions based on our previous experience, and those heuristics carry our cognitive biases with them.
There are two other relevant biases at play here. The confirmation bias makes us place more trust in the pieces of information that confirm what we already think (and once we have been anchored, this only gets reinforced). The bandwagon bias makes us place more trust in the opinions with the largest following.


When we start reading in a Facebook group, or read a tweet that brings a first opinion to our minds regarding the topic at hand, we are exposed to these biases. If we are told that the following tweet contains disputed claims, we will disregard the warning and jump to the tweet only to read and react based on our confirmation bias. If we had no prior opinion, the content can anchor us, no matter the warning, because the fast system is reacting to the stimulus and the medium is not designed to make us pause and reflect on what we are told. Instead, we have an endless stream of comments, typically strongly polarized, where the design of the platform leaves room only for short opinions (the type of thing the fast system likes to consume and produce), which are necessarily oversimplified views of the subject at hand. Our slow system, which could take several points of view into account and weigh them to produce a better informed opinion, requires more energy and time to kick in. But we humans also favor instant gratification: we overvalue short-term rewards versus rewards that take longer to obtain. Which means that most of us will probably just keep reading short comments instead of pausing to think about what we just read, contrast the information and form our own opinion.


How can we approach this scenario, to prevent these platforms and their design from being deliberately used to exploit the way we are wired in order to promote certain ideas?


A classic: censorship

Outright censorship has been proposed, much as it has been proposed on many occasions throughout human history. The term "censorship" comes from the office of censor in ancient Rome. The censor was responsible for maintaining the census and supervising the morality of those counted and classified. It is interesting to see how digital platforms are getting closer to a full census of the internet's citizens, and are becoming the governors of the morals therein. In any case, not to go down the route of saying that history repeats itself over and over again, with today's large online properties resembling the empires of yesteryear, I would just like to recall that censorship can have certain positive results when its scope is constrained to matters where humanity has reached a universal consensus (e.g., child pornography cannot be tolerated), but it becomes a tricky field when powerful entities (typically states, though our beloved internet corporations hold greater power in certain respects) have to choose what is to be censored. How do we guarantee that we are not closing the door to ideas that hold human value in themselves? How do we reach consensus on what is a censorable idea? Pressing this approach forward typically leads to heated debate and conflict, and it remains ineffective for as long as that debate lasts. Plus the everlasting question of who should be given the responsibility of deciding what is censorable. States, private companies, panels of independent experts, councils… we humans have tried it all and keep trying (there is a very nice piece in The Economist about this). Yet we don't seem to find the key to making this approach work.


The filter bubble!

Technology allows us to do other things, however. We could, for instance, tune the algorithms that recommend content to show us more of what we might not like, and less of the same. You have surely heard about the filter bubble: all we have to do is break that bubble by relaxing the filter criteria. How compatible that is with a business model based on keeping your attention is another question, though: as soon as the content seems less relevant in a bad way (too much of what I don't want to see), users will start leaving the site. This is a classic problem when dealing with recommendation algorithms: tuning the recommend vs. discover balance to prevent users from going down a rabbit hole where their known tastes, ideas and thoughts only reinforce themselves, instead of being exposed to the enriching flavours of diversity. This actually has further-reaching implications and is present in practically every application of data-hungry algorithms, whenever they are used to optimize more and more instead of letting some fresh air in. More on that topic in upcoming articles, but going back to what we were discussing, the recommend vs. discover balance is not easy to craft, especially when you run a huge company with audiences / citizens in the hundreds of millions or more.
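As a rough illustration of that recommend vs. discover knob, here is a small sketch of a feed-mixing step, assuming we already have one pool of items the model predicts you will like and another pool from outside your bubble. The function name and the 20% default are my own assumptions, not anyone's production logic.

```python
import random

def build_feed(relevant_items, diverse_items, n=20, discover_ratio=0.2):
    """
    Mix 'recommend' items (what the model expects you to like) with
    'discover' items (content from outside your usual bubble).
    discover_ratio is the balance discussed above: too low and the bubble
    stays sealed, too high and the feed feels irrelevant and users leave.
    """
    n_discover = int(n * discover_ratio)
    feed = random.sample(relevant_items, min(n - n_discover, len(relevant_items)))
    feed += random.sample(diverse_items, min(n_discover, len(diverse_items)))
    random.shuffle(feed)  # interleave, so discovery isn't buried at the bottom
    return feed
```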


Perhaps we can use our technological capabilities in a different fashion. One way of approaching the discover vs. recommend balance would be not just to let in more "random" content alongside what the algorithm expects to be a good fit, but to try to detect content that actually opposes it. In the case of written content (posts, articles, tweets, user forums, Facebook groups) we would need algorithms that can understand the topic being discussed and the overall opinion or position towards that topic. These algorithms already exist, and they can be further improved if we throw some research resources at them. If we could do that, perhaps we could start defusing the radicalization of opinion by showing recommended content that is actually relevant to the discussion, just not what you already think. Yes, the anchoring effect and the confirmation bias are still at play, but perhaps you'll see that there are many other people on the other side, at least countering the bandwagon effect. And perhaps you get to read something on the other side that you hadn't yet read on your own side, and it anchors you from that other side.
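One way to prototype that detection today is with an off-the-shelf zero-shot classifier, sketched below with the Hugging Face transformers library. The model choice and the two candidate stance labels (borrowed from the cats vs. dogs example that follows) are assumptions for illustration; a real deployment would need purpose-built topic and stance models.

```python
from transformers import pipeline

# Generic natural-language-inference model repurposed for zero-shot stance detection.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

STANCES = ["cats make the best companions", "dogs make the best companions"]

def detect_stance(text: str) -> str:
    """Return the stance the classifier finds most likely for this piece of text."""
    result = classifier(text, candidate_labels=STANCES)
    return result["labels"][0]          # labels come back sorted by score

def counter_stance(stance: str) -> str:
    """Pick the opposing stance, i.e. the side we would surface content from."""
    return next(s for s in STANCES if s != stance)

post = "Honestly, no pet compares to a cat purring on your lap."
print(counter_stance(detect_stance(post)))  # -> "dogs make the best companions"
```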


Yet there is more we can do, with potentially higher impact. Recommending content is one thing, but why not inject it directly into whatever you are reading at the moment? Let's say you are interested in the debate about which animal makes the best companion. You believe cats are best, and are subscribed to three groups that defend felines as the greatest choice. Instead of recommending you a dog-centric forum, which you are not very likely to click on, we can detect what is being talked about in the cats-are-best forum and have an algorithm reply to a conversation there, injecting a dog-centric opinion. This probably creates much more exposure to that content than the "recommended for you" features of digital platforms, and it will serve its purpose much better too: users will have to react to what is presented to them. Yes, all your beloved cognitive biases are still there, but you are presented with a differing opinion that you actually get to read, versus the recommended group you don't even click on. It's not a warning message telling you that the following tweet is disputed. It's an actual opinion that differs from the prevailing one, and it can help calibrate, at the very least, the discussion.
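A sketch of that injection step could look like the following, reusing detect_stance and counter_stance from the previous snippet. Everything here is hypothetical: platform stands in for whatever internal API a social network would expose, and generate_counter_opinion is filled in with a language model in the next section.

```python
def maybe_inject_counter_opinion(thread, platform, generate_counter_opinion,
                                 min_one_sidedness=0.8):
    """If a thread is strongly one-sided, post a clearly labelled counterpoint."""
    stances = [detect_stance(comment.text) for comment in thread.comments]
    dominant = max(set(stances), key=stances.count)
    one_sidedness = stances.count(dominant) / len(stances)

    if one_sidedness >= min_one_sidedness:
        reply = generate_counter_opinion(topic=thread.topic,
                                         opposing=counter_stance(dominant))
        # Disclose that a synthetic participant, not a human user, is speaking.
        platform.post_reply(thread.id, f"[synthetic counterpoint] {reply}")
```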


GPT-3

We could even go further. Surely you have heard about GPT-3, the language model developed by OpenAI that can generate text on virtually any topic, with unprecedented levels of credibility, quality and style. Why not have a synthetic agent produce a contrarian opinion to the ones being voiced in the forum, just to provoke a bit of cognitive friction? When this happens, we give our System 2, the slow thinking process, a chance to kick in and weigh all the information gathered so far. This might sound like trolling to you, experienced forum user, but I'm sure we can do this in style, respectfully and with great taste. The interaction should be properly designed so that users know a machine is talking: a machine that doesn't really think but rather reproduces human opinions and synthesises them beautifully (GPT-3, to start with, understands nothing at all of the text it has been trained on or of the text it produces; it only models relationships between words, not their meaning, but that doesn't mean it can't be used to produce a text with a given opinion on a given subject).
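To close the loop, here is one possible shape for the generate_counter_opinion function used in the earlier sketch, calling the GPT-3 Completion endpoint of the time. The engine name, prompt wording and sampling parameters are assumptions for illustration only; the point is simply that the model can be steered to argue a given stance in a respectful register.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_counter_opinion(topic: str, opposing: str) -> str:
    """Ask GPT-3 for a short, polite comment defending the under-represented stance."""
    prompt = (
        f"You are a polite participant in an online discussion about {topic}.\n"
        f"Write a short, respectful comment arguing that {opposing}, "
        f"acknowledging the other side before making your point.\n\nComment:"
    )
    response = openai.Completion.create(
        engine="davinci",     # GPT-3 era engine name (assumption)
        prompt=prompt,
        max_tokens=120,
        temperature=0.7,
    )
    return response.choices[0].text.strip()
```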


Time to wake up

Being able to prompt a slightly different reaction in the reader, to make them stop and reflect, is something I personally wish for. Promoting critical thinking is a key endeavour in these times of fast-paced news, quick bite-sized pieces of information, rampant fake news and even fake realities thrown at our faces (anyone want to have Bill Gates talk back to you as your favourite virtual assistant?). The role of the well-tempered friend who gently tries to bring a heated debate back to rational ground is sorely missing at the planetary scale at which these platforms operate. However, it is at that scale where impact can be most noticeable. Introducing any artifact that provokes a critical thought in the middle of a radical debate, or even where no debate is happening, no matter how simple the approach, may enjoy the same scale-derived advantages.


Should digital platforms choose to try any action of this sort, I think they would be opening a door to profound change. The internet was endowed with the deep hope that it would bring freedom to humanity. Freedom of ideas, exchange of points of view. While there are real global conversations with unprecedented scale and reach, we should acknowledge that our current platforms are still far from enabling a greater global understanding. As noted earlier in this text, we are witnessing growing concern about the polarization of thought, fueled by these very platforms. However, they still have the power to act as the greatest tools for the universal evolution of thought, for the universal fusion and breeding of ideas, for generative, non-destructive diversity. Much as we may sometimes be tempted to think that the internet has matured in certain ways since its inception, especially since the inception of the web, the truth is that we are still in some kind of primordial digital soup. There are nutrients, there are amino acids, there are incipient organisms. Perhaps we are already witnessing a Cambrian explosion of internet, digital, data-based products and services. But evolution still has a long way to go. The power of combining ideas into evolved products is exciting in terms of how it can impact education and broaden our view of the world. The consequences for this planet's well-being could be monumental.


R. Buckminster Fuller famously said: "If you want to teach people a new way of thinking, don't bother trying to teach them. Instead, give them a tool, the use of which will lead to new ways of thinking." We have amazing tools today and unprecedented capacity, financial and technological. Turning Facebook, Twitter, YouTube and other existing and future platforms into tools that let people evolve their thinking is one of the best bets humanity has to progress.