In a previous article I reported on recent findings of Markowitz and Shulman. Their research showed that people do indeed use complexity to hide unpleasant messages, as we all more or less intuitively know. The situation is well known to all of us: if you are in the middle of a conversation that becomes more and more complex, where your interlocutor uses more and more unknown words and ever longer sentences, you suspect that something is rotten in the kingdom. We do not like to be confused, so we reject complexity, almost as a reflex.
The interesting turn of this piece of research was that they showed some cases where people actually like complexity: namely, when they are being asked for money. So indeed, complexity functions like a mask, a disguise. Nobody likes to be asked outright for money, even when it is for a good cause. We love a well-spun tale, a complex one in the results of Markowitz and Shulman, to get us over the unpleasant task of being deprived of our own money. So complexity confuses, but sometimes we would rather be confused.
Now, recently published work by Haghtalab, Jackson and Procaccia gives another interesting turn of the screw to the (complex) relation between humans and complexity. This time the issue under study is not the delicate extraction of money from unsuspecting research subjects, but polarization. You know, that thing that is going on in pretty much every political landscape across the globe. We might talk about Spain and the Podemos insurgency a few years back, radicalizing politics to the left, or we can think of VOX, which appeared even more recently and radicalized voters toward the right. Or we can think of the mob trying to storm the United States Capitol, or the demonstrations against COVID-19 vaccinations pretty much across the whole of Europe. Let there be no doubt: political polarization matters, and if complexity studies have shed light on dark corners before (like controlling chaotic dynamics, one of the major research topics of the senior author), any light on political radicalization is very much welcome.
But before getting into these new results, let me point to an interesting reversal in their methodology. Back in the nineties, research in artificial intelligence (at least in my university) was introduced as the study of neural networks. That is because back then we had some understanding of how neurons might be arranged, so we copied those arrangements, or networks, into software that, many years later and with much better computers, started mimicking our learning. So we modeled ourselves in order to produce other learning entities. Now, Haghtalab, Jackson and Procaccia have reversed this logic. What they did was observe how artificially intelligent agents learn, and polarize. And from the agents’ polarization trajectory, they inferred the polarization trajectory of ourselves, humans. I am actually enthralled by this reversal. We copied ourselves into software, and we managed to make our copies learn. And now we watch these copies repeat our behaviors, actually our stupid behaviors, and looking at them we can understand ourselves better. If this is not some sort of bootstrap/meta/GNU’s-not-Unix kind of thing, I don’t know what it is. But that aside…
What the authors observe is that, when their learning machines are forced to build simple models of complex realities, the models produced are substantially different, even when the realities presented to the learning software are very similar. This is a well-known path to radicalization. You present people with a complex reality, say a country. In a country you can look at many things: the number of robberies, or economic output, or inflation, or the level of education. A country is a complex system. And we see that people presented with the same country either see a place developing in the right direction, growing and promising for their children, or they see a failing system, a rotting carcass. People radicalize, and they end up voting for Trump, or for Biden. But the country that Biden, Trump and their voters are all looking at is the same. How come such different views arise?
The answer from Haghtalab, Jackson and Procaccia is that different machines simplify reality along different lines, along different axes. And of course, that negates the whole point of a complex system: its multidimensionality. One axis does not do justice to the whole. But that is also what simplicity is: one-dimensional. So we end up with models that understand the same system in fundamentally different ways. People see the same country, but some look at the increase in robberies and decide to buy guns and vote for Trump, and others see the increase in racism and decide to vote for Biden.
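To make that mechanism concrete, here is a minimal sketch in Python, my own toy illustration rather than the authors’ actual learning-theoretic model. Two agents observe the exact same two-indicator “country”, but each is forced into a one-dimensional model of it; the indicator names, the agents, and which trend counts as good or bad are all assumptions invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "country": twenty years of two indicators, both genuinely rising.
years = np.arange(20)
country = {
    "gdp":       100 + 1.5 * years + rng.normal(0, 4, size=20),
    "robberies":  50 + 2.0 * years + rng.normal(0, 4, size=20),
}

# Each agent is forced into a one-dimensional model: it fits a linear
# trend to a single indicator and ignores every other axis. The valence
# says whether a rising trend reads as good (+1) or bad (-1) to it.
agents = {
    "optimist":  ("gdp",       +1),
    "pessimist": ("robberies", -1),
}

for name, (axis, valence) in agents.items():
    slope = np.polyfit(years, country[axis], 1)[0]  # trend on that one axis
    verdict = "improving" if slope * valence > 0 else "declining"
    print(f"{name} (watching {axis} only): the country is {verdict}")

# Prints: the optimist sees an improving country, the pessimist a
# declining one, from identical data about the same country.
```

Both agents fit the same data honestly, and both trends are real; the opposite verdicts come purely from which axis each agent keeps when forced to be simple. That, as I read it, is the gist of the result.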
So maybe we should not keep it simple, after all.
Nika Haghtalab, Matthew O. Jackson, and Ariel D. Procaccia, “Belief polarization in a complex world: A learning theory perspective,” PNAS 118 (19), May 11, 2021.
What about Ockham's Razor?