Large Language Models and Their WEIRD Consequences

In his book “The WEIRDest People in the World”, evolutionary anthropologist Joseph Henrich focuses on a peculiar group of people he calls “WEIRD people”. The play on words resolves once you know that WEIRD stands for “Western, educated, industrialised, rich, democratic”. Henrich asks how it was possible for a small segment of the world’s population, most of whom live in the West, to develop a range of very specific skills. He begins with the observation that over the last 500 years, the brains of these people have been reshaped by extensive reading, spurred by Luther and his imperative to read the Bible independently. To characterise these changes, and in particular to trace how a dynamic of acceleration, with innovation as a motor of economic growth, emerged in Central Europe, he examines educational institutions, urbanisation, the development of impersonal markets, supra-regional monastic orders, universities, knowledge societies, scholarly correspondence and the formation of new (Protestant) religious groupings. If we wanted to continue Henrich’s study into the 21st century, we would have to look at the influence that large language models (LLMs) exert on the human brain. Although they have only existed since around 2016, and have been available to a broad user base only since the release of ChatGPT in autumn 2022, some – admittedly speculative – consequences of their use can already be anticipated.

  1. We will (have to) learn how to deal with misinformation. LLMs are prolific fabricators: they cannot distinguish between true and false. As highly efficient text generators, they can produce large amounts of factually incorrect content in no time at all, feeding the internet, social media and the comment columns of news sites. This can lead to significant distortions in political discourse, for example when elections are coming up – as they will be in 2024 in the USA, India, probably the UK and numerous other countries around the world. It therefore comes as no surprise that the World Economic Forum, in this year’s Global Risks Report, lists misinformation and disinformation deployed for deception among the greatest short-term risks. Because LLMs generate text by predicting the most likely next word (a mechanism illustrated in the code sketch after this list), they produce articles that sound plausible but are often factually inaccurate. A WEIRD consequence will therefore be that the human brain has to develop the discernment to accurately identify (and reject) such synthetic content.
  2. We will (have to) sharpen our concept of authenticity. In April 2023, Berlin-based photographer Boris Eldagsen declined the prestigious Sony World Photography Award on the grounds that his authentic-looking image of two women was AI-generated. The jury responsible for the award had been unable to distinguish the image, entitled “Pseudomnesia: The Electrician”, from a photograph taken with a conventional camera. Yet our viewing habits and perceptual routines are geared towards treating photographs as faithful representations of reality. We will undoubtedly have to adapt our concept of authenticity here, as multimodal LLMs have also become extremely powerful in the domain of moving images: in January 2024, a study revealed that over 100 deepfake video adverts featuring Rishi Sunak had been distributed on Facebook in the preceding weeks. Both examples demonstrate how easily our perception can be manipulated, cause confusion, unease and scepticism, and point to the fact that we need to relearn how to deal with AI-generated visual content.
  3. We will (have to) come to terms with the fascination of visual worlds. Generative pretrained transformers (GPTs) will soon be able not only to generate texts but also to create complete three-dimensional visual worlds. This is exactly what Mark Zuckerberg’s vision of the metaverse aims at: to create virtual worlds so overwhelmingly fascinating that users can no longer detach themselves from them – in other words, visual worlds with a high addictive potential. The pull of virtual realities, familiar so far from the gaming industry, is thus amplified many times over. In order not to become wholly dependent on these worlds and lose touch with reality, we will have to adapt our cognitive abilities – certainly a WEIRD competence in Henrich’s sense.
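To make the mechanism behind point 1 concrete, here is a minimal sketch of next-word prediction. It assumes the open-source Hugging Face transformers library and the small, freely downloadable GPT-2 model; the prompt is purely illustrative. The point to notice is that the model ranks continuations by statistical likelihood alone – truth plays no role in the computation.

```python
# A minimal sketch of next-word prediction with the small GPT-2 model.
# Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The election was won by"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the logits for the final position into a probability
# distribution over the next token, then show the five most likely.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)

# The model ranks continuations purely by likelihood; it has no
# notion of whether any of them is factually true.
for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```

Production LLMs are vastly larger and use more elaborate sampling strategies, but the underlying objective – predict the next token – is the same, which is why plausibility and factual accuracy can come apart.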

These three examples sketch only the most likely consequences that the widespread use of LLMs will have on our brains. Many others are conceivable, such as the atrophy of the ability to conceive and compose complex texts (also a WEIRD ability). In terms of the plasticity of our brains, the arrival of LLMs and their output thus ranks alongside historical upheavals such as the invention of printing and the introduction of electronic mass media, with all their consequences for cognitive organisation and social coexistence. It is no exaggeration to say that the concept of representation needs to be redefined. So far, humanity has coped quite well with such epochal upheavals. We will see how these WEIRD consequences play out in practice.
