Human-Machine-Cognition

Humans search for themselves in non-human creatures and inanimate artefacts. Apes, the “next of kin”, or dogs, the “most faithful companions”, are good examples of the former; robots are good examples of the latter. According to a common hypothesis, a human-like design of robots’ bodies and a humanising linguistic framing of their capabilities support the anthropomorphisation of these machines and, as a consequence, the development of empathetic behaviour towards them. The tendency to anthropomorphise varies from person to person; there are “stable individual differences in the tendency to attribute human-like attributes to nonhuman agents”.

Large Language Models (LLMs) are not (yet) associated with human-like body shapes. However, this does not mean that they are exempt from the human tendency to anthropomorphise. Even a well-formulated sentence can lead us to wrongly assume that it was spoken by a rational agent. Large language models are now remarkably good at reproducing human language: they have been trained on linguistic rules and patterns and command them with ease. However, knowledge of the statistical regularities of language does not amount to “understanding”. The ability to use language appropriately in a social context also remains incompletely developed in LLMs; they lack the necessary world knowledge, sensory access to the world and commonsense reasoning. That we nevertheless tend to read the text produced by generative pretrained language models (GPTs) as human utterances is, on the one hand, due to the fact that these models have been trained on very large volumes of 21st-century text and can therefore convincingly replicate our contemporary discourse. If the way in which meaning is produced through language corresponds to our everyday habits, it can come as no surprise that we attribute “intelligence”, “intentionality” or even “identity” to the producer of a well-crafted text. In this respect, LLMs confirm the structuralist theories of the second half of the 20th century, according to which language is a system that defines and limits the framework of what can be articulated and thus ultimately thought. And in this respect, LLMs also seem to confirm Roland Barthes’ thesis of the “death of the author”: the endless recombination of the available word material and the prediction of the most probable words and sentences are apparently enough for us to recognise ourselves in the text output.
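To make the underlying mechanism concrete, the following is a minimal sketch of what “prediction of the most probable words” amounts to in practice. It uses the Hugging Face transformers library; the choice of model (“gpt2”), the prompt and the parameters are illustrative assumptions, not details taken from the text above.

```python
# Minimal sketch: an autoregressive language model only scores which token is
# most likely to come next, yet the resulting text reads like an intentional
# utterance. "gpt2" and the prompt are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The meaning of a sentence emerges"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)     # turn scores into a probability distribution
    top = torch.topk(probs, k=5)              # the five most probable continuations

for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode([int(token_id)])!r}: {p:.3f}")
```

Nothing in this loop refers to meaning, intention or a speaker; the apparent coherence of the output is produced entirely by repeating this next-token step.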

On the other hand, the specific design of chatbots supports anthropomorphisation. ChatGPT, for example, has been trained on tens of thousands of question-answer pairs. Instruction fine-tuning ensures that the model generates text sequences in a specific format: the LLM interprets the prompt as an instruction, distinguishes the input of the interlocutor or questioner from the text it has produced itself, and draws conclusions about the human participants. On the one hand, this means that the language model is capable of adapting the generated text to the human counterpart and of imitating sociolects; on the other hand, it creates in humans the cognitive illusion of a dialogue. The interface of apps such as ChatGPT further supports this illusion; it is designed like the interfaces we use for conversations with other humans. We then follow our habits and, in the dialogue with the chatbot, supply the social context that is characteristic of a conversation and assume intentionality on the other side. Finally, ChatGPT was trained as a fictional character that provides answers in the first person. The language model therefore produces statements about itself, for example about its ethical and moral behaviour, its performance, privacy and the training data used; if a user asks for inappropriate output, it politely declines. These statements are best understood as an echo of the training process, as what OpenAI would like us to believe about this technology. The dialogue form and the fictional character reporting in the first person are the only means by which OpenAI can control the output of the language model.
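A small sketch may clarify what this dialogue format looks like from the model’s side. The role names, markers and messages below are illustrative assumptions (chat templates differ between providers); the point is that the model never encounters a dialogue partner, only a single text string in which role markers separate the user’s input from its own previous output, and which it simply continues.

```python
# Sketch of an instruction-tuned chat format. The role markers and the system
# message are illustrative assumptions, not OpenAI's actual template.
messages = [
    {"role": "system", "content": "You are a helpful assistant and answer in the first person."},
    {"role": "user", "content": "Can you keep my data private?"},
    {"role": "assistant", "content": "I take privacy seriously and do not store personal data."},
    {"role": "user", "content": "How do you know that about yourself?"},
]

# Flattened into the single string the model actually conditions on:
prompt = ""
for m in messages:
    prompt += f"<|{m['role']}|>\n{m['content']}\n"
prompt += "<|assistant|>\n"   # the model simply continues this text

print(prompt)
```

The “dialogue” is thus a formatting convention imposed during fine-tuning: the first-person answers, including statements about privacy or ethics, are continuations of this template rather than reports by a self-aware counterpart.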

All of this can be summarised as “anthropomorphism by design”. It is therefore no wonder that we humans tend to ascribe human characteristics to a disembodied language model. However, while we are learning how to use such chatbots, we must not succumb to the illusion that we are dealing with a human interlocutor. Empathetic statements or emotions uttered by the bot are simulations that can become extremely problematic if, for example, we mistake the bot for a therapist. The assumption that a language model could be suitable for making decisions and could therefore take on the role of lawyers, doctors or teachers is equally misleading: in the end, it is still humans who bear responsibility for such decisions. We must therefore not be tricked by an anthropomorphising design. The impression that we have anything other than a machine as our counterpart is deceptive: there is no one there.
