On the Use of ChatGPT in Cultural Heritage Institutions
Since the release of the ChatGPT dialogue system in November 2022, the societal debate about artificial intelligence (AI) has gained significant momentum and has also reached cultural heritage institutions such as libraries, archives, and museums. The central challenge is to assess how powerful such large language models (LLMs) are in general, and Generative Pre-trained Transformers (GPTs) in particular. For the cultural heritage sector, the ChatGPT chatbot prototype suggests a whole range of possible uses: producing summaries of texts or descriptions of artworks, generating metadata, writing computer code for simple tasks, assisting with subject indexing and keywording, or helping users find resources on the websites of cultural heritage institutions.
Undoubtedly, ChatGPT’s strengths lie in the generation of text and associated tasks. As “stochastic parrots,” as large language models were called in a much-discussed 2021 paper, these models predict on a stochastic basis what the next words of a snippet of text will look like. ChatGPT has additionally been trained, as a text-based dialogue system, to always provide an answer. This property of the chatbot points directly to one of the model’s central weaknesses: when in doubt, ChatGPT produces untrue statements in order to keep the dialogue going. Since large language models are, after all, only statistical applications of artificial intelligence and have no knowledge of the world, they cannot per se distinguish between fact and fiction, social construction and untruth. The fact that ChatGPT “hallucinates” (as the common anthropomorphizing term goes) when in doubt and, for example, invents literature references naturally damages the reliability of the system; at the same time, it points to the great strength of libraries in providing authoritative evidence.
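To make the “stochastic parrot” principle concrete, here is a minimal sketch of next-word prediction. It uses the openly available GPT-2 model via the Hugging Face transformers library as a stand-in, since ChatGPT’s own weights are not public; the prompt string is an arbitrary example.

```python
# Minimal sketch of next-token prediction with GPT-2 (a stand-in for ChatGPT,
# whose weights are not public). Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The library holds a large collection of"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)       # turn scores into probabilities

# Show the five most probable continuations: purely stochastic prediction,
# with no notion of whether a continuation is factually true.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

The model simply ranks possible continuations by probability; whether the most probable continuation is factually true plays no role in the computation.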
On the other hand, a strength of such systems is that they can reproduce discourses excellently and are therefore able to classify individual texts or larger text corpora and describe their content remarkably well. This shows great potential, especially for libraries: up to now, digital assistants that support the indexing of books have at best worked with statistical methods such as tf-idf, or with deep learning. Such approaches could be complemented by topic modeling, a method that generates a stochastically modelled collection of words describing the content of a work or the topics it deals with. The challenge for users so far has been to interpret this collection of words and assign a coherent label to it, and this is exactly what ChatGPT does excellently, as several researchers have confirmed (see the first sketch below). Since this massively improves and facilitates the labelling of texts, it is certainly one of the most probable use cases for AI in libraries, and exactly the field on which sub-project 3, “AI-supported content analysis and subject indexing”, of the project “Human.Machine.Culture” focuses. By contrast, ChatGPT’s handling of simple programming tasks, such as creating a bibliographic record in a specific format or transforming a record from MARCXML to JSON, still leaves room for improvement: it does not always perform such tasks reliably, as a recent experiment showed.
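To illustrate the workflow described above, here is a minimal sketch, assuming scikit-learn and a tiny invented three-document corpus. It derives topic word lists with Latent Dirichlet Allocation and then builds the kind of prompt with which an LLM could be asked to turn the raw word list into a coherent label; the documents and the prompt wording are illustrative assumptions, not part of the sub-project’s actual pipeline.

```python
# Minimal sketch: topic modeling yields word lists; an LLM can label them.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented miniature "corpus"; a real pipeline would use full-text documents.
docs = [
    "medieval manuscripts illuminated parchment scribes monastery codex",
    "printing press movable type incunabula paper typography workshop",
    "digital scans OCR metadata catalogue records repository access",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    # The six highest-weighted words of this topic: the "collection of words"
    # that a human indexer previously had to interpret.
    top_words = [vocab[i] for i in topic.argsort()[-6:][::-1]]
    # Hand exactly this list to an LLM to obtain a coherent subject label:
    prompt = ("Suggest a short subject heading for a text about these "
              "keywords: " + ", ".join(top_words))
    print(f"Topic {k}: {top_words}")
    print(f"Prompt for the LLM: {prompt!r}")
```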
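The record transformation mentioned at the end of the paragraph, by contrast, is a deterministic task that is arguably better solved with a few lines of conventional code than delegated to a chatbot. Here is a minimal sketch, assuming the pymarc library and a hypothetical local file record.xml containing MARCXML:

```python
# Minimal sketch: deterministic MARCXML-to-JSON conversion with pymarc.
# Requires: pip install pymarc; "record.xml" is a hypothetical input file.
from pymarc import parse_xml_to_array

records = parse_xml_to_array("record.xml")  # read all MARCXML records
for record in records:
    print(record.as_json(indent=2))         # MARC-in-JSON serialization
```

Unlike a chatbot, such a conversion produces the same output for the same input every time, which is exactly the reliability that bibliographic workflows require.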
ChatGPT, as one of the most powerful text-based AI applications currently available, underlines the potential benefits of such models. At the same time, however, it also highlights the risks associated with their use: so far, only U.S. big tech companies are able to train such powerful models, make them accessible, and subsequently develop models optimized through reinforcement learning for specific tasks, with the clear goal of monetization. In addition, generative AI systems bring with them a number of ethical issues, as they require large masses of text, which have so far been taken from the Internet, a place where not everyone interacts politely and respectfully. For example, a recent study has underlined that large language models reproduce stereotypes by associating the terms “Muslims” and “violence”. Moreover, toxic content in the language models has to be labeled as such, an operation that is carried out by underpaid workers; this again underlines the ethical dubiousness of the process of establishing such models.
Finally, it has to be underlined that these models have been trained almost exclusively on 21st-century textual material available on the Internet. By contrast, sub-project 4, “Data provision and curation for AI”, of the project “Human.Machine.Culture” concentrates on the provision of curated and historical data from libraries for AI applications. Ultimately, the deployment of large language models points to very fundamental questions: what role the cultural heritage of all humanity should play in the future, what influence cultural heritage institutions such as libraries, archives, and museums may have on shaping that role, and what effect the texts generated by large language models will have on our contemporary culture as such.