Entries by Jörg Lehmann

Prize Question: Openness vs. Arcane Knowledge in the 21st Century

In 1798, the Königliche Societät der Wissenschaften (Royal Society of Sciences) in Göttingen published the following prize question in the Reichs-Anzeiger: “How can the advantages made possible by the travels of journeymen be promoted and the disadvantages that occur be prevented?” (“Wie können die Vortheile, welche durch das Wandern der Handwerksgesellen möglich sind, gefördert […]

Whose Openness? Whose Ethics?

Openness can be described as one of the logics driving contemporary science and culture. The mechanism behind it is quite simple: Nation-states invest a share of their tax revenues in science and culture; in the latter field this includes, for example, the digitisation of cultural heritage. In return, research organisations and cultural (heritage) […]

Cultural Heritage Datasets, Artificial Intelligence and the Ethics of Non-Intervention

It is a well-known fact that machine-learning algorithms exacerbate biases inherent in the datasets on which they are trained. In recent years, this has been amply documented, e.g. in Cathy O’Neil’s book “Weapons of Math Destruction” (2016), in Kate Crawford and Trevor Paglen’s “Excavating AI” (2019), or in articles such as “Data and its […]

On the Use of Licences in Times of Large Language Models

It could all be so simple: cultural heritage institutions and other public sector bodies provide high-quality data on a large scale and, wherever possible, under a permissive licence such as CC0 or the Public Domain Mark 1.0. This is in line with the idea that, since cultural heritage institutions are funded by taxes, everyone should also […]

On the Use of ChatGPT in Cultural Heritage Institutions

Since the release of the ChatGPT dialogue system in November 2022, the societal debate about artificial intelligence (AI) has gained significant momentum and has also reached cultural heritage institutions (such as libraries, archives, and museums). The main challenge is to assess how powerful such large language models (LLMs) are in general, and Generative Pre-trained Transformers […]