Orientation in Turbulent Times
Cultural heritage institutions such as galleries, libraries, archives and museums (GLAMs) currently find themselves in a difficult situation: generative AI models have fundamentally changed the meaning of the term “openness”. Until recently, the open provision of digital cultural heritage was an absolute ideal, as was the protection of intellectual property rights (IPR). Between this pair of opposites lies a grey area with many fine nuances, and guidelines offer orientation for navigating between the two poles in cases of doubt. Openness is meant to enable the creation of new culture on the basis of existing cultural heritage and to stimulate innovation and research, ideally by providing material that is in the public domain. For copyrighted material, cultural heritage institutions can conclude licence agreements with publishing houses as the holders of the rights. Until now, cultural heritage institutions have therefore seen their role as that of access brokers, balancing creator-friendly copyright against accessibility.
The development of generative AI applications, especially in the 2020s, has significantly complicated this situation: What is the relationship between generative AI and intellectual property? May such models be trained with copyrighted material? Can copyright holders refuse to allow their material to be used to train machine learning applications? Who owns the copyright to the output of these models? Can certain commercial organisations be excluded from using copyrighted material while other (commercial) users are allowed to do so? Cultural heritage institutions now have to navigate between the monsters Scylla (intellectual property protection) and Charybdis (restrictions on commercial companies). The fact that there are now two lighthouses of Messina (openness for all, and provision of cultural heritage data sets for innovation and research) does not make things any easier.
Karl Friedrich Schinkel, “Strait of Messina, Scylla and Charybdis”. Public Domain, Kupferstichkabinett of Berlin State Museums
The previously existing pair of opposites, which often represented a dilemma (i.e. a situation in which every decision in favour of one of the opposites leads to an undesirable outcome), has now been replaced by four poles, with significantly more options for action: affirmation, negation, both, neither. This tetralemmatic situation is particularly striking for research libraries, as they hold a treasure that is becoming increasingly valuable: digitally available books with syntactically and lexically correct texts from trusted sources such as cultural heritage institutions or publishers have become a depletable and, in the near future, contested resource for the training of Large Language Models. According to one study, high-quality text data in English will be exhausted before 2026, and the time horizon for other world languages is unlikely to be much longer. The stocks of public domain works that libraries are constantly digitising are therefore also increasing in value – ironically including texts that are published in open access and for which the major publishing houses will secure usage rights in the near future in order to train their own models.

Libraries that have entered into licence agreements with publishers in order to make copyrighted works available in digital form have a problem if those agreements explicitly exclude the use of protected content for training purposes. Where no such clause exists yet, it may be advisable, depending on the national context, to safeguard the claims of the rights holders. The Royal Library of the Netherlands (KB) has therefore excluded commercial companies from downloading such resources, fearing that such companies would violate copyright law, and has updated its terms of use accordingly. This is unusual in that no distinction was previously made between different groups of users.
Legally, such an approach can be problematic if it prevents access to public domain material. Technically, blocking crawlers is only a stopgap, as crawlers cannot be reliably kept away from openly provided content; legally, action must therefore also be taken against unauthorised use in the event of an infringement. And finally: Is it ethically correct to block commercial companies from certain content? After all, this also affects start-ups, small and medium-sized enterprises (SMEs) and companies in the creative sector. How can we legitimately differentiate between big tech companies and smaller players?
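The limits of crawler blocking can be illustrated with a short sketch. `GPTBot` is the user-agent name OpenAI publishes for its crawler; the URLs are hypothetical placeholders. A robots.txt rule only expresses a request – Python's standard-library robotparser shows how a compliant crawler would read it, while a non-compliant crawler can simply ignore the file:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that asks OpenAI's crawler (user agent "GPTBot") to stay
# away from the whole site, while allowing all other user agents.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler checks before fetching ...
print(rp.can_fetch("GPTBot", "https://example.org/collection/item1"))      # False
print(rp.can_fetch("SomeBrowser", "https://example.org/collection/item1")) # True

# ... but nothing technically prevents a crawler from ignoring robots.txt
# and requesting the same URL anyway: compliance is purely voluntary.
```

This is why robots.txt-style blocking remains an emergency solution: it governs well-behaved crawlers only, and enforcement against the rest has to happen through legal, not technical, means.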
It is not surprising that there is a lack of clarity about the legal framework: the law often lags behind reality. The AI Act, negotiated as a compromise, is due to be passed and come into force this year. What will the regulations look like, and will they really provide clarity? Entities that develop AI applications and operate in the EU will be required to develop a “policy to respect Union copyright law”. The use of copyright-protected works for the training of AI models is linked to the text and data mining (TDM) exception in Article 4 of the “Directive on copyright in the Digital Single Market”. This allows AI models to be trained with copyrighted material. However, the directive also provides for the possibility for rights holders to reserve their rights in order to prevent text and data mining; “where the rights to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightholders if they want to carry out text and data mining over such works.” This is where it gets tricky: so far, there is no standardised legal process for this, and it is unclear according to which (technical) standard or protocol the opt-out should be formulated in machine-readable form. It is therefore not surprising that even a non-profit organisation such as Creative Commons has called for the option to opt out of such use to become an enforceable right.
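One candidate for such a machine-readable opt-out is the TDM Reservation Protocol (TDMRep), drafted by a W3C Community Group. As a sketch, a rights reservation for a web page might be declared like this (the policy URL is a hypothetical placeholder):

```html
<!-- In the HTML <head> of a page whose content is reserved for TDM: -->
<meta name="tdm-reservation" content="1">
<!-- Optionally point to a policy describing licensing conditions: -->
<meta name="tdm-policy" content="https://example.org/tdm-policy.json">
```

The draft also allows the same reservation to be expressed as an HTTP response header (`tdm-reservation: 1`) or in a site-wide `/.well-known/tdmrep.json` file. Whether courts and regulators will accept any of these mechanisms as reserving rights “in an appropriate manner” under Article 4 is precisely the open question.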
Against this background, it becomes clear that cultural heritage institutions must abandon the ideal of openness, at least in its absolute form. Rather, nuances are needed: open to private users and research, but not to the cultural industry, to start-ups, SMEs and commercial AI companies, if the rights holders so wish. In pragmatic terms, this initially means that numerous licence agreements will have to be renegotiated in order to clearly document the rights holders’ position. Nevertheless, many questions remain unanswered: What about the numerous works for which the rights of use have not been clarified? Is it possible to differentiate between SMEs and big tech companies, or does “NoAI” simply apply across the board? Shouldn’t there also be separate licences for this? Who is responsible for developing technical standards and protocols to implement the opt-out in a machine-readable way? And who is responsible for initiating the “machine unlearning” of models that have already been trained with copyright-protected works?