Openness, and Some of its Shades

Openness, that lighthouse of the 20th century, came along with open interfaces (APIs). In the case of galleries, libraries, archives, and museums (GLAMs), it was the Open Archives Initiative Protocol for Metadata Harvesting, or OAI-PMH for short. At the time, the idea was to provide an interface that makes metadata available in interoperable formats and thus enables exchange between different institutions. The protocol also allows the harvesting of distributed resources described in XML, which may be restricted to named sets defined by the provider. Since the metadata reference the objects via URLs, access to the objects themselves is facilitated as well. The protocol is, however, not designed to differentiate between users: licences and rights statements can be included, but masking specific material from access was never foreseen. The decision whether (and which) use is made of material protected by intellectual property rights ultimately lies with the users.
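To make this concrete, below is a minimal harvesting sketch: the endpoint URL and set name are hypothetical, while the verbs, parameters and namespaces follow the OAI-PMH specification.

```python
# Minimal OAI-PMH harvesting sketch; the endpoint and set name are hypothetical.
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"  # protocol namespace
DC = "{http://purl.org/dc/elements/1.1/}"       # Dublin Core namespace
BASE_URL = "https://example-glam.org/oai"       # hypothetical provider endpoint

def harvest(set_name: str = "maps") -> None:
    """Harvest Dublin Core records from one named set, following resumption tokens."""
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc", "set": set_name}
    while True:
        root = ET.fromstring(requests.get(BASE_URL, params=params).content)
        for record in root.iter(OAI + "record"):
            # Object URLs live in dc:identifier; rights statements can be included too.
            for field in ("title", "identifier", "rights"):
                for element in record.iter(DC + field):
                    print(f"{field}: {element.text}")
        # The provider signals further result pages via a resumptionToken.
        token = root.find(f"{OAI}ListRecords/{OAI}resumptionToken")
        if token is None or not token.text:
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text}

harvest()
```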

Lighthouse on the Breton coast, painting by Théodore Gudin, 1845. Staatliche Museen zu Berlin, Nationalgalerie. Public Domain Mark 1.0

The 21st century brought a new concept: data sovereignty. On the one hand, this implies that data are subject to the laws and governance structures of the jurisdiction in which they are hosted; on the other hand, for the hosts, it stands for the notion that rights holders can determine for themselves what third parties may and can do with the data. Now that a second lighthouse – the provision of cultural heritage data sets for innovation and research – offers orientation in troubled times, the role of cultural heritage institutions as access brokers becomes tangible: if rights holders do not wish to provide their (IPR-protected) data openly to commercial AI companies, GLAM institutions as data providers are in a position to negotiate differentiated terms of use. For example, the data may be used freely by start-ups, small and medium-sized enterprises (SMEs) and companies active in the cultural sector, while big tech companies pay fees. Interestingly, the European Data Governance Act foresees such a case and includes a relevant set of instruments. Its chapter on the re-use of data held by public sector bodies (Chapter II, Article 6) regulates the provision of data in exchange for fees and allows those fees to be differentiated between private users, SMEs and start-ups on the one hand, and larger corporations, which do not fall under the former categories, on the other. This creates a possibility for differentiation among commercial users, whereby the fees have to be based on the costs of the infrastructure for providing the data. For these cases, cultural heritage institutions need new licences (or rights statements) clarifying whether commercial enterprises are excluded from access to data on the basis of rights holders' opt-outs, and whether big tech corporations get access by paying fees while the data are provided free of charge to start-ups and SMEs.

While this describes the legal side of the role of GLAM institutions as access brokers, there is also a technical side to data sovereignty, addressed by “data spaces”. APIs like OAI-PMH will continue to ensure the exchange between institutions, but will lose importance for the provision of data to third parties (apart from the provision of public domain material). By contrast, the concept of data spaces, which is central to the European Commission’s policy for the coming years, will gain in importance. One planned data space is, for example, the European Data Space for Cultural Heritage, to be created in collaboration with Europeana; existing similar initiatives include the European Open Science Cloud (EOSC) and the European Collaborative Cloud for Cultural Heritage (ECCCH). A technical implementation of such a data space is GAIA-X, a European initiative for an independent cloud infrastructure. Among other functionalities, it enables GLAM institutions to keep their data on premise while delivering processed data to users of the infrastructure, after applying an algorithm of the users’ choice to the data held by the institution: instead of downloading terabytes of data and processing them locally, users select an algorithm (or machine learning model) which is then sent to the data. An example of such functionality has been developed by the Berlin State Library with the CrossAsia Demonstrator. Such an infrastructure not only enables the handling of data with various rights of use, but also allows differentiation between users as well as payment services. In other words: it grants full sovereignty over the data. As with all technical solutions, there is a downside: data spaces are complex and difficult to manage, which is an obstacle for cultural heritage institutions and often creates a need for additional staff.
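As a thought experiment, the compute-to-data pattern might look like the sketch below; the endpoint and the job API are invented for illustration and do not reflect the actual interfaces of GAIA-X or the CrossAsia Demonstrator.

```python
# Illustrative compute-to-data sketch; all endpoints and the job API are hypothetical.
import time
import requests

DATA_SPACE = "https://dataspace.example-library.org/api"  # hypothetical infrastructure

def run_remote_job(model_id: str, collection_id: str, token: str) -> dict:
    """Send a registered algorithm to the data instead of downloading the data.

    The institution keeps the collection on premise; the infrastructure runs the
    selected model there and returns only the derived results.
    """
    job = requests.post(
        f"{DATA_SPACE}/jobs",
        json={"model": model_id, "collection": collection_id},
        headers={"Authorization": f"Bearer {token}"},  # enables user differentiation
    ).json()
    while True:
        status = requests.get(f"{DATA_SPACE}/jobs/{job['id']}").json()
        if status["state"] in ("finished", "failed"):
            return status  # results only; the raw terabytes never leave the host
        time.sleep(10)

# e.g. run_remote_job("ocr-model-v2", "crossasia/historical-newspapers", token="...")
```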

Linked (but not bound) to the concepts of data spaces and data sovereignty is the idea of a commons. “Commons” designates a shared resource that is managed by a community for the benefit of its members. Europeana, the meta-aggregator and web portal for Europe’s digital cultural heritage, explicitly conceptualises the planned European Data Space for Cultural Heritage as “an open and resilient commons for the users of European cultural data, where data owners – as opposed to platforms – have control of their data and of how, when and with whom it is shared”. The formulation chosen here is indicative of a learning process with regard to openness: defining an open commons “as opposed to platforms” addresses an issue characteristic of open commons, namely the over-use of available resources, which may lead to their depletion. In the classical examples of commons such as fishing grounds or pastures, the resource is endangered if users try to profit from it without contributing to its preservation. Digital resources, however, are not depleted by use; here, the issue lies with the potential loss of communal benefits through actions motivated by self-interest. In the 21st century, the rise of the big platforms has revealed what has been termed “the paradox of open”: “open resources are most likely to contribute to the power of those with the best means to make use of them”. The need for data spaces managed by a community for the benefit of its members not only adds another shade to openness; it also opens up another front – the turn against platformisation implies a rejection of the dominance of non-European big tech companies.

On the Use of Licences in Times of Large Language Models

It could all be so simple: cultural heritage institutions and other public sector bodies provide high-quality data on a large scale and, wherever possible, under a permissive licence or rights statement such as CC0 or the Public Domain Mark 1.0. This is in line with the idea that cultural heritage institutions are funded by taxes and that everyone should therefore benefit from their services and products; in the case of data, this means enabling innovation, research and, of course, private use.

However, we live in times of large language models and exploitative practices, especially by US-based big tech companies, where data are extracted from the web on a large scale and processed into proprietary large language models. These companies are not only the drivers of innovation; they also set themselves apart from research institutions by having purpose-built training data sets at their disposal, as well as exceptional computing power and the best-paid positions for developers of algorithms – all of them expensive ingredients of a recipe for success in the face of limited competition.

One of the weaknesses of ChatGPT – and presumably of GPT-4 – is its lack of reliability. This weakness results from the inability of purely stochastic language models to distinguish between fact and fiction, but also from a lack of data. Especially with regard to “hallucinated” literature references, bibliographic data from libraries are very attractive for building large language models, as illustrated in the sketch below. Another problem is the lack of high-quality text data. According to a recently published study, high-quality text data will be exhausted before the year 2026 – mainly because etiquette and proper spelling are in short supply on the internet. But who, if not libraries, holds huge stocks of high-quality text data? Almost all of their content has passed through a quality filter called “publishing houses”. Opinions may be divided on the intellectual quality of the books; but linguistically and orthographically, everything printed until the end of the 20th century (i.e. before the advent of self-publishing) is of very good quality.
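The sketch checks a possibly hallucinated reference against a bibliographic database; Crossref serves here as a freely queryable stand-in for library catalogues, and the plausibility check is deliberately crude.

```python
# Sketch: verifying a citation against bibliographic data (Crossref as a stand-in).
import requests

def reference_exists(citation: str) -> bool:
    """Return True if the bibliographic database knows a closely matching work."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=30,
    )
    items = response.json()["message"]["items"]
    if not items:
        return False
    best_title = (items[0].get("title") or [""])[0].lower()
    # Crude plausibility check: the best match should contain the citation's first
    # words; a real pipeline would also compare authors, year and venue.
    return all(word in best_title for word in citation.lower().split()[:3])

print(reference_exists("Attention Is All You Need"))  # an existing paper
```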

Finally, money has become dear: inflation is back, the low-interest phase is over, and with Silicon Valley Bank, a first bank there has gone bankrupt. Many companies based in the Valley will soon need fresh money, and monetisation will follow to generate profits: products that were previously offered free of charge (such as ChatGPT) will be developed into new and more capable models that provide demand-driven services in exchange for payment.

Should cultural heritage institutions, as public entities, serve to maximise the profits of a few companies by providing expensive, resource-intensive (and tax-funded) data for free? The answer has to be differentiated and is therefore complicated. Of course, data should continue to be made available under permissive licences, as has been the case up to now. A dual strategy can certainly be used here. On the one hand, data made available via interfaces such as OAI-PMH or IIIF remain accessible under CC0 or the Public Domain Mark 1.0; technical access restrictions can prevent large-scale data extraction, e.g. by controlling IP addresses or imposing download limits, as sketched below. On the other hand, specific data publications can be provided that bundle individual data sets to enable research and innovation; such offerings are protected as databases for 15 years, and licences containing an “NC” (non-commercial) clause can make the data usable for research and innovation. As an example, the Prussian Cultural Heritage Foundation uses such a licence (CC BY-NC-SA) for the digital representation of one of its masterpieces, and the (not so easy to use) 3D scan is freely available for download under this licence.
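The sketch assumes a simple Flask front end in front of an otherwise open interface; the daily limit and the route are illustrative, and a production setup would use a shared counter store with scheduled resets.

```python
# Per-IP download quota in front of an otherwise open interface (illustrative).
from collections import defaultdict
from flask import Flask, abort, request, send_file

app = Flask(__name__)
DAILY_LIMIT = 500            # max records per IP per day; illustrative value
counters = defaultdict(int)  # in production: shared store, reset once a day

@app.route("/records/<record_id>")
def get_record(record_id: str):
    counters[request.remote_addr] += 1
    if counters[request.remote_addr] > DAILY_LIMIT:
        # Data stay openly licensed, but large-scale extraction becomes costly.
        abort(429)  # HTTP 429: Too Many Requests
    return send_file(f"data/{record_id}.xml")  # no path sanitisation in this sketch
```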

Interestingly, the European Union anticipated the case described above in the Data Governance Act and included a relevant set of instruments. There is a chapter on the re-use of data held by public sector bodies (Chapter II, Article 6), which regulates the provision of data in exchange for fees. It states that public sector bodies may differentiate the fees they charge between private users, small and medium-sized enterprises (SMEs) and start-ups on the one hand, and larger corporations, which do not fall under the former definition, on the other. This creates a possibility for differentiation among commercial users, whereby the fees have to be based on the costs of the infrastructure for providing the data. This is rather atypical in the European legal system, where the principle of equal treatment applies. Cultural heritage institutions thus have EU Commissioner for Competition Margrethe Vestager on their side, who presented the Data Governance Act in 2020 (applicable from 24 September 2023, by the way). Vestager is also Executive Vice-President of the European Commission for a Europe Fit for the Digital Age and imposed more than 15 billion euros in antitrust fines in her first five years in office. So the political will to enforce it seems to be there.
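Translated into the logic of a data provider, such a differentiated fee schedule might look like the toy sketch below; the user categories and the cost basis are invented for illustration, the only constraint taken from the Act being that fees must be cost-oriented and may be differentiated.

```python
# Toy fee schedule in the spirit of Chapter II, Art. 6 of the Data Governance Act;
# categories and amounts are invented for illustration.
INFRA_COST_PER_GB = 0.12  # hypothetical infrastructure cost basis in EUR

def data_fee(user_category: str, gigabytes: float) -> float:
    """Return a cost-oriented fee, differentiated by user category."""
    if user_category == "private":
        return 0.0  # private use free of charge
    if user_category in ("sme", "startup", "cultural_sector"):
        return 0.0  # fees reduced or waived for SMEs and start-ups
    if user_category == "large_corporation":
        return round(gigabytes * INFRA_COST_PER_GB, 2)  # cost-based fee
    raise ValueError(f"unknown user category: {user_category}")

print(data_fee("large_corporation", 2048.0))  # e.g. 2 TB of images -> 245.76
```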

In case of doubt, this will be necessary. Licences like CC BY-NC-SA effectively prevent the use of public data for commercial exploitation in large language models. But since the creators of large language models are moving through a minefield of copyright issues – in the case of other models, a stock photo agency and other rights holders have already filed copyright lawsuits – one must unfortunately doubt that they will show consideration in the future. Of course, the relevant court decisions in the pending cases remain to be seen. Even with reverse engineering, it is not easy to prove which data sets have been incorporated into a large language model; a kind of circumstantial evidence would therefore have to suffice. In the medium and long term, it seems more sensible to focus on establishing validation processes and standards that have to be implemented before AI models are published. This includes disclosure of the training material and the training process, its evaluation by experts and code audits, but also a reversal of the burden of proof with regard to the licensing of the data used. Making such procedures an obligatory part of the approval of commercial AI applications is then actually a task for the European Union.

Finally, another way is to publish cultural heritage data in a separate Data Space for Cultural Heritage; the tender for this Data Space was launched last autumn as part of the European Union’s data strategy. To what extent this Data Space will grant full data sovereignty to cultural heritage institutions – and thus the possibility to control access to data publications – remains to be seen.