Nowadays, medicine, robotics, the stock markets… they are all shaped and orchestrated by artificial intelligence. Collecting, sifting, and searching through millions or billions of data points is therefore no longer a complicated process that requires professionals to spend many hours on study and analysis: thanks to AI, these processes can seemingly be completed in a few seconds.
However, contrary to what we might think, some areas still resist the temptation to use AI, and, astonishing as it may seem, archives are among the sectors that still operate in a traditional way. At this point, the first question that inevitably arises is: why? Is it right to maintain this approach, or would it be better to rely on new technologies?
On 26 September, Lise Jaillant, Senior Lecturer in Digital Humanities and PGT Programme Leader in Communication and Media at Loughborough University, gave a lecture at the University of Bern to paint a more complete picture of this topic and to guide us through the reasons why the archival sector is still reluctant to adopt AI.
An overview of the research
Since last year, Dr. Lise Jaillant and Dr. Arran J. Rees have been leading a project called “Unlocking our Digital Past: Engagement with policy makers to improve the preservation, access and usability of born-digital archives”. The project was established at Loughborough University in partnership with the Cabinet Office in London, in order to bring together archivists, civil servants, and academics to understand what the real obstacles between AI and archives are, and how AI can be used to make archives both easily accessible and well preserved.
They carried out their research by interviewing outstanding figures in these fields, for example: James Backer (historian and former British Library staff member), Clifford Lynch (executive director of the Coalition for Networked Information), Jason Baron (Professor of the Practice at the University of Maryland’s College of Information Studies and legal professional), Trump McDonald (computer scientist), Andrew Riley (archivist), and others. In the end, they were able to give an answer to the questions raised above.
Context and different intertwined areas
Every day, tons of important digital records are created by governmental bodies, and some of them are selected and preserved in the archives. However, getting access to these archival materials is extremely complicated for many reasons.
As far as the legal framework is concerned, just as the EU has the GDPR (General Data Protection Regulation), the UK has its equivalent, the Data Protection Act (DPA) of 2018. Copyright legislation, as we can imagine, also has a huge impact on this topic, and finally the guidance of The National Archives influences the way archival institutions provide access to their collections, in terms of the justification that must be given for disclosing personal data.
From the governmental side, opening these archives could raise issues of national security and, for example, embarrass entire countries and populations. However, it is important to remember that not all collections carry the same level of risk, and, from the researchers’ side, giving them the possibility to see these digital records is essential for the writing of history. Access to these documents is therefore a complex challenge not only for governments and institutions, but also for academics (historians, digital humanities scholars, traditional humanities scholars, etc.). Given that there is a great deal of data and, so far, few solutions to streamline the search, it is necessary to find a meeting point between these two perspectives and work towards a common solution.
An attempt to find common ground was made with keyword search methods but, as we can easily guess, their accuracy is questionable at best, and they have consequently not been very successful, as the small sketch below illustrates.
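To make the limitation concrete, here is a minimal Python sketch of such an exact-match keyword search, run over a few invented records; the records and the keyword are assumptions made up purely for illustration.

```python
# Toy example: a naive keyword search over three invented records.
# It flags any record containing the exact word "confidential", so it
# misses the second record (sensitive, but phrased differently) and
# flags the third (a false positive that merely mentions the word).
records = [
    "Minutes of the budget meeting, marked confidential.",
    "These minutes must not be shared outside the department.",
    "The report confirms that no confidential material was discussed.",
]

keyword = "confidential"
flagged = [text for text in records if keyword in text.lower()]

for text in flagged:
    print("FLAGGED:", text)
```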
To date, what we are certain of is that untangling large amounts of data is certainly not work that can be done manually, and it is at this point that AI becomes central, because it is presented as a possible solution to greatly simplify search processes.
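By way of contrast, here is a deliberately small, hypothetical sketch of the kind of machine-learning approach hinted at here: a classifier trained on a few labelled records that then flags similar new records for human review. The library (scikit-learn), the toy records, and the labels are all assumptions made for illustration and do not reproduce the project’s actual tools or data.

```python
# Hypothetical sketch: a tiny text classifier that could help triage
# records for review, assuming archivists can supply a labelled sample.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Attached: personal data of staff members.",
    "Do not circulate - contains names and home addresses.",
    "Agenda for the open public consultation.",
    "Press release on the new library opening hours.",
]
train_labels = ["sensitive", "sensitive", "not_sensitive", "not_sensitive"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The model can generalise beyond exact keywords, but its output is only
# a suggestion: the final decision stays with archivists and creators.
print(model.predict(["List of employees' home addresses attached."]))
```

Even in this toy form, the output is only a suggestion to be checked by a person, which is precisely why the questions of trust discussed below matter.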

What exactly is Artificial Intelligence? What are the principal hurdles between AI and archives?
Generally speaking, Artificial Intelligence is an approach to making decisions and completing tasks. It has a long history, with periods of excitement and periods of disinterest. Using AI in the search process can certainly lead researchers to the discovery of previously inaccessible records, but we have to take into account that AI in archives is still at an experimental stage.
The first issue we face, if we consider AI as a solution, is how to put it into practice from a technical point of view. Finding specialists in new technologies who are willing to work in this area is quite complicated, because archives are not a high-stakes business in financial terms: from a commercial point of view, the gain is almost nil.
Of course, if this were the only problem preventing the use of artificial intelligence in this area, some obvious solutions could be implemented, such as finding more funding to attract talent working with this type of technology, or investing more in training employees in the tools of the future. Yet even then, we encounter two main problems that prevent the realization of this futuristic method, which seems so convenient and easy: non-communication and distrust.
The non-communication
Civil servants, archivists, and researchers, the three categories of professionals affected by this possible innovation, have professional principles that are not radically different, since they are enshrined in codes of ethics and, as professionals, they all must obtain ethical approval. The dilemma of non-communication arises because these principles are often taken for granted and not shared across the three areas, but kept within each specific sector. Naturally, all this leads to another issue: mistrust.
Mistrust between stakeholders and mistrust in technology.
To apply AI to digital archives, we need to think about trust and collaboration across the entire archival circle, which starts with the people who create records (such as government professionals), then moves to archivists, and finally to users and researchers.
Unfortunately, trust is not the strong point of the archival circle: on more than one occasion, archivists have hesitated to release data for research because they did not trust how academic researchers would make use of it.
Moreover, the lack of trust in technology is another obstacle to applying AI tools. Professionals do not always agree on what is and what is not ethical in the case of new technologies, and this kind of mistrust, along with the scarcity of a shared professional ethic, can lead to a stalemate.

Progress towards a solution, and barriers that remain
However, many initiatives are currently being developed, especially in the scholarly field, where a lot of work on the preservation of digital records has been done. A good example is the Digital Preservation Coalition (DPC), based in Glasgow.
Furthermore, as far as miscommunication is concerned, various networks have been set up to foster conversations between archivists, computer scientists, and records creators, such as the Aura Network, focusing on the UK and Ireland, and the Aeolian Network, connecting the UK and the USA.
Obviously, the most natural way to build trust in AI is to make it more transparent, more explainable, and more ethical. But how can we do this in practice?
A reference model could be the UK government. In 2018 it established the Centre for Data Ethics and Innovation, whose mission is to help partners use data and AI in a trustworthy way, and in October 2020 the UK’s department responsible for science and technology released a note intended to ensure that machine learning systems are designed and deployed in an ethical and sensible way. The GLAM sector (Galleries, Libraries, Archives, Museums) has also stood out in trying to make these new practices as seamless as possible.
Unfortunately, we cannot say the same for codes of conduct, because many rules on this subject have been produced separately in each of the areas concerned. The main issue is that there is no single set of ethical principles on which everyone agrees, but there is at least a common discussion that includes transparency, justice, fairness, non-maleficence, responsibility, and privacy. These standards can also be found in the ethical codes and professional guidelines followed by Jaillant’s interviewees, who all confirmed that they try to work with integrity, honesty, objectivity, and impartiality.
The key point remains that there can be no betrayal without trust.
It is precisely the lack of trust in other stakeholders, and in new technologies, that emerged as common ground in all the interviews.
Consequently, the first thing to do is to resolve this absence of confidence and establish a common code of conduct. Once trust between humans has been restored, AI can come in as the next step to unlock digital archives. Of course, it is not possible to have unquestioning confidence in AI, and it remains important to analyse it critically, but with the building of collective trust, more knowledge about AI, and interdisciplinary dialogue to surface shared professional ethics, the chances of finding a solution that suits everyone grow considerably.
To conclude, the digital humanities have a huge role to play in bridging the gap between record creators, archivists, researchers, and users, because they sit at the intersection of many disciplines. They can therefore be fundamental to cross-sectoral decision-making and agreement.

BIBLIOGRAPHY
- PowerPoint presentation and lecture given by Dr. Lise Jaillant (Loughborough University) on 26.09.2022 for the Digital Humanities at the University of Bern. The video recording of the lecture can be found on the institute’s website.
- Loughborough University Website https://www.lboro.ac.uk/subjects/communication-media/staff/lise-jaillant/
- Lise Jaillant Website http://www.lisejaillant.com/
- Unlocking our Digital Past Website https://unlockingourdigitalpast.com/
- Aeolian Network https://www.aeolian-network.net/
- Image 1: https://www.thetimes.co.uk/article/historians-revolt-over-plan-for-national-archives-limit-tpvw3xv79
- Image 2: https://technative.io/category/machine-learning/page/3
- Image 3: https://www.secureredact.co.uk/articles/attitudes-towards-ai-lack-of-trust-and-awareness-from-consumers
3 replies to “APPLYING ARTIFICIAL INTELLIGENCE TO ARCHIVES: TRUST, COLLABORATION AND SHARED PROFESSIONAL ETHICS”
Thank you very much for this clear blog post, which leads the reader wonderfully through Lise Jaillant’s impressive lecture. First and foremost, she and the scientists she interviewed try to answer questions about the semantics of “trust”, and I find it interesting to consider which fields this “distrust” actually comes from. This seems to be another good example of the fact that there should be strictly defined common ground when stakeholders from different fields come together to work; once determined, the sooner it is applied, the better. I agree completely with my fellow commenters – otherwise, in whichever field, loads of data are bound to be lost. To me, it seems more realistic to establish common principles than to build trust towards AI. Several obstacles arise here, mainly how to make the complex knowledge about AI more transparent and therefore accessible. At the same time, the almost dystopian touch that accompanies AI needs to be pushed away, by all means in a critical way.
Thanks for this interesting blog post. I think this analysis can be applied to any sort of archive. For example, I am a student of theatre studies in Bern, and when I consult an archive, it is usually SAPA (https://sapa.swiss/en/). They are currently trying to digitize their collection and to make it available to the public via Wikidata (https://www.wikidata.org/wiki/Wikidata:WikiProject_Performing_arts). But while they are digitizing their collection, which is already expensive, more and more Swiss theatres are starting to archive their data online only. Suddenly, SAPA is confronted with another problem: having to archive the online archives of theatres, because usually, when the artistic directorship of a theatre changes, the website is overhauled and the online archive is gone. Moreover, this only concerns institutionalized theatre. The so-called “freie Szene” is often forgotten by scholars and archivists alike, because it works more at the margins of the system. Its data would be accessible through event management websites such as eventfrog.ch, but those stakeholders (the event websites) do not allow scholars to archive the already normed data entries.
So we are losing loads and loads of data, because the archives do not have the budget to digitize their holdings and archive the digital at the same time, and because stakeholders are not interested in collaborating with scholars. That is why I think Dr. Jaillant’s work on bridging these gaps is so important.
Very interesting article. What fascinates me about it is how important this is for many fields. I personally study sociolinguistics, and it is as relevant to think about digital archives working with AI (with its advantages and its disadvantages) in this field as it is in any other humanities field, I would say.