Why every librarian needs to be involved in shaping your AI Policy
CILIP, the library and information association
We represent and champion all information professionals
As the initial hype around ChatGPT and large language models (LLMs) subsides, many organisations and regulators find themselves wrestling with the same question – "now that we better understand the risks and capabilities, how do we harness the potential of AI safely and successfully for our users?".
The answer – according to the BBC and the UK Information Commissioner's Office (ICO) at least – is that we need to use this brief hiatus in disruptive announcements to develop and share effective policies for the use of AI by staff.
GDPR and AI-assisted decision-making
In their guidance on AI policies and procedures, the ICO sets out the key reasons why every organisation should be developing their AI Policy as a matter of priority:
“Your policies and procedures are important for several reasons, they:
This idea of 'explainability' is central to the ICO's approach – and to the legal requirements on organisations. Specifically, the ICO states that the General Data Protection Regulation (GDPR) creates specific requirements around the provision of information about, and the explanation of, AI-assisted decisions.
We are currently in a 'Wild West' era of AI adoption, with individual staff frequently trying out AI-assisted tools as part of their workflow, outside their organisation's IT policies. A February 2023 survey by Fishbowl found that as many as 68% of the employees sampled had not disclosed their use of AI to their managers.
As a result, many organisations are carrying a significant degree of risk around the impact of AI on their GDPR compliance. This is a primary reason why every organisation should develop and communicate its AI Policy before the ICO begins to deploy a stricter regime of controls and, potentially, fines.
Transparency, trust and creativity – Generative AI at the BBC
In October 2023, the BBC published three top-level principles that will shape and drive their use of Generative AI across the whole of their output. The three principles are (in summary):
These principles are important because they lay a foundation which will enable the BBC to develop and share AI Policies that are compatible with their public purpose as defined under their Charter.
At the same time, it puts down an important marker that the BBC's own use of AI should protect and promote the rights of creatives – absolutely critical at a time when several LLMs have been revealed to have been trained on large volumes of other people's copyrighted material.
The librarian as the bridge to ethical AI
CILIP believes that there is a central role for ethical library, information and knowledge management professionals to ‘be the bridge’ between the capabilities of new technology and the needs and rights of information users.
Our professional values mean that we are ideally placed to help organisations harness AI in ways that minimise the risk of exploitation and bias. Looking down the list of potential benefits of adopting an AI Policy, it is clear that this should rapidly become a key part of the skillset of our professional community:
An AI Policy will help your organisation (whether your library or information service, or your wider parent institution) to:
Not only this, but the process of developing an AI Policy will help your organisation both to learn more about its own systems and data – its values and 'red lines' – and to begin to develop training and support for the staff who will be putting the Policy into practice in their daily work.
In an age when many library, information and knowledge professionals are being called upon to demonstrate the currency and relevance of our professional skillset, being ready to help our organisations develop a robust and effective AI Policy could be just the way into the hearts and minds of senior managers that we have been looking for.
If you have found this article useful, why not take it further by attending the CILIP Employers' Forum event 'Developing your AI Policy'? This practical event will take place in Worcester and online on 28 February, featuring contributions from leading organisations that have been working on AI Policies of their own.
Develop your AI policy
Register now to join the discussion, and set yourself and your library up to develop an AI policy at the CILIP Employers' Forum.