It’s time to wrap a culture of privacy around AI
By Pete Ansell, Chief Technology Officer of Privacy Culture
Generative artificial intelligence is immensely powerful. Governing it so that it supercharges your business and, at the same time, embeds a culture of privacy involves much the same strategies. By breaking generative AI down into its three constituents -- inputs, models and outputs -- with a laser focus on data management, DPOs can create a robust framework for its proper use.
Firstly, there are some big questions that data protection officers (DPOs) are looking to answer: How do I create a wrap-around culture of privacy when it comes to governing AI properly? And how do I deploy generative AI effectively so that it also elevates my culture of privacy?
Right now, it’s not just the EU AI Act that’s driving action. Businesses are looking to differentiate themselves by tapping into proprietary data, whether corporate or customer information. Doing so can expose sensitive data to bad actors and fall foul of GDPR rules.
In fact, generative AI issues are like earlier GDPR compliance challenges on steroids. In privacy circles, if GDPR was a big stick, the race to utilise artificial intelligence is now a big carrot.
Yet it’s difficult to bake data privacy into foundation models. Their strength comes from digesting millions of disparate documents and data points -- structuring the unstructured. On top of this, their processing is a black box, owing to the number of parameters involved -- GPT-4 reportedly uses around 1.7 trillion -- and the complexity of the algorithms used. This makes it difficult to control data outputs.
It’s why smart organisations must take a multi-tiered approach to governing AI and deploying gen AI in their business, creating an organisational environment where privacy is the default mode. Breaking generative AI down into its constituent parts is helpful in this process.
Firstly, when it comes to data inputs, mature data management is essential: it involves data labelling and full audits of what data is feeding the foundation model. This is where data protection impact assessments (DPIAs) come into play. The data inputs used to fine-tune large language models (LLMs) are the one element businesses can fully control.
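To make this concrete, here is a minimal sketch, in Python, of what a pre-ingestion audit might look like. The PII patterns, record IDs and sample records are illustrative assumptions for the example, not a production scanner.

```python
import re

# Illustrative sketch only: a minimal pre-ingestion audit that flags
# training records containing personal data before they reach a
# fine-tuning pipeline. Patterns and record IDs are assumptions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def audit_record(text: str) -> list[str]:
    """Return the PII categories detected in a single training record."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

def audit_dataset(records: dict[str, str]) -> dict[str, list[str]]:
    """Map record IDs to detected PII categories, ready for the DPIA log."""
    findings = {rid: audit_record(text) for rid, text in records.items()}
    return {rid: labels for rid, labels in findings.items() if labels}

sample = {
    "doc-001": "Contact jane.doe@example.com about the renewal quote.",
    "doc-002": "Quarterly revenue grew 12% year on year.",
}
print(audit_dataset(sample))  # {'doc-001': ['email']}
```

In practice a real audit would combine pattern matching with named-entity recognition and data lineage records, but the principle is the same: know what is in the data before the model does.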
Secondly, there is how that information is processed: data minimisation, anonymisation, pseudonymisation and encryption are vital when feeding gen-AI systems. The use of synthetic data can also help.
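As an illustration, the sketch below applies minimisation and pseudonymisation to a single record before it reaches a gen-AI system. The field names and salt handling are assumptions for the example, not a complete anonymisation pipeline.

```python
import hashlib

# Illustrative sketch, not a full anonymisation pipeline: drop fields the
# model does not need (minimisation) and replace direct identifiers with
# salted hashes (pseudonymisation). Field names and the salt are assumptions.
SALT = b"store-this-secret-in-a-vault-and-rotate-it"
ALLOWED_FIELDS = {"customer_id", "segment", "complaint_text"}

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted, one-way token."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def prepare_for_llm(record: dict) -> dict:
    """Minimise, then pseudonymise, a record before it is sent to a gen-AI system."""
    minimised = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    if "customer_id" in minimised:
        minimised["customer_id"] = pseudonymise(minimised["customer_id"])
    return minimised

print(prepare_for_llm({
    "customer_id": "C-10293",
    "full_name": "Jane Doe",  # dropped by minimisation: the model does not need it
    "segment": "retail",
    "complaint_text": "Delivery was two weeks late.",
}))
```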
Thirdly, the next tier involves controlling how the AI interacts with your data. This is why retrieval-augmented generation, or a so-called RAG-based approach, is now popular: a foundation model references an authoritative data set that sits outside its training sources before generating a response, keeping sensitive data protected.
With RAG systems, businesses don’t share vast tranches of raw data with the LLM itself; access is via a secure vector database, and sensitive data is only retrieved when it is relevant to a query, which minimises exposure. Differential privacy can also assist in this process, allowing aggregated data to be shared with the model while protecting individual privacy.
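A stripped-down sketch of that retrieval step is shown below. The embed() function is a deterministic stand-in for a real embedding model, and the documents, top-k value and final LLM call are assumptions; a production system would use a managed vector database with access controls.

```python
import hashlib
import numpy as np

# Stripped-down RAG retrieval sketch. embed() is a deterministic stand-in
# for a real embedding model; the documents, k and the final LLM call are
# assumptions. Only the top-scoring passages ever reach the model.
def embed(text: str, dim: int = 8) -> np.ndarray:
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

class VectorStore:
    """A toy stand-in for a secure vector database."""
    def __init__(self, docs: list[str]):
        self.docs = docs
        self.vectors = np.stack([embed(d) for d in docs])

    def top_k(self, query: str, k: int = 2) -> list[str]:
        scores = self.vectors @ embed(query)  # cosine scores (unit vectors)
        return [self.docs[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str, store: VectorStore) -> str:
    context = "\n".join(store.top_k(query))
    # In practice this prompt would go to your LLM; here we just return it.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = VectorStore([
    "Policy 12: customer data is retained for 24 months.",
    "Office plants are watered on Fridays.",
])
print(answer("How long is customer data retained?", store))
```

The design point is that the model only ever sees the handful of passages relevant to the question, never the whole corpus.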
The final tier involves data outputs. This is where running AI impact assessments, creating a multi-disciplinary steering group and applying human insight come into play. Human-in-the-loop review is essential, as are training and awareness on the correct deployment of AI.
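A simple human-in-the-loop output gate might look like the sketch below, where generated text matching sensitive patterns is held for a reviewer rather than released. The patterns and the in-memory queue are illustrative assumptions.

```python
import re

# Sketch of a human-in-the-loop output gate: generated text matching
# sensitive patterns is held in a review queue instead of being released.
# The patterns and the in-memory queue are illustrative assumptions.
SENSITIVE = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-number-like digit runs
]

review_queue: list[str] = []

def release_or_escalate(output: str) -> str | None:
    """Return the output if it looks clean; otherwise hold it for human review."""
    if any(p.search(output) for p in SENSITIVE):
        review_queue.append(output)  # a reviewer approves or redacts before release
        return None
    return output

print(release_or_escalate("Your data is processed under Article 6(1)(b)."))
print(release_or_escalate("Forward this to jane.doe@example.com."))  # None, queued
```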
Right now, it is vital to take a multi-tiered, ecosystem approach. Why? Because governing AI, particularly gen-AI, properly and creating a culture around its responsible use depends on a wide array of variables: not just internal and external data, but the processes you put in place, as well as employee behaviour.
The aim is a stage where AI governance is effectively covered and delivers a proactive rather than a reactive culture of privacy in your organisation. DPOs don't want to sit on a privacy platform all day analysing the risks associated with data use or AI governance. Equally, they don’t want to check in only every six months, when something goes wrong, or when there is a data breach.
DPOs just want a smart, proactive system that notifies them when action is needed, based on their company’s culture around data. This includes the use of the latest AI tools, sensitive data, changes in processing and new regulations coming into play.
Generative AI can actually help achieve this, which is why Privacy Culture is developing Cuba, a wrap-around solution that embeds a proactive culture of privacy. It does this by deploying a foundation model fine-tuned with a wide array of privacy-related metrics -- yours, ours and external information, including benchmarking across industries and jurisdictions.
The future of AI is bright, but only with the right approach.
Find out more from me, Pete Ansell, CTO of Privacy Culture. I will be attending the IAPP AI Governance Global Conference from 4th to 5th June in Brussels.