Some initial thoughts on data privacy in the post-ChatGPT world

One supposes that, now that #openai 's #chatgpt3 has been the subject of both a #TES article and BBC Radio 4's #thoughtfortheday (Thursday 12th January 2023, if you're interested in looking it up), it is sufficiently mainstream to merit a reflection a little more mature than the alarmist articles we have been reading about how it will spell the end of homework or cause half of all universities to go bankrupt within the next five years.

The publication of both terms of use and a privacy policy by ChatGPT's creators has gone some way towards filling the void identified by German law firm Field Fisher, which last month published reflections on whether or not universities should ban the tool, citing concerns over its designers' apparent lack of consideration of data privacy obligations (in particular the GDPR). But the privacy policy OpenAI have published is short on detail and generic to the point that it prompts more questions than it answers. For example, here is a statement advising how OpenAI might transfer data to a third party without the consent of the data subject:

[Screenshot: excerpt from OpenAI's privacy policy on third-party data transfers]
https://openai.com/privacy/

This is a very different interpretation of the concept of consent from the one we are used to under the GDPR (and the UK GDPR, which remains very similar). It is, however, entirely compliant with data protection law as enforced in California, which is the jurisdiction OpenAI is primarily concerned with, and there are some safeguards elsewhere in the policy against the unintentional use of data belonging to children under 13 years of age.

As ChatGPT gains users in British and European schools, it will be up to schools themselves to demonstrate to regulators that they have taken measures to protect the privacy of personal data disclosed as a result of its use. This will involve a combination of updated privacy notices and similar policy documents, and proactive measures such as parent information meetings and newsletters. We can't put the genie back in the bottle (and nor should we: ChatGPT 3 is an exciting and potentially liberating development in AI), but we must ensure its users understand what they are engaging with and what data they risk disclosing if they use it unthinkingly.

At this still relatively early stage in ChatGPT 3's introduction to the world of school, most school leaders will be discussing where they stand on the continuum between an outright ban (as we have seen in New York public schools, and I have to confess to being in agreement with Kevin Roose on that issue) and freely allowing ChatGPT (and similar technologies; ChatGPT is by no means the only bot with the ability to disrupt established expectations on essay-writing integrity) to be used in their schools. Acceptable Use Policies and Academic Honesty policies are being rewritten up and down the land, and across the continent, as I type this piece. The expectations they communicate will of course be subject to a judgement call by school leaders based on their knowledge of their individual learning communities.

Ironically, one popular use of ChatGPT among organisations experimenting with what it can do for them is drafting privacy notices and data protection policies for publication on their websites for compliance reasons. How effective this is remains, to say the least, open to question, and I wouldn't recommend it as a practice. Even before AI became as ubiquitous as it currently is, we saw examples aplenty of organisations taking other organisations' privacy notices from websites and simply republishing them as their own, often with disastrous effects.

For example, for many years an independent firm my previous school often used for school photographic services had a full privacy notice on its website. I was probably one of only a few clients who bothered to read the whole thing, and when I did I realised it had been taken en bloc from a US-based manufacturer of sanitary products. Unless the school photographer had begun an unlikely side hustle in hygiene wares, it was apparent they had simply cut a corner when the GDPR came in!

It's a safe bet that students' use of ChatGPT will extend far beyond such limitations, and they will need to be educated in how best to harness the power that tools like ChatGPT offer - as well as how to navigate the tricky moral and ethical maze that now looms. Our impressions about what constitutes fair use are bound to be challenged and may well change in the years ahead. But the data privacy questions will also merit serious consideration.

Presently, we lack the information about OpenAI's own intentions regarding data privacy to draw many conclusions about how well protected data subjects' personal data will be. But this will change as more GDPR territories give more serious thought to how to use its products. There is an adage in data protection that applies well here: if a product appears to be free, then you ARE the product. ChatGPT 3 is going to attract huge numbers of users who need to understand that little in the tech world is ever truly free.
