Regulating Artificial Intelligence
Luiza Jarovsky
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,000+ participants) & my weekly newsletter (37,000+ subscribers)
If you have been online in the last few days, you have certainly heard of ChatGPT, the artificial intelligence (AI) tool developed by OpenAI. As the company's founder said, they went from zero to one million users in less than one week. AI is fascinating, but it does not come without risks. As such, it has been on the radar of lawmakers and regulators, especially in the European Union (EU). In today's newsletter, I will discuss ChatGPT, AI, and the latest developments of the EU Artificial Intelligence Act.
According to OpenAI, the creators of ChatGPT: "we’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."
Since this technology seemed to be all the internet could talk about over the last few days, I decided to try it myself. I asked it to "write a rap battle between privacy & security." See the result below and judge for yourself whether the tool deserves its current hype:
You can visit ChatGPT's website and learn more about the research behind it, the methods, and the declared limitations.
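ChatGPT itself is used through a web interface, but for readers who want to repeat this kind of prompt experiment programmatically, here is a minimal sketch using OpenAI's Python client. The package, client calls, and model name are assumptions on my part (they reflect the openai Python library, v1+), not part of the original experiment, and an API key must be available in the OPENAI_API_KEY environment variable:

```python
# Minimal sketch: sending the newsletter's prompt to a conversational model
# through OpenAI's Python client. Assumes the openai package (v1+) is
# installed and OPENAI_API_KEY is set; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a rap battle between privacy & security."},
    ],
)

print(response.choices[0].message.content)
```

The same prompt can simply be typed into the chat interface; the API route is just more convenient if you want to test many prompts or probe the tool's limitations systematically.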
ChatGPT was trained using Reinforcement Learning from Human Feedback (RLHF). Among the various problematic issues is bias in the training data, which can lead to incorrect, inappropriate, prejudicial, unethical, immoral, or unlawful answers. I will write a newsletter about AI bias in a few weeks, so I hope to explore the topic in more depth soon.
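To make the RLHF idea more concrete, here is a toy sketch of its reward-modelling step: human labellers compare pairs of model responses, and a reward function is fitted so that preferred responses score higher. The feature function, comparison data, and training loop below are simplistic stand-ins of my own invention, not OpenAI's pipeline; the point is only to show how human preferences, including their biases, get baked into what the model is rewarded for.

```python
import math

# Toy sketch of the reward-modelling step in RLHF: fit a reward so that
# human-preferred responses score higher than rejected ones.
# Features, data, and model are hypothetical stand-ins, not OpenAI's pipeline.

def feats(resp: str) -> list[float]:
    # Hypothetical hand-crafted features standing in for a learned text encoder.
    return [len(resp) / 50.0, float("step-by-step" in resp.lower())]

def reward(w: list[float], resp: str) -> float:
    return sum(wi * xi for wi, xi in zip(w, feats(resp)))

# (preferred, rejected) response pairs, as a human labeller might rank them
comparisons = [
    ("Here is a step-by-step explanation of the GDPR basics.", "idk"),
    ("I cannot help with that request.", "Sure, here is how to break the law."),
]

w = [0.0, 0.0]
lr = 0.1
for _ in range(200):
    for good, bad in comparisons:
        # Bradley-Terry / logistic loss: push reward(good) above reward(bad)
        p = 1.0 / (1.0 + math.exp(-(reward(w, good) - reward(w, bad))))
        step = 1.0 - p  # gradient of the log-likelihood w.r.t. the margin
        for i, (xg, xb) in enumerate(zip(feats(good), feats(bad))):
            w[i] += lr * step * (xg - xb)

print("learned reward weights:", w)
```

If the labellers' preferences are skewed, the learned reward is skewed with them, which is one of the ways bias enters systems trained this way.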
I used the ChatGPT model as an example because of the fascination it has generated worldwide and to illustrate the current capabilities of AI-based tools. When talking about AI deployment, especially in a privacy & data protection discussion, it is central to understand how the technology will be used in the real world, how it will affect real people, and what the risks and consequences are - for individuals, groups, communities, and societies.
And the risk element brings us to the EU Artificial Intelligence Act. According to yesterday's press release from the Council of the European Union: "the proposal follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty. It promotes investment and innovation in AI, enhances governance and effective enforcement of existing law on fundamental rights and safety, and facilitates the development of a single market for AI applications. It goes hand in hand with other initiatives, including the Coordinated Plan on Artificial Intelligence which aims to accelerate investment in AI in Europe."
This proposal matters not only to the EU but to the whole world, as it will likely generate waves of regulatory changes affecting all continents. You can check the full proposal here.
The aspects of the proposal that I would like to comment on here are the two central categories of "prohibited artificial intelligence practices" and the "classification of AI systems as high-risk."
Regarding prohibited practices, Article 5 bans the use of AI systems that deploy subliminal techniques, exploit the vulnerabilities of a specific group of persons, or establish a social scoring system. There is also a strict regime for the use of "'real-time' remote biometric identification systems."
I am curious to see how these subliminal techniques will be properly identified and banned, as exploiting cognitive biases through diverse methods is already common practice in various fields, and such techniques can be subtle and nevertheless cause psychological harm. The same comment applies to identifying technologies that exploit vulnerabilities and materially distort behavior, as this can be done in a disguised or contextual way that will be tricky to detect and ban.
Regarding the classification of AI systems as high-risk, the proposal sets out cumulative conditions or, alternatively, a list of high-risk systems. It looks like an effective mechanism, as these high-risk systems will have to comply with special requirements, including a risk management system. My concern here is that it should be easy and straightforward to amend the list, as new high-risk AI can emerge at any moment (and there are probably hundreds of high-risk AI systems being developed right now).
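To illustrate how these two tiers fit together, here is a hedged sketch of a triage helper separating Article 5 prohibited practices from an Annex III-style high-risk list. The category names are my own paraphrases of the proposal, the lists are deliberately incomplete, and this is an illustration of the regulatory structure, not legal advice or an official classification tool:

```python
from dataclasses import dataclass, field

# Illustrative triage mirroring the proposal's two central buckets:
# practices prohibited under Article 5, and applications classified as
# high-risk (an Annex III-style list). Category names are paraphrased
# and incomplete; this sketches the structure, it is not legal advice.

PROHIBITED_PRACTICES = {
    "subliminal_techniques",
    "exploits_vulnerable_groups",
    "social_scoring",
}

HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_training",
    "employment_and_worker_management",
    "access_to_essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "administration_of_justice",
}

@dataclass
class AISystem:
    name: str
    practices: set[str] = field(default_factory=set)
    application_areas: set[str] = field(default_factory=set)

def classify(system: AISystem) -> str:
    if system.practices & PROHIBITED_PRACTICES:
        return "prohibited"
    if system.application_areas & HIGH_RISK_AREAS:
        # triggers the special requirements, e.g. a risk management system
        return "high-risk"
    return "limited or minimal risk"

print(classify(AISystem("CV screening tool",
                        application_areas={"employment_and_worker_management"})))
```

Keeping the high-risk categories in a plain, easily amendable list mirrors the concern above: the legal list should be just as straightforward to update as new high-risk systems appear.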
I look forward to seeing the next developments of the AI Act, how it will be applied in practice, and whether it will affect technologies similar to ChatGPT.
-
Before you go:
See you next week. All the best, Luiza Jarovsky