Regulating Artificial Intelligence
Photo by Andrea De Santis on Unsplash

If you have been online in the last few days, you have certainly heard of ChatGPT, the artificial intelligence (AI) tool developed by OpenAI. As the company's founder said, they went from zero to one million users in less than a week. AI is fascinating, but it does not come without risks. As such, it has been on the radar of lawmakers and regulators, especially in the European Union (EU). In today's newsletter, I will discuss ChatGPT, AI, and the latest developments in the EU Artificial Intelligence Act.

According to OpenAI, the creators of ChatGPT: "we’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests."

As this technology seems to be all the internet has talked about in the last few days, I decided to try it myself. I asked it to "write a rap battle between privacy & security." See the result below and judge for yourself whether the tool deserves its current hype:

[Screenshots: ChatGPT's rap battle between privacy & security]

You can visit ChatGPT's website and learn more about the research behind it, the methods, and the declared limitations.
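
For readers who want to experiment beyond the chat interface, the same kind of exchange can be scripted against OpenAI's API. The snippet below is a minimal sketch, assuming the openai Python package and an OPENAI_API_KEY environment variable; the model name is my choice for illustration, not something specified in this article.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice of chat-capable model
    messages=[
        {"role": "user",
         "content": "Write a rap battle between privacy & security."},
    ],
)
print(response.choices[0].message.content)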

ChatGPT was trained using Reinforcement Learning from Human Feedback (RLHF). As such, among its various problematic issues is bias in the training data, which might lead to incorrect, inappropriate, prejudicial, unethical, immoral, or unlawful answers. I will write a newsletter about AI bias in a few weeks, so I hope to explore the topic more soon.
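
For the technically curious: the reward-modelling step at the heart of RLHF trains a model to score the answer human labelers preferred above the one they rejected. The toy calculation below is a simplified sketch of the standard pairwise preference loss, with invented reward values; it is not OpenAI's actual training code.

import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Pairwise (Bradley-Terry style) loss: -log(sigmoid(r_chosen - r_rejected)).
    # Small when the reward model already ranks the preferred answer higher.
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(preference_loss(2.0, 0.5))  # ~0.20: model agrees with the human ranking
print(preference_loss(0.5, 2.0))  # ~1.70: model disagrees and is penalized

Because the training signal comes entirely from human judgments, any bias in those judgments - or in the data the base model was trained on - propagates into the final model.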

I brought up ChatGPT as an example due to the fascination it has generated worldwide and to illustrate the current capabilities of AI-based tools. When talking about AI deployment, especially in a privacy & data protection discussion, it is central to understand how the technology will be used in the real world, how it will affect real people, and what the risks and consequences are - to individuals, groups, communities, and societies.

And the risk element brings us to the EU Artificial Intelligence Act. According to yesterday's press release from the Council of the European Union: "the proposal follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty. It promotes investment and innovation in AI, enhances governance and effective enforcement of existing law on fundamental rights and safety, and facilitates the development of a single market for AI applications. It goes hand in hand with other initiatives, including the Coordinated Plan on Artificial Intelligence which aims to accelerate investment in AI in Europe."

This proposal matters not only to the EU but to the whole world, as it will likely generate waves of regulatory changes affecting all continents. You can check the full proposal here.

The aspects of the proposal I would like to comment on here are the two central categories of "prohibited artificial intelligence practices" and "classification of AI systems as high-risk."

Regarding prohibited practices, according to Article 5, these will be the use of AI technologies that deploy subliminal techniques, exploit the vulnerabilities of a specific group of persons, or establish a social scoring system. There is also a strict regime for the use of "'real-time' remote biometric identification systems."

I am curious to know how these subliminal techniques will be properly identified and banned, as the exploitation of cognitive biases through diverse methods is common practice in various fields, and such techniques can be subtle and nevertheless cause psychological harm. The same comment applies to identifying technologies that exploit vulnerabilities and materially distort behavior, as this can be done in disguised or contextual ways that will be tricky to detect and ban.

Regarding the classification of AI systems as high-risk, a system is caught either by cumulative conditions or, alternatively, by appearing on a list of high-risk systems. It looks like an effective approach, as these high-risk systems will have to comply with special requirements, including a risk management system. My concern here is that amending the list should be easy and straightforward, as new high-risk AI can emerge at any moment (and there are probably hundreds of high-risk AI systems being developed right now).
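
To make the two-track logic concrete, here is a hypothetical sketch of how the high-risk test could be expressed in code. The condition names and the list of areas are simplified assumptions loosely based on Article 6 and Annex III of the proposal, not the Act's actual wording.

ANNEX_III_AREAS = {  # simplified, illustrative subset of the amendable list
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def is_high_risk(is_safety_component: bool,
                 third_party_conformity_required: bool,
                 intended_area: str) -> bool:
    # Track 1: cumulative conditions - both must hold.
    if is_safety_component and third_party_conformity_required:
        return True
    # Track 2: the system's intended area appears on the high-risk list.
    return intended_area in ANNEX_III_AREAS

print(is_high_risk(False, False, "employment"))  # True: listed area

The amendability concern above translates directly: every new high-risk use case means another entry on the list, so the update mechanism needs to be fast.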

I look forward to seeing the next developments of the AI Act, how it will be applied in practice, and whether it will affect technologies similar to ChatGPT.

-


See you next week. All the best, Luiza Jarovsky

Frank Gilbert

After 25 years in digital asset creation and team development, I now explore the dynamic between humans and technology. MSc Cyberpsychology, Ethics, Privacy, Security, and AI.

1 year ago

Luiza Jarovsky I am enjoying The Privacy Whisperer and the subjects being presented in it. I always remind people of a couple of things I believe to be true: 1. Privacy is first and foremost a personal responsibility. 2. No tool is ever responsible for its actions or the consequences of its actions. People are. So when we talk about artificial intelligence (which does not exist; at best we have automated information or augmented intelligence), we must always start with the fact that our tools are not excuses or blame for our choices. Tools don't care. They cannot be ethical or moral. That's a human's responsibility. Anyone who builds or uses a tool while actually uncertain of exactly what it will do, why it does what it does, and what the outcome and impact of its use will be... is being criminally, ethically, professionally, or otherwise irresponsible.
