Vol. 5 The EU AI Act, "AI Systems" and ChatGPT's Customised Conversations

As 2024 marches on, we find ourselves in a world that is moving at speed. I’m here for it, while also slightly intimidated. The latter is probably healthy. In the spirit of continuous learning and sharing, this week's newsletter dives into:

The definition of “AI systems” in the European AI Act (“the Act”, “EAIA”). We'll explore the nuances of how AI systems are defined in the EU, the OECD and the US, and what those differences imply.

Additionally, I'll share my recent foray into a ChatGPT feature – a discovery that simply makes life a bit more efficient. I’ll admit I am a bit late to the party with this one, but I expect I may not be alone, so it is worth sharing.

I hope you join me as we unpack these topics, aiming not just for understanding but for practical insights that resonate with our daily tech experiences. Here's to another week of exploration and discovery in the digital world.

As always, this is not legal or professional advice, nor is it the opinion of my employer or any affiliated organisations. I am simply sharing my journey and perspectives with you. Please read the full disclaimer at the bottom of this newsletter.

“AI Systems”: a comparative analysis

Having listened to some leading MEPs speak last week, it sounds like we might not be waiting on the final text of the EAIA much longer. In the meantime, there are many reasons why you might want to start the groundwork for compliance with AI-related laws now. In this part of the newsletter we will look at the definition of "AI systems", with the caveat that this is all subject to change: we do not have the final text, and we are missing important context around changes made to the definition since the release of the text currently available on EUR-Lex. For professionals in this field, it can be helpful to plan and prepare for what's coming. While we may not have all the details of the provisions just yet, examining what we know today is still valuable for building understanding and preparing for future requirements.

Comparing the OECD and "old" EAIA Definition

If you’re anything like me, you’re keen to get your hands on the final text of the AI Act. We love a legal nerd moment. I enjoy the challenge of deciphering how it will translate into practical application. Today, we will only look at the definition of "AI System" as it will be written in the articles, not the recitals. While they don't have the same legal authority as the articles, recitals are key for the proper understanding of the articles, particularly in areas where there's ambiguity and nuance, which is common in complex fields like technology and data laws. The recitals help to illuminate the intent and goals of the legislation. This is yet another caveat. Now, let's get into it.

The OECD definition and the forthcoming text of the EU AI Act have now been aligned. According to Euractiv: ‘This definition was discussed in mid-October in the OECD’s Committee on Digital Economy Policy and Working Party on Artificial Intelligence Governance. According to a presentation given in this joint session, the timeline had been adapted “to inform the EU AI Act”’.

It would be helpful if all countries adopted this definition, as it would contribute significantly to harmonising the scope of applicability of AI laws. In any case, examining the scope of a new law's applicability is a sensible starting point. Similar to how GDPR programmes often began with cataloguing personal data, compliance with the EU AI Act will likely see many organisations start with a list of "AI systems", assuming they fall within its territorial scope.

The Act's territorial reach is extensive, much like the GDPR's. Based on the currently available text (referred to as the "old text" for simplicity), the Act applies to:

  1. Providers that are putting AI systems on the market or putting them into service in the EU, regardless of whether the provider is based in or outside of the EU.
  2. Users of AI systems in the EU.
  3. Providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.

The above is edited to make it easier to read. There are some exceptions to the territorial scope, which I am not going into today. In short, the scope is broad and extraterritorial.
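To make that concrete, here is a minimal sketch of how those three limbs might be expressed as a first-pass screening check. It is purely illustrative: the field names and logic are my own shorthand rather than terms from the Act, and any real scoping decision needs the final text, its exceptions and proper legal review.

from dataclasses import dataclass

@dataclass
class ScopingFacts:
    is_provider: bool                 # places AI systems on the market or puts them into service
    is_user: bool                     # uses AI systems
    placed_or_in_service_in_eu: bool  # the system is placed on the market or put into service in the EU
    used_in_eu: bool                  # the system is used in the EU
    output_used_in_eu: bool           # output produced by the system is used in the EU

def likely_in_territorial_scope(facts: ScopingFacts) -> bool:
    """Rough first-pass check against the three limbs summarised above."""
    limb_one = facts.is_provider and facts.placed_or_in_service_in_eu               # 1. providers, wherever based
    limb_two = facts.is_user and facts.used_in_eu                                   # 2. users in the EU
    limb_three = (facts.is_provider or facts.is_user) and facts.output_used_in_eu   # 3. output used in the EU
    return limb_one or limb_two or limb_three

Even a crude check like this forces the right questions early: who is the provider, where is the system placed on the market or used, and where are its outputs actually used.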

Material scope

When we look at the material scope and compare the old definition with the OECD definition of AI systems, we appear to see a broadening of the scope. Let’s take a look:

Old EAIA Definition:

"'artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

OECD definition:

“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

Firstly, it's a relief not to have to consult an annex while reading a definition. Legal documents are often so interlinked that understanding one provision requires juggling multiple tabs and texts. Annex I of the old text lists techniques such as machine learning and expert systems, essentially the approaches that could fall under "AI systems."

When contrasting the two versions, a notable change is the replacement of "software" with "machine-based system." This shift steers the definition towards characterising the nature of the system rather than the specific techniques and approaches employed. There is also a move from "human-defined objectives" to "explicit or implicit objectives," broader terminology that covers objectives an AI system can infer. Finally, the OECD definition emphasises that AI systems can differ in their "levels of autonomy and adaptiveness after deployment." Recognising this feels logical: higher levels of autonomy and adaptiveness may call for stricter controls, and this understanding should guide the AI governance measures needed for the responsible use and deployment of such systems.

Could this mean that all OECD countries would use the same definition?

From the OECD About Page:

President Biden's Executive Order on AI

For good measure, I revisited the definition in the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”). At a very high level, I found it similar, in parts, to the OECD's. Like the OECD, the EO uses the term “machine-based system.” However, it also shares a similarity with the old EAIA definition, as it refers to “human-defined objectives,” rather than objectives an AI system might infer on its own from the inputs. Here is the EO’s definition:

“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”

The EO definition, while incorporating machine- and human-based inputs, doesn't delve into how AI systems interpret these inputs to produce outputs. Nor does it touch upon the concepts of autonomy and adaptiveness. There appears to be a stronger focus on the processing and output generation.

Conclusion

The OECD's definition reads as the most expansive, but the three definitions are similar enough that you can start an AI systems catalogue no matter which one you apply. Its recognition that an AI system can infer how to generate outputs from the inputs it receives, and that systems vary in their levels of autonomy and adaptiveness after deployment, is really important. As highlighted above, this helps ensure that the controls implemented are proportionate to the risks posed by a given system.
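If you want to make that catalogue tangible, here is a minimal sketch of what a single inventory entry might capture, loosely mirroring the elements of the OECD-style definition discussed above. The field names and the example record are my own illustration, not terms taken from any of the three texts.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                                    # internal name of the system
    role: str                                    # whether you act as "provider" or "user" of the system
    objectives: str                              # explicit or implicit objectives the system pursues
    inputs: list = field(default_factory=list)   # the inputs it infers from
    outputs: list = field(default_factory=list)  # predictions, content, recommendations, decisions
    environments: str = "virtual"                # physical or virtual environments it can influence
    autonomy: str = "unknown"                    # level of autonomy after deployment
    adaptiveness: str = "unknown"                # whether and how it adapts after deployment

catalogue = [
    AISystemRecord(
        name="CV screening assistant",
        role="user",
        objectives="shortlist candidates (explicit)",
        inputs=["CVs", "job descriptions"],
        outputs=["recommendations"],
        environments="virtual",
        autonomy="low",
        adaptiveness="static after deployment",
    ),
]

Recording autonomy and adaptiveness per system is what later lets you argue that the controls you apply are proportionate to the risk a given system poses, which is exactly the point made above.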

ChatGPT and what helped me this week

When I prompt ChatGPT, I want it to already know things about me and have that context upfront. I also want it to respond in a certain way. It’s very helpful not to have to repeatedly provide this context. There is a feature for this, which you can tailor to your exact needs. I’ve used it to create a learning plan and even a fitness plan that I can actually stick to given my current schedule and preferences.

[Screenshot: ChatGPT custom instructions set up for someone who wants to learn to cook like a Michelin-star chef]

It’s called 'Custom instructions'. It has been available for a while, but I had not tried it out until now. OpenAI provided the following examples: “a teacher crafting a lesson plan no longer has to repeat that they're teaching 3rd grade science. A developer preferring efficient code in a language that’s not Python – they can say it once, and it's understood. Grocery shopping for a big family becomes easier, with the model accounting for 6 servings in the grocery list.”

Basically, you don't have to keep repeating yourself, which makes life a bit more efficient. You can also specify things like your goals, your level of knowledge, the tone you'd like ChatGPT to use in its responses, and even whether you want it to offer opinions or stay neutral.
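As a concrete illustration, and sticking with the cooking example from the screenshot above, here is the sort of thing you might put into the feature, roughly split the way ChatGPT asks for it: some context about you, and how you want it to respond. The wording is entirely my own and just a starting point.

What you'd like ChatGPT to know about you: "I'm a home cook with basic knife skills and a small kitchen. Over the next year I want to work towards cooking at a Michelin-star level, practising three evenings a week, mostly European cuisine."

How you'd like ChatGPT to respond: "Act as a patient culinary instructor. Give step-by-step techniques, explain why each step matters, suggest short practice drills, keep recipes to two servings, and be direct when my plan has a flaw."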

That's it for this Sunday. Thanks for reading!

Emerald

If you enjoyed this newsletter, please consider liking, sharing and following me for more. I am just getting started, and ideally we get to a point where people read what I produce on my Sunday mornings. You can subscribe by clicking this link.

Also, consider leaving a comment below. I'd love to hear your thoughts.

Disclaimer: Everything here consists of my personal views at the point in time the post is published. It's here to inform and inspire, but it's not legal or professional advice. Any specific legal questions should be directed to a qualified attorney who can provide advice tailored to your individual circumstances.

The views I express here are solely mine. They don't reflect the opinions of my employer or any affiliated organisations. It's me, sharing my journey and thoughts with you.

I strive to keep the information in this newsletter accurate and current. However, the fields I discuss here are always evolving, so please keep that in mind as you read this and consider checking other sources as well. I might update the content occasionally to stay on top of the latest trends and developments.
