ChatGPT and the future of AI in Europe: What’s at stake?

Artificial Intelligence is already reshaping our world and exploding into every part of our lives. After two years of discussion, tomorrow’s committee vote in the European Parliament is a key moment for the EU’s AI Act. And it comes at a time when generative tools like ChatGPT have reignited the debate, as AI brings significant changes to our homes and workplaces.

Given the press frenzy of recent months, it’s easy to forget that AI has been around for a long time. And it seems like we’ve been talking about regulating it for just as long. In this respect, the EU is far ahead of the rest.

Generative AI has rocked the debate in Europe, but humans have been adapting to technological change for millennia. Think of the humble hammer. With the same tool you can build a house, but also smash a window.

Around four years ago, the Commission’s High Level Expert Group (of which I was part) helped put in place the building blocks of today’s risk-based approach. The higher the risk, the more you need to do to prove that your AI tool is safe.

The issue with generative AI is that it does not fit neatly into one category or another. The technology behind it is general-purpose AI: software that can be used for many different tasks and therefore carries no particular risk on its own. It is how you train it, on what data and for what use, that really matters.

This has been a key question for lawmakers as we enter the final stages of negotiations, because it has huge implications for where the responsibility for making sure AI is safe, and the regulatory burden that comes with it, should lie.

Tomorrow’s vote will also have huge implications for Europe’s economy.

Today, only 3% of the world’s AI unicorns come from the EU, while private investment in AI is ten times higher in the US and five times higher in China. By 2030 the global AI market is expected to reach $1.5 trillion, and we need to make sure that companies in Europe are tapping into that without getting tangled up in red tape.

In short, we want Europeans to be the creators of AI, not just its users. Overburdening them now will simply mean they give up, move somewhere else, or get outpaced by innovators working in other countries. I am particularly concerned about the growing AI healthcare sector in Europe – companies like OnCompass, our Future Unicorn Award Winner 2021, which uses AI to spot cancer in scans that doctors would otherwise not be able to see. AI use in health will be categorised as high-risk, so how can we make sure we keep these jewels from leaving our economy?

The regulations also need to be easy to implement for companies that do not have huge legal departments. This means that the wording must be precise and offer legal clarity, both in the definition of what AI is and in the list of uses considered high-risk. Vague words today can have huge implications for innovation down the line. To take one example I saw in an earlier text, AI defined as having ‘varying levels of autonomy’ could also mean having no autonomy at all.

This is where harmonised standards also play a key role. These are the rules that translate European legal texts into specific instructions for manufacturing or coding a product: follow them, in a way that can be easily measured, and you can be sure you are in conformity with EU law. The problem is that these standards do not yet exist. We must ensure that they are ready and that they cover all the legal requirements. At least two years of testing and trial are needed to understand how the regulation will play out in a real business environment before it becomes law.

Going back to the hammer analogy above, the maker of a tool cannot and should not be held liable for everything that someone chooses to do with it. With AI it’s more complicated, but it makes sense that most obligations should still fall on the user of the tool. The person supplying the technology (or algorithm) must provide information about how it works, but there should be flexibility in allocating responsibilities based on the actual uses.

Lastly, my liberal mind tells me that while protective regulations may be necessary, the biggest market failure in tech and AI that needs to be addressed by regulation is the educational system. A digitally engaged democracy, with citizens who have the right digital skills and knowledge, is crucial for Europe to remain competitive in the global market.

Hanzo Ng

HRDF Certified Corporate Sales Trainer | B2B & B2C Sales Consultant | Lead Trainer and Founder of Sales Ninja, Hero Training, and ChatCoach.ai | Invented AI-powered Training Solution | Microsoft Certified

1 year ago

A thought-provoking post! The point about harmonized standards resonates with me. Establishing clear guidelines that align with EU law is crucial for companies to navigate compliance effortlessly. However, we must ensure these standards are comprehensive and adaptable to evolving technologies. I wonder how we can strike a balance between legal clarity and fostering innovation. Moreover, addressing the educational system's role in preparing citizens with digital skills is essential. I'm curious, how can we ensure that the regulations are precise yet flexible enough to accommodate the rapidly evolving AI landscape? Your thoughts?

Anders Juncker

AI Conversation Specialist at PensionDanmark

1 year ago

I don’t get it: humans are in a frenzy over technology (AI) when they ought to be in a frenzy over humans (ethics).

Aitor González

Thinks to reflect (Please, please don’t follow me)

1 year ago

Well-chosen analogy, but unfair. Tools are made with an intention, be it a hammer, a gun or an AI model. You make a gun to fire it at some point; that has been your choice. An AI model that, in the words of its own creators (Altman, Sutskever, Brockman, Suleyman and so many others), has the potential of backfiring (putting at risk an astonishing 300M jobs) is not a hammer. Comparing the European spirit with autocratic economic models, be they extreme capitalism or Chinese control, as the safe haven for the unicorns is a clear example of forgetting history. What about a safe haven for the people... Plain techno-optimists have biased the conversation to such a level that AI is considered unstoppable, undeniable, inevitable. With those cards on the table, all bets are off.

Ignacio Manrique de Lara

Digital Strategy Consultant for Digital Transformation and Growth. Extensive international experience in SaaS and digital companies in B2B and B2C markets.

1 year ago

I completely agree with your views. Europe needs to be a key player in #generativeai, since this is going to change the way we work and interact in the future.
