EU's first AI rules boost security?

What's happening and why you should care:

  • The EU's set of rules on AI, the AI Act, took effect on August 1st, 2024, to safeguard against "high-risk" applications and to "build an ecosystem of trust". Full enforcement is coming in 2027
  • AI regulation is fragmented, and global harmonisation is far off. The U.S. and China, the leaders in AI innovation, do not have similar rules on the systemic risks of AI; they follow different frameworks that do not prohibit the development of AI
  • AI and data are crucial geopolitical pillars linked to security and energy independence. Whoever "controls" data and computing power, and benefits from investment and supportive regulation, could take the lead
  • Automotive players, such as Continental and BMW, have already developed ethics codes for AI


The first-ever regulation on AI takes effect in the EU

The European Union's AI Act officially took effect today, marking a significant milestone as the first set of AI rules aimed at mitigating the risks posed by Artificial Intelligence.

Primarily, the framework prescribes mandatory legal requirements for "high-risk" AI applications. These include:

  • risk-mitigation systems,
  • high quality of data sets,
  • logging of activity,
  • detailed documentation,
  • clear user information, human oversight,
  • and a high level of robustness, accuracy, and cybersecurity.

"High-risk" AI systems include, for example, AI systems used for recruitment, to assess whether somebody is entitled to a loan, or to run autonomous robots.
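As an illustration only, the mandatory requirements listed above can be sketched as a simple compliance checklist. The names and structure below are hypothetical, not terms defined by the Act:

```python
# Hypothetical sketch: tracking the AI Act's high-risk requirements
# as a compliance checklist. Labels are illustrative, not legal terms.
from dataclasses import dataclass, field

HIGH_RISK_REQUIREMENTS = [
    "risk-mitigation system",
    "high-quality data sets",
    "activity logging",
    "detailed documentation",
    "clear user information and human oversight",
    "robustness, accuracy, and cybersecurity",
]

@dataclass
class HighRiskSystemAudit:
    """Records which requirements a given AI system has evidence for."""
    system_name: str
    satisfied: set = field(default_factory=set)

    def mark(self, requirement: str) -> None:
        # Only accept requirements from the known list.
        if requirement not in HIGH_RISK_REQUIREMENTS:
            raise ValueError(f"unknown requirement: {requirement}")
        self.satisfied.add(requirement)

    def outstanding(self) -> list:
        # Requirements still lacking evidence, in the Act's listed order.
        return [r for r in HIGH_RISK_REQUIREMENTS if r not in self.satisfied]

audit = HighRiskSystemAudit("loan-scoring model")
audit.mark("activity logging")
audit.mark("detailed documentation")
print(len(audit.outstanding()))  # → 4
```

A real conformity assessment is, of course, far richer than a checklist; the point here is only that each listed obligation is a discrete item a provider must evidence.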

The framework also introduces rules for general-purpose AI models, which can perform a wide range of human-like tasks.


Self-driving cars could constitute a high-risk application of AI; therefore, rules are in place to safeguard development and deployment | Image made with Copilot
Full enforcement of the EU AI Act is coming in 2027

Why regulate AI?

The development of AI without strong governance poses many risks,

  • from the standpoint of technological innovations, such as cyber security and misinformation (GenAI hallucinations),
  • as well as its social and economic impacts, such as ethical development
  • and the impact of robotics on the labour market.

From a regional perspective, the EU also faces the geopolitical risk of falling behind in AI innovation and of dependence on the U.S. and Asia, given the concentration of capital, talent and innovation in the U.S. (e.g. OpenAI and the tech giants) and China (the leader in Generative AI patents).


Artificial Intelligence is evolving fast, raising technological, social and ethical concerns

Artificial Intelligence covers many technologies and applications, such as

  • Computer Vision and Machine Learning for face recognition, environmental perception to enable autonomous driving, etc.
  • Natural Language Processing for Voice Assistants
  • Generative AI, e.g. OpenAI's ChatGPT

The evolution of AI is making huge strides, fueled by breakthroughs in AI models (e.g. GANs), data availability, advancements in computational power, and strong investments, among others.

Innovation is marching strong

  • The analysis of the Patent Landscape Report on GenAI by the WIPO revealed that Tencent, Ping An Insurance Group and Baidu own the most GenAI patents

Investments are on the rise

  • Investments in AI startups surged to $24 billion from April to June, more than doubling from the previous quarter, according to PitchBook.
  • Amazon invested $2.75 billion in AI startup Anthropic, its largest venture investment yet

The volume of data produced in the world is growing rapidly, from 33 zettabytes in 2018 to an expected 175 zettabytes in 2025, according to IDC
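As a quick back-of-the-envelope check of that projection, the implied compound annual growth rate can be computed directly. A small illustrative snippet (the function is mine, not IDC's methodology):

```python
# Implied compound annual growth rate of global data volume,
# from 33 zettabytes (2018) to a projected 175 zettabytes (2025).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

rate = cagr(33, 175, 2025 - 2018)
print(f"{rate:.1%}")  # → 26.9%
```

In other words, IDC's figures imply that global data volume grows by roughly a quarter every year.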

Today, 80% of data processing and analysis takes place in data centres and centralised computing facilities, and 20% in smart connected objects, such as cars, home appliances or manufacturing robots, and in computing facilities close to the user (“edge computing”). By 2025, these proportions are set to change markedly.


How Europe plans to use AI regulation to boost innovation

Europe has developed a strong computing infrastructure essential to the functioning of AI. Additionally, Europe holds large volumes of public and industrial data, the potential of which, according to the EC’s analysis, is currently under-used. Over half of the top European manufacturers implement at least one instance of AI in manufacturing operations.

Between 2015 and 2018, EU funding for research and innovation in AI rose to EUR 1.5 billion. The Coordinated Plan on AI, developed with Member States, is proving to be a good starting point for building closer cooperation on AI in Europe and for creating synergies to maximise investment in the AI value chain.


AI is a crucial revenue pool for the Automotive industry

  1. New sensors, supercomputers & autonomous driving software to support ADAS & autonomy. By 2030, 67% of new vehicles sold globally will have Level 2+ and 3 autonomous driving capability, and 25% of them will have Level 4.
  2. Software & AI represent significant new value pools for automotive players. Software will reach 30% of the overall vehicle content of a D-segment car in 2030. Automotive AI start-ups raised $2.4 billion between Q1 2021 and Q1 2023.
  3. Cloud-based development of new features promises scalability, efficiency, security, and always-up-to-date software.

3 key use cases of Generative AI in Automotive and the challenges they face

Design: Developers can utilize GenAI to simulate scenarios and "edge cases", thus improving efficiency and performance. However, the regulatory landscape is not yet ready to provide clarity and requirements for robustness.

In-vehicle usage: Faraday Future’s FF 91 will feature the brand’s Generative AI Product Stack. Use cases include entertainment and social media. However, data quality standards, ethical guidelines and privacy policies are needed to filter out offensive or intrusive content.

Personalized Mobility: personalizing the passenger's journey or optimising traffic. One of the main concerns for these applications, which relate to Intelligent Transportation Systems, is cyber security.

Automotive players, such as Continental and BMW, have already developed ethics codes for AI

Continental is developing a code of ethics for the use of AI

In June 2020, Continental announced it is working on ethical rules for AI. Continental is using AI in camera-based ADAS features as well as gesture-based HMI for in-vehicle cockpits.

“Artificial intelligence can and must only be programmed and used in accordance with clear ethical principles,” explains Dirk Abendroth, chief technology officer of Continental Automotive.

The code of ethics corresponds with international regulations such as the EU’s ethics guidelines for trustworthy AI. It applies to all Continental locations worldwide and serves as a guide for all collaboration partners of the company.

Equality is at the heart of Continental’s code of ethics for the development and usage of artificial intelligence. “The focus of the new regulatory framework is on the transparency of computer-based decisions as well as on data security”, according to their Press Release.

BMW Group’s 7 principles for AI

In October 2020, the BMW Group announced its 7 principles covering the development and application of AI. The company is already using AI in many functions, from production to logistics and other areas:

  • Customer and vehicle functions: Driver Assistance Systems that help customers drive safely, park and stay connected; and the HMI with the BMW Intelligent Personal Assistant, launched in 2019, which allows vehicle functions to be operated using voice commands (“Hey BMW”);
  • R&D in the following areas: AI-based energy management in vehicles; acoustic analytics; sensory enhancement in the sensor model for automated driving functions; AI in requirements management.

