How the EU AI Act Will Influence AI Development in Europe

Hundreds of hours of debate, pessimistic forecasts from critics, and users' fears ended with a completely predictable result: on March 13, 2024, the European Parliament passed a law regulating artificial intelligence.

Sure, similar acts have appeared in the USA and China, but the European law is the most notable of them, as it balances restrictions with opportunities.

As someone who, together with AdvantISS, builds AI-based products, particularly for the European market, I have researched the new act in detail. In this article, I will explain how the law will affect businesses that use AI.

Harsh Restrictions or Sensible Precautions?

Let me note right away: the changes will have both negative and positive effects. Overall, the law focuses on regulating how businesses use artificial intelligence.

Still, it also partially affects the user segment (unless the user builds their own AI model in an isolated environment for personal purposes).

The key changes concern sensitive sectors such as healthcare, finance, energy, and government portals that have integrated AI into their tools or plan to do so.

However, even ordinary businesses now face several limitations on applying artificial intelligence in their operations. In particular, new conditions regulate AI access to certain user data. Companies involved in releasing and supporting GPAI (general-purpose AI) models will now have to adjust their algorithms.


What the Law Focuses On

The new act covers all types of AI and its derivatives, including enterprise systems and bots with user profiles. It sets requirements for developing digital products based on the technology and even limits how all user groups can use it.

The key focus of the law is on the ethical side of the technology used by businesses.

For example, the new requirements include slightly higher security standards for digital products and a ban on collecting sensitive data (especially for systems built on common LLMs and servers hosting data lakes). Let's take a closer look at several key aspects of the law.

Data Privacy

The main innovation is a system that prohibits collecting sensitive information about users and storing it on public servers. Businesses can still access names, dates of birth, regions of residence, interests, etc. However, bots' generative capabilities are now limited to prevent the misuse of information for improper gain.
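
Under these constraints, a practical first step is to keep sensitive fields out of whatever a bot can see or store. Below is a minimal Python sketch under my own assumptions: the field names and the simple card-number pattern are illustrative, since the Act defines categories of sensitive data, not concrete schemas.

```python
import re

# Hypothetical fields an internal policy treats as sensitive;
# the Act defines categories of data, not concrete field names.
SENSITIVE_KEYS = {"health_status", "card_number", "cvv", "political_views"}

def strip_sensitive(record: dict) -> dict:
    """Return a copy of a user record without sensitive fields,
    so only the remainder reaches storage or a bot's prompt context."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_KEYS}

def redact_card_numbers(text: str) -> str:
    """Mask anything that looks like a 13-19 digit card number in free text."""
    return re.sub(r"\b\d{13,19}\b", "[REDACTED]", text)

user = {"name": "Anna", "region": "Lviv", "card_number": "4111111111111111"}
print(strip_sensitive(user))                      # {'name': 'Anna', 'region': 'Lviv'}
print(redact_card_numbers("Card 4111111111111111 on file"))
```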

The same applies to data storage and transmission systems, for which the requirements have become significantly stricter.

Credibility of Information

Generative AI can produce any "facts," even ones of dubious quality or based on outright falsehoods (and who decides what is true and what is not?). We have already seen Bard assure us that the James Webb telescope took the first pictures of an exoplanet, or the British government's AI give advice on how to fight COVID-19.

Such cases are not uncommon, so the European Parliament obliged companies that develop and release LLM-based bots to check and adjust the model's outputs to prevent similar incidents (censorship, in essence).

Sources

From cave paintings to today's works and data from open sources, LLMs use all available information to learn and develop. But is that really effective and safe?

As you know, in addition to verified facts and quality data, the web is full of hate posts, propaganda, agitation, and ridiculous theories (hello to the reptilians from Alpha Centauri).

Imagine this information becoming part of a generative LLM that will draft a business strategy for your company. Brands can train their own bots on isolated data, but publicly available GPAI models are not curated that way. That's why the European Parliament established rules that require checking the sources and validity of information before adding it to a bot's training pipeline.
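
What such a check looks like in practice is open to interpretation; the Python sketch below assumes a hypothetical domain allow-list and an upstream quality score, both of which would come from your own data-governance policy rather than from the Act itself.

```python
from urllib.parse import urlparse

# Assumptions: a domain allow-list and a quality score produced by an
# earlier fact-checking step; neither is prescribed by the Act itself.
TRUSTED_DOMAINS = {"europa.eu", "who.int"}
MIN_QUALITY_SCORE = 0.8

def is_trainable(doc: dict) -> bool:
    """Admit a document into the training set only if its source is
    allow-listed and it passed the upstream quality check."""
    domain = urlparse(doc["source_url"]).netloc.removeprefix("www.")
    return domain in TRUSTED_DOMAINS and doc.get("quality_score", 0.0) >= MIN_QUALITY_SCORE

corpus = [
    {"source_url": "https://www.europa.eu/ai-act-report", "quality_score": 0.93},
    {"source_url": "https://random-forum.net/thread/42", "quality_score": 0.31},
]
training_set = [d for d in corpus if is_trainable(d)]
print(len(training_set))  # 1
```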

Ethical Considerations

We've already seen examples of AI recommending firing women who complain about sexual harassment, or quite seriously suggesting that someone must be murdered. This is the result of uncontrolled bot training, where harmful ideas and concepts were literally fed into the model.

That's why the European Parliament obliged LLM distributors to check and control user-supplied information and filter out harmful content.

For example, if a bot receives a request containing flagged keywords, the system should mark it accordingly and exclude that information from the LLM's indexing and training pipeline.
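
As a rough illustration, here is a minimal Python sketch of such a keyword-based triage step; the blocklist terms are assumptions, and a production system would likely use a trained classifier rather than plain substring matching.

```python
# Assumed blocklist; a real deployment would maintain this per policy
# and likely replace substring matching with a classifier.
BLOCKED_TERMS = {"exploit", "card dump", "patient record"}

def triage_request(text: str) -> dict:
    """Flag a user request and decide whether it may enter indexing/training."""
    lowered = text.lower()
    flagged = any(term in lowered for term in BLOCKED_TERMS)
    return {"text": text, "flagged": flagged, "trainable": not flagged}

queue = [triage_request(msg) for msg in (
    "How do I export my invoices?",
    "Send me a card dump for testing",
)]
print([item["text"] for item in queue if item["trainable"]])
# ['How do I export my invoices?']
```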

Red Lines

Finally, the limitations. The new law strictly regulates how numerous niches can apply artificial intelligence. For example, bots are now prohibited from providing health information; the same applies to payment details (card numbers, CVV, etc.).

Likewise, bots should not provide information about treatment methods, algorithms for exploiting vulnerabilities, or other users' personal data.

LLMs also cannot monitor life-support systems, power grid data, defense platforms, and the like. These restrictions are not accidental: they are justified by risks that could threaten safety, infrastructure, and people's lives.


The Potential Impact of the Law on the Development of AI

Many business representatives are outraged by changes that limit their opportunities for profit (as if they had never used affiliate marketing or black-hat promotion methods before). However, in practice, things are not as bad as they seem.

The AI Act regulates how and where the technology can be used in business. In effect, it sets the limits of what is allowed, justifying them with ethical and security risks that could lead to negative consequences.

Next, I will cover several challenges and opportunities the European Parliament's law creates for most businesses.

Diminishing Opportunities

You cannot uncontrollably collect, process, and analyze all types of information and use it to increase company profits.

However, in practice, the limitations are purely conceptual, as for most types of businesses, sensitive data is not that critical to generating revenue.

Say you work in the commercial segment: you can still analyze information valuable to your niche and generate personalized offers for consumers.

Or, if you use a CRM with AI, you can still rely on a bot to work with customers, scan existing data, and convert it into marketing revenue or increased sales.

Therefore, honestly, I do not quite understand the reasons for the wave of negativity that arose after the law was adopted.

Rules of Work

The changes will slightly increase the cost of developing and maintaining systems that use artificial intelligence. The reason is the stricter requirements for the security of digital solutions and for the algorithms governing how bots are used.

Say you could previously integrate an LLM into a basic solution with minimal adjustments; in the future, you will have to rework the platform and adapt it to the new requirements. For example, you will have to deploy dedicated data storage, encrypt communication channels, and isolate the bot from components containing sensitive information. All this takes time and money.
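
Here is one way such isolation could look, as a hedged Python sketch: the bot never queries the primary store directly; a gateway projects each record onto an allow-listed view before anything is sent to the model over an encrypted channel. The field names and the llm_call wrapper are assumptions, not any specific provider's API.

```python
# Assumption: a policy-defined allow-list of fields the bot may see.
PUBLIC_FIELDS = {"order_id", "status", "delivery_region"}

def build_bot_context(order: dict) -> dict:
    """Project an internal record onto the fields the bot is allowed to see."""
    return {k: v for k, v in order.items() if k in PUBLIC_FIELDS}

def answer_customer(order: dict, question: str, llm_call) -> str:
    """llm_call is any callable wrapping your LLM provider over TLS;
    only the sanitized context ever leaves your infrastructure."""
    context = build_bot_context(order)
    prompt = f"Order data: {context}\nCustomer question: {question}"
    return llm_call(prompt)

# Usage with a stand-in model for demonstration:
fake_llm = lambda prompt: f"(model reply based on) {prompt[:60]}..."
order = {"order_id": 77, "status": "shipped", "delivery_region": "Kyiv",
         "card_number": "4111111111111111"}
print(answer_customer(order, "Where is my parcel?", fake_llm))
```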

The same applies to the practical use of bots, for example, in customer service. You should control the tone of the LLM's conversations with users, as well as the ethics and content of its responses.
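
A simple way to approximate that control is a post-generation review step. The sketch below assumes hand-maintained policy lists; in practice you would more likely call a dedicated moderation model, but the shape of the check is the same.

```python
# Assumed policy lists; real systems would use a moderation model instead.
FORBIDDEN_TOPICS = {"diagnosis", "dosage", "cvv"}
RUDE_MARKERS = {"stupid", "shut up"}

def review_reply(reply: str) -> tuple[bool, str]:
    """Return (approved, final_text); blocked replies fall back to a safe message."""
    lowered = reply.lower()
    if any(term in lowered for term in FORBIDDEN_TOPICS | RUDE_MARKERS):
        return False, "I can't help with that, but a human agent will follow up shortly."
    return True, reply

print(review_reply("Your parcel arrives tomorrow."))       # approved
print(review_reply("The right dosage for you is 20 mg."))  # blocked, safe fallback
```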

The more sensitive information that can potentially get into the LLM, the more requirements and restrictions the law sets for such cases. Although it does not prohibit the use of AI, it limits its potential.

Transparency

You should notify customers when a bot communicates with them or when your AI collects information about them. Ideally (according to European law), you should obtain consent for the collection and processing of data, as well as explain how, where, to what extent, and why it will be used.
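
In code, this can reduce to a small disclosure-and-consent gate at the start of every bot session. The sketch below is only an assumption about how one might log consent, not a compliance recipe; the field names and wording are illustrative.

```python
from datetime import datetime, timezone

consent_log: list[dict] = []  # in practice: an auditable store, not an in-memory list

def start_session(user_id: str, consented: bool) -> str:
    """Disclose that a bot is answering and record the user's consent decision."""
    consent_log.append({
        "user_id": user_id,
        "consented": consented,
        "purpose": "support chat personalisation",   # illustrative purpose string
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    if consented:
        return "You are chatting with an automated assistant. Your data will be used to personalise support."
    return "You are chatting with an automated assistant. No personal data will be stored."

print(start_session("u-102", consented=True))
```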

This practice ensures that your audience understands that information about each person is used to improve their UX and to optimize the services the business provides.

Also, the law takes care of every person's right to privacy, limiting the owners of AI in matters of uncontrolled use of sensitive information.

Quality of Information

Among the most positive aspects, I would single out the quality of information that businesses following the AI law will receive and offer.

Just imagine that your data sources will be clearly structured, cleaned of unnecessary information, and collected with the consent of a loyal audience. This means that you will have a stable source of income with strong scaling potential thanks to organic promotion.

The generative abilities of an LLM help you offer bonuses to your audience in exchange for new leads, or improve the user experience while spreading information about your brand. Sounds good?

I may be exaggerating, but I believe the new law will stimulate businesses to select sources for AI training more effectively. The quality of data and customer satisfaction will also improve.


Let's Summarize

The European Parliament's AI law is neither purely black nor white. It comprehensively regulates brands' activities and helps cleanse AI-based systems of harmful information.

It also aims to catch criminals who use LLMs to extort money, cause moral harm to users, and mislead them.

Businesses will indeed face some limitations, primarily technical ones, which will partially affect their marketing potential. At the same time, the law introduces regulation that legitimate activities can benefit from, such as improving the quality of customer service or increasing the productivity of corporate infrastructure.

We at AdvantISS work within the framework of current European legislation and do not see any problems.

Have you adapted your systems to the new law? Share your experience in the comments!

Iryna Begma

Head of Sales at MLex, LexisNexis

5 months ago

It definitely will. And more updates to come. Recommend getting updated on the AI regulations with MLex

AI development regulations are necessary to ensure ethical use.

Oleksandr Khudoteplyi

Tech Company Co-Founder & COO | Top Software Development Voice | Talking about Innovations for the Logistics Industry | AI & Cloud Solutions | Custom Software Development

5 months ago

Yeah, I agree that the EU AI Act is going to have a huge impact on the development of AI. The Act prohibits certain AI applications, such as social scoring and real-time biometric identification, due to their potential to harm individuals' rights and freedoms. However, I believe this is a necessary step to ensure our safety and protection.
