The EU AI Act: A Step Forward or a Stumbling Block for Startups?
Edgaras Margevicius
Founder & Managing partner @ Prevence | M&A, Venture Capital and Private Equity | Competition law | Corporate | Business & Startups advisor
With all eyes on the EU Artificial Intelligence Act and its impact on the tech and startup industry in Europe, this month’s article naturally centers around this pressing topic.
The EU Artificial Intelligence Act officially entered into force on August 1, 2024, starting the timeline for various prohibitions and obligations outlined in the law. Although it will take approximately two years until the legislation is fully effective, there is already a great deal to discuss. Many have raised concerns that the new AI Act is not yet fully developed for implementation and could impose a significant burden on startups and scale-ups due to the red tape and compliance costs it will create.
Considering the existing legal frameworks that already regulate tech and AI, and recognizing how many companies currently struggle to comply with those regulations, the question arises: is more regulation necessary, or are measures that foster better connections between technology and the law needed to enhance sustainable compliance?
“Undercooked” AI Law Criticised by EU Tech Experts
Critics have voiced concerns that the EU Artificial Intelligence Act – the pioneering legislation aimed at ensuring ethical AI use – will make it even more difficult for startups to thrive in Europe. Beyond the regular tasks and duties of emerging tech companies, such as finding investors, establishing market presence, and keeping up with the latest technological breakthroughs, startups will now also need to navigate red tape, compliance costs, and regulations that many experts argue are insufficiently developed for implementation.
Cecilia Bonefeld-Dahl, Director-General of DigitalEurope, has highlighted the obstacles the law may pose for startups, stating: “While others will be hiring coders, we (in Europe) will be hiring lawyers.”
This regulation, which entered into force in August 2024 and will be implemented gradually over the following two years, is seen as a worldwide precedent, stemming from the EU's ambition to position itself as the "global center for reliable AI."
Where the EU Artificial Intelligence Act Is Still Lacking
While the goal of promoting ethical AI and establishing the EU as a leading destination for reliable AI is admirable, many critics feel the legislation may have been introduced too hastily and could dampen the growth of Europe's emerging AI industry.
The law was drafted quickly, lacks clear detail on critical issues such as intellectual property rights, and is not yet fully aligned with adjacent legislation. Critics worry that these rushed rules may end up restricting the use of technology rather than merely addressing broader AI risks. Officials are now working to close regulatory gaps before the provisions take effect, particularly around copyright and AI-generated content. There is also little clarity on how AI systems should be tested and which government bodies will oversee compliance.
This lack of preparedness, coupled with potential lobbying pressure, underscores the need for precise rules, broader stakeholder involvement, and effective enforcement. The Act also appears to sit uneasily alongside other international AI instruments, including the Council of Europe's AI treaty, and this fragmented legal environment could hold back startups and slow tech innovation. Moreover, the Act's expansion beyond a narrow focus on high-risk AI suggests that the law is oriented more toward managing risk than promoting innovation.
In summary, while the Act represents a significant step toward regulating AI, it requires further refinement to address concerns about clarity, implementation, its interaction with other regulations, and its potential impact on innovation.
What Areas of Law are Already Regulating AI (and Why)
In Europe, several legal frameworks already govern the deployment of AI. The General Data Protection Regulation (GDPR) is the central framework governing AI's interaction with personal data. It aims to prevent unchecked automated decision-making about individuals, a practice common in social media and digital advertising. The GDPR mandates data privacy, holds tech companies accountable for data misuse, and curbs abuses of market power by requiring consent before personal data is used. Compliance, however, remains patchy: studies suggest that around 70% of apps send data to third parties without obtaining the required consent, and only 3% fully adhere to GDPR norms. These violations, though not always legally challenged, point to a broader problem with implementation.
Additionally, national competition laws are vital in policing AI's legal boundaries. They prevent monopolies and abuses of market dominance that extensive AI use by large corporations could intensify, acting swiftly against monopolistic tendencies to preserve a competitive market that fosters growth and innovation while protecting consumers.
Despite these existing regulations, there remains a pressing need for more specific, AI-focused laws to address the rapidly evolving AI landscape.
Final Thoughts on AI Regulation in Europe
The AI landscape in Europe is poised for significant transformation with the introduction of the EU AI Act. This ambitious regulatory framework seeks to set global standards for the ethical and secure deployment of AI technologies, underscoring the European Union's commitment to fostering innovation while safeguarding fundamental rights. The legislation has nonetheless sparked controversy and debate: critics argue that, in its current form, it is not ready for implementation because it remains ambiguous on several key points. These uncertainties pose significant challenges for businesses, especially startups already navigating a complex ecosystem of funding, market entry, and growth.
The necessity for such a law is clear. As AI technologies become more prevalent, the potential for misuse and ethical violations grows, and a robust framework is needed to mitigate these risks. The European legislator's intent to create a secure and ethical AI environment is commendable, but the approach must be balanced so that it neither stifles innovation nor imposes undue burdens on startups.
Emerging companies often lead technological advancement but are also the most vulnerable to regulatory challenges. Excessive compliance requirements and bureaucratic obstacles could hinder their progress, diverting valuable resources from innovation to regulatory compliance.
It is essential that the EU AI Act evolves to provide clear, precise guidelines that protect both societal interests and the dynamic nature of the tech industry. Only then can it sustain a thriving, innovative AI landscape in Europe that benefits all stakeholders without sacrificing the agility and potential of its burgeoning startup ecosystem.