What Companies Should Do Now with the New EU AI Act

The introduction of the EU AI Act is a significant regulatory shift for companies using artificial intelligence (AI). This legislation aims to ensure that AI technologies are safe, transparent, and ethical. To comply, organizations must begin building or adapting their AI governance structures now. Here’s a step-by-step guide on what companies should do to prepare.

1. Source the Right People and Build a Multidisciplinary Team

The first step is to appoint the right people to oversee AI compliance. This responsibility doesn’t belong to one department—it requires collaboration across multiple functions. AI governance must be supervised by a multidisciplinary team, including:

  • Security experts to handle data protection and cybersecurity risks.
  • Product development teams to ensure AI solutions are built with compliance and ethical standards from the ground up.
  • Legal and compliance teams to interpret the EU AI Act and integrate it with GDPR and other regulations.
  • IT professionals to monitor and manage the technical aspects of AI.

It’s also essential to secure C-suite buy-in. Leadership needs to recognize the strategic importance of AI governance, ensuring enough resources and collaboration between departments to align on compliance.

2. Assess the Risk Associated with Your AI Systems

Companies must assess the risks associated with their AI systems. The EU AI Act categorizes AI technologies based on risk levels:

  • Prohibited AI: Systems that are banned (e.g., social scoring by governments).
  • High-Risk AI: AI used in critical sectors like employment, credit scoring, healthcare, and law enforcement.
  • General-Purpose AI (GPAI): Broadly applicable AI systems, such as large language models.
  • Limited-Risk AI: Systems with minimal risk but still requiring transparency, such as chatbots.
  • Unregulated AI: Low-risk systems with no significant compliance requirements.

The first question to ask during this assessment is: What type of data is involved? If personal data is involved, both GDPR and the AI Act will apply.
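To make the tiering above concrete, here is a minimal triage sketch in Python. The category names follow the list above, but the classification rules (the prohibited uses, the high-risk domains, and the human-interaction test) are simplified illustrations of the Act's logic, not legal advice or an exhaustive mapping.

```python
# Illustrative triage of an AI use case into the EU AI Act risk tiers
# described above. The sets below are hypothetical examples, not a
# complete reading of the regulation.

PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"employment", "credit_scoring", "healthcare", "law_enforcement"}

def triage(use_case: str, domain: str, interacts_with_humans: bool) -> str:
    """Return an indicative risk tier for an AI system."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if interacts_with_humans:
        # e.g. chatbots: transparency obligations still apply
        return "limited-risk"
    return "unregulated"

print(triage("recruiting_filter", "employment", True))  # high-risk
```

In a real assessment, this kind of checklist would be only a first screen; borderline systems need legal review, and the data question (personal data triggering GDPR) is asked alongside it.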

3. Create Policies and Procedures Around Ethics and Transparency

Once you have assessed the risk, the next step is to create policies that align with ethical standards and transparency. These policies should reflect principles like fairness, safety, and privacy protection. Ensure that the company has clear procedures for ethical AI usage.

Training is crucial—train your entire organization on responsible AI usage. Every team member, from engineers to executives, should understand the ethical implications and compliance obligations around AI.

4. Determine Your Role in the AI Life Cycle

To comply with the EU AI Act, companies must understand their role in the AI life cycle. Are you an:

  • AI provider (developing and supplying the AI system)?
  • Deployer (implementing the AI system in your operations)?
  • Importer or distributor (selling AI systems within the EU)?
  • Operator (utilizing AI systems in a specific operational context)?

Determining your role will clarify both your obligations under the AI Act and your responsibilities under GDPR. For instance, an AI provider may have different compliance obligations compared to an operator or importer.
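One practical way to operationalize this step is to maintain an explicit role-to-obligations map for each AI system in your inventory. The sketch below is a hypothetical example: the role names follow the list above, and the obligation entries are illustrative samples of the kinds of duties each role attracts, not an exhaustive legal checklist.

```python
# Hypothetical mapping from AI life-cycle role to indicative obligation
# areas under the EU AI Act. Entries are illustrative examples only.

ROLE_OBLIGATIONS = {
    "provider": [
        "conformity assessment",
        "technical documentation",
        "post-market monitoring",
    ],
    "deployer": [
        "human oversight",
        "input data quality",
        "usage monitoring",
    ],
    "importer": ["verify provider conformity", "keep documentation available"],
    "distributor": ["verify required markings", "preserve compliance conditions"],
}

def obligations(role: str) -> list[str]:
    """Return the indicative obligation areas for a given role."""
    return ROLE_OBLIGATIONS.get(role, [])

print(obligations("provider"))
```

A map like this makes it easy to see why classifying your role early matters: the same system can put very different duties on a provider than on a deployer.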

5. Copyright and Trade Secrets: Protect and Respect Intellectual Property

The use of AI poses significant intellectual property risks regarding the outputs it produces. AI systems are trained on vast lakes of information, and difficult questions arise: it is known that many AI models have been trained on material that is not licensed, and there are currently copyright claims pending in US courts. Companies should therefore both protect their own intellectual property and trade secrets when feeding information into AI tools, and respect the rights of others in the outputs they use.

Lastly, I would like to highlight the areas that should be reviewed when building your AI governance:

  • Policies and Ethical Guidelines: Develop policies that reflect your organization’s values, ethical principles, and guidelines for responsible AI usage. These should include provisions around fairness, accountability, and human oversight.
  • Explainability and Documentation: Ensure that AI systems are explainable—both in terms of how they function technically and how decisions are made. Maintain thorough documentation outlining the AI system’s architecture, data usage, and decision-making processes, which will be critical for compliance and audits.
  • Commercial Agreements: Incorporate AI-related considerations into your contracts. Ensure that commercial agreements with vendors, partners, and clients clearly define responsibilities related to AI compliance, intellectual property, and data usage.
  • Risk Management: Create a framework for risk identification, mitigation, and management. AI systems should be monitored continuously for potential risks, such as bias or cybersecurity threats. Implement regular assessments to identify new risks as the AI system evolves.
