The Implications of the EU AI Act on Responsible AI Professional Learning and Up-skilling


The EU AI Act, a groundbreaking piece of legislation, comes into force today, marking a significant milestone in the regulation of artificial intelligence. All companies that market, deploy, or use AI systems within its jurisdiction are now required to comply.

For a high-level, comprehensive summary of the EU AI Act, you can refer to this article.

This is a game changer: Responsible AI is no longer just an ethical choice or a marketing slogan; it is now—to some extent—a regulatory requirement.

This shift establishes responsible AI as the norm, making it synonymous with standard AI—something we, at the Mila Responsible AI learning team, have long advocated for in our Trustworthy and Responsible AI Learning (TRAIL) program.

As the AI ecosystems in Europe and beyond examine the impact that this new regulation will have on businesses, it is becoming increasingly clear that the implications for professional training and skills development will be significant.

To identify key areas for up-skilling, company executives, team leads, human resources specialists, schools and universities, and public policy professionals should anticipate the following emerging trends and needs.


Implications for AI Practitioners

The AI Act distinguishes between different kinds of entities (see the full list and definitions) and imposes different obligations on each, depending on their level of control over the technology and the level of risk the systems present.

Providers (developers) of high-risk AI systems are at the forefront of the AI Act's regulatory requirements. These systems, which could significantly impact people's safety or fundamental rights, must comply with strict obligations, including risk management, data quality, documentation, transparency, and human oversight.

Organizations deploying high-risk AI systems are also significantly affected: they must understand and manage the risks associated with these technologies. This involves developing capabilities in areas such as AI ethics, risk assessment, and system monitoring, as well as a comprehensive understanding of the systems’ functionalities and the potential risks they pose.

Finally, general-purpose AI providers, whose systems can be adapted for various applications, face a unique challenge. They must anticipate and mitigate potential risks associated with the versatile use of their technology. This necessitates a broad understanding of various application domains and additional regulatory requirements.

It is evident that there is a need to upskill legal and technical teams, particularly AI professionals working in organizations that qualify under one of the above categories.


This is clearly stated in the Act (Chapter I, Article 4):

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”

Article 3(56) further defines AI literacy:

“‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”


Among the knowledge and skills to be developed, we can include:

  • Ethical AI: Understanding the moral implications of the AI systems at hand.
  • Data Quality Management: Ensuring the integrity and accuracy of data used in AI models.
  • Compliance and Documentation: Mastering the documentation processes required for regulatory adherence.
  • Monitoring and Reporting: Establishing mechanisms to continuously monitor AI system performance and compliance.
  • Impact Assessment: Evaluating the potential impacts and ethical considerations of AI applications throughout the AI life cycle.
  • Interdisciplinary Knowledge: Understanding the diverse contexts in which AI systems could be used.
  • Proactive Risk Mitigation: Anticipating and addressing potential misuse or harmful outcomes with socio-technical tools.


Implications for Other AI Professionals and Leaders

Increasing proficiency in Responsible AI will not be solely a matter for legal and technical teams.

As the Act applies across the entire AI life cycle, compliance will be everyone’s job. It will require collaboration across departments, from tech to sales and from legal to design. AI literacy, and Responsible AI literacy in particular, throughout the workforce will facilitate and help ensure businesses’ compliance.

Furthermore, as new codes of conduct, standards, and additional layers of legislation emerge, having a head start in training and creating a genuine culture of skills enhancement on these issues will prove a major competitive advantage in the European Union's markets.

Thus, in addition to technical professionals, we can note the importance of training professionals in:

  • Procurement professionals, who will now need to consider AI compliance as a critical factor in their purchasing decisions. This includes understanding the AI Act's requirements and ensuring that contracts include the necessary clauses for compliance and risk mitigation;
  • Human Resources professionals, who will play a pivotal role in implementing the AI Act's provisions, particularly those related to employee rights. The Act enables employees, candidates, and workers to lodge complaints about AI systems that they believe infringe on their rights. HR professionals must establish clear channels for handling such grievances, ensuring transparency and fairness in the process. They will also play a role in training employees in responsible AI, fostering a culture aligned with, or ahead of, regulatory standards;
  • Executives across organizations, not only to ensure that global and cross-functional governance structures are adopted and followed, but also to understand the implications of adopting responsible AI practices and support their teams accordingly. From a change management and corporate culture perspective, it is essential that leaders understand how best to approach and adopt AI.


Implications for AI/ML Higher Education Institutions and Schools

While the need to develop learning programs for AI experts currently in the workforce is a priority and a clear business opportunity for continuous learning organisations, providing future AI practitioners with the right knowledge and skills should also be top of mind.

Educational institutions can and must play a pivotal role in preparing the next generation of AI professionals. The EU AI Act invites higher education institutions and universities to review their curricula and ensure that they cover responsible AI material, including AI ethics, impact assessments, and socio-technical mitigation strategies. Students trained in these areas will possess a competitive advantage in the job market, equipped with the knowledge to navigate the regulatory landscape in which they operate.

In addition to AI practitioners, the AI Act suggests that the demand for AI auditors will soon rise. However, because this is both an emerging profession and one that requires a unique combination of skills and expertise, it is fair to say there is a pressing need for rapid upskilling to prevent a shortage of qualified professionals. Educational institutions will be expected to develop and offer advanced training programs and credentials to build a large pool of talented AI audit practitioners.


To go further and better understand what so-called responsible Artificial Intelligence (AI) practitioners or AI ethicists are, I invite you to read the research article “What does it mean to be a responsible AI practitioner: An ontology of roles and skills” by Shalaleh Rismani and AJung Moon. The article examines what responsible AI practitioners do in industry and what skills they employ on the job. This ontology of existing roles, along with the skills and competencies needed for each, serves business leaders looking to build responsible AI teams and provides educators with a set of competencies that an AI ethics curriculum can prioritize.


Last, but definitely not least, the question we should be asking is: who will train all these individuals?

To address the education gaps mapped out, and thus to uphold and enforce current and upcoming AI regulation, there is an urgent need to meet the growing demand for education with more qualified teachers. These are nascent, technical, interdisciplinary, and rapidly evolving topics, and too few people are knowledgeable enough to teach them.

A global strategic plan for "train-the-trainer" initiatives should be developed and supported by policymakers. Only then will we be able to meet the AI Act's ambition for more responsible and trustworthy AI systems.


In a nutshell

The EU AI Act aims to ensure that AI is used responsibly, preserving human dignity and promoting safety and ethical innovation. It is important to note that its application goes beyond the borders of the EU and will not only affect companies globally but will set a precedent and serve as inspiration for regulators around the world.

It mandates responsible AI practices, making it crucial for organizations to adapt and upskill swiftly. Now is an exciting time for all stakeholders—businesses, educators, and policymakers—to come together and ensure a smooth transition towards better AI.

At Mila, we are committed to fostering a culture of responsible AI and helping AI professionals stay ahead in this evolving field. Visit https://mila.quebec/en/learning to find out more.

Such a great read, thank you! It's worth noting that the EU AI Act does not only apply to companies based in the EU - also those companies that have AI products/systems on the European market, regardless of where they are based, might be affected by these rules.

Nadia Pérez

Canadian Risk Manager, Data Governance Specialist and Human-Centric Technology Proponent | Latina In Tech

3 months ago

Thank you for sharing! Responsible AI should not be an exception or a nice offer. I’m glad to see that it is, as you mention, a regulatory requirement to some extent. AI providers and deployers will need to invest in training and learning. But the implications go beyond. Loved the article!

Anna Jahn

Senior Director, Public Policy and Inclusion at Mila - Québec AI Institute

3 months ago

Really thoughtful article on the growing need for well trained and skilled AI practitioners across the AI ecosystem who can assess, measure and mitigate AI risks.

Alexis Beas

SaaS | HR Innovative Solutions | Human Capital Management Cloud Solutions | Innovation | Entrepreneurship | Martech

4 months ago

Excellent article, thank you for this insightful point of view !
