EU AI Act and Procurement

Procurement's Modern Call

In a world where artificial intelligence (AI) is transforming industries, the European Union has taken a bold step with the proposed EU AI Act. As we unlock the potential of AI in procurement, it is paramount to understand the intricacies of this groundbreaking legislation.

The EU AI Act seeks to establish a comprehensive regulatory framework for AI across all sectors, excluding the military. With a focus on risk classification, the Act distinguishes between banned practices, high-risk systems, and other AI applications, setting the stage for a global standard in responsible AI governance.

This article aims to dissect the EU AI Act and its implications for the procurement industry. As AI becomes an integral part of procurement processes, the need for regulation is evident. The EU AI Act not only addresses potential harms and risks associated with AI but also shapes the ethical landscape of AI adoption, making it a crucial topic for procurement professionals.


Definition and scope

The European Commission proposed the Artificial Intelligence Act (AI Act) to govern artificial intelligence within the European Union. This legislation aims to establish a unified regulatory and legal framework for artificial intelligence, covering all sectors except for military applications. The AI Act does not confer rights on individuals. Instead, it focuses on regulating artificial intelligence system providers and entities that use them professionally.

The EU Artificial Intelligence Act aims to classify and regulate AI applications based on their potential to cause harm. This classification falls into three primary categories, illustrated in the short sketch after the list:

  • Banned practices: these involve the use of artificial intelligence to manipulate individuals through subliminal techniques or by exploiting their vulnerabilities in ways that may cause physical or psychological harm. The indiscriminate use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is also restricted, and authorities are not allowed to use AI-derived 'social scores' to unfairly disadvantage individuals or groups. Manipulation, exploitation of vulnerabilities, and social scoring are prohibited outright, while real-time remote biometric identification for law enforcement is subject to an authorization regime with narrowly defined exceptions.
  • High-risk systems: these pose significant threats to health, safety, or fundamental rights. Providers must complete a mandatory conformity assessment, largely based on self-assessment, before placing such systems on the market. For critical applications such as medical devices, the provider's self-assessment against the AI Act's requirements must also be considered by the notified body conducting the assessment under existing EU legislation, such as the Medical Devices Regulation.
  • Other AI systems: systems that do not fall under the categories above are left largely unregulated by the Act. Through maximum harmonization, Member States are largely prevented from regulating them further, and national laws on the design or use of such systems are disapplied. A voluntary code of conduct for these systems is envisaged, though not from the outset.
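To make this taxonomy concrete for procurement teams, here is a minimal illustrative sketch in Python; the category names and the example use-case mapping are our own simplification for illustration, not classifications taken from the Act itself.

```python
from enum import Enum


class AIActCategory(Enum):
    """Simplified representation of the Act's three primary categories."""
    BANNED_PRACTICE = "banned practice"
    HIGH_RISK = "high-risk system"
    OTHER = "other AI system"


# Hypothetical procurement-related use cases mapped to categories
# (our own illustrative reading, not an official classification).
EXAMPLE_USE_CASES = {
    "social scoring of suppliers by authorities": AIActCategory.BANNED_PRACTICE,
    "AI-based credit scoring of bidders": AIActCategory.HIGH_RISK,
    "AI-assisted spend analytics dashboard": AIActCategory.OTHER,
}

for use_case, category in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {category.value}")
```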

The Act also proposes the establishment of a European Artificial Intelligence Board to encourage national collaboration and ensure compliance with regulations.

Regulatory Framework

Turning to the specifics of the procurement landscape, the Act's significance is magnified: although the legislation is sector-agnostic, its risk-based framework offers a clear lens through which to understand and address the challenges that AI adoption raises in this field. With AI integration becoming increasingly common in procurement processes, the Act is more than regulatory oversight; it acts as a guiding principle, shaping the ethical landscape of AI adoption in procurement.

This relevance to procurement is noteworthy because the Act's risk-based approach maps directly onto the complexities of integrating AI in the industry. The Act does not merely prescribe rules; it provides a strategic roadmap for procurement professionals to navigate the multifaceted risks linked to AI. By fostering a clear understanding of these risks, the EU AI Act enables professionals in the procurement field to adopt responsible and transparent AI practices.

In essence, the EU AI Act is not solely a regulatory measure; it embodies an approach that seeks to harmonize innovation with ethical considerations, and its implications reach deep into the intricate landscape of procurement. As businesses continue to unlock the vast potential of AI, the legislation stands as a testament to the European Union's commitment to responsible and sustainable AI adoption, setting a standard that extends far beyond its geographical boundaries.



Strategic Compliance

As businesses integrate AI into their procurement processes, the Act requires a profound shift in the way companies approach technology adoption. Procurement professionals have a crucial responsibility in steering their organizations towards responsible and ethical AI use.

The AI Act provides precise definitions for the various actors involved in AI, including providers, deployers, importers, distributors, and product manufacturers, and it assigns obligations to each of these roles. All parties involved in the development, use, import, distribution, or manufacture of AI systems can thus be held accountable for compliance.

To comply with the Act, companies that adopt AI for procurement should follow a structured three-step approach:

Step 1: Model Inventory

To understand the implications of the EU AI Act, companies should take stock of their current and planned AI models, including those they intend to acquire from third-party providers. Organizations with mature model governance, such as many financial services firms, can simplify this process by drawing on existing model repositories. If an organization does not have a model repository, it should conduct a status quo assessment to determine its potential exposure. Proactive identification can start with existing software catalogs or, where these are unavailable, with surveys sent to the individual business units. This approach is worthwhile even if AI is not yet in active use.
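As a purely illustrative aid, the sketch below shows one way such an inventory could be captured in Python; the field names and the example entries are assumptions made for the example, not terms or tools prescribed by the Act.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AIModelRecord:
    """One entry in an internal AI model inventory (illustrative fields)."""
    name: str
    business_unit: str
    use_case: str
    vendor: Optional[str] = None  # None for models developed in-house
    planned: bool = False         # True if the model is only planned or being acquired


@dataclass
class ModelInventory:
    """Minimal registry built from software catalogs or survey responses."""
    records: list = field(default_factory=list)

    def add(self, record: AIModelRecord) -> None:
        self.records.append(record)

    def for_business_unit(self, unit: str) -> list:
        return [r for r in self.records if r.business_unit == unit]


# Example usage with hypothetical entries.
inventory = ModelInventory()
inventory.add(AIModelRecord("SupplierRiskScorer", "Procurement",
                            "supplier risk scoring", vendor="ExampleVendor GmbH"))
inventory.add(AIModelRecord("ContractClauseExtractor", "Legal",
                            "contract analysis", planned=True))
print(f"{len(inventory.records)} models recorded")
```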

Step 2: Risk Classification

AI models should then be classified according to the risk categories set out in the EU AI Act. The Act gives examples of models posing an unacceptable risk, such as real-time remote biometric identification in public spaces or social scoring systems. High-risk models are allowed but must meet strict requirements, including a conformity assessment before they are placed on the market, registration in an EU database, and comprehensive risk management, data governance, and security measures. Examples of high-risk systems include those used in critical infrastructure, hiring processes, credit scoring, automated insurance claims processing, and the setting of risk premiums. Limited-risk models, such as chatbots or deepfakes, are subject to transparency obligations so that users are informed about the involvement of AI, while minimal-risk models face no additional requirements.
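To show how such a classification could feed into concrete follow-up actions, here is a hedged Python sketch; the tier names and the obligation summaries are a simplified reading of the Act for illustration only, and classifying any real system requires legal review.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


# Simplified obligations per tier, condensed from the description above
# (illustrative summary, not legal advice).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: do not develop or deploy"],
    RiskTier.HIGH: [
        "conformity assessment before placing on the market",
        "registration in the EU database",
        "risk management, data governance, and security measures",
    ],
    RiskTier.LIMITED: ["transparency: inform users that AI is involved"],
    RiskTier.MINIMAL: ["no additional obligations under the Act"],
}


def obligations_for(tier: RiskTier) -> list:
    """Return the simplified obligation checklist for a given risk tier."""
    return OBLIGATIONS[tier]


# Example: a hypothetical credit-scoring model would likely be high-risk.
for duty in obligations_for(RiskTier.HIGH):
    print("-", duty)
```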

The EU AI Act Risk Classification


Step 3: Aligning Practices

If you are a provider, user, importer, distributor, or affected person of AI systems, you need to ensure that your AI practices comply with these new regulations. To begin working towards full compliance with the AI Act, you should take the following steps (a simple tracking sketch follows the list):

  1. Assess the risks associated with your AI systems
  2. Raise awareness
  3. Design ethical systems
  4. Assign responsibility
  5. Stay up-to-date
  6. Establish formal governance
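One lightweight way to track progress on these steps is sketched below; this is a generic illustration of a compliance checklist, not a tool required or described by the Act.

```python
from dataclasses import dataclass


@dataclass
class ComplianceStep:
    """One alignment step with a simple completion flag."""
    description: str
    done: bool = False


steps = [
    ComplianceStep("Assess the risks associated with your AI systems"),
    ComplianceStep("Raise awareness"),
    ComplianceStep("Design ethical systems"),
    ComplianceStep("Assign responsibility"),
    ComplianceStep("Stay up-to-date"),
    ComplianceStep("Establish formal governance"),
]

# Example: mark the first step as done and report overall progress.
steps[0].done = True
completed = sum(step.done for step in steps)
print(f"{completed}/{len(steps)} alignment steps completed")
```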


The 6 Steps



The Broader Landscape

As we navigate through the intricacies of the EU AI Act's regulatory landscape, the analysis presented thus far unveils a comprehensive framework for responsible AI adoption in procurement.

Moving beyond the surface, these regulations are not just legal safeguards; they represent a call for ethical reckoning within the AI landscape. The insights gained from risk classifications and a structured approach signal a departure from mere compliance to a strategic commitment to ethical AI practices.

Delving into the broader implications, the penalties outlined within the AI Act transcend financial consequences. Ranging from €10 million to €40 million or 2% to 7% of the global annual turnover, these fines signify more than punitive measures. They emerge as strategic levers, compelling a profound commitment to ethical AI practices among businesses in the procurement sector.
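To make these figures tangible: the Act's penalty provisions generally apply the higher of a fixed amount and a percentage of worldwide annual turnover. The short sketch below runs that arithmetic for a hypothetical company; the turnover figure is invented and the caps simply reuse the range quoted above, so treat it as an illustration rather than a legal calculation.

```python
def fine_exposure(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and the turnover-based fine."""
    return max(cap_eur, pct * turnover_eur)


# Hypothetical company with EUR 2 billion in annual worldwide turnover.
turnover = 2_000_000_000

# Upper end of the range cited above: EUR 40 million or 7% of turnover.
print(f"Upper tier: EUR {fine_exposure(turnover, 40_000_000, 0.07):,.0f}")

# Lower end of the range cited above: EUR 10 million or 2% of turnover.
print(f"Lower tier: EUR {fine_exposure(turnover, 10_000_000, 0.02):,.0f}")
```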

Considering the potential impact on the procurement industry, these penalties are not isolated threats. Instead, they herald a paradigm shift in how procurement professionals approach AI adoption. Compliance becomes a linchpin for safeguarding reputations, fostering trust, and positioning the industry as a standard-bearer for responsible AI integration. In this dynamic landscape, the penalties are not mere deterrents; they are catalysts for a new era of ethical and responsible AI practices in procurement.


Procurement's Ethical Odyssey

This exploration has revealed a regulatory framework that goes beyond legal compliance. We have analyzed the Act's risk classifications, which not only provide safeguards but also demonstrate a strategic commitment to ethical AI practices in procurement.

It is important to revisit the core elements of the AI Act. It is not just a set of rules, but a visionary guide for responsible AI integration that emphasizes the ethical landscape of procurement. The outlined penalties carry more weight than just financial consequences. They represent a transformative force, pushing businesses towards a future where ethical AI is of utmost importance.

In conclusion, this exploration of the EU AI Act prompts reflection. The way businesses approach AI in procurement is no longer solely a matter of compliance; it is an ethical imperative. The fines are not merely punitive measures; they are catalysts that propel the procurement industry into an era where responsible AI is not just a choice but a necessity. The EU AI Act serves as a beacon, guiding us towards a future where innovation and ethics coalesce, setting new standards for AI adoption worldwide.




Dümpelfeld & Partners GmbH

Visit us at www.duempelfeldpartners.com

Send us a direct inquiry via [email protected]

Or reach us by phone at +49 (0) 211 83 86 7900

