The upcoming EU Artificial Intelligence (AI) Act and its impact

In my previous posts, we saw what risks AI can introduce and why it is so important to have controls in place to stop the accidental or deliberate misuse of AI. The sad fact is that companies usually prioritize profit over controls and will try to reduce costs wherever possible. If it costs too much to secure an AI system, then the company might simply decide not to do it!

This is where regulations come in: they enforce compliance with a minimum set of standards that everyone has to follow.

Regulations are important as they hold corporations accountable and help ensure that AI as a technology has minimum safeguards in place across the board. The consequences of not complying can be regulatory fines or even the removal of an AI system from the market. On the other hand, complying with the regulations can help a company market its product as being “fully compliant”, giving it a competitive advantage over others.

Global AI regulatory landscape

Organizations in the business of making AI systems have historically relied on self-regulation without much oversight. There were no AI-specific regulations in place, and AI systems came under the umbrella of other regulations such as data or consumer protection laws. Seeing the potential risks involved, governments across the world are rising to the challenge and putting in place new regulations to ensure AI risks are identified and mitigated appropriately. New legislation is being passed in the U.S., the UAE, China, and other countries as they vie to take the lead in the AI race.

The most important regulation by far, and the one expected to have the most impact around the world, comes from the European Commission, which in April 2021 issued a proposal for a new act to regulate AI. Similar to how it set the stage for global data privacy laws with the General Data Protection Regulation (GDPR) in 2018, this act is expected to have wide-reaching implications across the world. EU rules usually end up setting the standard for the rest of the world because of the many companies that operate there, so we can expect this act to become a blueprint for other countries to derive their own AI laws from.

The EU AI act - What you need to know

As the world's first concrete proposal for regulating artificial intelligence (AI), the EU's draft AI Regulation is going to have a huge impact on the debate around AI and on how companies adopt AI in the future. The act takes a risk-based approach and categorizes AI systems as follows:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal risk

The basic risk-based principle is that the higher the risk an AI system poses, the more obligations there are on the company to prove to regulators how the system has been built and how it will be used. Systems classified as posing an unacceptable risk are simply banned, such as systems using real-time remote biometric identification in public spaces, systems used for social scoring that rank people based on their trustworthiness, and systems that manipulate people or exploit the vulnerabilities of specific groups.

The bulk of the regulation focuses on high-risk AI systems, which have to comply with an extensive set of technical, monitoring, and compliance requirements that we will look at in detail shortly. Systems classified as limited risk are subject to transparency obligations, while the remaining minimal-risk systems have no obligations but are recommended to adopt codes of conduct to make sure good practices are followed.
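As a rough illustration, the tiered approach above can be sketched as a simple lookup. This is a hypothetical Python sketch: the obligation summaries are paraphrased from the act, and the function and dictionary names are my own, not part of any official tooling.

```python
# Illustrative only: the act's four risk tiers mapped to the broad
# obligation buckets described above (paraphrased, not legal text).
OBLIGATIONS = {
    "unacceptable": "banned from the EU market",
    "high": "conformity assessment, registration, and ongoing compliance duties",
    "limited": "transparency obligations",
    "minimal": "no obligations; voluntary codes of conduct recommended",
}

def obligations_for(risk_tier: str) -> str:
    """Return the broad obligation bucket for a given risk tier."""
    return OBLIGATIONS[risk_tier.lower()]
```

The point of the sketch is simply that classification drives everything: once a system's tier is known, its regulatory burden follows mechanically.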

[Image: How the AI Act categorizes requirements based on risk]

“High Risk” AI systems under the proposed EU act

The act identifies AI systems as being “high risk” when they can potentially endanger the life or health of persons or their fundamental rights. The act contains a list of high-risk AI systems, some of which are mentioned below:

  1. critical infrastructure;
  2. education and vocational training;
  3. employment;
  4. access to and enjoyment of essential private services and public services and benefits;
  5. immigration, asylum, and border control management; and
  6. the administration of justice and democratic processes.

The key requirements for high-risk AI systems will be to undergo a conformity assessment, be registered in an EU database, and sign a declaration confirming their conformity. Think of a conformity assessment as an audit in which the AI system is checked against the requirements of the regulation, which are listed below:

● the implementation of a risk-management system
● technical documentation and record-keeping
● transparency
● human oversight
● cybersecurity
● data quality
● post-market monitoring
● conformity assessments
● reporting obligations

These audits can be done as self-assessments by the company making the AI system or as an assessment by a third party (currently only AI used in biometric systems needs to undergo third-party conformity assessments, while others can take the self-assessment route). If the system is changed after the assessment, then the process has to be redone.

The following diagram illustrates this process:

[Image: the conformity assessment process]
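For illustration only, the route described above can also be sketched in code. The function name and the wording of each step are my own, paraphrased from the process just described, not from the act itself.

```python
# Hypothetical sketch of the high-risk conformity process described above:
# biometric AI currently requires a third-party assessment, other high-risk
# systems may self-assess; any substantial change means repeating the process.
def conformity_steps(is_biometric: bool) -> list[str]:
    """Broad sequence for bringing a high-risk AI system to the EU market."""
    assessment = ("third-party conformity assessment" if is_biometric
                  else "self-assessment against the act's requirements")
    return [
        assessment,
        "register the system in the EU database",
        "sign a declaration of conformity",
        "affix CE marking",
    ]
```

Note the single branch point: only the nature of the assessment changes with the system type; registration, declaration, and marking apply to every high-risk system.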

Once the assessment is passed, the result is a CE marking on your product, which confirms that it is ready to enter the market for EU customers.

Who must comply?

Like the GDPR, the scope of the regulation is not limited to the EU; the law can cross borders and apply to:

● providers who place AI systems on the market or put them into service in the EU;
● users of AI systems located in the EU; and
● providers and users of AI systems located in third countries where the outputs of the AI system are used in the EU (this will be of importance to companies marketing their products to the EU).

How should you prepare?

If you have ever implemented the EU's GDPR, you will know that the EU does not mess around when it comes to non-compliance and can enforce serious fines for breaking its rules. The new AI act follows this trend: fines for using prohibited AI systems (those presenting unacceptable risks) can go up to €30 million or 6 percent of annual global revenue, whichever is higher (well above the maximum fine under the GDPR). Companies that provide misleading information to authorities can also be fined up to €10 million or 2 percent of global revenue.
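The arithmetic behind those ceilings is worth making concrete. The helper below is a hypothetical illustration of the "fixed amount or share of revenue, whichever is higher" structure quoted above; the function name is my own.

```python
# Illustrative: AI act penalty ceilings are the higher of a fixed amount
# or a percentage of annual global revenue.
def max_fine(annual_global_revenue: float, cap_eur: float, cap_pct: float) -> float:
    """Return the applicable fine ceiling in euros."""
    return max(cap_eur, cap_pct * annual_global_revenue)

# Prohibited-AI violation for a company with €1bn global revenue:
# max(€30m, 6% of €1bn) = €60m
worst_case = max_fine(1_000_000_000, 30_000_000, 0.06)
```

For a smaller company, say one with €100 million in revenue, 6 percent is only €6 million, so the €30 million fixed ceiling applies instead; the revenue-based cap only bites for large firms.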

If your AI system comes under the scope of the new act, then it is not something to be taken lightly.

While some have criticized the new EU regulation for being too restrictive, with Europe possibly falling behind other nations in the AI race as a result, chances are high that this act will be enforced, so it is best to start preparations now rather than leave them for later. Taking concrete actions now will ensure you are on the right side of this regulation when it comes into force.

The first and most effective step would be to conduct a gap assessment against this regulation to see where your organization stands and what you must do to become fully compliant. Your company might not have the relevant in-house expertise to conduct these assessments, in which case you would need to reach out to third-party experts who can guide you. Another step would be to create an AI governance framework in your organization to manage and mitigate AI risks as they appear.

We will read more about this in the coming weeks!

My book on AI governance and cyber-security

Udemy course on AI governance and cyber-security

My Blog

My YouTube Channel
