How companies can ensure that they are able to take full advantage of the benefits of AI technologies while minimizing any potential negative impacts

Introduction

In late February, the European Commission released a draft regulation on trustworthy artificial intelligence (AI). The goal of the new legislation is to ensure that companies can take full advantage of the benefits of AI technologies while minimizing any potential negative impacts on privacy. The draft is open for public consultation until April 15 and will take effect once it is enacted into law. Because some elements of the legislation may overlap with existing laws, there are still many unknowns about how it will work in practice. If passed as currently drafted, however, certain aspects may have significant implications both for U.S.-based companies operating in Europe and for EU-based businesses seeking to do business beyond their home country's borders.

The EU wants to be a leader in AI.

The EU’s General Data Protection Regulation (GDPR) and the new rules on intelligent automated decision-making and data protection in the field of telecommunications will help to ensure that the development of trustworthy AI is done in compliance with strict standards. The EU also wants to make sure that AI will have a positive impact on society, which is why it has launched the flagship initiative “AI for Europe” supporting research, innovation and economic growth as well as trust in artificial intelligence technologies.

The EU wants to be at the forefront of this development, setting standards for trustworthy AI that can be applied internationally.

The European Commission is seeking to establish a framework that will guide the development of trustworthy AI.

This framework will help ensure that AI systems are designed and developed transparently, are trustworthy, and respect the rights of EU citizens.

The EU has been proactive in developing this framework. In October 2018, the European Commission announced a €1 billion investment in AI research over the next 10 years and made a commitment to invest €700 million in AI R&D projects over the next three years. This is part of its effort to ensure that the EU is at the forefront of AI development.

AI systems will be deemed high-risk if they are used to evaluate job performance, access social benefits, or make predictions in health care.

Low-risk AI systems include:

  • computer vision that helps with robotic inspection of a parts manufacturing line
  • image recognition software that allows people to tag their friends and family in photos on social media

High-risk use cases of AI include:

  • collecting information about customer behavior based on customer profiles or browsing history, and then making decisions based on those data points. This would apply to any company website where users can provide personal information as part of a transaction (for example, a purchase); one example would be a user selecting an age range when signing up for an online dating service. A simplified sketch of this risk triage appears below.
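To make the distinction concrete, here is a minimal triage sketch in Python. The domain names and tier labels are hypothetical illustrations drawn from the examples above, not categories defined by the draft regulation.

```python
# Hypothetical triage helper: map an AI use case to a coarse risk tier.
# The domain lists below paraphrase the examples in this article; they
# are not the regulation's own definitions.

HIGH_RISK_DOMAINS = {
    "employment_evaluation",     # evaluating job performance
    "social_benefits_access",    # deciding access to social benefits
    "healthcare_prediction",     # making predictions in health care
    "behavioral_profiling",      # decisions from profiles or browsing history
}

LOW_RISK_DOMAINS = {
    "manufacturing_inspection",  # robotic inspection with computer vision
    "photo_tagging",             # image recognition on social media
}

def risk_tier(domain: str) -> str:
    """Return a coarse risk tier for a given AI use case domain."""
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if domain in LOW_RISK_DOMAINS:
        return "low-risk"
    return "needs-legal-review"  # anything unlisted should go to counsel

print(risk_tier("employment_evaluation"))  # high-risk
print(risk_tier("photo_tagging"))          # low-risk
```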

Many of these requirements will not apply to low-risk AI systems.

Low-risk AI systems are defined as those that are used to perform tasks that involve data processing, but do not involve making decisions with significant consequences. Examples of low-risk AI systems include voice recognition and translation software.

Such systems will be subject only to privacy impact assessments and written representations from the organization that they have been designed with strong protections for individuals’ privacy rights in mind. These representations must be made available on request to regulators, or to consumers who contact the organization for more information about how it uses their personal data (or other types of sensitive data).
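As an illustration of what such an assessment record might look like in machine-readable form, here is a hedged sketch. The schema and field names are assumptions of my own; the draft regulation prescribes no particular format.

```python
# Hypothetical record for a low-risk system's privacy impact assessment.
# The schema is illustrative only; the draft regulation prescribes no format.
from dataclasses import dataclass
from datetime import date

@dataclass
class PrivacyImpactAssessment:
    system_name: str
    purpose: str                # what the system does
    data_categories: list[str]  # personal data the system touches
    safeguards: list[str]       # protections designed in
    assessed_on: date
    representation: str         # the written representation, shareable on request

pia = PrivacyImpactAssessment(
    system_name="speech-translation-service",
    purpose="Translate user-submitted speech into another language",
    data_categories=["voice recordings", "submitted text"],
    safeguards=["no retention beyond the session", "encryption in transit"],
    assessed_on=date(2021, 3, 1),
    representation="Designed with strong protections for individuals' privacy rights.",
)
```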

The certification requirements generally follow the structure of product safety legislation.

The certification requirements generally follow the structure of product safety legislation. There are two categories of AI systems: high-risk and low-risk.

High-risk AI systems are subject to more stringent certification requirements than low-risk ones because of the inherent complexity, potential impact, and unpredictability associated with high-risk technologies.

The regulations also contain a number of mandatory requirements that apply to all new products regardless of category (low- or high-risk). These include reporting any known risks associated with the product, putting processes in place for resolving those risks, and providing regular updates on how the company plans to meet those obligations going forward (i.e., a compliance program).

All companies that develop high-risk AI systems must self-assess their products for compliance with the certification requirements.

All companies that develop high-risk AI systems must self-assess their products for compliance with the certification requirements. This can be done by reviewing the Documentation Packet provided by NIST and consulting with relevant experts in your organization. If you do not already have an internal expert in this area, we recommend seeking outside help from a consulting firm or other trusted resource.

For more information on how to assess your system for compliance, please see the NIST self-assessment guide (https://www.nist.gov/programs-projects/ai-privacy).
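A self-assessment of this kind can be kept honest with even a very simple checklist runner. The sketch below is a minimal illustration; the checklist items paraphrase the obligations discussed above and are not an official list from NIST or the draft regulation.

```python
# Minimal self-assessment checklist runner. Items paraphrase the obligations
# discussed in this article; they are illustrative, not an official list.
CHECKLIST = [
    ("documented_risks", "Known risks are reported and documented"),
    ("risk_resolution", "A process exists for resolving identified risks"),
    ("compliance_updates", "Regular updates on compliance plans are scheduled"),
    ("dpo_appointed", "A data protection officer has been appointed"),
]

def self_assess(answers: dict[str, bool]) -> list[str]:
    """Return descriptions of checklist items not yet satisfied."""
    return [desc for key, desc in CHECKLIST if not answers.get(key, False)]

for gap in self_assess({"documented_risks": True, "dpo_appointed": True}):
    print("TODO:", gap)
```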

If a company elects to have its system certified, it may seek certification from an EU-designated body.

A company that elects to have its system certified may seek certification from an EU-designated body. The certification process is governed by the European Data Protection Board (EDPB) and includes a number of requirements that must be met before a company can obtain a certificate. These include:

  • A formal application for the use of the technology with the relevant national data protection authority.
  • The appointment of a DPO, who will act as an independent contact point for any questions or concerns regarding data privacy issues within your company. This person must have knowledge of GDPR policies and regulations, know how they apply to AI technologies, and be able to answer questions on behalf of your organization if required by law enforcement agencies or other state institutions.
  • A review process conducted by an EU-designated certifying body that you choose based on criteria such as experience with AI development projects, ability to meet confidentiality standards, willingness to work closely with other stakeholders involved in developing the technology (including vendors), and availability of staff trained in directly relevant areas (for instance, privacy experts).

A manufacturer may be required to conduct random audits of its performance during the five years following certification.

A random audit by an independent third party is a good way to ensure that your company's AI systems are operating at their best. It can also help verify whether your policies and procedures regarding data protection and privacy are being followed.

An audit will typically be conducted in person by a specially trained auditor, who visits the company, examines how its AI systems operate, and interviews staff members. The purpose is to check that the system is working properly and that no changes have been made that could affect how it operates. Auditors look through documentation about how the system works, any changes made since its inception (including updates or patches), training materials for users and administrators, policies about access permissions for different types of data held within the system (for example, patient records), and so on.
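One practical way to have that change history ready for an auditor is an append-only change log. The sketch below is only an assumption about how such a log might be kept; a production system would want tamper-evident storage rather than a plain file.

```python
# Hedged sketch: an append-only change log giving auditors the
# "changes since inception" trail described above. A real deployment
# would use tamper-evident storage; this shows only the record's shape.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_system_changes.jsonl"  # hypothetical file name

def record_change(component: str, description: str, author: str) -> None:
    """Append one change record: timestamp, component, description, author."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "component": component,
        "description": description,
        "author": author,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_change("scoring-model", "Retrained on Q2 data; patched input filter", "ml-team")
```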

Companies should carefully review the new draft regulation and begin considering how it may impact their businesses and AI development strategies in Europe.

Companies that develop or use AI technologies should start by reviewing the draft regulation to determine whether any of their current practices may need to be modified in order to comply with its requirements. For example, companies may need to update their privacy policies so that they provide sufficient information about how they process personal data using machine learning methods (or other types of artificial intelligence).

Moreover, since the draft would require companies with more than 250 employees that process personal data for automated decisions based on profiling or algorithmic processing to appoint a data protection officer (DPO), companies should consider how this requirement might affect them and how they plan to ensure compliance by appointing an appropriate individual.

Additionally, when developing algorithms used in automated decision processes under Article 22(1)(c), which states that "EU citizens must be able to access information about their rights at any time," companies must ensure that those rights are explained in clear language within any publicly available documents describing such processes (e.g., online privacy notices). Rather than keeping all relevant information in a single centralized location, each relevant document should link back to that central location, so as not to mislead users searching for specific terms like "rights" or "law".

Conclusion

The European Union has proposed a new set of rules for the development and deployment of AI systems. These rules will help ensure that companies developing such systems are held accountable for their activities, while also providing some flexibility to businesses that wish to operate in the EU market. The regulation sets out requirements for certification and testing procedures as well as reporting processes.

Yehor Konovalov

Co-founder, CEO - M. System Agency

6 months ago

Peter, thanks for sharing!

Habibullah Ahbab

I help BUSY Entrepreneurs become the face of their business, just like ELON MUSK, With my proven 4-Step Marketing Process.

1 year ago

Exciting times for the AI community! The new EU regulation marks a significant step towards responsible AI adoption. Looking forward to delving into your insightful analysis and contributing to the conversation.

Iain Borner

Developing a culture of trust in global organisations

1 year ago

Great article, and incredibly hot topic right now. Companies can ensure they are able to take full advantage of the benefits of AI technologies while minimizing any potential negative impacts on data privacy by implementing a strong data governance program. This can include regular reviews and audits of data usage, implementing strict access controls, and providing transparent communication to customers about data collection and usage. Additionally, companies can use privacy-enhancing technologies such as differential privacy and secure multi-party computation to protect sensitive data.
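To make the differential-privacy suggestion in this comment concrete, here is a minimal Laplace-mechanism sketch. The epsilon value and the count query are illustrative assumptions, not recommendations.

```python
# Minimal Laplace-mechanism sketch for the differential-privacy idea above.
# Noise is scaled to the query's sensitivity divided by epsilon, so any one
# individual's record has a bounded effect on the released value.
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Release a count with epsilon-differential privacy (epsilon is illustrative)."""
    return true_count + laplace_noise(sensitivity / epsilon)

print(private_count(1042))  # e.g. 1039.7 -- the exact value varies per run
```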
