In detail: Global AI at work - Regulating responsible AI use

The use of artificial intelligence (AI) in employment is growing at a rapid pace, and the technology’s transformative potential continues to emerge in new guises as it evolves.

With the ability to drive innovation, increase productivity, create jobs in new areas, inject more objectivity into processes, and improve the quality of work undertaken, it’s unsurprising that AI is high on the boardroom agenda and is seeing unprecedented investment. In the first half of 2023 alone, AI firms saw over $40bn in venture capital investment.

However, this rapid expansion raises concerns over fairness, job security, AI bias and discrimination, and data security, resulting in calls for robust regulation to address both legal and ethical challenges. With AI technologies often relying on automated decision-making, where significant decisions are made about people with no human involvement, or profiling, where an automated process analyses or predicts a person’s abilities or behaviours, it is easy to see why regulating the appropriate use of AI is a growing area of focus.

In our previous briefing on this topic – AI at work: addressing the employment risks to realize the rewards – we considered some of the key AI issues in relation to its use in the workplace and the actions that employers should take in response. In this briefing, we take a deeper dive into some of the evolving regulation around the world. A comparison of just a few sample jurisdictions highlights the legal and practical challenges that multinational companies are likely to face from the variety of approaches to AI regulation around the world.

The race is on: a global standard?

In 2019, the Organisation for Economic Co-operation and Development (OECD) and partner countries formally adopted the first set of intergovernmental policy guidelines on AI, agreeing to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.

Two years later, the United Nations Educational, Scientific and Cultural Organisation (UNESCO) produced a global standard on AI ethics – the ‘Recommendation on the Ethics of Artificial Intelligence’. Acknowledging AI as a critical technology for growth and innovation, and the importance of appropriate regulation given the associated risks, the framework was quickly adopted by all 193 Member States.

Fast-forward to the second half of 2023, and global efforts to address AI-related risks have continued, including the creation of a United Nations High-level Advisory Body on Artificial Intelligence, the G7 Leaders’ Statement on the Hiroshima AI Process, and the signing of the Bletchley Declaration on AI safety by the EU and 28 governments.

In parallel with these global efforts, governments around the world have been racing to lead the international conversation on AI governance, developing their own AI strategies and associated rules. By building trust in AI through suitable regulation, governments are striving to accelerate the adoption of AI in their jurisdictions in order to maximise the benefits that the technology can deliver, while attracting investment and stimulating the creation of skilled jobs in new areas. At the same time, they are seeking to balance that drive for trust and progress against the need to keep AI regulation from becoming overly burdensome, which risks activities and investment moving to other jurisdictions instead.

Inevitably, not all governments or regions considering AI regulation are adopting the same methodology. As well as different standards and requirements, different approaches to regulation are emerging, including overarching principles applied across all technologies, sector-focused strategies, and frameworks targeted at specific technologies. As a result, multinational companies will need to closely monitor developments in their operating locations and business sectors, and are likely to have to flex their approach to accommodate the different requirements across borders and business areas.

The UK: "A pro-innovation approach to AI"

With the AI industry contributing approximately £3.7 billion to the UK economy in 2022, and providing an estimated 50,000 new jobs, AI is high on the UK government’s agenda.

The UK government’s policy paper, "A pro-innovation approach to AI", published in March 2023, outlined the UK’s proposed AI regulatory framework. The stated underlying aim is to create a cross-sectoral, principles-based framework that builds trust and in which innovation can thrive.

Rather than creating a single regulatory function to govern AI, the UK’s AI framework sets out a set of key principles (safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress) and supports existing regulators in each developing a bespoke approach to AI development within their sectors. UK regulators will be encouraged to incorporate the key principles into guidance, alongside risk assessments and other tools.

The UK’s cross-sectoral approach is based on the premise that AI is a general purpose technology with applications in many industry sectors, and each sector will have its own nuances and influences from an AI perspective that can then be reflected in the regulation. Although the risk with such a model is inconsistency between sectors, this is addressed in the proposed AI regulatory framework through the UK government retaining a variety of central risk oversight functions. These will include monitoring the regulatory frameworks evolving in other countries, reviewing the effectiveness of measures taken, and ensuring that the approach across sectors is broadly consistent.

In practical terms, for employment practices that are shaped or influenced by AI, the UK approach means that regulation is likely to come from the existing regulatory bodies, including the Equality and Human Rights Commission, the Information Commissioner’s Office, the Health and Safety Executive, the Financial Conduct Authority and the Employment Agency Standards Inspectorate. Employers in the UK already look to these bodies for guidance, so the approach has the benefit of being an extension of current practice, with those bodies identifying and publishing appropriate standards and advising employers on them.

The US: “Safe, secure, and trustworthy development and use of AI”

In the US, there are no federal laws that specifically regulate AI in employment, although the National Institute of Standards and Technology (NIST) has been appointed as the lead agency to promote trustworthy AI. NIST has published standards and guidance, including an AI Risk Management Framework.

In addition, some states have started to address the use of AI technologies, including in employment. For example, a new law in New York City prohibits employers and employment agencies from using certain AI tools in the hiring or promotion process unless the tool has been subject to a bias audit and prior notice has been given to employees and job candidates.
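
To make the bias audit concept concrete, the following is a minimal Python sketch of the kind of impact-ratio arithmetic such audits typically involve: computing each group’s selection rate and comparing it to the most-selected group. The candidate data and group labels are entirely hypothetical, and a real audit under the New York City rules has detailed requirements well beyond this calculation.

```python
# Hypothetical sketch of an impact-ratio calculation of the kind used in
# bias audits of automated hiring tools. All data below is invented.
from collections import defaultdict

# (demographic category, was the candidate selected by the AI tool?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"selected": 0, "total": 0})
for category, selected in outcomes:
    counts[category]["total"] += 1
    counts[category]["selected"] += int(selected)

# Selection rate per category: selected / total.
rates = {c: v["selected"] / v["total"] for c, v in counts.items()}

# Impact ratio: each category's rate divided by the highest rate.
best = max(rates.values())
for category, rate in sorted(rates.items()):
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

In this illustration, an impact ratio well below 1.00 for a group would prompt further scrutiny of the tool before it is relied on in hiring or promotion decisions.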

Most recently, a detailed Executive Order issued in October 2023 sets into motion a comprehensive US strategy for the responsible development and use of AI. The Executive Order establishes new standards for AI safety and security. With some similarities to the proposed UK approach, it directs US executive departments and agencies to develop standards, frameworks, guidelines and best practices, using their existing authorities to regulate AI. In addition, new reporting requirements will come into force in January 2024 for private sector developers of the most powerful AI models (dual-use foundation models).

From an employment perspective, the Executive Order will lead to steps to address both AI-related workforce disruption, such as job losses, and the adoption of principles to mitigate AI’s potential harms to employees’ wellbeing. The Council of Economic Advisers is tasked with evaluating the effects of AI on the labor market, which the Secretary of Labor will then use to inform the steps that may need to be taken at a federal level to assist impacted workers. In addition, the Secretary of Labor, in consultation with labor unions and workers, is tasked with developing principles and best practices for employers.

Companies operating in the US and using AI technologies in their operations will therefore need to closely monitor developments, both arising from the Executive Order and at State level. In the employment field, such developments are anticipated to focus on the principles and best practices around employers’ AI-related collection and use of workforce data, as well as social protection for employees whose work is monitored or augmented by AI.

The EU: “A risk-based approach”

The European Union has recently proposed an AI Act, which is currently making its way through the EU’s ordinary legislative procedure. As a proposal for a regulation, it will be directly applicable and immediately enforceable in the Member States upon entering into force, which is anticipated to be during 2025 or 2026.

The AI Act takes a risk-based approach to the use of AI technology and sets out what companies can and cannot do based on the level of risk linked to particular AI uses. Amendments to the proposed AI Act may still be made as it works its way through the legislative process, but AI in the employment context is currently considered high risk if it is used in relation to: the recruitment or selection of candidates; evaluating candidates; hiring, promotion and termination decisions; task allocation based on individual behaviour, or personal traits or characteristics; or performance management.
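
As a purely illustrative sketch of how an employer might triage its AI use cases against those categories, the Python snippet below encodes the high-risk list from the current draft as a simple lookup. The category labels are our own shorthand rather than terms from the Act, and any real classification exercise would need legal analysis of the final text.

```python
# Illustrative internal triage helper: flags employment-related AI use
# cases that fall within the high-risk categories described above.
# Category names are invented shorthand, not terms from the AI Act.
HIGH_RISK_EMPLOYMENT_USES = {
    "recruitment_or_selection",
    "candidate_evaluation",
    "hiring_promotion_termination",
    "task_allocation_by_behaviour_or_traits",
    "performance_management",
}

def is_high_risk(use_case: str) -> bool:
    """Return True if the employment use case falls in a high-risk category."""
    return use_case in HIGH_RISK_EMPLOYMENT_USES

# Example: a CV-screening tool used to shortlist candidates is caught,
# while a use case outside the listed categories is not (in this sketch).
print(is_high_risk("candidate_evaluation"))   # True
print(is_high_risk("meeting_transcription"))  # False
```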

From a practical perspective, and given the range of employment practices falling within the high-risk category, the AI Act is likely to require employers using AI in employment to implement significant compliance measures. This is due to the increased governance applied to high-risk use cases, including robust principles around data governance, technical documentation and record-keeping, transparency and human oversight. In addition, since the requirements of the AI Act are proposed to extend to companies based outside the EU where an AI system is placed on the EU market or its use affects people located in the EU, it will not just be employers based inside the EU that must take such measures.

Compared to other models around the world, the AI Act is considered to sit at the stricter end of the regulatory landscape in terms of the requirements linked to its risk-based approach. That standing is further reinforced by the proposed sanctions for non-compliance, which could see companies face fines of up to EUR 40 million or 7% of worldwide annual turnover (whichever is higher), while the existence of a separate proposed new EU AI Liability Directive will also make it easier for claimants to prove damage caused by AI systems.

China: “Application-specific AI regulation”

China was one of the first countries in the world to roll out regulations around the development and deployment of AI technologies. Using a targeted approach with regulations for specific technologies, rather than umbrella regulations attempting to cover all applications, China’s model is in many respects different to the approach taken by other jurisdictions.

Much of the regulation in China has focused on the use of algorithms in AI technologies and on generative AI. This has included a new governance framework for regulating algorithmic recommendation systems, which came into force last year, as well as a new framework regulating generative AI, effective from August this year – the “Interim Measures for the Management of Generative Artificial Intelligence Services” (GAI Measures).

In line with the principles adopted in many other parts of the world, the GAI Measures include requirements for effective measures to prevent discrimination and to increase the transparency, accuracy and reliability of generative AI technology. Such principles are also reflected in China’s AI governance frameworks for other technologies, including reinforcing the need to comply with existing privacy rights and rules and to protect the physical and psychological wellbeing of individuals.

One of the advantages of China’s targeted approach with regulations for specific technologies is that it can specifically address issues or concerns linked to the particular technology, including in relation to the rights and protections of specific groups. For example, in response to concerns raised about the role algorithms can play in creating exploitative and dangerous work conditions for some groups of workers (for example, delivery workers), the rules for recommendation algorithms grant enhanced protection to such groups.

Providers of ‘work coordination services’ are required to ensure that algorithms related to assigning orders will not jeopardise the legitimate rights and interests of workers, including their labour rights in relation to salary composition and payment, work times, rewards and penalties. For example, algorithms may be used to optimize round-trip routes for takeout delivery workers, reduce work intensity, fairly assess the optimal order capacity of workers and support reasonable working hours.
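
As a hypothetical illustration of the kind of worker-protective constraint these rules contemplate, the sketch below gates order assignment on a daily working-hours cap. The cap, the per-order time estimate and the field names are all assumptions made for the example, not requirements taken from the rules themselves.

```python
# Hypothetical sketch: an order-assignment check that respects a daily
# working-hours cap. All thresholds and names are assumed for illustration.
from dataclasses import dataclass

MAX_DAILY_HOURS = 8.0        # assumed cap on daily working time
EST_HOURS_PER_ORDER = 0.25   # assumed average round-trip delivery time

@dataclass
class Worker:
    name: str
    hours_worked_today: float

def can_assign_order(worker: Worker) -> bool:
    """Only assign a new order if it keeps the worker within the cap."""
    return worker.hours_worked_today + EST_HOURS_PER_ORDER <= MAX_DAILY_HOURS

rider = Worker(name="rider_1", hours_worked_today=7.9)
print(can_assign_order(rider))  # False: another order would exceed the cap
```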

As the range of AI technologies continues to evolve, it is anticipated that AI regulation in China will keep developing, targeted at those application-specific developments. Companies using AI technologies in their operations in China will therefore need to keep track of those developments and should also be mindful of any extraterritorial scope. For example, similar to the EU AI Act, the GAI Measures extend to companies based outside China that provide generative AI services within China.

Next steps and how we can help

As governments and regions seek to develop principles, standards and priorities that strike a balance between encouraging innovation and building effective guardrails to protect against societal harms and ensure the safe and secure development and use of AI, a wave of new laws and regulations is expected to continue to emerge.

Employers using any type of AI system in their employment processes and practices can expect to see increased regulation in this area across many operating locations and should continue to monitor developments. Although many of those developments will be built on similar themes relevant to the safe and ethical use of AI, the differences in approach between jurisdictions and sectors are likely to bring new challenges for companies operating across borders.

Our teams across the world can help companies monitor the horizon for new and upcoming developments that might impact their businesses. Our teams also have significant experience of supporting employers to steer through the legal, regulatory and practical implications associated with the implementation and use of AI technologies in the workplace.

For more information, please contact:

Hannah Mahon, Partner

Jack Cai, Partner

Rachel Reid, Partner
