I have been writing a lot about AI, and in my human rights lecture I will again be discussing AI and human rights law. In this article I discuss AI in the context of employment law and potential cases.
Artificial Intelligence and employment law
How would you react if you had won an Employment Tribunal claim, only to be told that the Judge’s decision had been written by AI?
This thought-provoking scenario recently became a reality when a Judge used ChatGPT in part of a judgment. While the Judge emphasised the care needed when using AI tools, the incident sparks questions about what generative AI might mean more widely for employment tribunals, employment law and managing employees. It is an area I am very interested in.
What areas should employers look at, and what is the impact of AI on employment law?
In my view, all organisations, as employers, should be considering the following issues:
- Key differences between various types of AI technologies.
- What is driving the current focus on generative AI, such as ChatGPT, compared to other types of AI.
- The potential risks and challenges associated with AI in decision-making, and how employers can address them.
- How AI systems sit alongside employment law concepts such as ‘fairness’ and ‘reasonableness’.
- Data protection considerations and legal obligations for employers when using AI for decision-making and information processing.
- How organisations can establish effective governance and policies around AI technology.
- The role of training and transparency in the responsible use of AI in the workplace.
- The government’s approach to the regulation of AI at work.
What are the key components of a policy to regulate employee use of AI?
Here are some key questions to consider if you’re thinking about introducing a workplace policy to regulate the use of generative AI, such as ChatGPT.
Of course, for more structured advice please do get in touch; this is a general view.
Firstly, have a think about whether you need an AI policy at all. I am not a fan of long policies, as I discuss in my lectures, so consider:
Why is the policy necessary?
If external AI assistance is being utilised in your workplace, it is likely that you will need to put some limitations or a framework around its use. Some employers are imposing blanket bans on the use of generative AI because of the risks around accuracy, confidentiality, discrimination and data security at this early stage of its development. In those circumstances, a detailed policy would not be required.
I am not always in favour of policies, but having established the need and a rationale for one, the following key points should be considered for inclusion. Look at the scope of the policy, whether you need to consult on it, and whether it aligns with your other policies.
- Scope – including the types of AI tools covered, the parts of the business and the categories of employee included. Does the policy cover employees/workers/consultants/contractors/volunteers/work experience placement students?
- Consultation – are you legally required to consult your workforce? If not, then consider whether consultation with your workforce may be beneficial, given the level of concern about the impact of AI amongst many employees and the complexity of the issues.
- Other policies – are there other policies which should be referenced, or which would cross over with an AI policy? For example, IT policies, data security policies and Bring Your Own Device policies.
- Ownership and roles – who is responsible for developing and maintaining the policy? How often will the policy be reviewed and, if necessary, updated? Given the pace of change in relation to generative AI, frequent review will be required. Will the policy be enforced by the same people who have ownership of the policy?
- Record keeping and monitoring – will you require employees to record their use of AI? How will any monitoring take place and how will you ensure you comply with existing legal limitations on monitoring? Will this involve any new types of monitoring? If it does involve new monitoring, it may be necessary to undertake a data protection impact assessment / privacy impact assessment.
- Use of generative AI – how does your business or organisation anticipate AI tools being used for business activities? Will their use be limited to internal purposes only, or extend to external client/customer work? Will decisions be based on generative AI and, if yes, what decisions and who will be affected by those decisions? Will employees be allowed to use their personal devices, will it be mandatory for generative AI to be used only on employer-provided devices, or will both be an option? Will you require employees to ‘opt out’ of the application’s ability to train itself using data gathered after a user has entered a prompt? If not, this should be covered in your impact assessment.
- Risk of inaccuracies – what would be the impact of inaccurate outputs of the generative AI tools? Would these significantly affect individuals? How will any inaccuracies be managed and who will be accountable for the risk of these internally?
- Guidelines – what standards will you require of employees using generative AI for their work? When developing these, you will need to take into account the risks that are likely to be triggered when using AI for business activities. A non-exhaustive list of examples is as follows. Confidentiality of use of data – for example, employee, customer or supplier data. Compliance with data protection policies and legislation. Intellectual property rights and licensing – for example, any licensing conditions imposed by AI applications’ terms of use, and restrictions on unauthorised entering of third-party data. Equalities, discrimination and ethics – ensure that discriminatory language is not used when entering prompts into applications. Consider cross-referencing other equality/diversity policies, and any Code of Conduct and/or organisational ethics policies. Think about explicitly extending restrictions on bullying, harassment and discrimination to the use of generative AI, and ensure that generative AI is authorised only for ethical, responsible use. Security measures – cross-referencing other IT security policies, strong passwords etc. ‘Human in the loop’ requirements – what is your approach to ensuring that critical thought is applied to AI-generated outputs? How will you ensure that you comply with existing legal requirements in relation to automated decision-making? How will you be able to explain any decisions made in reliance on AI data?
- Training – what training on AI will be offered, and how will it be provided? Will it be voluntary or mandatory? One of the main concerns currently being reported amongst employees is a lack of training on AI, despite a widespread belief that AI will have an important impact on job roles. What technical support will be provided on the use of applications? Who is the point of contact for training and support?
- Enforcement – who should breaches be reported to, what sanctions will be imposed for breach of the policy and what factors will be taken into account when considering the seriousness of the breach? Consider whether you will require employees to provide access to devices on which they have used a generative AI application and whether they will need to provide passwords/log in details.
These are some of the broad, high-level issues which should be taken into account at the policy and drafting stage.
The extent of the use of generative AI within a workplace will depend very much on an individual organisation’s requirements and the extent to which it wishes to build in protections against the various risks currently associated with employee use of AI applications.
Careful consideration should be given to the creation and implementation of AI policies (whether on the employer or employee side) – the requirements will depend very much on each individual employer; there is no ‘one size fits all’ approach. But a well-drafted, written policy which is clearly communicated to staff and regularly updated is recommended to provide reassurance to employers who wish to harness this powerful new technology whilst limiting the risks.
House of Commons briefing: Artificial Intelligence and employment law
This article was not written by AI.
To discuss your needs further, do contact my clerks, and do subscribe to this newsletter.
I am available to speak and lecture specifically on the issue.
Please look out for the second edition of my book Penni on Cybersecurity, and the second edition of a book co-written with Helen Wong MBE, published by Bloomsbury Publishing Plc.