The Verdict - July 2023
Epoq Legal UK
For most people, legal services are complicated, costly and intimidating. We created Epoq to change that.
Contents:
- Tribunal provides guidance on the right to freedom of belief and expression in the workplace
- The legal risks of using AI tools in your business
Tribunal provides guidance on the right to freedom of belief and expression in the workplace
In a recent case, the Employment Appeal Tribunal has provided welcome guidance on what employers should consider before taking any action to limit an employee's right to freedom of religion and belief and freedom of expression.
The case
A Christian woman working at a secondary school was suspended and, following a disciplinary hearing, subsequently dismissed for gross misconduct after the school received complaints about posts she made on Facebook.
Her posts had criticised teaching in primary schools which she felt normalised same-sex marriage and gender fluidity.
She claimed at an employment tribunal that the school's actions amounted to direct discrimination because of her protected beliefs.
The tribunal dismissed her claims.
It found that the school hadn't dismissed her because of her beliefs but because it felt that the language she used in her posts might lead someone to conclude that she held homophobic and transphobic views.
She appealed the decision.
The Employment Appeal Tribunal (EAT)
The EAT allowed her appeal.
It found that her posts were a manifestation of her belief and an exercise of her legal right to freedom of thought, conscience and religion and freedom of expression.
The tribunal was therefore required to assess whether the school's disciplinary action was necessary to protect the rights and freedoms of others, while at the same time recognising the importance of her right to freedom of belief and expression.
Her case was sent back to the tribunal, which now has to reconsider its decision.
Guidance
To assist the tribunal, the EAT provided general guidance on how to approach the right to express religious or other philosophical beliefs in the workplace (although it noted that all such cases are fact-specific).
The guidance notes that the freedom to manifest and express belief (religious or otherwise) is an essential right in any democracy, even if some may find that belief offensive.
However, employers may limit that right to the extent necessary to protect the rights and freedoms of others (e.g. through disciplinary action).
Before taking action to protect the rights of others, an employer must first consider whether:
- their goal is important enough to justify that action;
- that action is connected to their goal;
- a less intrusive measure could be taken; and
- they've properly weighed the severity of the action against the importance of their goal.
Employers should look carefully at:
- the content of the expression;
- the tone used;
- the extent of the expression;
- the employee's understanding of the likely audience;
- the nature and extent of the intrusion on the rights of others and any impact on the employer's ability to run their business;
- whether the employee has made clear their views are personal or whether they might be seen as representing the views of the employer;
- whether there's a potential power imbalance given the nature of the employee's role and that of those whose rights are intruded upon; and
- the nature of their business, in particular where there's a potential impact on vulnerable service users.
What this means for you
This guidance gives employers much needed clarity on how to approach complaints about an employee's expression of religion or belief.
You must get the balance right between allowing your employees to express their protected beliefs and making sure they do so in a way that doesn't harass or discriminate against others.
When action is necessary, you should always assess whether that action is proportionate and consider less severe alternatives.
Consider using the guidance to draw up a social media policy that clearly sets out what you'll take into account when deciding what action to take (if any) when you receive a complaint.
While there's no 'one size fits all' approach, the hope is that the guidance will encourage employers and employees to resolve any disputes in the workplace rather than in an employment tribunal.
The legal risks of using AI tools in your business
The recent popularity of ChatGPT has many businesses considering using generative AI tools to improve the way their business operates. However, if you intend to implement AI solutions in your business, it's important to be aware of the risks involved.
What is generative AI?
ChatGPT is an example of generative AI – an artificial intelligence technology that uses algorithms to generate new content, including audio, code, images, text, simulations, 3D objects, and videos, based on patterns learned from existing data.
More specifically, it's a type of generative AI called a large language model (LLM) that's designed to understand and generate human-like language using a text interface.
There are a number of ways to use generative AI in your business, including to:
- improve productivity and efficiency through automating routine tasks;
- create marketing assets;
- write search engine optimisation copy;
- provide enhanced data insights; and
- communicate with customers.
You should be aware, though, that your business's use of generative AI raises a number of legal issues.
Data privacy and protection
Data privacy
Using generative AI tools carries a risk that you'll inadvertently make personal data publicly available.
Any information you input into an online generative AI tool is transmitted to its provider. The provider can then use this information to generate future outputs that could be disclosed to the public.
See the ICO's guidance on AI and data protection and its blog article 'Generative AI: eight questions that developers and users need to ask' for more on data privacy risks.
Data security
AI models are susceptible to adversarial attacks, where malicious actors exploit vulnerabilities to manipulate the model's behaviour. These attacks can lead to compromised decision-making processes, financial losses, or reputational damage.
Additionally, the integration of AI within business processes can create new avenues for insider threats.
The National Cyber Security Centre has published advice on the security of ChatGPT and LLMs, as well as tips on assessing AI tools for cyber security.
Intellectual property rights infringement
Lack of transparency about the origin of materials used for training generative AI models raises concerns about intellectual property rights, in particular copyright.
Copyright-protected information may be used to train generative AI models, which may amount to infringing the copyright of the rights owner. This information could in turn be reproduced verbatim (or almost verbatim) in responses to a user prompt, without any credit or reference to the source or the author.
Generative AI models also currently can't properly list and credit the materials they reproduce, making it difficult to obtain the necessary authorisation from the rights owners, while AI model owners often disclaim any responsibility.
Error and bias
AI is only as good as the data it's trained on.
If this data is old, incomplete or inaccurate, AI tools will produce inaccurate or out-of-date results.
This can lead to 'hallucinations', where a tool confidently presents false information as fact.
Similarly, training data that contains bias will result in tools that propagate bias and discriminatory practices.
You should always critically assess any response produced by a generative AI model for potential biases and factually inaccurate information.
You should also establish protocols for the regular review of the datasets used to train AI models, to ensure they remain up to date and accurate and to remove bias.
Regulatory requirements
Existing regulations continue to apply to the use of AI; these include the UK GDPR, sector-specific regulations (e.g. for financial services and transport) and product safety regulations.
The Government has also published a policy paper outlining its intention to develop a framework for AI regulation, to be implemented by existing regulators.
* This bulletin is for general purposes and guidance only and does not constitute legal or professional advice. Its contents should not be relied on or acted upon without specific advice from a licensed legal practitioner.