AI Action Summit 2025: Is UK law adequate for the challenges of AI?
Lucy Mason
Innovation Lead at Invent | AI Regulation and Policy, Defence, Space and Security Expert
By Lucy Mason and James Wilson
As Artificial Intelligence (AI) continues to evolve at pace and AI tools become rapidly embedded in business, it is worth considering whether this could lead to significant increases in criminal activity (including new types of offence) or enable other forms of harm, and whether our current legal system might struggle to adapt. There are concerns that AI agents could challenge how we think about and assign liability, by acting autonomously or taking unexpected actions to achieve a given outcome.
AI itself is, of course, only a tool like any other technology or machine: the legal principle of mens rea applies. Much of the current development of AI is focused on the goal of achieving Artificial General Intelligence (AGI), broadly defined as AI that is as capable as an adult human being across a diverse range of competencies. One possible by-product of these efforts is that AI may develop sentience, or even consciousness. In the absence of such developments, however, AI is just a technology: it has no intent of its own, no understanding of right or wrong, and no ability to ascribe meaning to its actions, and therefore cannot be held responsible. An AI machine cannot be brought to a court of justice to answer for its actions. Nor can it be punished, whether by fine or imprisonment, as these have no retributive or learning value for it: at best, the AI machine can be switched off. In its current state of development, AI will only act based upon the training it has received, and it is the responsibility of the developers to ensure that its goals, and its approach to achieving them, are aligned with our best interests.

It is possible that at some point in the future a sufficiently advanced AI might develop its own intentions and act on its own behalf, outside the intentions of its programmer or user. In that case AI might be treated as a separate legal entity, and a new kind of law and justice system would probably need to be developed to manage it. This would be an extremely significant development, signifying that the AI had developed a degree of moral agency and a level of consciousness. Such a state would necessitate a significant amount of pre-emptive work by both the development teams involved and the regulatory and state authorities responsible for ensuring that the AI’s actions remain fully aligned with humanity’s best interests. The UN’s Universal Declaration of Human Rights would also need to be revised to ensure that human rights are suitably protected after the appearance of this equivalent (or possibly superior) consciousness, which would need its own form of rights and personhood.
But for the moment, at least, responsibility remains with the humans creating and using the tool, which is one good reason to maintain a responsible human-in-the-loop in AI systems, especially safety-critical ones. AI tools and agents could be used by people to facilitate and commit many forms of crime, such as online fraud, cyber crime, cyberstalking or online harassment. In such cases the responsibility for the offence lies with the individual who used the tool or agent to perpetrate the crime: laws criminalising fraud, for example, are technology-agnostic, and the crime has been committed regardless of the means used. The criminal justice system will be able to weigh the circumstances of each specific case, such as what outcome the AI tool had been asked to achieve and whether the human setting the parameters had taken reasonable care to prevent the risk from occurring, in time building up case law. Under existing law, some liability may in certain instances lie with the developers of the AI tool, if they knowingly created it specifically for the purpose of committing the crime, or were reckless in creating a tool without sufficient safeguards that could be used to commit a serious crime; however, this is yet to be tested in the courts, and it may be difficult to reach the threshold of evidence needed to support such a conclusion. In one area of law, conspiracy, the statutes define the offence as involving two or more people, so the law may need to be updated to cover the situation where one of the parties is an AI agent.
AI could also be integrated into machinery which causes harm unintentionally – such as an accident caused by a driverless vehicle, drone or UAV. Such cases could be extremely complex and context-dependent to investigate, and liability may be shared among many actors in the case-specific chain of events: from the developer and programmer of the software, to the manufacturer, seller and distributor, to the operator, or even the policymakers who failed to regulate the technology suitably. Law courts may find themselves under-equipped to understand the technical detail and the level of responsibility borne by each party, depending on whether the vehicle operated as intended, was subject to safety checks, or malfunctioned. This scenario is further compounded by the well-documented ethical and cultural complexities around harm caused during an accident as a result of the decisions taken by the algorithm controlling an autonomous vehicle.
A third scenario is one in which an AI agent has acted outside the knowledge or intention of the human user, who was unable to predict such an outcome. It would be difficult in such a case to prove that the human user had any intention for the outcome to occur; nevertheless, there is an analogy with ‘dangerous dog’ legislation, where the dog is an autonomous entity but its owner is held responsible for its actions. The buyer and operator of AI agents should act with a suitable level of common sense, foresight and responsibility to prevent obviously harmful outcomes, while the creator of the AI agent should have considered the risks of harm and taken action to mitigate them, for example by instilling rules about what the AI agent may and may not do at a programmatic level. Unless such an approach is taken, human operators would have an incentive to avoid finding out what their AI agent is doing, if ignorance were considered a defence against liability for the outcomes.
In other areas, such as protecting people’s personal data, the use of AI likely falls under existing Data Protection/GDPR regulations, which in the UK are overseen by the Information Commissioner, and would be managed through existing channels. Copyright and Intellectual Property ownership are generally well covered in law; however, there are a number of well-publicised legal cases ongoing relating to the unauthorised use of copyrighted material for AI model training. This has led to some currently unresolved questions around whether the current understanding of “fair use” of protected materials is appropriate for the future. One additional concern around copyright relates to the earlier point about AI gaining rights and a form of personhood; under those conditions, there is some discussion as to whether an AI platform could retrospectively challenge the copyright or ownership of any designs created using the model.
Other potential harms from AI should be managed by the supply chain – in particular developers, integrators, and sellers – under Health and Safety, Product Safety, Liability and Corporate Law regulations, and through software safety approaches which protect the end user against harms caused by unsafe products or activities. It is unclear how Freedom of Information rules might apply to the use of ‘black box’ AI models, as it could be practically difficult to identify or remove information used or generated by the model: this is the sort of area where explainable AI models might help.
It is hard to see any areas where the use of AI creates genuinely new types of crime or harm for which new laws would be needed. To future-proof the legislative system further, it might be useful to clarify some areas of law so they can be more readily applied, specifically those which deal with ‘upstream’ responsibilities of developers, distributors and online service providers. Other areas of law could be strengthened, such as making it a specific offence to poison the training data of AI models in order to manipulate them (likely covered by computer misuse and fraud laws and statutory guidance on supply chain interference), making it an offence to use AI tools to evade detection (likely to be perverting the course of justice), and outlawing deepfakes generated to cause mass panic (likely wasting police time, an existing crime). To future-proof even further against emerging technologies, it might be useful to establish in International Human Rights law a principle of the right to mental freedom and privacy, pre-empting the future use of brain-computer interfaces to establish guilt, or intention to commit crime.
In conclusion, current UK legislation seems generally adequate to cover the range of potential crimes and harms which AI could help to enable, as most of these crimes already exist in non-AI-enabled forms. However, AI could make such crimes easier to commit and increase the volume of harm, potentially overwhelming law enforcement agencies and the criminal justice system. Courts will increasingly find themselves grappling with the many ways AI is used in all sorts of crimes, and may struggle with the technical insight, know-how and capability needed to manage complex digital evidence. As policy-makers meet in Paris at the AI Action Summit, they may wish to consider how to boost the resources that will be needed to manage the likely increase in AI-enabled crimes.