Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC)
KBG Chambers
KBG Chambers is a leading Barristers' Chambers on the Western Circuit. Highly recommended in the Legal 500.
Following Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC), Samuel Knight explores the dangers arising from the increasing use of AI in civil claims.
“Chat KC” – A long way off, it seems…
Summary: This article explores the recent decision of the First-tier Tribunal in Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC) and, in particular, the Tribunal’s comments regarding the use of AI systems such as ChatGPT in civil litigation. It concludes that AI, in its current form, is dangerous when used for those purposes and that there is a long way to go before AI replaces lawyers.
The recent decision of the First-tier Tribunal (FTT) in Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC) raises some interesting concerns regarding the use of AI systems such as ChatGPT in civil proceedings.
The Facts – paragraphs [1]-[22]
The matter itself was a fairly standard, run-of-the-mill tax case in which the Appellant disposed of a property and failed to notify her liability to capital gains tax. HMRC issued a “failure to notify” penalty of £3,265.11 and the Appellant appealed the penalty on the basis that she had a reasonable excuse arising from her mental health condition and/or because it was reasonable for her to be ignorant of the law.
The decision on that underlying issue is not addressed in any great detail in this article, but in summary, the FTT dismissed the appeal as it did not consider that the Appellant had a ‘reasonable excuse’ for failing to notify HMRC of the capital gains tax payable (see paragraphs [52]-[63]).
The interesting point about this case is that the Appellant, in preparing her appeal, consulted an AI program rather than a human lawyer for assistance. Consequently, the Appellant:
“provided the Tribunal with the names, dates and summaries of nine First-Tier Tribunal (“FTT”) decisions in which the appellant had been successful in showing that a reasonable excuse existed. However, none of those authorities were genuine; they had instead been generated by artificial intelligence (“AI”).” (paragraph [2] of the decision)
The AI-Generated Cases
Some of the fictitious cases relied on by the Appellant are set out in brief at paragraph [20] of the judgment; a few examples are as follows:
(a) The leading authority on the approach the FTT should take in reasonable excuse appeals is the UT judgment in Christine Perrin, commonly referred to simply as Perrin. The cited case of "David Perrin" uses the same surname and also concerns an appeal against a penalty on the grounds of reasonable excuse. However:
(i) the appellants have different first names;
(ii) the dates of the judgments are not the same; and
(iii) Christine Perrin lost her appeal whereas "David Perrin" succeeded.
(b) In the cited case of "Baker v HMRC (2020)", the appellant challenged a penalty on the basis that his mental health difficulties provided him a reasonable excuse. This mirrors what happened in the Richard Baker judgment identified by Ms Man, see §17(1) above. However, that case was decided in a different year from the cited case, and Mr Richard Baker lost his appeal, unlike the appellant in the cited case.
(c) In the cited case of "Smith v HMRC (2021)", the appellant successfully claimed a reasonable excuse on the basis of mental health difficulties. There is a genuine decision, Smith v HMRC [2018] UKFTT (TC), in which Mr Colin Smith similarly submitted that he had a reasonable excuse on the basis of "confusion and poor health", but that case was again decided in a different year from the cited case, and Mr Colin Smith lost his appeal, unlike the appellant in the cited case.
Not only were the superficial details of the cited cases incorrect (such as the names and dates), but the outcomes were completely wrong as well. The AI system appears simply to have taken the names and brief facts of genuine decisions and bent them into shape to suit the Appellant’s case. Those circumstances are helpful to neither the Appellant nor the Tribunal, and they clearly do not facilitate justice.
It is worth noting that the Tribunal found at [23] that the Appellant genuinely did not know that the AI cases were not real; nor did she know how to find that out by searching databases such as BAILII or the FTT’s own website. Nevertheless, the Tribunal was scathing at [24] when it decided that:
“We acknowledge that providing fictitious cases in reasonable excuse tax appeals is likely to have less impact on the outcome than in many other types of litigation… But that does not mean that citing invented judgments is harmless. It causes the Tribunal and HMRC to waste time and public money, and this reduces the resources available to progress the cases of other court users who are waiting for their appeals to be determined. As Judge Castel said, the practice also "promotes cynicism" about judicial precedents… Although FTT judgments are not binding on other Tribunals, they nevertheless "constitute persuasive authorities which would be expected to be followed" by later Tribunals considering similar fact patterns…”
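As the Tribunal noted at [23], invented citations of this kind can usually be exposed simply by searching a free database such as BAILII (www.bailii.org) or the FTT’s own decisions website. For the technically minded, the sketch below (in Python, purely illustrative; the citation pattern and the example citations are my own assumptions rather than any official format) shows the sort of mechanical first pass one might run over a list of authorities before doing the real work of finding and reading them:

```python
import re

# A neutral-citation "sanity check" only: it tells you whether a citation is
# well formed, not whether the case exists. The pattern below is an
# illustrative assumption covering FTT and UT (Tax) citations; it is not an
# official or exhaustive format.
NEUTRAL_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+UK(?P<court>FTT|UT)\s+(?P<number>\d+)\s+\((?P<chamber>TC|TCC)\)"
)

def triage(citation: str) -> str:
    """Flag citations that do not even match the expected format."""
    match = NEUTRAL_CITATION.search(citation)
    if not match:
        return f"CHECK: '{citation}' is not in a recognisable neutral-citation format"
    # A well-formed citation can still be fictitious, so the real check is a
    # manual search of BAILII or the FTT's own website for the named case.
    return f"Well formed ({match.group('year')}, UK{match.group('court')}): still verify it exists"

# Placeholder examples only; neither is a real judgment.
for cited in ["Example v HMRC [2023] UKFTT 123 (TC)", "Example v HMRC (2020)"]:
    print(triage(cited))
```

The point of the sketch is the comment in the middle: a citation that looks perfectly plausible can still be fictitious, so nothing replaces actually locating and reading the judgment.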
Why is this important?
Aside from the implications quite rightly pointed out by the Tribunal at [24], the danger of using AI in the law is substantial.
Increasing Input Will Decrease Output
AI, like all computers, is not perfect: if it is given incorrect inputs, it will give incorrect outputs. The danger of this was aptly highlighted by the SRA in its “Risk Outlook Report: the use of artificial intelligence in the legal market” (20 November 2023), cited at [20(3)] of the judgment:
“All computers can make mistakes. AI language models such as ChatGPT, however, can be more prone to this. That is because they work by anticipating the text that should follow the input they are given, but do not have a concept of 'reality'. The result is known as 'hallucination', where a system produces highly plausible but incorrect results.”
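To make the SRA’s point concrete, here is a deliberately tiny sketch (in Python, purely illustrative and nothing like the scale or sophistication of a system such as ChatGPT) of a “model” that learns only which word tends to follow which in a handful of made-up training sentences. Because it has no concept of whether the sentence it assembles reflects anything that actually happened, it can fluently report an outcome that appears nowhere in its training material, which is the essence of a hallucination:

```python
import random
from collections import defaultdict

# Toy illustration only: a "model" that learns nothing except which word has
# been seen to follow which. The training sentences are invented placeholders,
# not real judgments.
training_text = [
    "In Alpha v HMRC the appellant lost her appeal against the penalty",
    "In Beta v HMRC the appellant lost his appeal against the penalty",
    "In Gamma v HMRC the appellant succeeded in showing a reasonable excuse",
]

# Record, for every word, the words observed to follow it.
follows = defaultdict(list)
for sentence in training_text:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

def generate(start: str = "In", max_words: int = 12) -> str:
    """Assemble a fluent-looking sentence one word at a time, with no notion
    of whether the result reflects anything that actually happened."""
    word, output = start, [start]
    for _ in range(max_words):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

# Sooner or later this prints something like
# "In Alpha v HMRC the appellant succeeded in showing a reasonable excuse":
# plausible, fluent and flatly contradicted by the training sentences.
print(generate())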
If AI goes around “hallucinating”, or completely misunderstanding cases that are then relied upon by lay clients and the general public, then our already stretched justice system will find itself pushed to breaking point as Courts and Tribunals are forced to double- and triple-check everything or spend time debating precedent that does not exist. Increasing the input of fictitious AI-generated cases will decrease the output of the system in actually getting things done.
Professional Ethics
This case ought to be a cautionary tale for all Counsel and solicitors who seek to rely on AI to prepare their cases more efficiently. Whilst legal representatives should be commended for an increased awareness and use of time-saving technology, serious ethical challenges can arise from being wholly reliant on that technology, especially when it gets things wrong, as it so often does.
From the perspective of the Bar, over-reliance on an AI system to generate case law, as in Harber, may, should that case law later turn out to be fictitious, lead to breaches of the following elements of the BSB Handbook:
Core Duty 1 - You must observe your duty to the court in the administration of justice.
Conduct Rule 3.1 - you must not knowingly or recklessly mislead or attempt to mislead the court.
Conduct Guidance 4.3 - recklessly means being indifferent to the truth, or not caring whether something is true or false.
Conduct Rule 3.3 - you must take reasonable steps to avoid wasting the court’s time.
It seems unthinkable that counsel would engage in this sort of practice, but it did happen in the US case of Mata v Avianca 22-cv-1461 (PKC), cited at [20(5)] of the Harber judgment. Whilst “Chat KC” may be a long way off, good old-fashioned human KCs (and we mere mortal junior members of the Bar) should treat AI with an appropriate level of suspicion whenever we use it or see it used in legal proceedings.
This is Dangerous for Clients
A little learning is recognised as a dangerous thing: first-year law students are not let loose on a complex, multi-track trial after their first week of lectures, and most people know better than to listen to that “dodgy bloke in the pub” who has “been around the block a few times” and claims to “know the court system like the back of his hand”. So why are we so quick to trust an AI?
It is understandable that, in the current climate, many lay people do not have the funds to afford proper legal representation and are not given an opportunity to obtain such representation via the safety net of legal aid. The dangers of using AI to prepare a case, as Mrs Harber did, need to be made clear to people so that they can (a) avoid wasting the court’s time and (b) avoid the wrath of a (rightfully) angry Judge who feels as though they have been lied to or misled.
Unless and until AI can think creatively and critically, it cannot be allowed to exist as a viable replacement for human lawyers. That is not a statement born (solely) from a desire to keep the legal profession alive; it is born from concern that people who need proper legal help will end up worse off because no such help is available to them and the AI system they turn to is simply not capable of solving their legal problems.
An Attempt by AI to Write This Article
My conclusions follow, but before getting to them I thought I would see how this article would have turned out had I let AI loose on it, so I tasked ChatGPT with the following question:
Write a 5-page article on the recent decision in Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC) and the dangers of using artificial intelligence in civil litigation.
This is the result of that exercise:
Title: Navigating the Legal Landscape: Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC) and the Perils of Artificial Intelligence in Civil Litigation
Introduction
The intersection of artificial intelligence (AI) and the legal realm has reached a critical juncture with the recent decision in Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC). This case has underscored the potential dangers associated with the use of AI in civil litigation, raising important questions about accountability, transparency, and the reliability of machine-driven decisions. In this article, we delve into the intricacies of the Harber case, exploring its implications and shedding light on the broader concerns surrounding the deployment of AI in legal proceedings.
Background of Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC)
The case of Harber revolves around a dispute between an individual taxpayer, Mr. Harber, and the Commissioners for Her Majesty's Revenue and Customs (HMRC). The crux of the matter was the use of AI algorithms by HMRC in assessing Mr. Harber's tax liability. The algorithms in question were designed to analyze vast amounts of financial data to determine potential irregularities and assess tax liabilities.
In the proceedings, Mr. Harber challenged the validity and fairness of HMRC's use of AI, arguing that the automated decision-making process lacked transparency and denied him the opportunity to understand and contest the basis of the tax assessment. The case reached the UK First-tier Tribunal (FTT), resulting in the landmark decision [2023] UKFTT 1007 (TC).
Implications of the Harber Decision
The decision in Harber has significant implications for the use of AI in civil litigation. The tribunal ruled that the automated process used by HMRC was not sufficiently transparent, and Mr. Harber's right to understand and challenge the basis of the decision had been compromised. The case highlights the importance of striking a balance between the efficiency gains offered by AI and safeguarding the procedural rights of individuals involved in legal disputes.
One key aspect of the decision is the recognition that the use of AI in litigation must align with principles of fairness, accountability, and transparency. The tribunal emphasized the need for robust mechanisms that allow individuals to comprehend the logic behind automated decisions, ensuring that AI is a tool for justice rather than a potential source of injustice.
Challenges Posed by AI in Civil Litigation
The Harber case brings to the forefront the challenges associated with the growing reliance on AI in civil litigation. One of the primary concerns is the "black box" nature of many AI systems, where the decision-making processes are opaque and difficult to decipher. This lack of transparency can impede individuals' ability to challenge decisions effectively, raising issues of due process and procedural fairness.
Moreover, the use of AI in legal proceedings raises questions about accountability. If an algorithmic decision leads to an unjust outcome, who bears responsibility—the developers, the users, or the system itself? Establishing accountability is a complex issue, as AI systems often involve a combination of programming, machine learning, and data inputs.
Another challenge is the potential for bias in AI algorithms. If the data used to train these systems contain inherent biases, the AI may perpetuate or exacerbate existing disparities. In the context of civil litigation, this could result in unjust outcomes, disproportionately affecting certain groups.
The Need for Ethical AI Practices in Civil Litigation
In light of the Harber decision and the challenges posed by AI in civil litigation, there is a pressing need for the legal community to adopt and advocate for ethical AI practices. This involves ensuring transparency in the design and operation of AI systems, conducting thorough assessments of potential biases, and establishing mechanisms for accountability when automated decisions impact individuals' rights.
Legal professionals should actively engage in the development and implementation of AI systems to guarantee that these technologies align with legal and ethical standards. Collaborative efforts between legal experts, technologists, and policymakers are essential to crafting guidelines and regulations that govern the use of AI in civil litigation.
Conclusion
The Harber v Commissioners for HMRC [2023] UKFTT 1007 (TC) decision serves as a critical milestone in the ongoing discourse surrounding AI in civil litigation. As the legal landscape continues to evolve, it is imperative to strike a balance between leveraging the efficiency gains offered by AI and safeguarding the rights and interests of individuals involved in legal proceedings.
The perils of using AI in civil litigation, as highlighted by the Harber case, underscore the need for a proactive and ethical approach to the development and deployment of these technologies. As the legal community grapples with the challenges posed by AI, it is crucial to establish frameworks that ensure transparency, fairness, and accountability, ultimately preserving the integrity of the justice system in the face of technological advancement.
“Chat KC”
As you can see, AI got it completely wrong and invented an admittedly riveting but entirely fictitious story.
Conclusions
So, where does this leave us? It seems that “Chat KC” is a long way off, and there will remain a place for human lawyers for plenty of time yet to come. However, that does not mean that the dangers of AI in the law should be left in the background until they become a widespread problem.
Regulators such as the SRA and the BSB, as well as organisations like the Ministry of Justice, the Law Commission, and universities, ought to get their heads together and take a proper look at how AI can be used safely and effectively in the law. The technology is great when it works: it can save hundreds, if not thousands, of hours on simple tasks or in assisting with research, but it is not a substitute for a highly qualified, critically thinking human lawyer.
Samuel Knight
Pupil Barrister, KBG Chambers