Responsibility of OpenAI for defamation by ChatGPT

ChatGPT is an amazing product, underpinned by amazing technology. It can be incredibly useful. But notoriously, the information the bot produces is not always reliable. As much is admitted by the organisation behind ChatGPT, OpenAI.

Inconsistent reliability is a necessary consequence of the large language model (LLM) underlying ChatGPT. It works by choosing words in a sentence on the basis of probabilities. There is only a contingent relationship between the propositions produced by LLMs and the truth. So, for example, ChatGPT can make basic mathematical errors; it does not always give effect to deductive logic. This is why some AI gurus, like Gary Marcus, argue that AI research needs to go in another direction: one where truth and deference to facts are built into the technology from the ground up.
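For readers who want to see what "choosing words on the basis of probabilities" actually means, here is a toy sketch in Python. This is emphatically not OpenAI's code: the candidate words and their probabilities are invented for illustration. The point is the mechanism itself, which picks the next word in proportion to a probability score with no check on whether the resulting sentence is true.

```python
import random

# Toy next-word distribution. In a real LLM these probabilities come from a
# neural network conditioned on the whole prompt; here they are hard-coded
# purely to illustrate the sampling step. Imagine the prompt so far is
# "Brian Hood was ..."
next_word_probs = {
    "exonerated": 0.55,  # one plausible continuation
    "convicted": 0.30,   # also plausible to the model, but factually wrong
    "promoted": 0.15,
}

def sample_next_word(probs, rng=None):
    """Pick the next word in proportion to its probability.

    Note that nothing in this function checks whether the chosen
    word makes the sentence TRUE; probability is the only criterion.
    """
    rng = rng or random.Random()
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))
```

Run repeatedly, a sketch like this will sometimes emit the factually wrong continuation, because the model's probability estimates, not the facts, drive the output.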

A truth-centred AI would be wonderful, but that’s not the kind of technology that has taken the world by storm in recent months. ChatGPT vacuums the content from the web and pumps out the best content it can come up with in response to a prompt. Some of that content is great, and some of it is nonsense.

Where nonsense damages a person’s reputation, it may be the subject of a defamation dispute. Recently, that prospect has been realised with respect to content produced by ChatGPT.

Brian Hood is Mayor of Hepburn Shire in Victoria, Australia. Members of the public told Hood that ChatGPT was identifying him as a party to a high-profile bribery scandal. Prompts that included Hood's name returned results identifying him as a wrongdoer in the scandal who had been convicted of a crime and sentenced to prison.

[Image: Brian Hood]

Those results were utterly false. Hood was actually the whistle-blower.

Hood has engaged lawyers, who have sent OpenAI a concerns notice. Unless the company remedies the situation in coming weeks, a defamation case is likely. That case will break ground in confirming the capacity of ChatGPT content to be the subject of a defamation action, and the prospective liability of OpenAI for that content.

Prospective liability for content produced by ChatGPT

It should be uncontroversial that content produced by ChatGPT may be amenable to defamation liability.

Throughout Australia, defamatory ‘matter’ is the cause of action: Defamation Act 2005 (WA) s 8. ‘Matter’ is defined in a technology-neutral way to comprehend any potential means of communication: Defamation Act 2005 (WA) s 4. Text generated by a bot is within its scope. So for example, auto-complete results of the Google search engine are capable of being defamatory matter, as the High Court recognised in Trkulja v Google LLC (2018) 263 CLR 149.

If defamatory matter is comprehended by a person other than the person defamed, there will be publication. Following the introduction of the statutory serious harm element in much of Australia (see, eg, Defamation Act 2005 (NSW) s 10A), ordinarily, the content must be seen by a non-negligible number of people to be actionable. But even if the matter is seen by just a handful of people, if the meaning conveyed is serious enough, the serious harm threshold may be satisfied: see Scott v Bodley (No 2) [2022] NSWDC 651, [45].

Who might be liable for that publication? Any person, including a company, that participates in the communication of defamatory matter to any degree may be on the hook as a publisher: Fairfax Media Publications Pty Ltd v Voller (2021) 95 ALJR 767, [30].

When defamatory matter is produced by a machine, and where that machine is produced by a company, the attribution of responsibility for that matter to the company would ordinarily be straightforward, applying orthodox principles on the attribution of corporate responsibility.

The situation is trickier where ‘the machine’ provides links to content, and the content underlying the link is defamatory, but the presentation of the link is not defamatory on its face. In Google LLC v Defteros (2022) 96 ALJR 766, the majority of the High Court held that Google did not publish webpages linked by the results page of its search engine, although it published the page of links. The majority view still leaves room for the possibility that a person could be responsible for content underlying the link, particularly where the link is not presented in a neutral way; for example, where its presentation is prioritised as a sponsored link.

The content produced by the ChatGPT bot is quite different to the content under consideration in the Defteros case. The machine has produced content which is defamatory on its face; the user need not dig any further. For the purposes of Australian defamation law, the company behind the tech is a publisher of any content produced by the bot, whatever the prompt inputted by the user.

Would OpenAI have a defence? It would, with respect to defamatory content it does not know about. That defence is available under the Online Safety Act 2021 (Cth) s 235, and also the innocent dissemination defence provided by the Defamation Acts of the States and Territories (see, eg, Defamation Act 2005 (WA) s 32). The innocent dissemination defence is defeated if the defendant knew, or ought reasonably to have known, that the matter was defamatory, or if its lack of knowledge was due to its own negligence.

Neither of those defences would be available here. The publisher was put on notice by Mayor Hood’s concerns notice.

While defamation defences for tech companies will soon be strengthened by Australian legislators, even if those defences were in force now, they would not prevent Hood from succeeding. The incoming law requires tech companies to have a system for receiving complaints regarding defamatory content, and to remove the content within a certain period of time. ChatGPT has no such system.

The transnational dimension

A person defamed by ChatGPT cannot sue ChatGPT itself. They would need to sue the company, or companies, responsible for it.

These days, ‘OpenAI’ is a corporate group. The non-profit in which Elon Musk and others invested is OpenAI Inc. OpenAI Inc has a for-profit subsidiary, OpenAI LP.

OpenAI LP describes itself as a ‘capped-profit’ company; it is a Delaware limited partnership controlled by a single-member Delaware company, which is in turn controlled by OpenAI Inc. Under Delaware law, a limited partnership is a distinct entity from its members. OpenAI LP has its principal place of business in California, but is relevantly located in Delaware, USA.

While the corporate machinations of OpenAI are a little hard to digest, it seems that OpenAI LP is the appropriate entity to go after if you find yourself defamed by ChatGPT content. From our perspective in Australia, OpenAI LP is a foreigner located outside of our jurisdiction.

Australian courts will readily entertain defamation suits against foreigners outside Australia where the foreigner publishes defamatory content in Australia. John Barilaro did just that when he sued Google LLC over content the company published on YouTube, which was originally created by commentator / activist / pest FriendlyJordies.

Suing a foreign company requires ‘service outside the jurisdiction’, which in the case of a company would ordinarily require delivery of a court document to the company’s corporate headquarters. In circumstances where you know where the foreigner is located, and you can present evidence to a court showing that you have been defamed by that foreigner’s publication, service outside of the jurisdiction won’t be too difficult.

OpenAI LP might then just ignore service, pretending they are not subject to the Australian court’s jurisdiction. The companies behind Twitter have behaved like that when faced with Aussie lawsuits.

But if OpenAI LP were to do that, it would risk ‘default judgment’. This means they lose by failing to show up.

Enforcing a defamation judgment against OpenAI

If an Australian court then determines that OpenAI LP is liable for defamation, it would do so by pronouncing a ‘judgment’. In that scenario, if the court decides Hood is owed damages, he would be the ‘judgment creditor’. He would then need to enforce the judgment against OpenAI LP, which would be the ‘judgment debtor’.

Enforcing a judgment against a judgment debtor within the court’s territorial jurisdiction, or within Australia, is not too difficult, provided you can locate them. Courts have powers to hold people to account. In WA, for example, you can get an enforcement order to force a recalcitrant judgment debtor to pay up. That order might be physically enforced by a sheriff (assisted by police), who can do things like turn up at a person’s house and seize property.

Courts can also enforce their orders by holding persons who disobey orders guilty of contempt of court. Contempt of court is serious. Australian courts have various powers to punish people for contempt. A ‘contemnor’ (a person guilty of contempt) might be smacked with a big fine, or even sent to prison.

What happens, though, if a foreigner refuses to comply with an Australian court’s orders?

With respect to defamation cases, that question is significant. The law of the USA is quite different with respect to defamation liability. Among other things, the US has a constitutional right to freedom of expression protected by their First Amendment. Australia lacks an equivalent. Compared to Australian law, the US takes a different approach to the balance to be struck between allowing freedom of expression on the one hand, and protection of reputation on the other.

In 2010, the US Congress passed the ‘SPEECH Act’. It provides that a US ‘domestic court shall not recognize or enforce a foreign judgment for defamation unless the domestic court determines that’ the relevant foreign law would provide the same protections for freedom of speech as would be afforded by the United States Constitution (SPEECH Act s 3; United States Code, title 28, Part VI, § 4102).

Under US law, a defamation judgment against OpenAI over ChatGPT content would be very unlikely. This means that the SPEECH Act would likely prevent the enforcement of an Australian defamation judgment in the US.

The practical consequence of all this is that even if Hood wins his defamation case, OpenAI may refuse to pay, hiding behind the shield of US law and the fact the company and its assets are not in Australia.

Does this mean the case is not worth bringing?

Not at all.

OpenAI may choose to pay a debt owed under a judgment of an Australian court in order to ‘keep up appearances’ in the Australian market. Although our country is not as powerful as the countries that would provide the bulk of ChatGPT’s user base, it is a developed nation of 25 million people, and a US ally. It would be good politics, and good business, for OpenAI to pay up.

If OpenAI refuses to pay, it risks being held in contempt of court. And if OpenAI is in contempt, then its directors face personal exposure if they wilfully fail to take reasonable steps to ensure that the company obeys the court. An English court once explained:

‘In our view where a company is ordered not to do certain acts or gives an undertaking to like effect and a director of that company is aware of the order or undertaking he is under a duty to take reasonable steps to ensure that the order or undertaking is obeyed, and if he wilfully fails to take those steps and the order or undertaking is breached he can be punished for contempt. We use the word ‘wilful’ to distinguish the situation where the director can reasonably believe some other director or officer is taking those steps’: Tuvalu v Philatelic Distribution Corp Ltd [1990] 1 WLR 926, 936E–F (Woolf LJ).

Australian courts will follow the policy of that decision: see Australian Competition and Consumer Commission v Goldstar Corporation Pty Ltd [1999] FCA 585, [41] (Kiefel J).

English courts have also been willing to exert their contempt powers against persons outside of the court’s territorial jurisdiction; see Dar Al Arkan v Al Refai [2015] 1 WLR 135. Australian courts would likely follow suit.

[Image: Sam Altman of OpenAI]

With respect to the situation of OpenAI, it is notable that its CEO and co-founder is Sam Altman. A little googling reveals that Altman is a very wealthy American. His current partner is Oliver Mulherin, an Australian software engineer. If Altman ever wants to visit Australia, ensuring that OpenAI LP is not in contempt of an Australian court would be a sound decision.

Even in a worst-case scenario, if OpenAI never pays a defamation judgment pronounced by an Australian court, the judgment creditors to such judgments—people like Mayor Hood—are not left with nothing. They will be left with vindication of their reputation; with proof that they have been defamed. For many defamation plaintiffs, that is the real point of suing.

Conclusion

ChatGPT is awesome. It has the potential to do a great deal of good for humanity. But it also has the potential to do a great deal of damage. That damage may be felt around the world, not just in those jurisdictions close to OpenAI.

It is entirely reasonable that Australians avail themselves of the remedies provided by Australian law when foreign companies cause them damage. That moral claim is even stronger in circumstances where foreign companies make a deliberate decision to be available to the global market, and so within the Australian market. OpenAI chooses to make ChatGPT available in Australia; its amenability to Australian law is a foreseeable and reasonable consequence of that decision.

On the subject of AI, Gary Marcus recently told The New York Times that ‘[w]e have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns’. Civil litigation, like that following Mayor Hood’s defamation dispute, may fill the gap left by legislators and other regulators. If the result is that OpenAI takes steps to make sure its technology minimises harm to individuals, then I support it.

Dr Michael Douglas is an academic and defamation litigator. His PhD was on the subject of cross-border defamation litigation. Contact: [email protected]

Keung L.

Paralegal | Legal Executive | Documentation Specialist | Electronic Records Specialist | Legal Technologist | E-Discovery | Intellectual Property | IT Specialist |

1y

My greatest nightmare is to write a defamatory statement about ChatGPT or another AI (other AI platforms are available). Then the AI is smart enough to compose a writ against me and sue me in my jurisdiction, and also file the court papers and compile the evidence packs for litigation! Such nightmares may come true in future if AI gets smarter and has an artificial legal counsel to fight its case. It could go the same way as IBM's Deep Blue chess computer, but for law. Oh the irony, divine intervention and prophecy. AI will be less Terminator and more like Horace Rumpole or Lord Denning!

Neerav Srivastava

PhD (Monash). Lecturer, Deakin Law School

1y

Hi Michael, re defences I’d argue that ChatGPT (OpenAI) is the primary publisher. What do you think? It may be the first time the defamatory statement has been published; unlike a search engine, the accuracy of the statement is important; and in producing the content there is analysis involved (even if it’s a repetition, it’s a selection), so it’s not a mere conduit.
