When AI Gets It Wrong: The Advocate Act Fiasco

Picture this: It’s a sunny afternoon, and I’m at a friend’s house, casually chatting with her daughter, who’s knee-deep in law books, preparing for her exams. The room is filled with the smell of fresh coffee and the sound of diligent page-flipping—classic exam prep ambience.

Amidst the stress and highlighter fumes, she throws a question my way: “Hey, do you know when the Advocate Act was introduced?” I, feeling quite confident and channeling my inner legal guru, reply, “Oh, that was in 1961!” She looks at me, puzzled, as if I’ve just claimed the moon is made of cheese.

“No way,” she insists, “I’m sure it’s not 1961,” and holds up ChatGPT on her phone as proof. Now, as a seasoned advocate myself, I’m somewhat invested in proving my answer right, so I say, “Let’s put this to the test. Type ‘AI is wrong’ and see what happens.” With a mixture of curiosity and skepticism, she does just that and, lo and behold, ChatGPT backs me up.

But, ah, the plot thickens! Still not convinced, she googles the answer, and it turns out her little AI assistant had indeed slipped up.


I told her that a few years back, a similar incident had happened, albeit on a grander stage. A lawyer, in a rather ambitious attempt to fast-track his case, used ChatGPT to whip up legal precedents. The result? A concoction of fictitious cases that would make a novelist blush. The court’s reaction? It was like discovering your cherished family heirloom is actually a well-crafted replica.


The judge, in a rare moment of judicial drama, considered sanctions as the legal community reeled from this AI mishap. The poor lawyer, Steven Schwartz, found himself at the center of a legal storm. Turns out, ChatGPT’s idea of “real” cases was as imaginative as a bedtime story, and Schwartz, unaware of the AI’s creative liberties, ended up citing cases that didn’t exist outside the realm of fiction.

The moral of the story? Even AI, for all its digital wizardry, isn’t immune to mistakes. While it can be a helpful tool, it’s not quite ready to replace the good old human touch—or in this case, human fact-checking. So next time you’re in doubt, remember: whether it's the Advocate Act or the latest legal precedent, always double-check before you type “AI is wrong” into your search bar.
