The AI buck still stops with the humans

As artificial intelligence becomes ever more prevalent, it’s getting trickier to tell true information from fake. But when things go wrong, it’s no use blaming the bots.

As the saying goes: “A lie can run around the world before the truth has got its boots on.” We’ve witnessed this over the past decade, with the internet driving a rise in fake news and disinformation in the 2010s, until the situation reached a perfect storm during the pandemic. But I wonder if the worst is yet to come.

There is an unprecedented volume of information bombarding us every day, and it is increasingly difficult and time-consuming to distinguish between what is credible and what is fake. Now, artificial intelligence (AI) chatbots are increasingly mediating our access to that information.

Like all technology, AI itself is neutral. In fact, in 2024, AI is nothing new or even that exciting anymore – we’ve all been using it for years. If you’ve used Siri, a navigation app, facial recognition to unlock your phone, or Gmail’s spam filter, you’ve used AI.

AI filters and shapes our world

However, the generative AI (GenAI) entering the mainstream through tools such as ChatGPT has reignited the AI debate, including among the accountants attending our user conference in May. The big step change is that GenAI doesn’t just make decisions about existing data: it can create new content or data. It does this by learning from massive data sets, spotting patterns and structures, and using those to create new content.

This means that the chatbots present a view of the world, including facts, figures and analysis, that we can use to make decisions. And it makes sense that instead of using the internet to research the top place to eat in Santorini this summer, for instance, we can ask a chatbot to do this for us. It’s quicker and easier, and the efficiency and productivity benefits for the workplace are obvious.

In the race to do more, and to work quickly and smartly so that our businesses and clients can make better, data-informed decisions, is it any wonder that GenAI tools are the next big thing? But what happens if we can’t trust them?

And we can’t always. AI training data sets are pulled from the internet, so they contain the same falsehoods, misinterpretations and ambiguities that the internet does. Plus, the data sets tend to be up to two years old – think about how much has happened in the past two years and consider what is missing.

Having lived through the rise of fake news, and knowing how much of the information already out there on the internet is untrue, why would we blindly trust AI chatbots at all? Because generative AI also takes its data from the internet, it has the potential to amplify existing false information. It can also contribute new falsehoods through fictions it creates itself (these are called hallucinations). Yet, bizarrely, it seems we do trust the bots wholeheartedly.

Egg on their faces

Legal cases have already been thrown out of court because lawyers based their arguments on fictitious case histories that generative AI presented as the truth. In New York in June 2023, a judge fined lawyers $5,000 for presenting fictitious cases in a legal brief. Then in July, lawyers arguing a case in Johannesburg used fictitious details to support their client’s case. Their client had to pay the defendant’s legal fees, but the judge maintained that the egg on the lawyers’ faces was punishment enough.

(This is fairly ironic, given lawyers are typically the first to say you should hire them rather than, for instance, let AI review your legal documents.)

AI hallucinations

Why do these hallucinations happen? AI learns from its training data sets through pattern recognition, not true understanding. The output is based on the statistical likelihood of certain words appearing in a certain order, not on the AI grasping what it is actually saying. If the training data is missing information, or includes biased, incorrect or misleading data, the chatbot can’t recognise this and will present it as fact. And it does so in a confident, authoritative way that makes the replies sound authentic.
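To make the point concrete, here is a deliberately toy sketch in Python (not how any real chatbot is built, and with probabilities invented purely for illustration). It picks each next word by statistical likelihood alone, with no notion of whether the resulting sentence is true – which is roughly why a model trained on flawed data will confidently repeat those flaws.

```python
import random

# Toy "language model": for each preceding word, the likelihood of possible
# next words, as if learned from (hypothetical) training text.
# All values here are invented for illustration only.
next_word_probs = {
    "the":       {"capital": 0.6, "restaurant": 0.4},
    "capital":   {"of": 1.0},
    "of":        {"australia": 1.0},
    "australia": {"is": 1.0},
    "is":        {"sydney": 0.7, "canberra": 0.3},  # the wrong answer dominates the "data"
}

def generate(start, steps=5):
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Sample the next word by likelihood alone - there is no check for truth.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # most often: "the capital of australia is sydney"
```

Real systems work at vastly greater scale and sophistication, but the underlying point stands: the output reflects patterns in the training data, not an understanding of the facts.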

So, in the example above, when researching a dinner spot in Santorini, if you used ChatGPT you would have found yourself eating at 2021’s hotspot, because ChatGPT’s training data only goes as far as September 2021. If you had used an alternative chatbot, say, Claude AI, you would have fared somewhat better and at least found out what last year’s top restaurant was – Claude’s data goes as far as August 2023.

Legal and regulatory implications

A slightly disappointing meal out is one thing. However, advising clients based on out-of-date tax laws or incorrect data could significantly affect their tax filings, budgeting, forecasting and investment strategies. At worst, this could have legal and regulatory implications; at best, you’d have egg on your face, just like the lawyers.

On the one hand, generative AI is a tool that is quickly becoming part of our daily lives out of necessity. It’s the only way we can keep pace with the constant change in today’s world, deliver excellent service to our clients, and achieve the shorter budget cycles and quicker decisions I prioritise. On the other hand, the tool itself acknowledges it is flawed, has no common sense, and should be fact-checked rather than blindly trusted.

You can’t blame the AI

“But the AI made a mistake” cannot be a defence when things go wrong. Air Canada discovered this earlier in the year when a tribunal ruled it was liable for incorrect information that a chatbot supplied to a customer. The company argued that the bot “was responsible for its own actions”, which is both laughable and a slippery slope to a very dangerous place. A passenger had used the bot to check the refund terms of a ticket. When it came time to claim the refund, the airline maintained that refunds could not be granted retroactively – contrary to what the chatbot had said.

There’s another irony here. If businesses are constantly encouraging us to engage with their chatbots, the very least they could do is ensure the information is correct and then honour what the bot says. The legal tribunal agreed and rejected Air Canada’s contention that the bot was a separate legal entity. The airline was forced to pay the refund with interest and fees.

The buck stops with you

So today it might feel like a lie can circumnavigate the globe and travel to Mars before the truth has even blearily opened its eyes and thought about its first coffee of the day. But businesses should beware: the buck stops with them, and they are responsible for what they say, whether or not AI was involved.


As published in AccountingWeb, June 2024



David Lacey

Communications Manager at Orchid Systems

4 months ago

Thanks Kevin - a nicely written and accessible article about the topic on everyone's lips. All your own work, or did you have help from ChatGPT? (Joking, sort of... you never know these days, but I guess that's the point?)
