AI IS NOT PERFECT
Kamran Hameed, LL.M., MS-RMI, CRIS, TRIP
Expert in Insurance and Risk Management.
As I wrapped up my final classes last week — more on that in the coming weeks — I received my weekly newsletter from Law.com via email. One of the articles, "Lawyer's Use of Artificial Intelligence Leads to Disciplinary Action," by Michael A. Moran, caught my attention.
The article, which may be behind a subscription wall, detailed a Florida attorney facing disciplinary action for using "inaccurate citations" and wholly fabricated cases in his pleadings. Although the attorney denies the allegations and attributes them to an oversight in citations provided by his client, the use of artificial intelligence is a central element of the case: the AI queries produced incorrect citations and fabricated cases, which were then filed in a real court proceeding, compounding the issue.
This incident marks the second AI-related snafu to cross my desk in a year. While the verdict in this case remains uncertain, in the previous instance each attorney involved was fined $5,000 and ordered to notify every judge who had been falsely identified as the author of the fabricated rulings.
The very next day, as I pondered the ramifications of such actions in the legal field, I received my IRMI Construction Risk Manager newsletter, edited by Ann Hickman, CPCU, CRIS. Hickman's opening message, "Open AI is Challenged by Complex Insurance Questions," provided further insight into AI's role. She wrote:

"I recently asked an open AI tool a common construction insurance question: 'Is construction defect an occurrence under the CGL policy?' On the surface, the answer seemed surprisingly good, but a closer inspection revealed several errors."
The second sentence of that quote answered my pondering on the use of AI in law, risk management, and insurance. Hickman is right. In my own experience, when asked complex questions, especially ones pertaining to insurance coverage, insurance law, and other risk management topics, not only OpenAI's ChatGPT but also Google's Gemini and Microsoft's Copilot gave answers that were "surprisingly good" on the surface but revealed several errors on closer inspection.
For instance, when I asked ChatGPT about the exclusions in the ISO General Liability policy, it listed eleven exclusions but omitted crucial ones like "Liquor Liability" and several of the exclusions under Coverage B. Microsoft's Copilot fared even worse, consistently failing to list more than five exclusions and inventing its own exclusion titles instead of adhering to those on ISO's CG 00 01 04 13 form.
Professionals familiar with the CGL form, such as underwriters, brokers, and insurance product designers, understand these basics thoroughly. However, an inexperienced individual relying on such incomplete information could face catastrophic consequences, including multimillion-dollar claims and potential legal liability.
It's important to clarify that I'm not suggesting AI is entirely detrimental. Reflecting on my own experiences, I wish I had had access to ChatGPT during my struggles with coding homework while pursuing my Associate in Computer Sciences from 2015 to 2017. Furthermore, during my time in the FSU Dr. William T. Hold/The National Alliance Program in RMI Master's in Risk Management and Insurance program, neither ChatGPT nor any other AI tool was available, or at least we were not aware of any. Had they been available then, such tools could have greatly assisted us with tasks like calculating the dreaded "Incurred But Not Reported" (IBNR) amounts in our Data Analytics class or the finance projects in our Personal Financial Planning class.
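For readers unfamiliar with IBNR, the chain-ladder method applied to a loss development triangle is one common way to estimate it, and it is exactly the sort of mechanical arithmetic an AI assistant could have sped up. Below is a minimal Python sketch of the idea; the triangle values are purely hypothetical, made up for illustration:

```python
# Chain-ladder sketch for estimating IBNR reserves.
# Cumulative reported losses by accident year (rows) and
# development age (columns); the numbers are hypothetical.
triangle = [
    [1000, 1500, 1750, 1800],  # accident year 1: fully developed
    [1100, 1650, 1925],        # accident year 2
    [1200, 1800],              # accident year 3
    [1300],                    # accident year 4: least developed
]

n = len(triangle)

# Age-to-age development factors: losses at the next age divided by
# losses at the current age, summed across accident years with both.
factors = []
for age in range(n - 1):
    num = sum(row[age + 1] for row in triangle if len(row) > age + 1)
    den = sum(row[age] for row in triangle if len(row) > age + 1)
    factors.append(num / den)

# Project each accident year to ultimate; IBNR = ultimate - reported.
total_ibnr = 0.0
for i, row in enumerate(triangle):
    ultimate = float(row[-1])
    for age in range(len(row) - 1, n - 1):
        ultimate *= factors[age]
    ibnr = ultimate - row[-1]
    total_ibnr += ibnr
    print(f"Accident year {i + 1}: reported={row[-1]}, "
          f"ultimate={ultimate:.0f}, IBNR={ibnr:.0f}")

print(f"Total IBNR reserve: {total_ibnr:.0f}")
```

Real reserving involves far more judgment (tail factors, trends, credibility weighting), which is part of why a glib AI answer on the subject can be dangerous, but the basic mechanics look like this.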
Furthermore, AI is great for various general tasks. For instance, I have used ChatGPT to suggest names for my upcoming websites and to recommend keywords for the About section of my profile. At times, I have sought its help to correct or fill in missing WordPress code on my websites. ChatGPT is also great at summarizing long articles and highlighting their key points.
However, when it comes to producing serious material, such as academic writing, legal analysis, and references, AI can at best generate what tech experts call "slop." Slop has been found in journalism, law, and academia alike. This issue is not limited to ChatGPT; it is just as persistent in other services like Microsoft Copilot, Google Gemini, and Meta AI.
Before AI loses its credibility completely, it needs to up its game. Let's not let AI become the new Wikipedia, which, despite years in publication, has yet to prove its credibility in academia or in general. Perhaps it is time for developers to install gatekeepers and fact-checkers and to write algorithms that fetch information from credible sources. Until then, AI is not perfect, and it has and will continue to have its pitfalls.
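To make the "fetch from credible sources" idea concrete, here is a minimal sketch of what such a guardrail might look like. Everything in it, the vetted snippets, the crude keyword matching, the source labels, is a hypothetical simplification for illustration, not how any existing AI product actually works:

```python
# Toy "answer only from vetted sources" guardrail. The corpus and
# the keyword matching below are hypothetical simplifications.
VETTED_SOURCES = {
    "ISO CG 00 01 04 13": (
        "The ISO Commercial General Liability coverage form lists "
        "the exclusions that apply under Coverage A and Coverage B."
    ),
    "IRMI Glossary": (
        "Incurred but not reported (IBNR) losses are losses that "
        "have occurred but have not yet been reported to the insurer."
    ),
}

def answer_from_sources(question: str) -> str:
    """Answer only from vetted text, with a citation; decline
    rather than fabricate when no vetted source matches."""
    # Keep only substantive words (a crude stand-in for retrieval).
    words = {w.strip("?.,!()").lower() for w in question.split()}
    words = {w for w in words if len(w) > 4}
    for source, text in VETTED_SOURCES.items():
        text_words = {t.strip("?.,!()").lower() for t in text.split()}
        if words & text_words:
            return f"{text} [Source: {source}]"
    return "No vetted source found; declining to answer."

print(answer_from_sources("Which exclusions apply under the CGL form?"))
print(answer_from_sources("Who wrote the fabricated rulings?"))
```

The design choice worth noting is the refusal path: if an answer cannot be traced to a vetted source, the tool says so instead of inventing a citation, which is precisely what tripped up the attorneys above.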
What has been your experience with AI in your professional field? Have you encountered similar challenges, or do you see more potential than pitfalls? Share your thoughts and comments below.