Episode 20: Two truths about AI

Lots of good things come from Canada: beavers, BlackBerrys, Michael J. Fox, maple syrup, and Canadian Underwriter magazine. The latter occasionally delivers decent insights into insurance technology.

Canadian Underwriter recently published an article called ‘As AI proliferates, so does insurers’ risk exposure’. In it, two real-world examples push back against the odd belief that AI can improve almost any insurance-related process.

In one, an AI-driven rating model set an annual consumer motor premium at £2.25 million for a 95-year-old. With little historical data about risks at that tail of the age distribution, the model overreacted to the outlier.
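
To make that failure mode concrete, here is a minimal, entirely hypothetical Python sketch (not the model from the article, and with invented numbers): a flexible curve is fitted to premium data that thins out at older ages, then asked to price a 95-year-old. With nothing anchoring the tail, the extrapolated figure can be absurd.

```python
# Toy, invented example: fit a flexible curve to motor premium data that
# thins out at older ages, then ask it to price a 95-year-old. None of the
# numbers, ages or model choices reflect the insurer's actual system.
import numpy as np

rng = np.random.default_rng(42)

# Plenty of policyholders between 18 and 75, essentially none beyond that.
ages = rng.integers(18, 76, size=500).astype(float)
true_premium = 400 + 8 * np.abs(ages - 45)              # a plausible-ish shape
premiums = true_premium + rng.normal(0, 60, size=ages.size)

# A high-degree polynomial fits the bulk of the data comfortably...
model = np.polynomial.Polynomial.fit(ages, premiums, deg=6)

# ...but beyond the ages it has actually seen, there is nothing to anchor it,
# so the extrapolated figures can be wildly wrong.
for age in (45, 75, 95):
    print(f"age {age:>2}: predicted annual premium ~ £{model(age):,.0f}")
```

The exact numbers will vary from run to run; the point is simply that extrapolation beyond the training data is precisely where human judgement needs to step in.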

More serious AI blunders are reported south of the Canadian border. Class action lawsuits are under way on behalf of individuals who were denied health insurance claims. Plaintiffs say the AI adjudicating the expenses saw no value in treating people with short life expectancies.

The article goes on to discuss the risks arising from slapdash insurance of third-party AI-related risk exposures. I won't dwell on that angle, but it's a good moment to present (and hopefully resolve) two contradictory opinions about AI.

1. AI is fantastic. It has tremendous potential for insurance technology in areas ranging from submission triage and risk verification to claims confirmation. Clever programmers have been using AI in their code for at least two decades, long before it displaced blockchain as the must-have tech for anyone with a systems budget.

2. AI is terrible. It isn't actually intelligent, so it can, and often does, draw incorrect conclusions that fly in the face of anything a human would put forward. It therefore cannot be relied upon for anything of importance, and is simply a hyped-up technology riding the excitement around generative AI and Large Language Models.

Both these opinions are true, because:

• Like any product, not all AI is well made.

• Even the best AI is only as good as its trainer and the data used in training.

• AI often delivers fantastic outputs, but also some howlers.

Because of these facts, it's a terrible idea to use AI in the absence of human judgement. Unfortunately, some people do accept AI outputs blindly, as the healthcare example shows. Quotech deploys AI judiciously to help our clients make better decisions. All the systems you use should do the same, even the ones not wrapped in AI hype.
