#20 - Generative AI: the harder the fall?
Bertrand ROBERT
Supporting Insurtech & insurers to reinvent the industry | 1st employee @ alan, ex-COO @ wakam
Since last year, generative AI has sat at the peak of the hype, according to Gartner's famous curve. If the "Hype Cycle" is anything to go by, logic dictates that the next stage will be less fun. And we're starting to hear a few dissonant voices.
Anyone who has used ChatGPT has run into its limitations at one point or another. Under those conditions, answering 90% of requests correctly is attractive, but it's not enough for an industrial roll-out.
So how can we prevent generative AI from joining other innovations of the past in the "trough of disillusionment"?
Over the past few months, I have had the opportunity to support a number of insurance companies as they explore the wonderful world of AI. Here are some of the lessons I have learned.
Pitfalls to avoid
The promise of generative AI is such that it's tempting to dive in headlong. I can think of at least three pieces of advice for keeping the story from falling apart.
Think problem vs. solution
When we talk about generative AI, it's easy to jump straight to the solution: "everyone swears by generative AI, so we have to get on board too, otherwise we'll look like we're behind". Not asking the "why" question before diving in is the best way to build a solution that doesn't address any real problem.
When it comes to GenAI, my experience is that it's better to start with well-known use cases that you want to augment with AI: you know what problems still need to be solved, and what incremental benefits you could get out of it. That is much more fertile ground for experimentation than embarking on disruptive quests where we don't know where we're going to end up, or how we're going to get there.
Data is king
We tend to forget that the "P" in GPT stands for "pre-trained"; and to train an AI model, you need data.
So there are two options: either you use pre-trained models and accept results that won't necessarily be 100% relevant to your use case, or you use your own data... provided you have any.
In 'traditional' businesses such as insurance, having a structured, complete and up-to-date knowledge base is not something you can take for granted. With AI, this becomes a real differentiator, particularly with the possibilities offered by RAG (retrieval-augmented generation), which grounds answers in your own documents and limits the risks associated with training data. It is therefore key to make an objective inventory of the data available, its level of quality, and how it can be used to define the initial roadmap.
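To make the RAG idea concrete, here is a minimal sketch of the retrieval-and-prompting loop, with a toy in-memory knowledge base and a simple TF-IDF retriever standing in for a production embedding model and vector store; the final call to an LLM is deliberately left out, since that part depends on the provider you choose.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base: in practice, policy wordings, claims guidelines, FAQs, etc.
documents = [
    "Water damage claims must be reported within five working days.",
    "The household policy covers theft only if forced entry can be proven.",
    "Glass breakage is covered under the comprehensive motor policy.",
]

# Index the documents once; a real deployment would use embeddings and a vector store.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the question."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    best = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in best]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to whichever LLM you choose."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    print(build_prompt("Is a broken car window covered?"))
```

The answer can only ever be as good as the documents retrieved, which is exactly why the data inventory comes first.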
Define the rules of the game
AI is a fantasy factory: an infinite extension of the realm of the possible, but also job destruction, risks of fraud and impersonation, and so on. There is no shortage of topics.
AI arouses the same fears in teams today as outsourcing did in the past: once you dip a toe in, you've started a cycle whose end point nobody can see.
Whether you are optimistic or paranoid, it's important to adopt a transparent roadmap, to spell out what you do or don't want to do with AI, the resources you're allocating to it and how your teams are being supported.
From a more practical point of view, deploying AI requires the involvement of both 'business' and 'tech', so the playing field needs to be clear if we want collaboration without hidden agendas.
When to go for it?
There have been times when I've seen the temptation to reach for generative AI where traditional automation solutions are perfectly capable of doing the job. Such is the power of hype!
I see three criteria for judging whether a move towards generative AI is worth it. It's worth noting from the outset that these conditions apply to many technology choices beyond GenAI.
Once a UX, always a UX
AI is there to make the experience simpler, more automatic, more fluid. No need to elaborate.
... Except on one point: reliability. AI starts off with a handicap: even when its reliability exceeds that of humans, the slightest fault will be pointed out. Look at the setbacks of the autonomous vehicle: the slightest collision makes headlines, even though the rate of occurrence is vanishingly low. That's just the way it is, and we have to live with it.
That's also why a human supervision phase will always be preferable.
ROI
Another obvious point. When we talk about generative AI, it's easier to assess the "R" in ROI if we start with existing use cases.
Estimating the unit productivity gain is relatively straightforward; betting on reliability, or on the percentage of transactions that will actually be affected, is far less obvious.
The "I" of ROI is even worse: when you ask a supplier to provide an AI capability, you'll find there's still a lot of trial and error when it comes to pricing; if you build your own solution, the resources and iteration timescales are hard to anticipate.
So is there no solution? There is: work in successive iterations, steering the trajectory from both a technological and a business-model point of view.
Bandwidth
Contrary to what using ChatGPT might lead you to believe, generative AI does not operate in plug-and-play mode. You have to learn, test, iterate. In short, you need a minimum of patience and - as I said earlier - you always need to be focused on the problem to be solved, not on the solution.
What's more, this requires a close working relationship between Tech and Business. Both must have this bandwidth... and at the same time.
How do we get there?
I've identified three ways forward. They resonate with the three conditions set out above.
?? "AI inside"
There isn't a CRM platform, ERP or even a Cloud management solution today that doesn't claim to boost its functionality with AI.
The idea is to upgrade the tools we already have. On paper, this is the simplest option.
But you have to ask yourself to what extent AI adds real value to the product compared with its original functionality. And, of course, what price will be asked in return.
Next, you need to assess how the model can be used in your own business. An e-retailer's CRM needs are not the same as those of a B2B insurer. And behind this lie insurance-specific issues, such as how AI will use personal data, health data, and so on.
Vertical application
The second tactic is to adopt an application dedicated to a specific functionality, whose capabilities are multiplied by AI: routing emails, taking phone calls, scanning documents, etc.
The assumption here is that vendors have in-depth expertise in the subject they are dealing with, which suggests that the impact of AI will be maximized.
The downside is that this new component will have to be integrated into the company's infrastructure. In insurance, many players have not yet migrated to the cloud, whereas these dedicated solutions are natively cloud-based. This is not an obstacle, but it does add complexity.
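To illustrate what "a specific functionality multiplied by AI" can look like, take the email-routing example: below is a minimal sketch assuming the openai Python client (v1+) and an API key in the environment; the model name and routing categories are illustrative assumptions, not any vendor's actual product.

```python
# Minimal sketch of LLM-based email routing; not a production pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["claims", "policy_change", "billing", "complaint", "other"]

def route_email(subject: str, body: str) -> str:
    """Ask the model to pick exactly one routing category for an incoming email."""
    prompt = (
        f"Classify this insurance email into one of: {', '.join(CATEGORIES)}.\n"
        f"Subject: {subject}\n"
        f"Body: {body}\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever your provider offers
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content.strip().lower()
    # Fall back to manual triage rather than guessing when the model goes off-script.
    return answer if answer in CATEGORIES else "other"

print(route_email("Broken windscreen", "My windscreen cracked on the motorway yesterday."))
```

A dedicated vendor obviously goes far beyond this sketch (confidence thresholds, audit trails, integration with the policy administration system), which is precisely where its expertise is supposed to pay off.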
Horizontal platform
The last option is to build your own AI assets, using these emerging tools that enable you to develop 'assistants' that meet your company's specific needs.
While this guarantees genuine customisation of applications, it also means you need the skills required to maximise their impact. It means investing in more than just the tools: in the DNA that goes with them.
The conditions for success go beyond technology alone
In AI, as in most new technologies, success depends not just on technical skills, but on the ability to explore shifting territory.
We find the same obstacles that can hamper traditional organisations... and not just when it comes to AI.