Thoughts from the Digital Insurance Summit
I was one of the hundred-odd people who attended Forefront Events' Digital Insurance Summit in Melbourne last week, and here are my thoughts on the day, with the TL;DR takeaway being:
My overall conclusion was that seamless digital experiences are still a work in progress, irrespective of the size of your organisation, and that delivering them is a ‘painting the Harbour Bridge’ activity that never concludes, so allocate your funding accordingly.
Digital Basics
The notion of what digital entails was discussed at length and while the focus was on consumer interactions, rather than broker systems, it was generally agreed that whatever your vision is, it needs to permeate and empower the entire business because no single team can unilaterally ‘go digital’. Indeed, many participants noted that facilitating change within the business is a key activity because all the tech in the world will not make you digital if staff don’t embrace the new ways of working.
Indeed, ‘new ways of working’ was suggested as better terminology than labelling the effort a digital transformation, and I agree, given that Millennials are overtaking Baby Boomers as the largest generational group in Australia. The ABS commented after the 2021 Census that “Millennials are of working age and are upskilling,” and there is no doubt that they’re tech savvy and comfortable interacting digitally rather than over the phone or in person. A telling observation is that consumers aren’t comparing digital experiences between insurance companies; they’re comparing them to Amazon and every other business with a digital presence, and I can’t fault the logic because that’s what I do!
More importantly, customers are increasingly interacting using just their devices, with some insurers seeing digital sales nearing 50% for new business and renewals.
Living with the legacy
Irrespective of the terminology and how you promote it, ‘digital’ is a means to unlock business scale and reduce operational friction.
Examples included processing claims faster with the same team, and cost-effectively prospecting into more demographics, with more tailored messages, within the same marketing budget. I also wondered about integrating an acquisition more quickly and with less organisational stress, because having program-managed some technology workstreams for a travel insurance acquisition, I know how challenging this is.
What digital clearly isn’t is just the ‘front door’ of your business, because putting a lick of paint on that when everything inside remains steam pumps and pressure valves is going to result in a poor experience across the board.
Interestingly, though, rip-and-replace of entire application suites is not common, which led to the advice of embracing your stalwart systems…and the people who run them! And where companies have done wholesale application replacement, it has not necessarily eliminated business friction, because the sheer scale of change leads to fatigue. Also, with so many moving parts, setting and maintaining a clear vision and strategy is paramount.
A challenge noted with legacy systems is how they typically hard-code business processes in ways that limit automation efficiency. Having worked on the architecture for an insurance system replacement, I can vouch for that, because while tools that skim the user interface can drive automation, the better solution is deploying core systems that can natively talk with other systems using Application Programming Interfaces (APIs). APIs scale up in a more robust fashion and allow you to stitch workflows together more easily.
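To make that distinction concrete, here is a minimal sketch (in Python, with an entirely hypothetical endpoint, fields, and token) of what API-based integration looks like: one system asks another for structured policy data directly, rather than a robot scraping it off a screen.

```python
# A minimal sketch of API-based integration: one system asks another for policy
# data directly, rather than scraping it from a user-interface screen.
# The endpoint, fields, and token below are hypothetical placeholders.
import requests

API_BASE = "https://core-insurance.example.com/api/v1"  # hypothetical core system

def fetch_policy(policy_number: str, token: str) -> dict:
    """Retrieve a policy record via the core system's (assumed) REST API."""
    response = requests.get(
        f"{API_BASE}/policies/{policy_number}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()  # fail loudly instead of parsing a half-rendered screen
    return response.json()      # structured data, no screen coordinates involved
```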
Customer Experience
Interaction personalisation prompted a lot of interesting thoughts, but the observation that the best personalisation is the kind the customer doesn’t even know is being applied nailed the concept for me. It was also noted that while customers are changing how they engage, their fundamental expectations for accurate and timely interactions that deliver value in a secure fashion have not, and insurers need to be mindful of this.
One challenging aspect discussed in a breakout session is how to seamlessly move customers between channels, particularly where the interaction is asynchronous. Should a customer who starts their journey on your website chatbot in the evening reasonably expect to pick it up at the same place the next day? What about a week later? And how do you quickly transition the journey context from the chatbot to the call centre if they decide to call?
There is no easy answer because it depends on how your CX strategy treats engagement and how your technology platforms handle engagement, but it was noted that these need to align, or you will trigger an operational mismatch. Whatever you decide to do, one request was not to promote products on your portal that the customer has already purchased! Having experienced this with Amazon’s Kindle bookstore, I can attest that telling a customer you know them and then immediately showing them you don’t is a quick way to undermine their trust.
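For what it’s worth, here is a toy sketch of the kind of journey-context handoff being discussed, assuming a shared context store keyed by customer. The names and fields are illustrative only; a real implementation would sit on a proper persistence layer with privacy controls.

```python
# A toy sketch of journey-context handoff between channels. In practice the store
# would be a shared database or cache; names and fields here are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class JourneyContext:
    customer_id: str
    channel: str                      # e.g. "chatbot", "call_centre"
    last_step: str                    # where the customer got to
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Shared context store, keyed by customer - stands in for a real persistence layer.
context_store: Dict[str, JourneyContext] = {}

def save_context(ctx: JourneyContext) -> None:
    context_store[ctx.customer_id] = ctx

def resume_context(customer_id: str, new_channel: str) -> Optional[JourneyContext]:
    """Hand the journey to another channel, e.g. when the customer phones in."""
    ctx = context_store.get(customer_id)
    if ctx:
        ctx.channel = new_channel     # same journey, different channel
    return ctx
```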
Drowning in Data
Good data governance was a recurring theme, with the take home message that this is a business function, not an IT responsibility, and that the best performers are embedding this mindset.
I liked the concept of data as a hazardous material, which was mentioned as a light-hearted way to highlight how holding data carries risk. Cybersecurity was often mentioned alongside data, because protecting consumers in a world of ecosystems collecting huge amounts of their data is problematic enough, but overlay privacy laws and compliance requirements and this gets complicated.
The explosion of data points was raised, with vehicle telematics used as an example of how an insurer could harvest driver behaviours for competitive advantage. Tesla Insurance’s granular telematics and data autonomy allow them to offer discounts of up to 30% in parts of the United States, but you need systems and infrastructure to take advantage of what’s going to be a torrent of information. And not directly controlling or owning parts of that torrent adds complexity to your data handling practices. Data ingestion was raised in this context, and not just in terms of working through the connectivity issues; the question is how to determine data relevancy for a new use case so that you process only what’s required.
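As a small illustration of ‘process only what’s required’, a data-relevancy filter at ingestion might look something like the sketch below; the field names are invented for the example.

```python
# A minimal sketch of "process only what's required": keep the telematics fields a
# use case actually needs and drop the rest at ingestion. Field names are made up.
RELEVANT_FIELDS = {"vehicle_id", "timestamp", "speed_kmh", "harsh_braking_events"}

def filter_telematics_record(raw_record: dict) -> dict:
    """Strip an incoming telematics record down to the agreed, relevant fields."""
    return {k: v for k, v in raw_record.items() if k in RELEVANT_FIELDS}

raw = {
    "vehicle_id": "ABC-123",
    "timestamp": "2023-08-01T09:15:00Z",
    "speed_kmh": 62,
    "harsh_braking_events": 1,
    "cabin_temperature": 21.5,    # collected by the device, but not relevant here
    "infotainment_track": "...",  # definitely not relevant - filtered out
}
print(filter_telematics_record(raw))
```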
Data relevancy was a theme, with specific references to knowing where your data is and why you’re holding it. Having profiled data for multiple organisations, I know how difficult it is to uncover your data sources, map how it is handled, and audit its use. There are tools that assist with this activity, but it can be hard to confirm even fundamental policy decisions such as data retention periods. The decision to retain personal information for extended periods may not be best practice, as the Latitude Financial data breach highlighted.
For me, the best data comment was the plea that, “If I give you my data, please do something useful with it.” It is certainly a reasonable request but if your systems are locked in and your processes generate friction, then it is one that is not necessarily easy to deliver.
AI and Bots
Surprisingly, artificial intelligence wasn’t the hottest topic. That may be because it is still early days for a range of technologies that need tuning to align with the requirements of a regulated industry. Regardless, one point that was generally agreed is that, despite the hype, you can’t expose a generative AI like a large language model directly to customers unless you’re tightly controlling the content. Trials using LLMs to surface ‘best content’ for agents and customers suggest that rewriting your existing content in an LLM-friendly fashion yields better results than just allowing the LLM to harvest your knowledge base.
Aside from this cautionary note, back-office AI / LLMs were seen as immediately useful for activities such as fraud detection, risk assessment, and compliance reporting because the scope is easier to define and the returns more readily quantifiable.
I also felt a brief observation about Lemonade was interesting. Founded as an ‘insurtech’ intending to disrupt traditional insurers using AI, data, and process automation, Lemonade was complimented for their CX, but not for their loss ratio. As the speaker noted, you need to check that deploying AI makes the business better. If not, don’t do it.
In that context the in-depth discussions about bots, including suggested 'first steps', were useful. Bots in this context mostly referred to software robots (RPA, see definition in "Extraneous Thoughts" at the end), but customer interactivity applications such as chatbots and voice bots were flagged. One strong recommendation was, “Do process mapping before you build a bot!” and having been involved in this with an insurance company call centre, I endorse the sentiment. It is tempting to dive in and overlay RPA on a ‘known’ process, but software robots need exact, detailed, and unambiguous instructions to ensure that idiosyncrasies in the process do not unsettle them. Even where the tool captures an existing process by watching what users do, you should still review the process flow and identify where interactions can be optimised to ensure they scale.
The experience with voice bots seemed limited, but it was suggested that they risk irritating customers by misunderstanding natural language (and accents especially) and should be implemented only after extensive testing. Yellow Pages was early to this issue, with their 1992 ‘Goggomobil’ advert comedically alluding to the risk, and while current technology is very good, deciphering speech remains a difficult thing for computers to do.
An interesting use case was using machine learning to predict complaints arising from the claims process. Not only were the results used to refine the process, but the ML could rate who was likely to complain. In my experience, ML is the workhorse of AI, and I discussed this with a Data Science GM, who cautioned against expecting a predictive or generative AI of any kind to be 100% accurate. It is an important point because AI is often presented as an infallible machine. It isn’t. Kris Hammond, co-founder of the proto-LLM company Narrative Science, offered the advice at an event I attended a few years ago that if the salesperson can’t tell you how their AI works, you shouldn’t buy it, and I feel that applies to homegrown AI as well. Implementing a black box that delivers outcomes using algorithms you cannot validate is a business risk that should not be ignored.
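To illustrate the complaint-prediction idea (and the point about accuracy), here is a minimal, hypothetical sketch using scikit-learn. The features and training data are invented purely for illustration, and the output is a probability score, not a certainty.

```python
# A minimal sketch of the idea: train a classifier on past claims and use the
# predicted probability as a "likelihood of complaint" score. Features and data
# are invented for illustration; a real model would need far more care (and data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per claim: [days_to_settle, number_of_handoffs, payout_gap_pct]
X_train = np.array([
    [5, 1, 0.00],
    [40, 6, 0.30],
    [12, 2, 0.05],
    [60, 8, 0.50],
    [7, 1, 0.00],
    [35, 5, 0.25],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = customer complained

model = LogisticRegression().fit(X_train, y_train)

new_claim = np.array([[45, 7, 0.4]])
complaint_risk = model.predict_proba(new_claim)[0, 1]
print(f"Predicted complaint likelihood: {complaint_risk:.0%}")  # a score, not a certainty
```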
Change Management
The difficulty of change was noted numerous times during panel discussions, with agreement that getting it right is hard…but that if you do, speed of change is a competitive advantage.
Of course, ideas about how to enact change differed, as did views about staff ‘migrating’ to digital ways of working. One attendee mused about why updates to the apps on our phones are taken in stride, but changes to business apps trigger the need for extensive training. I would say that is a function of the consequences, but I’ve too often seen change management cut from project plans as the budget runs dry, so in many cases staff are unfortunately left to work it out for themselves! This usually leads to resentment and resistance that undermines not only the benefits of whatever system you are implementing, but the management teams involved, and I don’t recommend it.
More structured approaches were discussed, such as Allianz’s ‘Cloud for Her’ programme where three hundred non-technical staff are being introduced to cloud basics, which I think is brilliant. Not only does it increase the likelihood of fresh ideas, but it demystifies what is a commonly misunderstood mode of IT delivery. And it exposes staff to new opportunities, which is often motivating and certainly reinforces that the business cares about their professional development.
Culture was often mentioned in conjunction with change management, particularly in the context of bringing staff along on the digitisation journey, or rather transitioning them to digital ways of working, as it was noted that ‘going digital’ is not a destination; it is the new business-as-usual.
Conclusion
My primary takeaway is that technology is not the constraint to digitisation. The constraints are complexity, clarity of vision, and the pace of change. Presenting a personalised customer experience that is low friction and high context – even as the customer moves between channels – is certainly not straightforward. Already customers are increasingly choosing the web over a phone call, and the industry is responding, but as several speakers noted, that response is best driven from the top of the organisation.
Certainly, the next few years will be interesting because digitisation is a transformation imperative that cannot be ignored, and laggards are likely to find themselves being squeezed out as consumers shift away from phones and in-person transactions.
I'm always happy to chat about tech and its impact, so don't hesitate to comment or reach out directly if you prefer.
Extraneous Thoughts
While I expect that the technical terms I've used are now common knowledge, I thought it would be interesting to ask ChatGPT to define them for a general business audience. Specifically, this is GPT-4, accessed via Microsoft's Bing interface, and the results were interesting.
Robotic Process Automation (RPA)
"Robotic Process Automation (RPA) is a software technology that allows businesses to automate repetitive and routine tasks. It uses software robots to mimic the actions of a human worker interacting with digital systems and software. This can include tasks such as extracting data, filling in forms, and moving files. RPA can help businesses increase efficiency, reduce costs, and improve accuracy by automating these types of tasks. It’s like having a virtual workforce that can work 24/7 without taking breaks or making mistakes. This frees up human workers to focus on more complex and higher-value tasks, such as innovating, collaborating, and interacting with customers. ??"
This is a reasonable definition, and note that GPT-4 added the smiley emoji; that wasn't me (I've removed them from the next examples). The software aspect is key here. Some years ago, after I had visited Telstra's robots lab, a colleague at the time asked me how long their keyboards lasted. It took a little probing to realise that they had imagined humanoid robots like C-3PO from Star Wars sitting at desks pounding on keyboards, and while we laughed at that, I still think it's a good mental picture for RPA. The robots aren't physical, but they do mimic keyboard and mouse activity as they interact with applications. Because of this, they can stitch disparate workflows together, reducing the 'swivel chair' problem where staff need to repeatedly shift between application screens to do their work. Adding shortcut keys to trigger the robots means that staff can direct the automation, which is a common productivity improvement for call centres.
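If you want a feel for what that mimicry looks like in code, here is a toy illustration using the pyautogui library. The coordinates and hotkeys are placeholders, and real RPA platforms record and manage these steps for you rather than having you hand-code them.

```python
# A toy illustration of how an RPA robot "pretends to be a person": it drives the
# keyboard and mouse against an application's screen. Coordinates and hotkeys are
# placeholders; real RPA tools record and manage these steps for you.
import pyautogui

def copy_claim_number_between_apps() -> None:
    pyautogui.hotkey("alt", "tab")   # switch to the source application
    pyautogui.click(420, 310)        # click the (assumed) claim number field
    pyautogui.hotkey("ctrl", "a")    # select its contents
    pyautogui.hotkey("ctrl", "c")    # copy them
    pyautogui.hotkey("alt", "tab")   # swivel-chair back to the target application
    pyautogui.click(615, 480)        # click the matching field there
    pyautogui.hotkey("ctrl", "v")    # paste - no API in sight
```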
There are many RPA vendors, and it is increasingly an add-on for mainstream software platforms (including Microsoft O365 via Power Automate), so you likely already have this capability within the business and will not need to buy a new tool to trial RPA, though you may need to purchase an additional subscription.
Unlike APIs, RPA has the advantage that the robot's application permissions match those of the staff doing the same work, because it is pretending to be those staff. Bear in mind software application vendors' license terms and conditions when using RPA, though. I was discussing this recently with a former sales rep for a US platform company, and he noted disagreements with customers deploying RPA because the vendor felt that using a robot, even with the same credentials as a staff member, violated the terms of service on the basis that the robot could operate concurrently with a staff member, and 24/7.
Once you have trialled RPA, you should develop a strategy and roadmap to deploy it, ideally sequencing a mix of quick win projects to demonstrate and socialise the benefits, with some larger, more complex projects that drive enduring productivity gains happening in the background. RPA is like any software project that touches many staff and needs the same detailed requirements gathering, project establishment and management, rigorous unit and acceptance testing, and an agreed change and adoption plan.
Machine Learning (ML)
"Machine learning is a type of artificial intelligence that allows computer systems to learn and improve from experience without being explicitly programmed. It uses algorithms to analyze data, learn from it, and make predictions or decisions based on that data. Machine learning can help businesses in many ways, such as automating routine tasks, finding areas to maximize efficiency, managing unstructured data, gauging risk more effectively, driving strategic business advantage, improving personalization, and solving big problems humans can't. It's a powerful tool that can help businesses make better decisions, increase efficiency, and gain a competitive advantage."
I am not as happy with this definition, feeling it lacks the punch of the RPA one, but I have to acknowledge that ML is a broad church of techniques, as the methods map below highlights. It is a few years old now, so new algorithms need to be added, but it certainly conveys the idea. Deciding which algorithm is needed for which problem is typically left to your resident data scientist or IT partner, but the takeaway here is that there isn't a single "ML", just as there isn't a single "AI".
Also of note, Bing included some references in the description above which I've removed for brevity, but I appreciate that Microsoft is citing sources when they are clearly used to generate a response.
(As an aside, the point about a single AI is cleverly addressed in Fredric Brown's wry 1954 short story, Answer. It is worth looking up given the current fears of AI as an existential threat to humanity.)
My first job out of university was using what are now known as machine learning techniques to visually examine industrial processes and act on them. Problems included counting and classifying grains of rice in a quality assurance sample, searching for cell abnormalities in blood tests, and my favourite, checking the tops of beer bottles to detect glass chips that had been introduced during the capping process. At the time, our methods were rudimentary, the computer hardware was eye wateringly expensive, and the speeds we could achieve were hardly ever fast enough. What hasn't changed is needing a solid understanding of the problem and the benefit you are expecting from it so that you can develop a robust business case and determine how to test for success.
Oh, the other thing that hasn't changed is how compute hungry ML algorithms can be. This is why most of them are run in the cloud, because buying and operating the necessary number of computers and graphics processing cards in-house is usually cost prohibitive.
Large Language Model (LLM)
"A large language model (LLM) is a type of artificial intelligence that uses machine learning algorithms to understand and generate human-like text. These models are trained on vast amounts of data and can perform a wide range of natural language processing tasks, such as generating text, summarizing content, answering questions, and translating languages. LLMs can be used by businesses to improve customer experiences by providing intelligent chatbots, enhance decision-making by analyzing unstructured data, and transform business processes. They are a powerful tool that can help businesses increase efficiency, gain valuable insights, and maintain a competitive edge."
I was disappointed that Bing did not cheekily include a self-reference in this definition, but like ML, this one also seems vague and bland.
It also fails to note that the most revolutionary aspect of LLMs is their 'creativity'. I write science fiction novels as a hobby, so I have been playing with LLMs in the context of narrative development, and while they are far from ready to pump out a compelling, frantic page-turner of a book, for smaller, more straightforward tasks such as composing emails or generating easy-to-read Product Disclosure Statement (PDS) text, LLMs really change the landscape.
They are also highly accessible to non-technical users, with more obviously correctable and usable output. The DALL-E images I used above were the best of each sequence, and some of the ones I rejected were just wrong. But what I couldn't do was easily fix those outside of the DALL-E generation interface (and even within, it is tedious). Obviously, a graphic artist could work with the image file, but we are not all graphic artists whereas almost everyone can take LLM text output and edit it, so the barrier to use is almost non-existent.
While you can type queries into Bing and have it generate GPT-4-based responses, it is more likely that you will deploy either an LLM associated with your software platforms (like RPA, vendors of business applications are hurriedly adding LLM capabilities) or utilise an open-source version within your cloud tenant that ingests exactly what you tell it to.
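As a sketch of that last option, running an open-weights model inside infrastructure you control can be as simple as the snippet below, using the Hugging Face transformers library. The model named here is just a small example; you would choose one whose licence and size suit your environment.

```python
# A minimal sketch of running an open-weights LLM on infrastructure you control,
# using the Hugging Face transformers library. The model named here is only an
# example; pick one whose licence and size suit your environment.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small example model

prompt = "Draft a short, plain-English reminder that a home insurance policy is due for renewal:"
result = generator(prompt, max_new_tokens=60, do_sample=True)

print(result[0]["generated_text"])  # review the output before anything customer-facing
```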
Note that you need to confirm the LLM is fit for purpose. Meta's Llama 2, for example, is not really open source because the license puts restrictions on commercial use for some users and restricts the use of the model and software for certain purposes.
There are also some pre-installed LLMs that are specific to cloud providers, so while these can lower the effort of deploying an LLM, you need to check that your corporate cloud supports the pre-installed LLM you wish to use.
Application Programming Interfaces (API)
"An API, or Application Programming Interface, is a set of rules and protocols that allows different software applications to communicate with each other. APIs provide a way for businesses to expose their data and services to other applications, allowing for the creation of new products and services, as well as the integration of existing systems. They can be used to streamline business operations, improve customer experiences, and reduce costs. Some popular APIs that businesses use include the Alexa Skill Management API, Google Assistant API, Salesforce Einstein Language API, REST Countries API, and many more. By implementing an API strategy that includes both public and private APIs, businesses can connect with customers in new ways, streamline operations, and stay competitive in an ever-changing market."
Honestly, this is way geekier than I was expecting, and the callout to specific products seems like a promotional engine at work, especially with the Alexa and Google Assistant references because they are not where most business users would go when talking APIs.
Anyway, my more straightforward analogy is that APIs can be thought of as a computer telephone exchange, where the computers call each other up and have a direct and immediate conversation. When the conversation is over, they drop the call and disconnect from each other.
There is also the analogy of posting a letter, where APIs interact asynchronously, but given that digital transformations generally focus on real-time or near real-time interactions, this mode of API connectivity is generally less desirable.
My experience with APIs is that you need detailed and agreed transaction definitions before you start coding because APIs are 'under the hood' technology and diagnosing why they are not working as expected can be tricky and time consuming!
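By way of example, an agreed transaction definition does not need to be elaborate; something as simple as the sketch below, settled with the other party before coding starts, avoids a lot of that diagnosis pain. The field names here are purely illustrative.

```python
# A small sketch of what an "agreed transaction definition" might look like before
# any integration code is written: both sides commit to the fields, types, and
# optionality of a request and response. The field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuoteRequest:            # what the caller must send
    customer_id: str
    product_code: str
    sum_insured: float
    postcode: str

@dataclass
class QuoteResponse:           # what the API promises to return
    quote_id: str
    annual_premium: float
    expiry_date: str           # ISO 8601, agreed up front to avoid date-format surprises
    declined_reason: Optional[str] = None  # populated only when no quote can be offered
```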