The ethics of insurtech (and chatbots)
Philip James
Partner at Eversheds Sutherland, Global Data Privacy & Cybersecurity, AI Task Force
Whilst lawyers counsel clients on how to comply with the law, it would be ill-advised to underestimate the importance of the most important critic and adjudicator of all - your customer. So, even if your product is lawful, consider how it will be perceived by the public and your investment community. Above all:
- to what extent is your new product ethical?
- will its features reinforce or undermine your organisation's hard-earned reputation?
- do you have an ethics committee?
- have you developed an ethical framework, or terms of reference, for challenging the development of new tools and the application of data science?
These are the questions that will set your Insurtech venture apart from the competition. These are the standards which will win those crucial brownie points from investors (e.g. those who benchmark funds against an ethical index), your regulator and your marketing director.
Just as customer and content are king, so now is your data. The war rages over the control of consumers' data and, for the religious amongst you, your soul. In the same vein as software toppled institutional brands, data and artificial intelligence now sit at the top of the food chain in the digital savannah. According to a recent Forbes article by Martijn van Attekum, Jie Mei and Tarry Singh, Software Ate The World, machine learning can enable code that writes itself - meaning that even developers themselves may be out of a job.
The UK is striving to become the leader in data ethics. It is therefore perhaps no surprise that the UK government recently launched the Centre for Data Ethics and Innovation (CDEI). Amongst the first papers it has released this month, AI and Personal Insurance illustrates the opportunities and risks posed by the use of machine learning tools and automation in Insurtech.
In particular, the paper highlights four stages in the insurance life cycle that are likely to change dramatically as a result of the deployment of artificial intelligence (AI): customer on-boarding; pricing; claims management; and risk advisory. In relation to the last of these, for instance, AI can help encourage behaviour change in an insured by, say, suggesting safer routes whilst driving or identifying a potential vulnerability in a roof (before the onset of water damage). In turn, there are specific areas which could pose threats to consumers' privacy or their ability to secure a competitive product:
- the collection and sharing of large data sets in the absence of consumer consent (an issue which is currently very prevalent in relation to real-time bidding and programmatic advertising, according to a statement from the UK Information Commissioner in June this year - Real Time Bidding),
- hyper personalised risk assessments (which may result in subjects becoming un-insurable), and
- new forms of nudging (no, not the slot machine feature, but innovative ways in which customers are subconsciously persuaded into a particular behaviour, in ways which are not dissimilar to surreptitious or subliminal advertising).
Online advertising may segment and target specific customers, and chatbots now use natural language processing to answer customer queries and offer quotes, whether via WhatsApp or Facebook Messenger. According to the CDEI's report [and an article by D. Jefferies in Raconteur earlier this year], Lemonade claims its chatbot can provide a personalised policy in 90 seconds.
Similarly, pricing may be determined by access to dynamic data sets relating to a user's profile and behaviour. These concerns, despite the benefits, are not new to other verticals. The Office of Fair Trading (OFT) has previously published its report into Online Targeting of Advertising and Pricing, explaining the risks associated with profile-based pricing. Whilst recognising the advantages, the report also indicated that personalised pricing may be more likely to be harmful under the following circumstances (but would not necessarily be harmful in any of them):
- where there is a lack of transparency that price discrimination is occurring
- where it is costly to price discriminate and this has an upward pressure on price
- where consumers' repeat purchases and behaviour on a retail website may affect the price they will be charged tomorrow, but consumers do not recognise this
- where concerns about online personalised pricing trigger a reduction in demand for products bought online due to a loss of consumers' trust in online markets.
AI may also be used to identify the likelihood of fraud. As we know, fraud is rife in insurance claims. CDEI's paper provides the example of Hanzo, which has created tools to trawl social media sites to identify if there is a flaw in a claim. Whilst this is well intended, unfair inferences may also be drawn from third party data sources. So, insurers are advised to classify data into the suggested categories - provided, observed and inferred - and to take into account and address the respective risks associated with each.
In addition, where algorithms are employed to process insurance applications and/or claims, an insurance platform should challenge their logic and seek to demonstrate, on an ongoing basis, that there is no material bias that may prejudice its prospective and existing customers. Insurtech vendors and underwriters should also take note of their respective duties under applicable data protection law, which in Europe may apply even where only a customer's behaviour is monitored there. Any profiling, or end-to-end automated decision-making (in the absence of human involvement), which results in decisions that may have a significant effect on individuals may be subject to challenge by a regulator or an individual. It will therefore be important to consider whether a privacy impact assessment (PIA) should be undertaken to identify any risks of bias, so that those risks can be mitigated.
Alongside privacy guidance, the Chartered Insurance Institute (CII) has just launched its Digital Code of Ethics. It is worth a read and provides some useful guidance on how innovative insurance providers can adopt ethical practices, both to achieve commercial objectives and to reinforce brand value.
17 September 2019.
Philip James is a Partner at Sheridans and advises clients on data strategy and licensing frameworks. Philip has a particular interest in open finance, geospatial data, fraud, risk governance and cyber security.