ASIC warns governance gap could emerge in first report on AI adoption by licensees
ASIC is urging financial services and credit licensees to ensure their governance practices keep pace with their accelerating adoption of artificial intelligence (AI).
ASIC analysed 624 AI use cases that 23 licensees in the banking, credit, insurance and financial advice sectors were using, or developing, as at December 2023.
ASIC's findings
Use of AI
FINDING 1: The extent to which licensees used AI varied significantly. Some licensees had been using forms of AI for several years and others were early in their journey. Overall, adoption of AI is accelerating rapidly.
FINDING 2: While most current use cases used long-established, well-understood techniques, there is a shift towards more complex and opaque techniques. The adoption of generative AI, in particular, is increasing exponentially. This can present new challenges for risk management.
FINDING 3: Existing AI deployment strategies were mostly cautious, including for generative AI. AI augmented human decisions or increased efficiency; generally, AI did not make autonomous decisions. Most use cases did not directly interact with consumers.
Risk management and governance
FINDING 4: Not all licensees had adequate arrangements in place for managing AI risks.
FINDING 5: Some licensees assessed risks through the lens of the business rather than the consumer. We found some gaps in how licensees assessed risks, particularly risks to consumers that are specific to the use of AI, such as algorithmic bias.
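Finding 5 singles out algorithmic bias as an AI-specific consumer risk that some licensees did not assess. As a purely illustrative sketch (hypothetical data and threshold, not ASIC's methodology), one common screening approach compares a model's approval rates across consumer groups, sometimes called a demographic parity check:

```python
# Hypothetical example of screening for algorithmic bias via demographic
# parity: comparing a model's approval rates across consumer groups.
# The data, group labels and 0.2 threshold are all illustrative only.

def approval_rate(decisions):
    """Share of positive (approve) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rates between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outcomes for two consumer groups (1 = approved)
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(outcomes)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
if gap > 0.2:  # illustrative review threshold only
    print("Flag for review: approval rates diverge across groups")
```

A check like this is only one lens on bias; assessing it from the consumer's perspective, as the finding suggests, would also involve examining input data and outcomes over time.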
FINDING 6: AI governance arrangements varied widely. We saw weaknesses that create the potential for gaps as AI use accelerates.
FINDING 7: The maturity of governance and risk management did not always align with the nature and scale of licensees’ AI use – in some cases, governance and risk management lagged the adoption of AI, creating the greatest risk of consumer harm.
FINDING 8: Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage the associated risks.
Key uses of AI among insurers
Most common uses
Emerging uses
Case study - not disclosing AI use to a consumer making a claim
One licensee used a third-party AI model to assist with indexing documents submitted for insurance claims, which included sensitive personal information, to improve efficiencies for claims staff.
The licensee identified that consumers might be concerned about their documents being sent to a third party to be read by AI, but decided not to disclose this to consumers specifically. Its privacy policy stated that consumers' data could be shared with third parties and that the data was kept in Australia at all times.
However, consumers were not specifically informed that some of this information sharing involved AI, nor whether they could opt out.
While the AI use in this case only involved the provision of administrative support functions to human claims assessors, rather than any AI-driven decisions, it illustrates the complexity of the issue and the potential for loss of consumer trust.
What does good governance look like?
Strategic and centralised
The more mature licensees developed strategic, centralised AI governance approaches.
AI ethics principles
Twelve licensees had incorporated some of the eight Australian AI Ethics Principles in their AI policies and procedures.
Licensees must consider their existing regulatory obligations
The regulatory framework for financial services and credit is technology neutral. There are several existing regulatory obligations that are relevant to licensees’ safe and responsible use of AI – in particular, the general licensee obligations, consumer protection provisions and directors’ duties. For example