ASIC warns governance gap could emerge in first report on AI adoption by licensees

ASIC is urging financial services and credit licensees to ensure their governance practices keep pace with their accelerating adoption of artificial intelligence (AI).

ASIC analysed 624 AI use cases that 23 licensees in the banking, credit, insurance and financial advice sectors were using, or developing, as at December 2023.

ASIC's findings

Use of AI

FINDING 1: The extent to which licensees used AI varied significantly. Some licensees had been using forms of AI for several years and others were early in their journey. Overall, adoption of AI is accelerating rapidly.

FINDING 2: While most current use cases used long-established, well-understood techniques, there is a shift towards more complex and opaque techniques. The adoption of generative AI, in particular, is increasing exponentially. This can present new challenges for risk management.

FINDING 3: Existing AI deployment strategies were mostly cautious, including for generative AI. AI augmented human decisions or increased efficiency; generally, AI did not make autonomous decisions. Most use cases did not directly interact with consumers.

Risk management and governance

FINDING 4: Not all licensees had adequate arrangements in place for managing AI risks.

FINDING 5: Some licensees assessed risks through the lens of the business rather than the consumer. We found some gaps in how licensees assessed risks, particularly risks to consumers that are specific to the use of AI, such as algorithmic bias.
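
To make the algorithmic-bias risk named in this finding concrete, below is a minimal sketch of one common screening check, a demographic parity comparison, using pandas. The DataFrame, group labels and outcome column are entirely hypothetical and are not drawn from the ASIC report.

```python
import pandas as pd

# Hypothetical AI-scored claim outcomes by consumer group; the column
# names and values are illustrative only, not from the ASIC report.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = df.groupby("group")["approved"].mean()

# Demographic parity gap: difference between the best- and worst-treated
# groups. A large gap flags the model for closer bias review; it is a
# screening signal, not proof of unfair discrimination.
parity_gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")
```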

FINDING 6: AI governance arrangements varied widely. We saw weaknesses that create the potential for gaps as AI use accelerates.

FINDING 7: The maturity of governance and risk management did not always align with the nature and scale of licensees’ AI use – in some cases, governance and risk management lagged the adoption of AI, creating the greatest risk of consumer harm.

FINDING 8: Many licensees relied heavily on third parties for their AI models, but not all had appropriate governance arrangements in place to manage the associated risks.

Key uses of AI among insurers

Most common uses

  1. Actuarial models for risk, cost and demand modelling.
  2. Supporting the claims process: claims triaging, decision engines to support claims staff, document indexation, and identifying claims for cost recovery.
  3. Identifying lapse propensity and prompting contact with consumers (see the sketch after this list).
  4. Automating a component of the claims decisioning process, with humans remaining responsible for the overall claims decision.
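
As a sketch of the lapse-propensity use case above, the following trains a simple scikit-learn classifier on synthetic policyholder data. All features, labels and thresholds are hypothetical; the point is only the pattern described in the report: the model ranks policyholders for outreach, and humans decide who to contact.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic policyholder features (e.g. tenure, premium change, contact
# count); purely illustrative, not real insurer data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Toy label: lapses driven mostly by the second feature plus noise.
y = (X[:, 1] + rng.normal(scale=0.5, size=500) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score lapse propensity and surface the highest-risk policyholders as
# prompts for staff; the model does not act autonomously.
propensity = model.predict_proba(X_test)[:, 1]
top_rows = np.argsort(propensity)[::-1][:5]
print("Policyholders to contact first (test-set rows):", top_rows)
```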

Emerging uses

  1. Use of machine learning to increase efficiencies in the underwriting process, focused on automating the extraction of information and summarising key information about a customer’s application.
  2. The use of generative AI and natural language processing techniques to extract and summarise key information from claims, emails and other key documents (see the sketch after this list).
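
The extraction-and-summarisation pattern in the second item can be sketched with an off-the-shelf summarisation model. The Hugging Face transformers pipeline and the model name below are assumptions chosen for illustration; no specific licensee’s tooling is implied.

```python
from transformers import pipeline

# A generic pretrained summarisation model; an assumption for this
# sketch, not any licensee's actual system.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# A made-up claim email, for illustration only.
claim_email = (
    "Dear claims team, my property at 12 Example St was damaged in the "
    "storm on 3 March. The roof is leaking and the ceiling in two rooms "
    "has collapsed. I have attached photos and a builder's quote for "
    "$18,400. Please advise the next steps for my claim."
)

# Produce a short summary for the claims file; a human assessor still
# reads the source documents and makes the claims decision.
summary = summarizer(claim_email, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```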

Case study: not disclosing AI use to a consumer making a claim

One licensee used a third-party AI model to assist with indexing documents submitted for insurance claims, which included sensitive personal information, to improve efficiencies for claims staff.

The licensee identified that consumers may be concerned about their documents being sent to a third party to be read by AI, but decided not to specifically disclose this to consumers. The licensee’s privacy policy stated that consumers’ data would be shared with third parties, and that the data was at all times kept in Australia.

But consumers were not specifically informed that some sharing of information involved AI, or whether they could opt out.

While the AI in this case provided only administrative support to human claims assessors, rather than driving any decisions, it illustrates the complexity of the issue and the potential for loss of consumer trust.

What does good governance look like?

Strategic and centralised

The more mature licensees developed strategic, centralised AI governance approaches. These licensees generally:

  • had a clearly articulated AI strategy
  • included AI explicitly in their risk appetite statement
  • demonstrated clear ownership and accountability for AI at an organisational level, including an AI-specific committee or council
  • reported to the board about AI strategy, risk and use
  • had AI-specific policies and procedures that reflected a risk-based approach, and these spanned the whole AI lifecycle
  • incorporated AI ethics principles into these arrangements, and
  • told ASIC they were investing in resources, skills and capability

AI ethics principles

Twelve licensees had incorporated some of the eight Australian AI Ethics Principles in their AI policies and procedures.

  • Human, societal and environmental wellbeing: AI systems should benefit individuals, society and the environment.
  • Human-centred values: AI systems should respect human rights, diversity, and the autonomy of individuals.
  • Fairness: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  • Privacy protection and security: AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  • Reliability and safety: AI systems should reliably operate in accordance with their intended purpose.
  • Transparency and explainability: There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
  • Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
  • Accountability: People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Licensees must consider their existing regulatory obligations

The regulatory framework for financial services and credit is technology neutral. There are several existing regulatory obligations that are relevant to licensees’ safe and responsible use of AI – in particular, the general licensee obligations, consumer protection provisions and directors’ duties. For example:

  1. Licensees must do all things necessary to ensure that financial services are provided in a way that meets all of the elements of ‘efficiently, honestly and fairly’.
  2. Licensees must not engage in unconscionable conduct.
  3. Licensees must not make false or misleading representations.
  4. Licensees should have measures for complying with their obligations, including their general obligations.
  5. Licensees must have adequate technological and human resources.
  6. Licensees must have adequate risk management systems.
  7. Licensees remain responsible for outsourced functions.
  8. Company directors and officers must discharge their duties with a reasonable degree of care and diligence.
