AI regulatory update - Australia
Image: Tara Winstead, Pexels

In the first four months of this year we published updates on the emerging regulatory regimes for AI in the EU, US, UK, Singapore and PRC; this week we take a look at Australia.

Of course this is not legal advice, but if your organisation uses AI or AI-enabled products then it may help you decide whether you need to talk to your lawyers and update your AI governance framework.

Currently there is no mandatory AI-specific legislation in Australia, although general laws relating to cybersecurity, intellectual property, competition, anti-discrimination, product liability, privacy and data security may apply to AI systems.

Australia has adopted a number of voluntary measures. In May 2019 it formally adopted the non-binding OECD AI Principles together with other OECD member countries, and it subsequently reflected them in its eight AI Ethics Principles, published later the same year. It has also established a National AI Centre (NAIC), led by Australia’s national science agency (CSIRO), which has initiated a Responsible AI Network (RAIN). Amongst other things, RAIN is exploring how business can put the Australian AI Ethics Principles into practice.

The current legislative settings are under review. On 1 June 2023 the Department of Industry, Science and Resources (DISR) published a consultation paper titled Safe and Responsible AI in Australia, inviting public comment on the regulatory settings for AI. The submission period closes on 4 August 2023. (This follows an issues paper published by the previous government in March 2022 inviting similar input.)

Comment: It may be time for the current government to reconsider its light-touch approach to AI regulation, in view of the heightened public debate around the risks of AI.

A 2023 global study conducted by KPMG and the University of Queensland showed that a large majority of the public (71% of 17,000 respondents) expects AI to be regulated, with broad agreement on the need for some form of external, independent oversight. According to KPMG, the public expectation is that Australia will set up an independent AI regulator or regulatory body.

Other Australian voices calling for regulation include the Australian Human Rights Commission (AHRC), which published a report on Human Rights and Technology in 2022. The report makes 23 recommendations for the regulation of AI, including a moratorium on high-risk uses of biometric technologies (such as facial recognition) until stronger human rights and privacy protections are developed in Australia. Following on from this, in September 2022 the Human Technology Institute at the University of Technology Sydney published a report proposing a risk-based model law for facial recognition, which would require developers to submit risk assessments for applications of the technology before developing or deploying them.

Australia is not currently developing systemically relevant foundation models of the type that are catching the attention of regulators in the EU and elsewhere. But we are users of them, fine-tuned or otherwise, as part of AI-enabled business applications, and this is a growing area of innovation. Small developers and deployers of AI-enabled business applications usually have little visibility of the data and design choices made during the design phase of large foundation models (Llama 2 being a notable exception to the rule). In addition, they are subject to the vagaries of foundation model updates, which tend to happen unpredictably behind API paywalls and can negatively affect the performance of the applications using them. Both factors increase the risks shouldered by business in Australia.

For this reason we believe the government’s review of current legislative settings should consider imposing greater disclosure obligations on the developers of very large foundation models available for use in Australia. This could mean publishing the results of pre-release testing and evaluation, disclosing known risks and constraints, providing detailed information about training data sets and giving due notice of updates. That way downstream developers will understand what they are licensing and what additional steps they need to take to ensure responsible deployment in accordance with the eight AI Ethics Principles.

