Navigating AI Impact with Industry Leaders in Australia

Written by Sarah Kaur

Understanding the impacts of Artificial Intelligence (AI) is critical as Australian businesses adopt this technology. The National Artificial Intelligence Centre’s Responsible AI Think Tank has been designing a framework, to be released in early 2024, to help Australian companies assess and steward their AI Impact.

The Mission to Steward AI Impact

A one-and-done audit is insufficient for any company using AI in its systems, services, and products. What’s needed is a continuous levelling up: understanding and improving what AI impacts are, what they could be, and what they mean to the interests of diverse stakeholders.

The term “impact” in this context is a measure of change — both positive and negative — brought about by the actions of companies using AI.

Just as the concept of the Carbon Footprint revolutionised corporate practice and social expectations around environmental accountability, the development of AI Impact assessment tools is poised to bring a similar transformation to the realm of technology.

Over the past year, the National AI Centre’s Responsible AI Think Tank has been designing an industry-led tool for measuring the external impact of Australian companies’ use of AI systems.

Our mission: “How might leading CEOs and Boards use an ‘AI Footprint’ mechanism to earn and retain the trust of their employees, consumers, clients, stakeholders, and investors in their creation and use of AI systems?”

From the early days of the Responsible AI Think Tank, there was a vision for AI impact reporting to become a key driver at the Executive and Board level to earn trust in Australian companies’ use of AI.

With the leadership of Think Tank Chair, Judy Slatyer, we looked for examples of big transitions that have driven new corporate behaviours. Just as social and consumer demands and international protocols led to Carbon Footprint assessments becoming standard corporate practice, we saw a similar trajectory emerging for AI Impact.

Attempts to quantify AI Impact now face challenges similar to those in the early days of developing the Carbon Footprint. Initially, both lacked standardised metrics and methodologies, making universal adoption difficult. Evaluating the impact of both carbon and AI requires a deep understanding of complex, interrelated systems with emergent properties.

Learning from the Carbon Footprint and the Net Zero ambition

The Carbon Footprint concept, which quantifies the greenhouse gas emissions associated with corporate activities, emerged as a critical tool in the fight to keep our climate and habitats livable for us and other species. ‘Net Zero’ evolved from a niche environmental term to a globally recognised metric, guiding policy and corporate strategies towards sustainability.

In the quest to mitigate climate change, ‘Net Zero by 2050’ has emerged as a galvanising metric, a north star guiding diverse actors within the global system. Rooted in years of scientific research, this clear and inspiring call to action, centred on carbon emissions, offers a tangible measure of progress. Yet it’s not without its critiques.

The ‘Net’ in ‘Net Zero’ is often seen as a loophole, fostering a “burn now, pay later” mentality. It leans on carbon capture technologies yet to fully materialise and carbon trading schemes that have birthed a complex new economy. Despite these challenges, and the arguably modest outcomes of forums like the COP conferences, Net Zero has sparked a movement.

It has ignited investments in innovative technologies and economies, shaped policies, and provoked action in international forums, national and local communities, and individuals. The commitment to a low-carbon future, albeit voluntary until 2024, has seen companies invest trillions, acknowledging the market’s demand for a liveable climate.

Assessing AI Impact has unique challenges

When we contrast the concept of a Carbon Footprint with an “AI Footprint”, we encounter a different landscape and set of evaluation challenges. Unlike the finite carbon in our ecosystem, AI Impact doesn’t lend itself to a singular, finite measurement. Impact evaluation, though rigorous in its methodologies, can veer closer to art than science.

AI Impact metrics are context-dependent and lack a single universal unit of measure like carbon dioxide equivalent (CO2e) emissions. This doesn’t diminish the importance of developing ways to track AI Impact, but it highlights the challenge of creating an analogous ‘North Star’ for AI Impact.

Much like Net Zero, a guiding principle for AI Impact could catalyse action towards a future where Responsible AI isn’t just a concept but a universally recognised and actionable goal.

Developing a shared vocabulary and understanding around AI Impact is crucial for the robust conversations and decision-making companies need to steer their use of AI towards a future we want to live in.

Common AI Impact metrics across industry sectors can be a constellation we navigate by, driving innovation and guiding policies towards responsible AI use, where technology aligns harmoniously with societal, environmental, and economic wellbeing.

Collaborative development of an AI Impact Navigator

Unlike many existing AI governance frameworks that primarily focus on policy, legal, and technical management aspects of an AI system’s lifecycle, the Responsible AI Think Tank’s work has been unique in its dedication to capturing the external impact of AI.

Drawing on the Think Tank’s thought leaders from ASX companies, industry networks, and consumer and human rights institutions, we’ve learned that AI Impact isn’t just about managing risk. It’s equally about recognising the value and opportunities AI systems can create, not only in achieving company goals but also in generating beneficial societal outcomes.

To maintain a focus on external impact, we’ve centred our approach on thematic areas where different stakeholder groups intersect with specific AI issues. For instance, we’ve explored how AI Impact should be considered in the context of corporate licence and transparency, and of customer experience and consumer rights.

Developing indicators and metrics for AI Impact evaluation, especially for Board-level reporting and eventual public disclosure, has seen us create and discard several ideas as we iterate towards a pilot AI Impact framework for release in early 2024.

Our big insight: rather than getting the metrics perfectly “measurable”, the most fruitful part of our process has been finding metrics that create the right triggers for critical dialogue and action. We have witnessed transformational conversations, and the collaborative, critical, and creative thinking that emerges when cross-functional teams within a company come together for an honest self-assessment of their AI Impacts.
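To make the idea concrete, here is a minimal sketch of how such an indicator might pair a dialogue prompt with a supporting metric. The schema, field names, and example values are illustrative assumptions on our part, not the Think Tank’s actual framework:

```python
from dataclasses import dataclass, field


@dataclass
class AIImpactIndicator:
    """One hypothetical indicator pairing a qualitative prompt with a metric.

    The fields and example values are illustrative assumptions, not the
    Think Tank's actual schema.
    """
    thematic_area: str      # e.g. "Customer experience and consumer rights"
    stakeholder_group: str  # whose interests this indicator speaks to
    dialogue_prompt: str    # the question that triggers critical dialogue
    supporting_metric: str  # the quantitative signal that informs that dialogue
    notes: list[str] = field(default_factory=list)


def self_assessment_agenda(indicators: list[AIImpactIndicator]) -> str:
    """Turn a set of indicators into an agenda for an honest self-assessment."""
    lines = []
    for n, ind in enumerate(indicators, start=1):
        lines.append(f"{n}. [{ind.thematic_area} / {ind.stakeholder_group}]")
        lines.append(f"   Discuss: {ind.dialogue_prompt}")
        lines.append(f"   Evidence: {ind.supporting_metric}")
    return "\n".join(lines)


if __name__ == "__main__":
    agenda = self_assessment_agenda([
        AIImpactIndicator(
            thematic_area="Customer experience and consumer rights",
            stakeholder_group="Consumers",
            dialogue_prompt="Can affected customers contest an AI-assisted decision?",
            supporting_metric="Share of AI-assisted decisions with a human review path",
        ),
    ])
    print(agenda)
```

The point of a structure like this is the conversation it forces, not the numbers it stores.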

Conversations Over Calculations

It’s become clear that the conversations and the actions that follow are what create impact; metrics are necessary but not sufficient to drive the change we want. While we strive to define robust metrics, we acknowledge the emergent nature of this field. In the coming year, our focus will shift towards building external trust through ongoing dialogue between corporations and their stakeholders in the public domain.

We are inspired by the digital transformation roadmaps companies use to chart a course for integrating technology across a business. They lay out clear steps, timelines, and goals to upgrade systems, processes, and culture, and they support a strategic focus and conversations about values and how technology should be employed to create value for multiple stakeholders. We can adopt similar strategies in developing a framework for Responsible AI governance: set clear goals for Responsible AI in the form of indicators, then collect supporting metrics and gather feedback on how we are progressing against those goals. At the same time, we need to remain flexible, continuously adapting to laws, regulations, and emerging expectations of the ethical and safe use of AI.
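As a thought experiment, a “set goals, collect metrics, review, adapt” loop could look something like the sketch below. The goals, metrics, and targets are hypothetical placeholders, not values prescribed by the framework:

```python
from dataclasses import dataclass


@dataclass
class RoadmapGoal:
    """A hypothetical Responsible AI goal with a supporting metric and target."""
    indicator: str  # what we want to be true, stated as an outcome
    metric: str     # how we observe progress towards it
    target: float   # where we want the metric to be (as a proportion)
    current: float  # the latest observed value (as a proportion)


def review_progress(goals: list[RoadmapGoal]) -> None:
    """Report progress against each goal; a lagging goal prompts a
    conversation, not an automatic verdict."""
    for g in goals:
        status = "on track" if g.current >= g.target else "needs discussion"
        print(f"{g.indicator}: {g.metric} = {g.current:.0%} "
              f"(target {g.target:.0%}) -> {status}")


review_progress([
    RoadmapGoal(
        indicator="Stakeholders trust our use of AI",
        metric="employees who can name who is accountable for each AI system",
        target=0.90,
        current=0.72,
    ),
])
```

Each review cycle would feed back into the goals themselves, so the roadmap evolves alongside laws, regulations, and stakeholder expectations.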

We also recognise that sometimes the impact of AI on stakeholders can’t be predicted in advance. Therefore, we want to track measures of social, environmental, and economic wellbeing, constantly scanning for unknown signals of AI Impact. This vigilance allows us to understand and manage unforeseen consequences, keeping our mission dynamic and responsive.

Heading into 2024, our ambition is to forge ahead with a participatory method for creating and honing AI metrics that not only measure but also guide responsible AI use. We encourage companies to embrace the ethos “All our stakeholders trust our use of AI” and to actively steward AI’s impact.

We are advocating for a corporate culture in which every stakeholder, whether customer, employee, investor, or supplier, deeply trusts a company’s AI practices. This trust is rooted in transparency and accountability, and it grows through a commitment to report regularly on AI’s influence, to adapt as AI evolves, and to be led by the positive changes AI can bring. It’s about being open about how AI is applied, owning the consequences it has for stakeholders, and continually striving for better outcomes. Our vision is for companies to report on these aspects annually, expanding the scope of their AI Impact evaluation as their AI maturity grows.

Our next steps towards not just measuring, but stewarding AI Impact with an “AI Impact Navigator”

We’re on the brink of something big. In the Responsible AI Think Tank, we have the right people in the room to influence how we craft a kinder, smarter future as AI is adopted at scale across Australian industry. It’s an opportunity to make sure this technology journey is remembered for making life better for everyone, not just for a few.

This isn’t just about ticking boxes or reporting against a set of compliance metrics. We have changed the name of the AI Impact framework from “AI Footprint” to “AI Impact Navigator” to reflect the dynamic conversations, decisions, and actions each company will have to take as they chart their course to ensure their AI use is trustworthy through and through.

Acknowledgements: This article was drafted with support from my colleagues at CSIRO and the National AI Centre: Donna Forlin, Judy Slatyer, and Georgina Ibarra. The information here doesn’t come from desktop research, but from lived experience. ChatGPT4 was used to refine original drafts, with my original thoughts and prompts and lots of manual re-working. Image generated by ChatGPT4.

Michael Abbott

Project Manager Quality Systems

6 months ago

A quality framework may be a good starting point to assess whether AI is being deployed in a way that meets community standards or regulatory requirements. It may not have the rigour of impact measurement, but it would be a good starting point in terms of assessing whether organisations are operating safely / ethically / effectively when it comes to employing AI. A program of accreditation could also help provide assurance that companies are using AI responsibly, creating the desired trust with stakeholders. I tend to think the measure of AI impact will come surprisingly easily when AI itself becomes the primary tool in evaluation.

Marty Lonergan

Leadership | Director | Strategy | International Business | Stakeholder Engagement | Negotiations | Talent Development | Executive MBA

9 months ago

Embracing AI offers immense opportunities for both personal and business growth, and understanding its complexities is crucial. Sometimes I need precise outcomes, at other times I need creativity, and I always need Grammarly. How do I measure impact across the broad range of activities where AI has a use case? The idea of an AI Impact Navigator is a good way to measure outcomes and also build more trust.

Tracy Bell

Founder @ Loopsoflearning.com.au | Change Management | PMO Management

9 months ago

Takahide Maruoka

Credly Top Legacy Badge Earner | ISO/IEC FDIS 42001 | ISO/IEC 27001:2022 | NVIDIA | Google | IBM | Cisco Systems | Generative AI

9 months ago

Thank you for the info.
