Rethinking responsibility: why 2024 is the year of Responsible AI
There’s no question which technology captured everyone’s imagination in 2023: generative AI. But I believe this year will belong to Responsible AI. And here’s why.
Generative AI raises the stakes when it comes to responsibility.
While Responsible AI has been a discussion point since at least 2016, we’ve seen a major spike in interest over the last year. While clients are excited by the potential of generative AI to reinvent the enterprise, they’re hyper-aware of the potential risks that its uncontrolled use might create. We see six key risk areas: bias and harm; liability and compliance; unreliable outputs; confidentiality and security; sustainability; and workforce transition.
As business leaders pursue generative AI reinvention strategies, making Responsible AI pervasive and systematic will be key to avoiding the scaling of unintended consequences enterprise-wide. However, most organizations have a long way to go. While 96% of organizations support some level of government regulation around AI[i], only 2% of companies identify as having fully operationalized Responsible AI across their organization[ii]. We are at an inflection point: if companies don’t start turning intentions into action, they risk missing out on generative AI’s enormous value potential.
Government focus on Responsible AI
Governments and regulators have become increasingly active in this space because they understand that without the right guardrails, countries won’t be able to realize the benefits that AI offers. The EU has taken the lead with the EU AI Act, which reached provisional agreement on 8 December. The AI Act – a broad new piece of legislation governing the development, placing on the market and use of AI systems – will apply beyond the EU’s borders and is the first extensive law in the world on AI. The US is also creating standards for managing AI’s risks through the White House Executive Order on Safe, Secure and Trustworthy AI and the NIST Risk Management Framework. And the UK recently brought together leading AI nations, technology companies, researchers and civil society groups at the first Global AI Safety Summit, in a bid to help ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community. Certain industries are even defining their own guidelines for the responsible use of AI, such as the Monetary Authority of Singapore’s VERITAS initiative for financial services.
What does it mean to be responsible?
It’s clear that Responsible AI is at the top of the agenda for business leaders, but what does it mean in practice? For any enterprise, Responsible AI means taking intentional actions to design, deploy and use AI to create value and build trust, while guarding against AI’s potential risks. Responsible AI begins with a set of AI governing principles that each enterprise adopts and then enforces. We established our Responsible AI principles in line with our code of business ethics and company values; they focus on these seven areas:
1. Human by design
2. Fairness
3. Transparency, explainability and accuracy
4. Safety
5. Accountability
6. Compliance, data privacy and cybersecurity
7. Sustainability
At Accenture, we have been working in Responsible AI for many years and embedded commitments to responsible AI into our Code of Business Ethics as far back as 2017. Our own Responsible AI program is grounded in that code and our core values, has CEO sponsorship, and has been scaled to over 700,000 people worldwide.
A responsible future built on collaboration
It's a pivotal time for Responsible AI and we’re delighted to be working at the frontiers of this vital area. We recently committed over $3 billion to AI and to creating solutions – including our Center for Advanced AI and Generative AI Studios – that will help organizations accelerate their AI journey from interest to action to value. Responsible AI is a critical component of that investment.
Our collaboration with academia and multilateral organizations like the WEF, as well as with industry leaders and specialist providers, is also key. This work aims to develop pioneering research and thought leadership that will:
· Help clients and communities understand what Responsible AI means for them and how they should respond.
· Advance the global dialogue around regulation, standards, policies and AI governance.
How to move forward
There is a lot for organizations to do when it comes to Responsible AI, and it can be tricky to know where to start (especially with all the noise around generative AI). Here are five steps I believe leaders need to take to move forward:
1. Establish AI governance and principles: Agree and adopt Responsible AI principles, with clear accountability and governance for the responsible design, deployment and use of AI.
2. Conduct AI risk assessments: Understand the risks of the organization’s AI use cases, applications and systems through qualitative and quantitative assessments (e.g., fairness, explainability, transparency, accuracy, safety, human impact).
3. Enable systematic Responsible AI testing: Perform ongoing testing of AI systems for fairness, explainability, transparency, accuracy and safety, leveraging best-of-breed Responsible AI tools and technologies, and enable mitigations.
4. Ensure ongoing monitoring and compliance: Continuously monitor AI systems and oversee Responsible AI initiatives while executing mitigation and compliance actions.
5. Address workforce impact, sustainability and privacy/security: A Responsible AI compliance program will need to engage cross-functionally to address workforce impact, compliance with laws, sustainability, and privacy/security programs across the enterprise.
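To make step 3 concrete, here is a minimal sketch of what one automated fairness check might look like in practice. The specific metric (demographic parity difference), the 0.1 threshold, and the toy data are illustrative assumptions on my part, not a standard; real Responsible AI testing would cover many metrics across fairness, explainability, accuracy and safety.

```python
# Illustrative sketch of one automated fairness test (step 3): compare the
# positive-outcome rates a model produces for two demographic groups and
# flag the model if the gap exceeds a chosen threshold. The metric choice,
# the 0.1 threshold, and the toy data below are assumptions for illustration.

def positive_rate(outcomes):
    """Share of positive (1) predictions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap between the two groups' positive-prediction rates."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

def passes_fairness_gate(group_a, group_b, threshold=0.1):
    """Gate a model release on the parity gap staying under the threshold."""
    return demographic_parity_difference(group_a, group_b) <= threshold

if __name__ == "__main__":
    # Toy model predictions for two demographic groups (1 = approved).
    group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 approval rate
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3/8 approval rate
    gap = demographic_parity_difference(group_a, group_b)
    print(f"parity gap: {gap:.3f}, passes: {passes_fairness_gate(group_a, group_b)}")
```

The value of wiring even a simple check like this into a deployment pipeline is that fairness stops being a one-off review and becomes the "ongoing testing" the step describes, with mitigation triggered automatically when the gate fails.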
Getting Responsible AI right matters to us all, as businesses and as individuals. Achieving it at the speed and scale required is a challenge, and collaboration and cooperation to develop and implement shared standards will be vital. We all need to get going now. It’s time to enter a new era of Responsible AI: one focused on generating trust, delivering value and turning risk into opportunity.
[i] CXO pulse survey, September 2023
[ii] Accenture Research, 2023