Moving beyond the hype: the path to implementing Responsible AI

AI continues to revolutionise our daily lives and workplaces in unparalleled ways. Ensuring safety in AI is swiftly becoming a global priority, as illustrated by the newly announced EU AI Act. So, what does this mean for your organisation?

Recently, I attended a webcast called "Why You Should Embrace Responsible AI: value, risk and new regulations," where I gained insight into the latest advancements in artificial intelligence (AI). The webcast covered a wealth of enlightening points about recent progress and prompted some reflections on the ethical implementation and integration of AI. In this blog, I will delve into my top five takeaways from the webcast.

1. The benefits of embracing Responsible AI

Personally, I'm a true believer that AI can be a force for good in our world. It's possibly the biggest technological revolution in my lifetime, and potentially in our history. That's why I want to start by reflecting on the benefits of embracing Responsible AI and realising its true value for organisations.

The notion of Responsible AI underscores the need to balance harnessing AI's power for socioeconomic benefit with mitigating its potential negative impacts. Striking this delicate balance requires comprehensive strategies around 'The Three Rs': regulation, reputation and realisation.

Regulation

The AI regulatory landscape is rapidly evolving across multinational regions and market sectors, which can create significant compliance challenges for businesses. Adopting a Responsible AI framework and culture within your organisation can help you prepare for impending obligations and avoid significant regulatory fines (non-compliance with the EU AI Act, for example, can result in fines of up to €35m or 7% of global turnover).

Reputation

In the era of social media, news circulates around the world more swiftly than ever before. During the webcast, examples were discussed illustrating that when AI fails, it can have a dramatic negative impact and tarnish a company's reputation. Common causes include:

  • Algorithmic discrimination
  • AI hallucinations (i.e. output presented as fact that is, in reality, untrue)
  • Data misuse and breaches

Unethical AI practices that lead to societal harm carry serious reputational risks for organisations, which can result in commercial losses and brand value degradation.

Realisation

Responsible AI isn't solely about risk: it's about understanding the true benefits of AI and how it can transform an organisation for the better. The relationship between highly trusted and high-performing AI technologies cannot be overstated. It was interesting to hear that up to 80% of AI projects fail due to factors such as the varying quality or availability of data, bias, or simply a lack of understanding of the problem you're trying to solve in the first place, wasting time and resources.

2. The significance of an AI inventory and overcoming the challenges

In this fast-paced era of AI development, maintaining oversight of your organisation's AI systems is more than a valuable suggestion – it's operationally critical. Here's why:

  • Optimisation of resources: An updated AI inventory flags efficiency leaks, helping you save resources by retiring redundant systems or scaling up underutilised ones.
  • Staying on the right side of the law: Keeping a meticulous record of your AI systems helps ensure regulatory compliance and data security - you can't safeguard what you don’t know you have!
  • Walk the talk on AI ethics: An inventory helps you audit your AI systems for ethical implications – crucial for Responsible AI and your corporate reputation in the public eye.
  • Risk control: Is your AI fair and trustworthy? Analysing your AI inventory can give you insights into potential risks, including bias and transparency.

However, creating a comprehensive AI inventory isn’t without its challenges:

  • Keeping up with the pace: The speed of AI development can turn inventory management into a marathon. As you adopt new tools and retire old ones, the inventory needs continuous monitoring and updating.
  • Standardisation hurdles: Not all AI systems are built the same. Creating a standardised inventory can be tricky given the array of AI forms and functions.
  • Breaking down the silos: In large organisations, independent AI adoption by different divisions can lead to fragmented oversight.
  • Adapting to changing laws: As AI regulations play catch-up, businesses need to update their AI inventory to remain compliant.

Maintaining an AI inventory can seem intimidating initially, but it’s an essential starting point for effective and Responsible AI governance.

3. Embracing change with Generative AI (GenAI)

The webcast laid out an exciting paradigm shift with GenAI – a class of AI that goes beyond making data-based predictions to generating entirely new content. Traditionally, business frameworks have revolved around consistent statistical patterns and strategies. But imagining a model that can create, innovate, and devise approaches that haven't yet occurred to human minds is exciting. That's the seismic change Generative AI proposes, leading many of us to question: are we ready for this paradigm shift and change in our strategic thinking?

A reflective thought from the webcast highlighted that strategic thinking must shift from: 'People executing processes presented with data powered by technology' to 'Technology powered by data executing process managed by people.'

This highlights how our jobs and roles will need to evolve to embrace this technology. It's not simply about replacing jobs, but more about augmenting our current ways of working.

4. The CEO's standpoint on AI

Discussions highlighted how AI is perceived by those at the helm of organisations. Most CEOs (65%) see AI as a net positive, indicating strong business optimism in this space. Yet a significant percentage also remain cautious about the ethical implications and unintended consequences of AI. This cautious optimism highlights the AI dichotomy: constant innovation versus potential societal impact. This mindset among business leaders can help pave the way for responsible and ethically sound AI development, ensuring that the technology evolves in a way that benefits all of society.

5. Best ways to get started when developing Responsible AI

The webcast included a helpful road map sharing key ideas and milestones on how best to build a Responsible AI model. I’ve included some thought-provoking questions below on this journey:

  • Defining AI: This is a challenge for regulators, let alone clients. The OECD definition seems to be leading the conversation in this space, but there is still no exact definition of AI, and it's important to understand and categorise what AI means in your organisation. After all, how can we govern something we struggle to define?
  • Map inventory: I've reflected on this earlier, but it's crucial to map out where AI is in your organisation. How can you govern what you don't know exists?
  • Risk tier: For each use case, what is your organisation's risk appetite? Are you compliant with the EU AI Act's prohibited and high-risk categories?
  • Policies, procedures and governance: Are your employees aligned and aware of your ways of working with AI? Do you have the appropriate governance structure and accountability throughout your firm to leverage AI responsibly?
  • Third party and cyber risk: Like any new technology, AI products pose risks that are not unique to AI but are consistent across an organisation's technology environment. Have you thought about procurement protocols and their impact when it comes to AI products and services? How can cyber play a role in mitigating unauthorised access to, or malicious attacks on, an AI product?
  • AI lifecycle and ongoing monitoring: What is the end-to-end process from procuring, developing, testing, deploying and monitoring your AI systems? With GenAI's probabilistic outputs, and potential for model drift, what oversight and controls are in place to ensure models are continuing to work as intended?
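The risk-tier question above can be sketched as a simple lookup during inventory triage. The tier names below mirror the EU AI Act's broad structure (prohibited, high, limited, minimal), but the use-case assignments are simplified assumptions for illustration only, not legal guidance:

```python
# Illustrative mapping of example use cases to EU AI Act-style risk tiers.
# The tier names follow the Act's broad structure; the assignments are
# simplified assumptions for demonstration, not legal advice.
RISK_TIERS = {
    "social scoring of citizens": "prohibited",
    "cv screening for recruitment": "high",
    "customer-service chatbot": "limited",   # transparency obligations
    "spam filtering": "minimal",
}

def risk_tier(use_case: str) -> str:
    """Return the assumed risk tier for a use case.

    Unknown use cases default to 'unclassified' so they are surfaced
    for human review rather than silently waved through.
    """
    return RISK_TIERS.get(use_case, "unclassified")

print(risk_tier("cv screening for recruitment"))  # high
print(risk_tier("internal code assistant"))       # unclassified
```

The design choice worth noting is the default: failing closed ("unclassified" means a person must look) keeps the risk-tiering step honest as new AI systems appear in the inventory.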

In summary, the questions above serve as a starting point to spark the thoughts and dialogues necessary for establishing a solid Responsible AI framework. The webcast brought together a diverse range of viewpoints and offered valuable insights into the world of Responsible AI. I'll continue to ponder how these developments might impact your industry, and even your day-to-day life. Feel free to share your thoughts and let's keep this conversation going as we navigate AI together.

You can watch the replay by visiting - Embracing Responsible AI: value, risk and regulation | EY UK.

The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organisation or its member firms.