Mind over machine — understanding the challenges of generative AI

We recently had the pleasure of hosting the London chapter of Innovation Realized In Focus. At our London Bridge HQ, we convened senior leaders to explore the immense transformational potential of generative AI tools like OpenAI’s ChatGPT and how businesses can unlock it.

Soon after, our resident AI guru Harvey Lewis published a fantastic piece looking at the steps organisations should take to get the most out of generative AI. Although it “serves as a beacon, guiding us towards a level of productivity and creativity enhancement not seen since the first Industrial Revolution,” Harvey says, “it does present challenges, notably in forthcoming changes to the regulatory landscape around AI and the need for equally rapid corporate governance.”

In this blog, I’d like to consider the challenges of AI regulation and corporate governance in greater detail, but let’s start with another challenge: trust.

Trust

If businesses are to harness generative AI, they need people to trust it. Trust can be an elusive beast, especially with general-purpose AI models such as ChatGPT, where the stakes are far higher than for, say, the algorithms behind Netflix recommendations. But we only need to look to history to see that it can be captured.

As I say in my book, AI by Design, although the automated elevator had been around since the early 1900s, fear meant people preferred to take the stairs. The innovation got its big break during the 1945 elevator operators’ strike in NYC, which cost the city an estimated $100 million. Suppliers and property developers worked hard over the following four decades to build trust in the automated elevator, and, given its ubiquity today, their efforts clearly paid off.

Three ways we can build trust:

  1. Education and skills development: Helping people understand the importance and relevance of AI through training is critical.
  2. Risk assessment of all AI systems: Organisations could keep a record of all AI systems or systems using AI, with every application subject to risk assessment. High-risk items would be flagged, reviewed internally, and replaced if need be (see the sketch after this list).
  3. Independent AI assessment: A company’s datasets and systems could be subject to third-party assessment, which would provide much-needed assurance.
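
To make the second point more concrete, here is a minimal, purely illustrative sketch in Python of what a register of AI systems might look like. The field names and risk tiers are my own assumptions rather than a prescribed standard.

```python
# Illustrative only: a tiny register of AI systems, each tagged with a risk
# rating so that high-risk applications can be flagged for internal review.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str     # what the system is
    owner: str    # team accountable for it
    purpose: str  # what it is used for
    risk: str     # e.g. "minimal", "limited" or "high" (assumed tiers)

register = [
    AISystemRecord("CV screening assistant", "HR", "Shortlist applicants", "high"),
    AISystemRecord("Support chatbot", "Customer Ops", "Answer routine queries", "limited"),
]

# Flag high-risk entries for internal review and possible replacement.
for record in register:
    if record.risk == "high":
        print(f"Review required: {record.name} (owner: {record.owner})")
```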

Corporate governance

Trust goes hand in hand with ethics. Given that, with AI, we are trying to replicate human intelligence, the technology should embody human ethics and values. As part of their corporate governance programmes, I encourage large organisations to establish an AI ethics committee.

In a recent blog, I highlighted a common challenge in AI ethics: ensuring the right people have a seat at the table. It’s important that the members of this group come from a range of backgrounds, including law, ethics, technology, science, engineering, and philosophy. It’s equally important that those individuals represent diverse demographics; otherwise, we run the risk of designing AI in our own image and replicating today’s problems in tomorrow’s systems.

The committee would be tasked with advising on ethical matters relating to AI, as well as developing ethical guidelines. And there’s no need to start from scratch, given the wealth of excellent ethics guidelines already available.

Regulation

The EU is the clear frontrunner in AI regulation with its AI Act. Once enacted, this will be the world’s first comprehensive regulatory attempt to ensure AI systems developed and used in the EU are safe, transparent, traceable, and non-discriminatory. Over two years in the making, it takes a risk-based approach, classifying AI systems according to the risk they pose to users, with each risk category subject to a different degree of regulation.

The UK government published a pro-innovation white paper on AI regulation earlier this year and will host a global summit on AI in the autumn, while the US recently published a framework for developing AI regulation that prioritises goals like security, accountability and innovation. Canada, Japan, South Korea, Singapore, and China are also working on their own AI regulation.

However, these varied approaches to AI regulation have further complicated matters for international organisations already striving to understand the benefits and risks of generative AI. As I say in this blog, in an ideal world we would see a more unified strategy: a collective framework developed as countries work on their own individual approaches.


We covered these topics and others during Innovation Realized In Focus, a series of events that EY curated across multiple cities around the globe to build understanding, encourage collaboration and help realise new opportunities for business value and positive human impact. Get in touch if you’d like to discuss how EY can help your organisation explore the impact of generative AI and unlock its potential.
