Navigating the AI revolution in 2024: Building trust through innovation and responsibility

This year, the World Economic Forum (WEF) has chosen an intriguing theme: ‘Rebuilding Trust’.

As defined on the WEF’s website, it’s all about revitalizing trust in our future, creating harmony within societies, and strengthening the bond among nations. Let's face it, after the disruptive upheavals of 2023, a dose of trust-building feels apt.

Interestingly, AI, a hot-button topic, is set to steal the show at the Forum for the first time, alongside other vital themes like security, jobs, and climate.

Given its overwhelming presence in the headlines, AI’s center-stage act shouldn’t raise many eyebrows. And a focus on ‘Trust’ is an ideal guiding star for business leaders navigating AI in 2024.

The new frontier: Innovation with responsibility

The AI landscape is packed with promise. AI offers us tantalizing opportunities: to unlock human creativity by freeing up our time to focus on innovation, to create brand-new ways of working, and to add a layer of machine-driven data analysis that improves our decision-making.

But it’s not without pitfalls. Operating without responsibility risks eroding confidence, empowering malicious actors, and causing substantial harm. Charting a path that includes both innovation and responsibility is paramount.

In the past year, there have been real efforts to reconcile this globally. The UK hosted the AI Safety Summit, ushering in the Bletchley Declaration, the first official international agreement on a framework for safe AI, endorsed by 28 countries. The EU introduced the AI Act, the US set out its ambitious Blueprint for an AI Bill of Rights, and more nations are hammering out their own AI governance strategies.

As we move forward in 2024, I anticipate continued momentum toward AI regulation, with a spotlight on ethical deployment, transparency and risk management. This should herald a safer, more accountable global environment within which AI can develop.

To help assess the current state of play, EY recently published a revealing report on key global AI regulatory trends. Standout trends include the adoption of AI governance frameworks and compliance systems, risk-based regulation, industry-specific rules, comprehensive approaches that sync with other digital policy priorities, and collaboration between policymakers, the private sector, and civil society.

These trends have significant implications for business leaders and policymakers everywhere. Businesses must stay agile and informed to navigate the evolving legal environment effectively. Policymakers, meanwhile, have the tricky task of creating effective regulation and moving toward regulatory convergence with other key jurisdictions without stifling innovation. Together, they play a crucial role in guiding investments that can help translate regulatory initiatives into tangible growth for the global AI sector.

So, what’s my message for business leaders gathering in Davos looking to chart an AI course in 2024 that includes both responsibility and innovation? Here are three critical steps:

  1. Get to grips with your legal, governance and compliance responsibilities. As a vital first step, and to meet the expectations of investors, regulators and other stakeholders, businesses must understand their responsibilities under the laws and regulations of every jurisdiction in which they operate, and establish policies and procedures designed to meet them. Some key new AI regulations have significant extra-territorial implications. Organizations will also need to understand how new AI codes and regulations interact with existing laws, including sector regulations.
  2. Establish robust AI governance processes at all levels of the business. This should include governance frameworks, responsibilities, an inventory, and controls for the use of AI from board to operations level. The establishment of an AI ethics board can help provide independent guidance to management on ethical considerations in AI development and deployment.
  3. Don’t be shy: engage in dialogue with regulators, governments, NGOs and others. This will help businesses better understand the constantly evolving regulatory landscape, while also providing potentially invaluable information and insights for policymakers as they shape the rules.

2024 will no doubt be a turning point for AI: a tightrope walk between growth and governance for both policymakers and business leaders as they chart a path toward progress, innovation, and responsibility.

As we look toward Davos and beyond, these conversations will take on paramount importance. Nailing it isn’t just an ethical move — it’s an absolute necessity. Let’s see how it all unfolds.

The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.

Nancy Chourasia

Intern at Scry AI

6 months

I couldn't agree more! A common characteristic of the previous industrial revolutions is that governments played an active role in them. They incentivized inventors through patents, protected their commercial interests, defended against foreign competition, and provided funding for research and development (either directly or via their militaries). In the Second Industrial Revolution, the U.S. government dismantled monopolies. In the Third, it protected inventors' commercial interests, lowered tariffs to defend against communism, increased military spending to create new markets, and increased overall spending on research and development (via DARPA). During the Fourth Industrial Revolution, governments of various countries are adopting different approaches: some espouse a laissez-faire attitude, whereas others are actively enacting statutes. The use of data and AI systems is also being approached differently, with some governments emphasizing individual privacy while others allow the use of data for the collective good. Similarly, governments and non-governmental organizations worldwide are approaching ethics and fairness in AI systems differently. More about this topic: https://lnkd.in/gPjFMgy7

Whitt Butler

EY Americas Consulting Vice Chair

7 months

Great article, Julie. Balancing innovation and responsibility will be a focus for many organizations in 2024.

Atiqur Chowdhury, MBA

E-Billing Specialist | Financial Analyst | Billing Coordinator | Healthcare Administrator

7 months

Thank you for sharing. I recently completed a certification on GenAI and how we can use it to analyze data and make processes more efficient. It was amazing to learn, and I look forward to applying those skills in my next role.

Kateryna Stetsiuk

Principal AI Consultant || Crafting custom AI Strategies and Solutions to drive growth, profit and efficiency

8 months

Fully agree; we should never forget about Responsible AI.
