How private companies can build trust and unleash the ‘force multiplier’ of generative AI
If you haven’t experimented with a generative AI chatbot by now, you really should. It’s fascinating to watch in real time as the AI engine learns from and responds to your queries, getting you closer and closer to the information that you need with each follow-up prompt.

But the real reason to start familiarizing yourself with the technology is that it’s already evolved from a cool parlor trick into a game-changing business enabler across every sector. As Deloitte’s Tech Trends 2024 report points out, generative AI is being seen as the “rocket fuel for elevated ambitions,” prompting business leaders to invest in the technology to increase efficiencies and growth opportunities.

Despite capturing the public’s imagination, though, generative AI still isn’t all that well understood by many private company leaders. Recently, I participated in a Dbrief webinar on the topic for Deloitte Private. More than half of those responding to a polling question we asked said they had little to no knowledge of generative AI, and less than 10% described themselves as “very familiar” with it.

So what is generative AI? As Mike Bechtel, Chief Futurist at Deloitte, explained during the Dbrief, generative AI differs from its predecessors in that it doesn’t just recognize patterns in data – it creates new data sets based on existing data and user input. “The major change is that feedback loop,” Mike said. “As you ask questions, the generative AI gives you the output and you can provide feedback and modify the information. It’s constantly learning.”

A key reason so many predictions that generative AI will replace humans miss the mark is that human input and oversight are critical to the way the model works. But it’s also much more than that. While there’s clearly an opportunity to be leaner and meaner, we’re finding that our more pioneering clients want to use the technology not to do today’s work with fewer people, but to free up their people to do the work of tomorrow. They see generative AI as the latest force multiplier for human ambitions. From that perspective, it doesn’t make people matter less – it makes people matter more.

But before we get there, we need to deal with issues around trust. There can be unintended consequences of using generative AI, such as the potential for bias to enter the process. In one example, an AI model used to help evaluate job candidates in video interviews for leadership potential—based on inputs such as vocabulary and body language—leaned toward white baby boomer males because it was trained on 20th century data. Another potential pitfall is copyright infringement. Creators have already filed a number of lawsuits after finding their work was used as an input for an AI-generated facsimile.

Other negative outcomes are deliberate. Bad actors are already using generative AI to create so-called “deepfakes,” such as AI-generated photos that are hard to tell apart from the real thing. The technology is even being used to create voicemails supposedly left by work colleagues seeking sensitive information.

Mike and I concluded the Dbrief by sharing some formative steps private company leaders can take to ensure they get off on the right foot with generative AI solutions:

· Keep the data private. As my colleague Mike pointed out, “Once the model is trained, it’s very hard to untrain it.” That argues for confining the information it uses to proprietary data sets, and then keeping the results in-house. Controlling the inputs allows you to avoid bias and other unintended consequences, while retaining ownership over the outputs means you’re not relinquishing any data rights or sacrificing your privacy.

· Build some guardrails. Technology governance within an organization is always important for managing related risks and optimizing investment. With generative AI, the stakes could be much bigger. Put someone in charge of understanding the technology and its potential. Work to develop the right skills and leadership talent tailored to each function across the organization. Have your board create the governance policies that will help each function take the appropriate steps when looking to leverage AI and when dealing with any incidents that stem from its use.

· Trust but verify. It’s a saying used widely in cybersecurity circles for a reason—it’s best to assume the worst. Educate yourself on the tools available to detect fakes and apply them rigorously. Once the model comes to a decision, work to understand the inputs that went into it. Chatbots have already been found to include fabricated citations in their answers, which can lead to embarrassing outcomes. Trust requires transparency—you’ll want to be able to defend every decision that generative AI enables.

· Monitor and measure it. You won’t know if something’s wrong unless you keep tabs on the model on a regular basis. Everything you feed into the model and every output it generates needs to be logged and tracked. If you are able to do that consistently, you’ll almost certainly improve the process and boost your productivity.


There’s one more thing—don’t let fear of the unknown keep you from realizing big things. As business leaders, we manage risk pretty much all the time. So there’s a tendency to think about everything that could go wrong. Don’t forget about the allure of this wondrous technology and what it’s capable of. Those benefits won’t be lost on your future competitors. If you take the right precautions, there’s no reason generative AI can’t transform your business for the better.