Make sure your AI policy includes this...
In the last few months, I've spoken to a handful of companies that are stalled on moving forward with their AI plans, or even putting those plans together, because they're having to put together an AI policy first. And, from what I've seen, these policies have the potential to prevent innovation and inhibit growth. If your organisation is working on its AI policy, make sure you include scope for experimentation and learning.

(As an aside, when I say AI, I’m mostly referring to generative AI. These policies mostly seem to omit traditional machine learning systems, for some reason.)


In May, we held an AI meetup in Leeds called AI Meet-up North. We had over 50 people turn up mid-week for networking, drinks and learning. We're turning this into the go-to community for artificial intelligence professionals and enthusiasts in Northern England. And this time, we're coming to NatWest in Manchester!

There'll be complimentary food and drinks, some epic speakers sharing case studies and insights into designing and developing AI solutions, and plenty of opportunity for networking.

Hope to see you there.

Find out more


How your policy can backfire

The people working on your AI policy, of course, are acting in what they view as the best interest of the organisation. However, depending on how the policy turns out, it can actually have the opposite effect.

One large European enterprise I spoke with over the summer has a blanket ban on any kind of gen AI use, internally and customer-facing, including behind-closed-doors access to gen AI tools for experimentation. The same is true for a large American enterprise I spoke with recently. The reason? Legal and Risk don't trust the technology and don't want anything bad to happen as a result of its use.

That is completely understandable, but this stern knee-jerk reaction will hinder innovation and may well lead to a loss of competitive advantage if you can't even experiment. Generative AI could add $2.6–4.4 trillion to the economy across just 63 use cases identified by McKinsey. These rigid policies make sure you won't see any of that benefit any time soon.

I’m not saying that everyone needs to deploy generative AI right now, but a blanket ban stifles the possibility of even investigating what it might be used for in future.

Why are AI policies so heavy-handed?

Generative AI has spooked many people across legal, risk, compliance and IT, due to one main thing: hallucinations. Close behind that: data and security.

Nobody wants to risk giving your users wrong information. Understandably. But things change. Technology improves. Mitigating hallucinations is one of those things that's improving. Is it perfect? No. Has it changed since 2022? Absolutely. Is it good enough for some low-risk use cases? Probably. Heavy-handed policies opt you out of tracking these improvements and growing with the technology.

Nobody wants to send their proprietary data to a big tech company for it to be ingested into their models and regurgitated to users on the front end. That’s also understandable. But there are ways of guarding against that which are also improving. There are ways today for you to leverage both large and small language models in a way that protects your data. This wasn’t as much the case 18 months ago. Rigid policies make sure you’ll never know about that.

What your AI policy should look like

Unlike a typical policy document, which states what you can and can't do, I recommend that you make your AI policy a living document. We're far too early, and things are moving far too quickly, to rule anything out indefinitely.

It should be something that highlights the risks you see with AI and the things you’re not going to do today. But it should state these as assumptions, not facts. It should also state what you need to see in order for you to validate or invalidate these assumptions, change your mind and update the policy. What you’ll then have is space and grounds for experimentation and learning.

Rather than blocking innovation, you'll give teams a licence to generate hypotheses, to experiment, to prove you wrong, to innovate and to learn. They don't have to deploy anything to the public at scale, but they do need to be able to get their hands dirty and play with this stuff. That's the only way you're going to develop the maturity and skills you need to leverage this technology when the time is right.

If you then hold quarterly review sessions where teams can bring their learnings and educate you on what's changed, you can make changes, move with the times and keep on top of developments. Then you'll be in a position to leverage the technology and benefit at the right time, rather than waiting for hearsay about where the tech's at, or for your competitors to go first.

Your policy will then be a helpful resource for guiding your organisation's AI strategy, rather than a blocker that's full of heavy-handed rules based on assumptions and paranoia.

You need to be able to innovate and experiment, track technology advancements, and assess the readiness and impact for your business. Your policy should facilitate this, not hinder it.

I know many of you are working on these policies today and I’d happily offer my time for free to cast an eye over them if you’d like. Just ping me a DM.


About Kane Simms

Kane Simms is the front door to the world of AI-powered customer experience, helping business leaders and teams understand why AI technologies are revolutionising the way businesses operate.

He's a Harvard Business Review-published thought-leader and a LinkedIn 'Top Voice' for both Artificial Intelligence and Customer Experience, who helps executives formulate the future of customer experience and business automation strategies.

His consultancy, VUX World, helps businesses formulate business improvement strategies through designing, building and implementing revolutionary products and services built on AI technologies.

  1. Subscribe to VUX WORLD newsletter
  2. Listen to the VUX World podcast on Apple, Spotify or wherever you get your podcasts
  3. Take our free conversational AI maturity assessment
