Insight of the Week: Forget the FUD
I hear so much scaremongering about Gen AI: OpenAI will steal my data! What if my chatbot says the wrong thing? What about data leaks and jailbreaks?
Mostly, it’s from people with vested interests to protect, or those who haven’t really used the tech.
Let’s go through the concerns I hear most often…
FUD: OpenAI will steal your data!
REALITY: You can run OpenAI models on Microsoft Azure, with cast-iron guarantees that none of your data will be shared with OpenAI or used for model training. If you don't believe those guarantees, you have bigger problems to worry about: Office 365, the other workloads you run on Azure, and any other cloud infrastructure you might choose, such as Google or Amazon, all rest on the same kind of contractual promise. All provide absolute guarantees that your data will not be used for training their models. Relax. Your biggest risk is not deploying Gen AI.
FUD: Gen AI is prone to data leaks!
REALITY: There have been worrying stories of Gen AI solutions leaking data. But this is easily avoided by making sure you only give Gen AI sensitive or private data once the user has been authenticated, and only give it data that a specific user is authorized to access. Early horror stories resulted from sloppy implementations that gave the AI access to large swathes of data and let the AI decide what data could be shared with whom. Relax. If you do things the right way, your biggest risk is not deploying Gen AI.
FUD: Gen AI makes stuff up!
REALITY: Many people are concerned about hallucination: the potential for Generative AI models to produce convincing replies that contain false information. This happens when a model ‘tries its best’ to answer a question even though it doesn’t have sufficient information, or understanding, to answer correctly. There are lots of ways to minimize the risk, but you can never completely eliminate it, just as you can never completely eliminate the risk of your human agents saying something they should not. So relax. If you do things the right way, keep an eye on what your bots are saying, and take steps to mitigate problems as they surface, your biggest risk is not deploying Gen AI.
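One common mitigation is grounding: only answer when retrieval finds supporting material, and say "I don't know" otherwise. The sketch below uses a toy keyword-overlap retriever (a real system would use embeddings) and an invented two-passage knowledge base; it is an illustration of the pattern, not a production recipe.

```python
# Sketch of one hallucination mitigation: refuse to answer when retrieval
# finds no supporting context. Knowledge base and scoring are illustrative.

KNOWLEDGE_BASE = [
    "Our support line is open 9am-5pm Monday to Friday.",
    "Refunds are processed within 10 business days.",
]

def retrieve(question: str, min_overlap: int = 2) -> list:
    """Return passages sharing at least min_overlap words with the question."""
    q_words = set(question.lower().replace("?", "").split())
    return [
        p for p in KNOWLEDGE_BASE
        if len(q_words & set(p.lower().split())) >= min_overlap
    ]

def answer(question: str) -> str:
    context = retrieve(question)
    if not context:
        # Better a safe non-answer than a confident guess.
        return "I don't have enough information to answer that."
    # In a real system you'd pass the context to the model here.
    return f"Based on: {context[0]}"
```

Forcing the bot to answer only from retrieved context, and to decline when there is none, removes the situations where a model is most tempted to 'try its best' and invent something.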
FUD: Jailbreaking and misuse!
REALITY: When you deploy a generative AI solution to the public, bad actors may seek to ‘jailbreak’ the system so it will do and say things that it should not. This could be a way to embarrass your business, to reveal sensitive information, or simply to get free access to generative models. All of these risks can be mitigated by checking and filtering the inputs to and outputs from the models. The risk cannot be completely eliminated, just as humans remain susceptible to social engineering, but modern methods, properly applied, can reduce the risk significantly. So relax. If you do things the right way, filter inputs and outputs, and properly guardrail your bots, your biggest risk is not deploying Gen AI.
Bottom line. Gen AI is a disruptive technology that your competitors can and will put to good use, and probably already are. Don’t let the scaremongers hold you back. Your biggest risk is not deploying Gen AI.
Kerry Robinson is an Oxford physicist with a Master's in Artificial Intelligence. Kerry is a technologist, scientist, and lover of data with over 20 years of experience in conversational AI. He combines business, customer experience, and technical expertise to deliver IVR, voice, and chatbot strategy and keep Waterfield Tech buzzing.