Is It Risky to Share Valuable Information With ChatGPT?
The rise of ChatGPT can hardly be overstated; it has taken the world by storm. Almost every business and individual now uses this AI technology. However, a new question has arisen around how ChatGPT should be used. "What can ChatGPT do?" is no longer the go-to question; "What should I share with it?" is the one you need to ask yourself. What does this mean exactly? Let’s dive deeper into it…
This question exists because ChatGPT's seductive capabilities seem to have created a blind spot around hazards we normally take precautions against, such as data breaches and the ways our personal information is used online. OpenAI has even announced a new privacy feature that lets users disable chat history, preventing conversations from being used to improve and refine the model.
Nader Henein, a privacy research VP at Gartner, stated that this is a step in the right direction, but the fundamental issue with privacy and AI is that you can't do much in terms of retroactive governance after the model is built. Henein further elaborated that ChatGPT is like an affable stranger sitting behind you on the bus, recording you with a camera phone. They may seem nice, but would you have the same conversation with them just because they look nice? They may be well-intentioned, but if sharing that information hurts you, they won't think twice about it. OpenAI’s CEO Sam Altman has also acknowledged the risks of relying on ChatGPT. He stated that it's risky to rely on ChatGPT for anything too important right now, as the company still has lots of work to do on robustness and truthfulness. In short, treat anything you type into ChatGPT as something you would be comfortable publishing online.
A common analogy for this chatbot is that it is a ‘black box’ of data. Mark McCreary, co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, stated that users don’t know how their conversations with the chatbot will be used. That raises particularly high concerns for companies: as more and more employees casually adopt these tools to help with work emails or meeting notes, the opportunity for company trade secrets to get dropped into these different AIs is only going to increase. Steve Mills, chief AI ethics officer at Boston Consulting Group, supported McCreary’s statement, adding that these AI chatbot tools pose the biggest privacy concern for companies because they create the opportunity for inadvertent disclosure of sensitive information, which can lead to lost control of data through hacking and other cyber attacks.
So, what can businesses do to still utilise ChatGPT while staying clear of its security concerns? The rule is simple: do not feed sensitive business information into ChatGPT. Your conversations are stored, and attackers could infiltrate that history. Be equally mindful of the briefs you send to your teams, as they might be seen by others, leaving you and your team vulnerable.
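One practical way to follow that rule is to scrub obviously sensitive strings from a prompt before it ever leaves the company network. The sketch below is a minimal, hypothetical illustration using hand-rolled regular expressions; the pattern names and placeholders are my own assumptions, and a real deployment would rely on dedicated PII- and secret-detection tooling rather than regexes like these.

```python
import re

# Hypothetical patterns for illustration only; real deployments
# should use dedicated PII/secret-detection tooling.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholder tags
    before the prompt is sent to any external chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a reply to jane.doe@acme.com; our key is sk-abcdefghijklmnopqrstuv"
print(redact(prompt))
```

A gateway like this does not make ChatGPT safe for trade secrets; it only reduces accidental leakage of the most recognisable identifiers, which is why the advice above still centres on keeping sensitive material out of prompts entirely.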