ChatGPT is as Safe as the Driver Behind the Wheel of a Car
Image Credit - H Heyerlein | https://unsplash.com/@heyerlein

My company has been deploying ChatGPT setups for organizations for the past few months, pointing the artificial intelligence system inward so organizations can query their own internal files and data. Every conversation starts with the same concern: "I heard somebody used ChatGPT and inadvertently shared company secrets with the whole world. Is ChatGPT safe?"

The simple response is "ChatGPT can be safe, if you know how to use it properly..."

ChatGPT Public versus Private Configuration

When most people talk about using ChatGPT, they mean a completely public instance of the tool, where both the data and the ChatGPT system operate 100% on publicly accessible systems. The data in the public systems is content you can readily grab anywhere off the Internet: Wikipedia, websites and blogs, public news sites, and so on. When you interact with a fully public instance of ChatGPT, your questions and queries go out to a public system (ChatGPT), and the responses you get back draw on publicly available data sources. The public instances most people are fiddling with, and raising privacy and security complaints about, are indeed not safe places for confidential and sensitive information.

However, in a Private configuration of ChatGPT (like the AzureAI/ChatGPT setup I've blogged about) pointing at your INTERNAL data files (looking inward at your company data), the data REMAINS in your environment. The data files themselves never leave your organization. BUT since the ChatGPT system is still an external artificial intelligence system, the questions you ask, and the formulation of the answer from your data, do have an external (outside your environment) component to them.
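For a sense of what that external component looks like in practice, here is a minimal sketch of querying a private Azure OpenAI deployment with the openai Python package (v1+). The endpoint, key, and deployment name are placeholders, not real values, and wiring the deployment to your internal documents is a separate configuration step on the Azure side:

```python
# Minimal sketch: send a prompt to a private Azure OpenAI deployment.
# Endpoint, key, and deployment name below are placeholders; substitute
# the values from your own Azure resource.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",  # your private endpoint
    api_key="YOUR-API-KEY",                                     # from your Azure resource
    api_version="2024-02-01",
)

# The prompt travels to your Azure OpenAI deployment to be answered;
# the source documents themselves stay inside your environment.
response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the model deployment you created
    messages=[
        {"role": "user", "content": "Summarize our remote work policy."},
    ],
)
print(response.choices[0].message.content)
```

Notice that even in this private setup, the prompt itself still crosses the wire to the AI service, which is exactly why what you put IN the prompt matters so much.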

Granted, Microsoft has a clear privacy and data protection statement noting that its implementation of AzureAI/ChatGPT does not retain the questions you ask (i.e., your prompts). But judicial interpretation of privacy laws has yet to address the fringe question of what constitutes "leaving your organization," so my advice at this time is that there's plenty you can do with Microsoft's implementation of ChatGPT (with a bit of the hands-on experience I'm sharing here) to stay out of the grey area for now.

[Image: Microsoft's data privacy statement for Azure OpenAI, from https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy]

Example of What Leaves Your Organization in a Private Setup

If you ask a ChatGPT instance (even one pointing inward at your company data) a question like "Our company secret formula is x, y, and z, can you suggest a better formula?", the fact that you just asked an external system (ChatGPT) a question that INCLUDES your secret formula basically puts your secret formula OUT "there".

Questions of this type, which inject private or confidential information, should NOT be asked of ChatGPT.

It's just like asking Google to "Find instances of my social security number xxx-yy-zzzz out on the internet." Uh, you just threw your social security number at a search engine to find something you wanted to keep private and secure.

What's a Better / Safer Way to Ask a Question?

As in the Google search example, if you are curious whether your social security number is out on the Internet, you could type xxx yy zzzz (three sets of numbers, no hyphens), which will query the Internet and find "similar" instances of those three sets of numbers. BUT this is still not 100% secure, as social security numbers have a very specific format; among the millions of searches that hit Google, though, a plain series of numbers "might" get buried and not be "as risky." This is the tricky part with both Google and ChatGPT: you most certainly don't want to type in anything specific to private or secured information. You could type in something similar, or better yet, query in a broader and more generic manner, or not at all.
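To make this concrete, here is a minimal, hypothetical pre-flight check you could run before any prompt leaves your machine. The pattern list is illustrative only; a real deployment would need far broader coverage:

```python
import re

# Illustrative patterns for obviously sensitive strings. This list is a
# starting point for the idea, not an exhaustive safeguard.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN format xxx-yy-zzzz
    re.compile(r"\b\d{3}\s\d{2}\s\d{4}\b"),     # SSN with spaces instead of hyphens
    re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),  # 16-digit card-number shapes
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any known sensitive pattern."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

# This prompt would be blocked before ever reaching ChatGPT or Google:
assert not is_safe_to_send("Find instances of 123-45-6789 on the internet")
# A broader, generic version of the same concern passes:
assert is_safe_to_send("How do I check if my identity has been stolen?")
```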

How Do You Get ChatGPT to Answer Questions on Private or Secure Info?

The short answer is that SOME information should NEVER be asked of ChatGPT, just like if you're on a Top Secret government project, you don't ask questions that in any way, shape, or form contain ANY of the Top Secret info. It's like asking a really smart friend specific questions about the stealth technology you're working on in that Top Secret project: unless that person has the proper security clearance, you just don't ask questions about anything related to the topic.

There are ways to ask questions without compromising privacy and security. Instead of asking "Mary Smith is having a mental breakdown, is Mary covered under our medical insurance for mental health coverage?", you can simply ask "If someone is having a mental breakdown, are they covered under our medical insurance for mental health coverage?". Same question, but it doesn't breach any privacy or confidentiality protections.

Making ChatGPT Useful with Semi-Private and Semi-Secure Information

You can make ChatGPT useful on other topics by "anonymizing" your data: replacing names with generic names, passwords with generic passwords, IP addresses with generally available IP addresses (192.168.x.x), and so on. You get value out of the rest of the data while protecting any private or sensitive information from being shared.

Files like Request for Proposal (RFP) responses, company employee handbooks, general regulatory filings, etc. are for the most part just "template files" with a handful of sensitive information bits in them. Replace the sensitive information with generic stand-ins, and the documents are "sanitized" for ChatGPT to query and answer from, as sketched below.
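Here is a minimal sketch of that sanitizing step in Python, assuming a simple find-and-replace approach. The name mapping and replacement values are hypothetical stand-ins for whatever your own directory and network inventory contain:

```python
import re

# Hypothetical mapping of sensitive values to generic stand-ins. In
# practice you would build this from your own employee directory,
# credential vault, and network inventory.
NAME_REPLACEMENTS = {
    "Mary Smith": "Employee A",
    "John Doe": "Employee B",
}

def sanitize(text: str) -> str:
    """Replace names, IP addresses, and password fields with generic values."""
    for real, generic in NAME_REPLACEMENTS.items():
        text = text.replace(real, generic)
    # Swap any IPv4 address for a generic private-range address.
    text = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "192.168.0.1", text)
    # Blank out anything that looks like a password assignment.
    text = re.sub(r"(?i)(password\s*[:=]\s*)\S+", r"\1REDACTED", text)
    return text

doc = "Mary Smith's workstation is 10.4.22.17, password: Hunter2!"
print(sanitize(doc))
# -> Employee A's workstation is 192.168.0.1, password: REDACTED
```

Run your files through something like this before loading them into the environment, and keep the mapping itself out of the documents you hand to ChatGPT.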

Can We Create a Completely "Inhouse" Version of ChatGPT?

You "could" create a full instance of your own ChatGPT environment where both the data AND the ChatGPT engine is 100% internal to your organization, however the ChatGPT system takes dozens (hundreds and thousands) of servers in a high performance computing cluster environment to make it work.

It's a massive undertaking to build and run an environment like ChatGPT, and for now, the best use of the technology isn't to inject it with your biggest secrets and have it do something for you. The best use for now is to load it up with more mundane data and have ChatGPT do the simple stuff faster.

Wrap-up

There are many ways that ChatGPT as a public service CAN get better at isolating and protecting private, secure, and sensitive information, and Microsoft's implementation of AzureAI/ChatGPT does a great job of keeping your data and prompts isolated. But to be extra cautious for now, use it for what it does best, with anonymized data, and start building your experience interacting with artificial intelligence systems.

My advice to organizations: don't say "ChatGPT and artificial intelligence are bad, do not use them," as that's like telling someone "Fire is dangerous, don't use it." Organizations are better off setting up a safe, isolated instance of ChatGPT, pointing it at a handful of sanitized documents, and LEARNING directly what ChatGPT does and doesn't do, rather than basing business decisions on unqualified fear and uncertainty. As I shared in a previous article on ChatGPT, we're setting up environments in under 3-5 days of effort, with organizations experiencing ChatGPT real and live to get smart about the technology. This Artificial Intelligence stuff is in our future; First Movers that harness its capabilities will get an edge over orgs that put their heads in the sand. Firsthand knowledge and experience is king!

Mark Egan

StrataFusion Partner and CIO for Hire

I agree we need to leverage this technology.

I was listening to News Radio while driving a few days ago. They were talking about ChatGPT. The recording is at https://omny.fm/shows/kcbsam-on-demand/jobs-are-now-looking-for-chatgpt-experience-what-d

Pankaj Srivastava

General Manager, Azure Partner Sales & Strategy @ Microsoft | Sales Leadership and Business Development | Partner Strategy, Programs, Incentives and GTM | Advisor and Executive Coach

These are great insights. Thanks for sharing, Rand Morimoto.

Toby Richards

Customer Success & Channel Exec | Strategy and Global Operations | Transformation Leader

I enjoyed reading this Rand. You did a great job highlighting the risks and opportunity, and precisely centered the conversation on responsible use, accountability and leadership by each one of us.

That's a strawman. AI, especially LLMs, are not cars, and there is no driver anyway. But if we continue with your strawman, your argument would still fail for the same reason Tesla Autopilot failed.
