The Sheer Power and Potential of ChatGPT with a Large Dose of Security Reality
Grace Francisco
Technology Executive | ex-Atlassian, Roblox, MongoDB, Cisco | Keynote Speaker | IWF Fellows Class of 2022 - 2023
Most people have only recently started hearing about ChatGPT, the AI phenomenon that has captured the interest of millions of consumers globally since its release in November 2022. In the early days, my personal and professional feeds filled with developer influencers in my network testing, hacking, and adopting ChatGPT, and with amusing results of asking ChatGPT to write articles and college essays, to code, ace exams, create new cooking recipes, and to respond appropriately to basic prompts like "Your wife is always right."
Just a few months after launch, we are long and far from those early, amusing days: ChatGPT is now embroiled in potential legal action from Italy, the EU, and other organizations over privacy violations. And in last week's news, ChatGPT made headlines again when three Samsung employees accidentally shared top-secret, proprietary information about the company's inventions while using the AI to help them solve technical issues.
As with all AI-related inventions, many have long raised concerns about the inherent biases in the training data and the ethical issues expected from the many potential uses of ChatGPT and other AI solutions.
But little has been truly covered about the fundamental issues stemming from misunderstandings of, and the haphazard use of, ChatGPT.
Let’s start with that first-time experience of testing ChatGPT. As in the Samsung case, most users do not understand that all data they input is not just used to train the ChatGPT model but becomes the property of OpenAI, the startup behind ChatGPT. Italy and the EU are deep in investigations to understand the potential GDPR violations in this kind of solution. Italy became the first Western country to ban the use of ChatGPT, citing a data breach that OpenAI acknowledged. GDPR violations can result in millions of dollars in penalties for OpenAI; however, with billions of dollars invested in the solution by the likes of both Elon Musk and Microsoft, penalties alone will hardly tear down this startup. OpenAI is at the beginning of its journey and will likely not be stoppable.
Are we at the beginning of the storyline of the Terminator movies? Is this going to turn into runaway AI? Let’s hope not - but we can’t just hope. As technologists and consumers, we all have a responsibility to understand the wide-ranging implications of not just ChatGPT but all AI solutions.
Consumers need to know that they should not input any kind of personally identifiable information (PII) into solutions like ChatGPT. Driving that level of education will take time and effort. As technologists, we should be bringing to bear solutions that prevent this proactively today. At Pangea we demonstrate how solutions like our Redact API and Threat Intelligence APIs such as Domain Intel can proactively prevent highly sensitive information from being posted to these solutions, and potentially malicious results from being returned by them. See the example below, which uses the Social Security number of LifeLock's CEO - a number he famously published in the company's advertising:
Above we have a sample app that takes simple user input, identifies sensitive PII through the Pangea Redact API, and returns redacted content before it reaches ChatGPT - which means the PII never gets collected and stored by OpenAI.
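The redact-before-send pattern the sample app follows can be sketched in a few lines of Python. To keep the sketch self-contained and runnable, the hosted Pangea Redact call is simulated here with a single regex rule for US Social Security numbers; in a real integration you would POST the text to the Redact service with your API token instead, and the function and variable names below are illustrative, not part of any actual SDK.

```python
import re

# Stand-in for a hosted redaction service such as Pangea Redact.
# One regex rule for US Social Security numbers keeps the sketch runnable;
# the real service applies many more rules server-side.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_pii(text: str) -> str:
    """Replace anything matching the SSN pattern before the text leaves our app."""
    return SSN_PATTERN.sub("<REDACTED:US_SSN>", text)

def send_to_chat_model(prompt: str) -> str:
    """Gate every outbound prompt through redaction, so raw PII is never
    collected or stored by the model provider."""
    safe_prompt = redact_pii(prompt)
    # response = chat_client.complete(safe_prompt)  # the actual model call goes here
    return safe_prompt

if __name__ == "__main__":
    user_input = "My SSN is 457-55-5462, can you check my credit?"
    print(send_to_chat_model(user_input))
```

The key design point is that redaction happens on our side of the wire: by the time the prompt reaches the model provider, the sensitive value is already gone.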
Beyond this example, consider the result sets returned by ChatGPT and other AI solutions - Microsoft’s Bing Chat or Google’s Bard, for example. Users may get an assortment of links from across the internet: some valid, and some potentially harmful ones that can lead to phishing, computer viruses, and other security threats.
You might ask: how is that different from using search engines today? The conversational nature of the output is a key difference - it can mislead consumers about the validity and trustworthiness of the results. Again, education is key, but proactive solutions here can also help blunt what has, over easily the last decade, escalated into organized cybercrime. Adding a layer of threat intelligence that eliminates these threats before results are delivered to end users is something our technology community must take responsibility for, to protect the broad masses of consumers. Again, as a proof of concept, Pangea demonstrates with our APIs how easy it can be to add this protective layer. NOTE: do not attempt to visit the URL listed in the example - it is an actual known malicious URL.
In the example above, the URL entered (which could just as easily have come back as output) was identified as malicious through the Pangea Domain Intel API and redacted as a result.
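The same filtering idea can be sketched on the output side. Here a small hard-coded blocklist stands in for a domain-reputation lookup such as the Pangea Domain Intel API, so the sketch stays runnable without network access or credentials; the blocklisted domain and all function names are invented for illustration only.

```python
import re

# Stand-in for a domain-reputation service such as Pangea Domain Intel.
# The domain below is invented for illustration; a real integration would
# query the hosted reputation endpoint instead of this set.
KNOWN_MALICIOUS = {"definitely-not-a-scam.example"}

URL_PATTERN = re.compile(r"https?://([^/\s]+)\S*")

def domain_is_malicious(domain: str) -> bool:
    """Stand-in for a reputation verdict (malicious vs. benign)."""
    return domain.lower() in KNOWN_MALICIOUS

def scrub_model_output(text: str) -> str:
    """Redact any URL whose domain gets a malicious verdict, before the
    model's answer is shown to the end user."""
    def replace(match: re.Match) -> str:
        if domain_is_malicious(match.group(1)):
            return "<REDACTED:MALICIOUS_URL>"
        return match.group(0)
    return URL_PATTERN.sub(replace, text)

if __name__ == "__main__":
    answer = ("Reset your password at http://definitely-not-a-scam.example/login "
              "or see https://example.com/help for details.")
    print(scrub_model_output(answer))
```

Benign links pass through untouched, so the protective layer is invisible until a flagged domain appears in the model's response.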
This is really just the beginning of what we should all expect to be growing drama, threats, and issues related to these technology advancements. They come with great power and benefits, but we also need to take on the extra responsibility of preventing the widespread threats these solutions can create.
Cybersecurity Leader (CISO/TechOps) | Board Member | Investor/Advisor | Author/Instructor | +18y (Sec)DevOps
Pangea could play an interesting role as a safeguard wrapper around ChatGPT, if it can consistently identify and redact PII and malicious URLs. Great callout re: global legal concerns about ChatGPT absorbing intellectual property rights plus eliminating privacy rights on data/code fed to it. Some of the pushback is not just about the speed of AI advancement, but about the slowness of regulation catching up to current needs.
Technical writer | Pangea community advocate | Cybersecurity Advocate
It's an exciting time in AI and a great opportunity to learn and improve our way of doing things. The threat is: can we regulate people's intentions? We have to be "Arti-conscious" (just my own concept): being conscious to use AI for good. There will be many laws, but the real solution is how we manage our intentions as users. Thanks for sharing an insightful post.