ChatGPT and AI Use
Time always seems to be accelerating in the worlds of security and technology, and the recent AI chatbot developments prove it: ChatGPT burst into mainstream news, and suddenly everyone was using it while hackers drooled over the opportunities it presented. The term "Artificial Intelligence" is a bit misleading right now, since it gives the impression that some sort of sentient computer is at work. ChatGPT, like other similar products, is really a machine learning engine built around a large language model (LLM) with a large data lake to draw upon.
As organizations such as JPMorgan and Walmart have demonstrated lately, security teams everywhere need to be prepared for AI tools like ChatGPT and the risks they can create for your data and your organization. The three largest areas of risk I personally see in this tech are inaccurate or inappropriate output, skewed output, and data ownership. More than a few news articles have described how one AI chat tool or another was politely answering a user's requests when things suddenly went south and the AI began providing inappropriate, inaccurate, or downright disturbing responses. This tech is not independently "intelligent" as its moniker would imply; it can only use what it has at hand to respond to requests.
Because this tech relies heavily on its data pool as a source for responses, that pool must be continually augmented with new information so the machine learning portion can "learn". As a result, the responses this tech provides can change over time, sometimes becoming more accurate, but sometimes becoming less accurate or drifting way off topic. This drift, or skewing, of output makes the tech a bit of a worry if used for a core business purpose.
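One way to keep an eye on that drift is to treat the AI tool like any other dependency and regression-test it: keep a small "golden set" of prompts whose correct answers you know, re-run them on a schedule, and alert when accuracy slips. Below is a minimal sketch in Python; query_model() is a hypothetical stand-in for whatever API your vendor actually exposes, and the prompts, phrases, and threshold are illustrative only.

```python
# Minimal sketch of golden-set drift monitoring for an AI chat tool.
# query_model() is a hypothetical placeholder, not a real vendor API.

from dataclasses import dataclass

@dataclass
class GoldenCase:
    prompt: str                   # question with a stable, known-good answer
    expected_phrases: list[str]   # facts the response should still contain

GOLDEN_SET = [
    GoldenCase("What port does HTTPS use by default?", ["443"]),
    GoldenCase("Expand the acronym PII.", ["personally identifiable information"]),
]

def query_model(prompt: str) -> str:
    """Hypothetical placeholder; replace with the real call to your AI tool."""
    return ""  # stubbed so the sketch runs end to end

def drift_score(cases: list[GoldenCase]) -> float:
    """Fraction of golden cases whose answer still contains all expected facts."""
    passed = 0
    for case in cases:
        answer = query_model(case.prompt).lower()
        if all(phrase.lower() in answer for phrase in case.expected_phrases):
            passed += 1
    return passed / len(cases)

# Run this on a schedule and compare against your recorded baseline.
score = drift_score(GOLDEN_SET)
if score < 0.9:  # arbitrary threshold; tune to your own baseline
    print(f"WARNING: golden-set accuracy dropped to {score:.0%}")
```

A falling score does not tell you why the output changed, only that it has, which is usually enough to trigger a human review before the tool keeps serving a core business purpose.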
Now, since this tech must grow its data collection in order to improve itself, you have to wonder: does it actually OWN the data it is collecting and using for its own (well, its builders') purposes? Probably not, and this has raised serious concerns about privacy, copyright, and intellectual property. For example, if you use a tool like this in a service or product you intend to profit from, and someone can demonstrate that your product or service used their intellectual property or personal data as a result... well, I doubt very much the AI tool will be the one that ends up in court.
My advice: follow evolving standards and best practices when working with AI tools like these, and apply the core principles of security and privacy. Assess the tool from a security and privacy risk perspective just as you would any other third-party tool. Do not send PII or confidential or proprietary data to such a tool (any more than you would on, say, social media). Evaluate the tool and your risks against bodies of knowledge such as NIST.AI.100-1 and ISO standards such as ISO/IEC 23894 (among others). Oh, and you will most likely need to build an AI Use Policy and add it to your ISMS at work - I did.
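On the "do not send PII" point, one practical control is to scrub prompts before they ever leave your network, for example at a gateway you control. Here is a minimal, illustrative Python sketch; the regex patterns are deliberately simple examples, nowhere near an exhaustive PII taxonomy, so treat this as a starting point rather than a complete filter.

```python
# Minimal sketch of a pre-send PII scrubber. The patterns are simple
# illustrations (email, US SSN, rough payment-card match), not a
# production-grade PII detector.

import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),  # rough card match
]

def scrub(prompt: str) -> str:
    """Replace common PII patterns before the prompt leaves your network."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```

A scrubber like this also gives you a natural audit point: log what was redacted (not the raw values) and you have evidence for your AI Use Policy that the control is actually working.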
This tech is powerful and can do wondrous things, and, like all new tech, being on the bleeding edge of it can be a real ride - but it also brings a set of risks you need to prepare for.