Security Considerations of AI

With Artificial Intelligence (AI) gaining more and more traction within businesses, here at PrivSec we've started seeing some of our clients grappling with the concerns it presents and reaching out to us for advice. While each use case for AI is different, some general advice applies, and as with all new technology it can be summarised as "be cautious". A specific area of interest is the security implications of AI-generated code, and how that may affect the security posture of your organisation.

Disclosure of sensitive data:

One of the early trends we saw once tools like ChatGPT became publicly available was data breaches: employees were so eager to use the tools that they didn't consider that the information they were feeding in was being processed by a third-party entity. There was a quick crackdown as organisations started to implement guidelines around usage, but for many the damage was done. AI by its nature works best when it has a complete view of the situation, and the temptation is there for developers to treat it like another employee rather than an external entity.
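To make that concrete, here is a minimal sketch of one mitigation: stripping obviously sensitive values out of a prompt before it is sent anywhere. The patterns and placeholder names below are illustrative assumptions, not a complete inventory; a real deployment would need a policy-driven list of what counts as sensitive for your organisation.

```python
import re

# Hypothetical patterns for illustration only; a real deployment needs a far
# more complete inventory of what counts as sensitive in your organisation.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),        # email addresses
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<AWS_ACCESS_KEY>"),  # AWS access key IDs
    (re.compile(r"\b\d{13,19}\b"), "<CARD_NUMBER?>"),           # possible card numbers
]

def redact(prompt: str) -> str:
    """Replace likely-sensitive values before the prompt leaves the device."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Customer [email protected] reported a failed charge on 4111111111111111."
    print(redact(raw))
    # Customer <EMAIL> reported a failed charge on <CARD_NUMBER?>.
```

Redaction like this is a safety net, not a substitute for clear guidelines about what may be shared with external tools in the first place.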

Hallucinations and authority bias:

AI output tends to sound very confident, and humans have a tendency to extend an innate level of trust to those who appear to belong somewhere or to be an authority; red teams often abuse this with the classic techniques of tailgating and the high-vis vest. With AI, though, people expect authoritative answers, and most of the time it provides them with all the confidence of a seasoned expert in the field. Some work has been done by AI providers to tone down the confidence of answers, but the public perception of these tools is sticky and hard to change, especially with the flood of social media posts about how AI is disrupting industries and changing the world.

Often the confidence of AI is better likened to that of your friendly neighbourhood technology enthusiast after a few too many at the pub. Yes, the answers sound good and there is a technical underpinning, but they generally won't consider all the aspects you'll be concerned about in a business environment. Specifically, the answers might be functionally sound but fail to consider security or privacy aspects. Using the answers verbatim is no better than copying code directly from public sources (like Stack Overflow) without verification.
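As an illustration of "functionally sound but insecure", consider the following hypothetical exchange. Both functions pass a happy-path test; the assistant-style version builds its SQL by string interpolation and is vulnerable to injection, while the reviewed version uses a parameterised query. The schema and function names are invented for this example.

```python
import sqlite3

# The kind of answer that "sounds good": functionally correct for friendly
# input, but vulnerable to SQL injection because the username is
# interpolated straight into the query.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# The reviewed version: a parameterised query lets the driver handle
# escaping, so "' OR '1'='1" is treated as data, not as SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    payload = "' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # returns every user in the table
    print(find_user_safe(conn, payload))    # returns [] as it should
```

This is exactly the class of flaw that verbatim copy-and-paste tends to ship: nothing about the unsafe version looks broken until someone supplies hostile input.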

Lack of AI awareness:

There's a slightly old joke that AI is just a fancy set of if statements, and while that isn't quite so true with recent advancements, it is true that the concept of AI is poorly defined for the majority of people who consume the technology. Will the average person know whether the suggestion in front of them was generated by a Large Language Model (LLM, a type of AI) or simply drawn from a pre-defined set of known related terms? In practice it may not seem like a big difference, but the sudden inclusion of user-provided information in traditionally local tools is creating concerns which previously didn't exist. Notably, products such as GitHub Copilot and Microsoft Copilot are integrating AI into workflows that were often perceived to be "offline", changing the risk profile of those tools in ways users may not have considered.

As this trend continues, workflows will only become more integrated with third parties, and a lack of awareness of what AI is and how it works will result in gaps in security postures as scenarios go unconsidered.

Considerations:

Each AI implementation works differently and consideration should be given to each of the following factors (along with many others!) when assessing how AI should be used:

  • WHERE is the data processed? If everything is processed on the local device then the chance of accidental disclosure via AI prompts drops dramatically (the sketch after this list shows how the same client code can target either option).

  • WHAT type of data is being fed to AI? Maybe your developers just want help with a gnarly CSS problem where all the pieces are already publicly visible on your main website; the risks there are very different from those of feeding it your raw financial data to look for areas that can be optimised.

  • WHO controls the AI? Maybe you're running the AI yourself on an internal host, or maybe the provider is a company you already have an NDA with, so the new services could be wrapped under existing contracts.

  • WHY are your employees looking to use AI? Under the right circumstances AI provides access to an extremely powerful resource that can streamline work, but, depending on the desired outcome, it can also create unexpected tasks and increase technical debt in ways that are difficult to unwind.
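On the WHERE point above, here is a minimal sketch of how the same client code can target either a third-party cloud API or a model hosted entirely on your own infrastructure. It assumes the openai Python package (v1+) and an OpenAI-compatible local server such as the one Ollama exposes by default; the endpoint and model name are assumptions you would swap for your own setup.

```python
from openai import OpenAI

# Option A: third-party cloud processing; the prompt leaves your network.
# cloud = OpenAI(api_key="sk-...")  # data is processed by the provider

# Option B: local processing; prompts never leave the host.
local = OpenAI(
    base_url="http://localhost:11434/v1",  # assumed local OpenAI-compatible server
    api_key="unused",                      # local servers typically ignore the key
)

response = local.chat.completions.create(
    model="llama3",  # hypothetical locally hosted model
    messages=[{"role": "user", "content": "Review this CSS snippet for issues."}],
)
print(response.choices[0].message.content)
```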

This whole area is daunting, but exciting. At PrivSec we're available to help you not only come up with a plan for managing the security and privacy risks AI could present to your organisation, but also to act as a trusted technical advisor, reviewing the efficacy and security of AI-generated code you may want to use. If this sounds like something you could benefit from, reach out to [email protected] for a discussion about how we can help.
