The intersection of GenAI and cybersecurity: assessing the risks for companies in the new era
Richard Watson
Global & Asia-Pacific Cybersecurity Consulting Leader at EY | @WatsonCyber
Since the launch of ChatGPT 18 months ago, company leaders have rushed to adopt generative AI (GenAI) to seize new opportunities and prevent rivals from opening a technology gap. Businesses have built their own tools and, in some cases, encouraged employees to experiment with public-facing applications like Claude and Gemini.
This process, and the speed at which it plays out, brings an entirely new set of threats that executives ignore at their peril. At the EY organization, we’re helping cybersecurity teams identify vulnerabilities and implement governance policies to help manage them so that EY clients can benefit from the vast potential of GenAI without courting disaster.
Empowered adversaries
The first thing leaders must be aware of is that hackers have quickly adopted GenAI to increase the threat they pose in several ways.
First, the technology allows for a step change in the sophistication of attacks. Cybercriminals are using GenAI to mimic people, data and behavioral patterns with unprecedented fidelity, then using these outputs to deceive and steal.
Think of a phishing attack or a deepfake, for example. To impersonate a person, a GenAI tool could not only send an email but also generate a voice that holds a human-like conversation, and create fake bank statements or social media profiles showing believable spending or location patterns. This makes traditional methods of confirming an identity, such as phone or email contact, much easier to bypass.
The second change is quantitative: with GenAI, cybercriminals can massively increase the number and complexity of attacks. AI tools can be used to create hundreds of thousands of new malware variants daily and launch brute-force attacks from multiple sources simultaneously. Using the phishing example above, a GenAI-powered attack could substantially increase the number of contacts, subtly varying the content until one succeeds.
Data changes everything
GenAI also creates entirely new internal vulnerabilities for companies through the way the technology functions.
The first is data security. As corporations build their own GenAI tools, they must feed sensitive internal data to the large language models (LLMs) that power them. A record of every transaction the company has ever made might be uploaded into one of these LLMs, for example. Centralizing data in this way creates a new risk of it being stolen.
Enterprising employees who use public-facing tools such as Claude and ChatGPT have also begun to inadvertently reveal corporate data by pasting proprietary information into chat interfaces to prompt better responses. One report [1] found that a company of 10,000 employees should expect staff to reveal 158 pieces of internal source code to public-facing GenAI tools each month, as well as 18 pieces of sensitive regulatory information. Once such content has been shared with a GenAI chatbot, it becomes part of the tool's dataset and may be served up to other users in a reply.
Equally serious is the risk of data manipulation. A malicious actor who gains access to the data that a corporate GenAI tool analyzes may substitute new data to corrupt the tool's outputs. The company may struggle to spot such manipulation because the tool does not explain how it reaches its conclusions, meaning corrupted outputs can go unchecked and end up automating poor business decisions.
As companies release public-facing GenAI tools, they also create reputational risk: GenAI chatbots may insult customers, for example, or provide incorrect information.
A new governance paradigm
For cybersecurity teams, all this creates a challenge. Executives will expect them to facilitate GenAI but also hold them responsible for data breaches. At EY we have helped clients through a similar challenge before: when cloud computing entered the corporate world a decade ago, employees began to work on personal devices and experiment with software as a service (SaaS) products. Cybersecurity teams had to find a way to help enable this flexibility securely.
The principles that guided the cybersecurity response then - encouraging experimentation within a clear set of rules - apply again for GenAI.
The first area where rules are needed relates to public-facing GenAI tools like Claude, ChatGPT and Gemini.
Companies that continue to use third-party tools must set clear guidelines for how employees do so. These must be company-specific and take applicable regulatory requirements into account. Health insurance companies, for example, are bound by rules that might prevent employees from using GenAI tools to query sexually transmitted disease case numbers within a geography. Employers must also develop controls to ensure that employees follow these regulations. To manage the risk further, companies with sufficient resources and expertise may choose to build their own tools, avoiding the risk of accidental data leakage into public LLMs.
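As a simple illustration of such a control (not an EY tool, and the patterns, category names and functions below are hypothetical), a guideline-enforcement layer might screen prompts for obvious markers of proprietary code or regulated data before they leave the corporate network:

```python
import re

# Illustrative patterns only; a real deployment would use the company's own
# data-classification rules and a dedicated DLP engine, not simple regexes.
SENSITIVE_PATTERNS = {
    "source_code": re.compile(r"\b(def |class |#include|import )", re.MULTILINE),
    "credential": re.compile(r"(api[_-]?key|password|secret)\s*[:=]\s*\S+", re.IGNORECASE),
    "health_data": re.compile(r"\b(diagnosis|patient id|medical record)\b", re.IGNORECASE),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def submit_to_public_tool(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Block the request and surface it to the security team for review.
        raise PermissionError(f"Prompt blocked; possible sensitive data: {findings}")
    print("Prompt cleared for submission to the external GenAI tool.")


if __name__ == "__main__":
    submit_to_public_tool("Please summarise this quarter's marketing plan outline.")
```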
The second area of focus is a company’s own GenAI tools. Steps must be taken to protect data in the cloud, and regular checks must be undertaken to ensure that outputs match expectations; if they do not, the underlying data should be checked for manipulation. In addition, AI tools and LLMs should be regularly penetration tested to identify any vulnerabilities that could expose data.
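As one illustrative way to automate such checks (the reference prompts, expected facts, file path and function names below are all hypothetical), a team could replay a fixed set of business-approved prompts against the tool and flag answers that drift from agreed expectations, alongside a checksum on the underlying dataset:

```python
import hashlib

# Hypothetical reference prompts and the key fact each answer should contain,
# agreed in advance with the business owners of the tool.
REFERENCE_CASES = [
    ("What was total revenue in FY2023?", "4.2bn"),
    ("Which region recorded the most transactions last quarter?", "Asia-Pacific"),
]


def data_fingerprint(path: str) -> str:
    """Hash the analysed dataset so unexpected changes are detectable."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def run_output_checks(ask_model, data_path: str, approved_hash: str) -> list[str]:
    """Return a list of issues; an empty list means the checks passed."""
    issues = []
    if data_fingerprint(data_path) != approved_hash:
        issues.append("Dataset differs from the last approved snapshot - check for manipulation.")
    for prompt, expected_fact in REFERENCE_CASES:
        answer = ask_model(prompt)
        if expected_fact.lower() not in answer.lower():
            issues.append(f"Answer drifted for reference prompt: {prompt!r}")
    return issues
```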
While employees should be encouraged to suggest ideas for GenAI tools and use cases, their development must occur within a consistent framework to protect data integrity. Finally, fail-safes must be implemented so that experimentation can be halted and products can be quickly taken offline when risks are identified.
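A minimal sketch of such a fail-safe, assuming a centrally controlled feature-flag file (the path, flag name and functions here are hypothetical), might have the chatbot check the flag on every request and fail closed if the flag store is unreachable:

```python
import json
from pathlib import Path

# Hypothetical flag store; in practice this might be a feature-flag service or
# configuration system that only the security team can change.
FLAGS_PATH = Path("/etc/genai/feature_flags.json")


def genai_enabled(feature: str) -> bool:
    """Fail closed: serve GenAI output only if the flag explicitly allows it."""
    try:
        flags = json.loads(FLAGS_PATH.read_text())
    except (OSError, json.JSONDecodeError):
        return False  # If the flag store is unreadable, stop serving GenAI output.
    return bool(flags.get(feature, False))


def handle_customer_query(query: str) -> str:
    if not genai_enabled("customer_chatbot"):
        return "The assistant is temporarily unavailable; a colleague will follow up."
    return generate_with_llm(query)


def generate_with_llm(query: str) -> str:
    # Placeholder for the call into the company's own GenAI tool.
    return f"(model response to: {query})"
```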
GenAI is a significant business opportunity in every sector of the economy. Cybersecurity leaders must help their companies capitalize on it but also manage the many risks the technology creates. Doing so requires clear policies for employees to follow. Stay tuned for my next piece, where I will explain how cybersecurity leaders can turn GenAI to their own advantage.
The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.
[1] "Source code is the most common sensitive data shared to ChatGPT," by Nancy Liu
EY Americas Consulting Vice Chair
10 months ago: Love this, Richard!

Vice President, IBM | Board Member, AFCEA DC Chapter | Spearheading the Application of Advanced Technology to Federal Missions
11 months ago: I appreciate you sharing your perspective and insights, Richard. Cybersecurity leaders have to balance the value of innovation while at the same time ensuring things are secure. Implementing foundational models within their private environments so data does not leak to the wider world and using governance tools that give insight into the models and their outputs are important steps.

Partner at EY
11 months ago: Very informative article Richard Watson, thank you for sharing. Putting the right security frameworks in place enables enterprises to accelerate their GenAI ambitions.