CyberFocus: Does AI know your company's secrets? Here’s how to check and stay safe

Imagine you're drafting a client proposal and decide to ask ChatGPT for help. You paste client names and pricing details, and within moments, the document is ready. The next day, you find out your client received a similar proposal from your competitor. Coincidence? Possibly not.

Research suggests that GenAI tools can boost employees' productivity by as much as 40%. However, using them without caution can expose sensitive information: "Once such data is uploaded to the GenAI tools, it's beyond the company's control," says Algirdas Šakys, Computer Security Incident Response Team (CSIRT) Lead at Nord Security.

If you've already shared work data with GenAI tools, this CyberFocus issue will help you find out which business secrets AI may already know and how to avoid leaking more.

Risks of using GenAI

The LayerX study found that 1 in 3 employees share sensitive work data through GenAI tools, exposing company data, source code, or customer details.


Image source: Revealing the true GenAI data exposure risk (2023)
“Despite their claims about how well they secure the data you share with a chatbot, GenAI platforms aren’t fully secure environments. Any confidential details you type, such as client data, proprietary code, or company strategy, could potentially be accessed by others or used to train the AI further,” said Algirdas Šakys, CSIRT Lead at Nord Security.

Here’s how sensitive private or organizational data can be exposed:

  1. Data leakage: GenAI isn’t foolproof. In 2023, Microsoft AI researchers accidentally exposed 38 terabytes of sensitive information, proving that even the most secure companies can fail at data protection.
  2. Training on customers' data: 30.8% of GenAI tools, including popular chatbots like ChatGPT and GitHub Copilot, train on users' prompts. Although many offer the option to opt out of data collection, it’s still advisable not to share personal or work-related information.
  3. GenAI tool hacks: ChatGPT is not the only GenAI tool vulnerable to attack. Recently, a security firm demonstrated a vulnerability in Google’s Gemini by injecting indirect prompts to manipulate the LLM's behavior.

A. Šakys notes that tech giants such as Apple, Amazon, Verizon, and Samsung have already banned their employees from using AI tools to mitigate data leakage risks, but enforcement is challenging: about 73.8% of ChatGPT use and 95% of Gemini use in workplaces comes from non-corporate accounts, threatening organizations' security.

How to check if your organization’s data has been leaked?

To see if your company’s data has been exposed, infosecurity expert A. Šakys suggests:

  • Use threat intelligence platforms like NordStellar, which monitor data breaches on dark web forums and notify companies when data is leaked.
  • Monitor employee device logs and internet activity on the corporate network to detect unauthorized use of AI tools for work tasks (a minimal log-scanning sketch follows this list).
  • Set up Google Alerts for key terms and regularly review data breach notifications.
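Here is a minimal sketch of the log-monitoring idea above. It assumes a plain-text proxy or DNS log with one requested domain per line; the log path, log format, and domain list are illustrative assumptions, not a complete detection rule set:

```python
# Minimal sketch: flag GenAI-tool traffic in a proxy or DNS log.
# Assumes a plain-text log with one requested domain per line;
# the domain list below is illustrative, not exhaustive.

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_genai_traffic(log_path: str) -> list[str]:
    """Return log lines that mention a known GenAI tool domain."""
    hits = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            if any(domain in line for domain in GENAI_DOMAINS):
                hits.append(line.strip())
    return hits

if __name__ == "__main__":
    for hit in flag_genai_traffic("proxy.log"):  # hypothetical log file
        print("GenAI tool access:", hit)
```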

How to use GenAI tools without exposing work data?

When banning GenAI tools is not an option and you see real value in using them to boost your company's productivity, it's crucial to use them safely to avoid exposing company data.

Here's how you can adopt AI tools securely, according to Nord Security expert A. Šakys:

For employees:

1. Always use GenAI tools with caution.

2. Opt out of data collection whenever possible to prevent your inputs from being used to train GenAI models. Here's how to opt out of:

- OpenAI’s training data

- GitHub’s Copilot training data

- Google’s Gemini training data

- Meta AI’s training data

- Grok’s training data

3. Never share sensitive, confidential, or proprietary information in your “conversations” with AI. If you must share something, keep it general and leave out specifics such as names and titles (a redaction sketch follows this list). Regularly clear your interaction history so sensitive information isn’t stored.

4. If you’re unsure about what information is safe to share, consult your company’s risk or security teams before using GenAI for work tasks.
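To make point 3 concrete, here is a minimal redaction sketch that strips obvious identifiers (emails, phone numbers, card-like numbers) from a prompt before you paste it into a GenAI tool. The regex patterns are illustrative and best-effort, not a substitute for judgment about what the prompt reveals:

```python
# Minimal sketch: redact obvious identifiers from a prompt before
# pasting it into a GenAI tool. Regex redaction is best-effort only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace emails, phone numbers, and card-like numbers with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(sanitize_prompt("Contact jane.doe@acme.com, +1 555 019 2834, re: Q3 pricing."))
# -> Contact [EMAIL REDACTED], [PHONE REDACTED], re: Q3 pricing.
```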

For companies:

1. Define your risk appetite for using GenAI tools. Set clear usage policies, whether it's restricting certain apps or allowing full use.

2. Implement Data Loss Prevention (DLP) measures to monitor and block sensitive data from being shared with GenAI tools (a pattern-matching sketch follows this list).

3. Establish robust data security protocols, such as data encryption, strict user authentication, and Security Information and Event Management (SIEM) systems, to safeguard sensitive information (an encryption sketch follows this list).

4. Use monitoring tools to see your employees’ interactions with GenAI tools and track what data they share. Additionally, consider systems that can scan dark web forums for potential data leaks.

5. Consider setting up your own GenAI tools within your local infrastructure so prompts never leave your network (a local-model sketch follows this list).

6. Train employees on safe GenAI practices and proper cyber hygiene. Outline clear data handling procedures when interacting with GenAI tools to prevent data leaks.

7. Stay updated on GenAI trends to identify emerging risks and protect your organization.
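To illustrate point 2 above, here is a minimal DLP-style sketch: scan outbound text for secret-looking patterns and block the request if any are found. The patterns are illustrative assumptions; real DLP products use far richer detection:

```python
# Minimal DLP-style sketch: block outbound text containing secret-looking
# patterns. The patterns are illustrative; real DLP uses richer detection.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                # card-like number
]

def should_block(outbound_text: str) -> bool:
    """Return True if the text contains a pattern that must not leave the network."""
    return any(p.search(outbound_text) for p in SECRET_PATTERNS)

assert should_block("aws_key=AKIAABCDEFGHIJKLMNOP")
assert not should_block("Summarize this public press release.")
```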
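For point 3, a minimal sketch of encrypting sensitive data at rest, using the third-party Python "cryptography" package's Fernet recipe (install with pip install cryptography):

```python
# Minimal sketch: encrypt sensitive data at rest with Fernet
# (symmetric, authenticated encryption from the "cryptography" package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production, load from a secrets manager
fernet = Fernet(key)

token = fernet.encrypt(b"client list: Acme Corp, Q3 pricing tiers")
print(fernet.decrypt(token).decode())  # round-trips to the original text
```

In practice, the key should live in a secrets manager or hardware security module, never alongside the data it protects.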
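And for point 5, a minimal sketch of querying a self-hosted model, assuming an Ollama server running locally on its default port with the "llama3" model already pulled; with this setup, prompts stay inside your own infrastructure:

```python
# Minimal sketch: query a self-hosted model through Ollama's local HTTP API.
# Assumes an Ollama server on localhost:11434 with the "llama3" model pulled.
# Standard library only, so there are no extra dependencies.
import json
import urllib.request

def ask_local_model(prompt: str) -> str:
    payload = json.dumps({
        "model": "llama3",
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("Draft a generic proposal outline with no client specifics."))
```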

So, the next time you work on a task and ask a GenAI tool for assistance, remember these security expert tips to keep your data safe. If you found this CyberFocus issue helpful, don't forget to subscribe to stay on top of all things tech and cybersecurity.

