In the Shadow of AI
Richard Starnes
Strategic CISO | LinkedIn Top Cybersecurity Voice, NED and Advisory Board Chair - Cyber Resilience Centre for London and School Governor
Large Language Models (LLMs) are data hoovers. They will ingest any data they are given, including the questions you ask them, and may make it part of their data set. This is where things get tricky from a CISO perspective. We have seen more than a few instances of people feeding corporate confidential data into public AI engines to produce work products such as reports and presentations. Many AI engines have a setting that purports to address this concern, but what assurance do we really have? The data sits outside the company and is potentially available to everyone, perhaps only for a short time, but available nonetheless. As a result, CISOs have moved to shape behaviours and/or control corporate access. Unfortunately, this can drive the creation of shadow IT, which is the point of this article.
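One common first step is restricting corporate access to public AI engines at the network edge. The sketch below is illustrative only: it assumes a forward proxy or secure web gateway that can call a Python policy hook, and the domain list, group name and function are hypothetical examples rather than a recommended configuration.

```python
# Illustrative egress policy hook: block known public LLM endpoints unless the
# user belongs to an approved group. Domains and group name are examples only.
from urllib.parse import urlparse

PUBLIC_LLM_DOMAINS = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}
APPROVED_GROUP = "ai-pilot-users"   # hypothetical IdP/AD group

def allow_request(url: str, user_groups: set[str]) -> bool:
    """Return True if the outbound request should be allowed."""
    host = urlparse(url).hostname or ""
    if host in PUBLIC_LLM_DOMAINS:
        return APPROVED_GROUP in user_groups
    return True  # non-LLM traffic is out of scope for this policy

# A user outside the pilot group is blocked from a public chat endpoint.
print(allow_request("https://chat.openai.com/", {"staff"}))           # False
print(allow_request("https://chat.openai.com/", {"ai-pilot-users"}))  # True
```

Blocking alone tends to push determined users towards personal devices, which is why most programmes pair it with a sanctioned alternative.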
Shadow IT refers to the use of unauthorized or unapproved technology solutions within an organization. It can pose risks across many domains, and the emergence of LLMs has introduced new variations of those old risks. The sections below set out the specific risks of shadow IT in the context of LLMs, the potential consequences, and the importance of proper governance and security controls.
Uncontrolled/Uncontrollable Data Access and Security Breaches
One of the primary dangers of shadow IT with LLMs is uncontrolled access to sensitive data. LLMs ingest significant amounts of data, both in training and through prompts, which may include proprietary and/or confidential information. When individuals or teams circumvent established protocols and use unauthorized LLMs, they may unwittingly expose sensitive data, leading to intellectual property theft, data leakage, or regulatory compliance failures.
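Where an approved LLM route does exist, a simple pre-submission filter can reduce the chance of confidential material leaving the boundary. The sketch below is a minimal example under stated assumptions: the regex patterns are illustrative and nowhere near exhaustive, and a production DLP control would be far more capable.

```python
# Minimal DLP-style pre-filter: redact obvious sensitive patterns before a prompt
# is sent to any external LLM API. Patterns are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":    re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches with a labelled placeholder so intent survives but the data does not."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(redact("Summarise the contract for jane.doe@example.com, card 4111 1111 1111 1111"))
```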
Inconsistent Model Performance and Quality
LLMs require regular updates, fine-tuning, and monitoring to maintain performance and quality. Shadow IT deployments lack the governance and oversight needed for qualified maintenance and security monitoring. As a result, the performance and reliability of these unauthorized LLMs can suffer, leading to inconsistent results, decreased productivity, an increased security threat and potential business disruption. Another significant issue is biased output: important decisions may be made using that biased data, with harmful unintended consequences.
Compliance and Legal Concerns
Unauthorized LLM usage within shadow IT can expose organizations to compliance and legal issues. Sectors such as healthcare, finance, and legal services are subject to strict data protection regulations and ethical guidelines. Shadow LLM usage may violate these regulations, leading to penalties, lawsuits, and/or reputational damage.
Lack of Accountability and Control
Shadow IT inherently bypasses established IT policies, controls, and governance frameworks. This lack of oversight and accountability creates challenges in terms of version control, data governance, model explainability, security and proper documentation. Without centralized control and monitoring, it becomes difficult to ensure transparency, traceability, and compliance with organizational standards.
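Centralized control is easier to enforce when every sanctioned LLM call passes through a thin audit layer. The sketch below assumes a hypothetical client object with a complete() method; it is not a real vendor SDK, and the fields logged are simply examples of the traceability data an organization might want to retain.

```python
# Sketch of an audit wrapper around an approved LLM client, so every call is
# attributable and traceable. The client and its complete() method are placeholders.
import datetime
import hashlib
import json
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm-audit")

def audited_completion(client, user_id: str, model: str, prompt: str) -> str:
    """Call the approved LLM client and write an audit record for the request."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "model": model,
        # Store a hash rather than the raw prompt, to avoid copying sensitive text into logs.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))
    return client.complete(model=model, prompt=prompt)  # placeholder call
```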
Infrastructure and Resource Challenges
Shadow LLMs can strain organizational infrastructure and resources. The deployment and usage of LLMs often require substantial computational power, storage capacity, and network bandwidth. Unauthorized LLMs can negatively impact corporate resources, leading to performance degradation, network congestion, and increased operational costs.
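One way authorized services avoid this strain is by applying per-team quotas at an internal gateway. The sketch below is a rough illustration; the limit values and the in-memory bookkeeping are assumptions, not a production-ready rate limiter.

```python
# Illustrative per-team quota check for an internal LLM gateway, so heavy use by
# one group cannot starve shared compute or bandwidth. Limits are examples only.
import time
from collections import defaultdict

RATE_LIMIT = 100        # requests per window, per team (assumed figure)
WINDOW_SECONDS = 3600   # one-hour window

_request_times: dict[str, list[float]] = defaultdict(list)

def within_quota(team: str) -> bool:
    """Return True if the team is still under its hourly request quota."""
    now = time.time()
    recent = [t for t in _request_times[team] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        _request_times[team] = recent
        return False
    recent.append(now)
    _request_times[team] = recent
    return True
```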
In conclusion, Shadow IT poses significant risks to organizations, and the inclusion of LLMs amplifies these risks. Uncontrolled data access, security breaches, inconsistent performance, compliance violations, and resource challenges are among the risks that organizations must address. To mitigate these risks, it is crucial to establish comprehensive governance frameworks, educate employees on the risks of shadow IT, promote transparency, and enforce proper security measures. By proactively managing LLM deployment and usage, organizations can harness the benefits of these powerful engines while safeguarding their data, reputation, and compliance profile.
Richard Starnes, CISO, Six Degrees Group; Chair, Advisory Board, Cyber Resilience Centre for London; Worshipful Company of Information Technologists, Security Panel member
Making it easier for workers with genuine business needs to use legitimate, authorized AI/LLM services appropriately is surely part of the solution, along with policies, awareness, training and edumacation, plus compliance reinforcement/encouragement and (as a last resort) enforcement/penalties. Yes, that implies providing legitimate, authorized AI/LLM services plus the associated rules, awareness/training and support ... in other words, getting ahead of the game. Always a challenge in this field!
LinkedIn Top Voice | Founder @RebelHR | Director @Windranger | Fractional CPO | Strategic HR Leader | HR Innovator in Crypto & Web3 | Scaling Company Sadist
Amazing article! This is a timely discussion given the increasing reliance on AI technologies across industries. It emphasizes the importance of establishing robust governance frameworks to mitigate the dangers and uphold cybersecurity standards.