Importance of Trust, Risk, and Security Management of Generative AI Technology
Ajay Kumar, CISM
Helps organizations identify, assess, and mitigate cybersecurity risks
In my previous article, I shared my thoughts on the potential cybersecurity risks of ChatGPT usage. If you're interested in reading it, you can find it here: link.
This article focuses on building AI trust, risk, and security management, along with deployment guidance for enterprises.
To be trustworthy, AI technologies must appropriately reflect characteristics such as accuracy, explainability, interpretability, privacy, reliability, robustness, safety, and security or resilience to attacks, and they must ensure that bias is mitigated. Developing and using AI in ways that are ethical, reduce bias, promote fairness, and protect privacy is essential for fostering a positive effect on society.
ChatGPT reached 100+ million monthly active users just two months after launch, making it the fastest-growing consumer application in history. The free availability of generative AI technology to the public has fueled unprecedented adoption of ChatGPT, with a viral rate of growth exceeding that of any other consumer application seen to date, and an awakening to both the opportunities and risks of generative AI. Generative AI is being used in many parts of the business, from sales and marketing to strategy to source code development, leveraging deep learning algorithms to generate entirely new output based on patterns and features learned from various training data sets.
According to one study, the power of generative AI will likely continue to grow, with an estimated $15.7 trillion of potential contribution to the global economy by 2030.
Trust in AI:
Gartner predicts that by 2026, enterprises that operationalize AI transparency, trust, and security will see their AI models achieve a 50% improvement in adoption, business goals, and user acceptance. Further, by 2027, at least one global company will see its AI development banned by a regulator for noncompliance with data protection regulation or AI governance legislation. Gartner expects the market to steadily evolve and grow, driven by increasing regulations and requirements to operationalize and improve the performance of AI models. Over time, we will see continuing feature extensions as the market consolidates around data, model, and privacy functions, spanning strategy, design, build, and defending against and responding to security risks.
For example, ChatGPT and similar generative AI tools are strong resources for creating initial drafts of documents and other timesaving activities. However, these tools remain quite limited, yet they are good enough at some things to create a misleading impression of greatness. It is therefore not a good idea to rely on them for anything important without independent verification.
AI Security Risks and Threats:
Detecting and stopping attacks on AI requires new frameworks and techniques for testing, validating, and improving the robustness of AI workflows. Malicious attacks against AI can lead to various types of organizational harm and loss, for example reputational, financial, intellectual property, or the compromise of people's personal data. Because these risks and consequences differ from those of traditional systems, security professionals need to add specialized security controls and introduce practices that protect data and applications accordingly.
Usage of Personal Data in AI Models:
AI has proven to be a very powerful tool for helping solve many of today's great challenges in business, healthcare, security, and safety and welfare alike. Many privacy and data protection laws require purposeful processing of personal data; training algorithms and AI models with directly identifiable data can lead to processing that falls outside the transparently conveyed purpose. Operational deployment can also sustain bias through lack of diversity in the training data, and it can have a direct impact on the individuals whose personal data the model was trained on if they remain identifiable.
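One practical mitigation, where identifiable data is not strictly needed, is to pseudonymize identifying fields before records ever reach the training pipeline. Below is a minimal Python sketch of that idea; the field names and the salted-hash approach are illustrative assumptions, not a complete de-identification program.

```python
import hashlib

# Hypothetical salt; in practice, store this secret outside the code base.
SALT = b"replace-with-a-secret-salt"

def pseudonymize(value: str) -> str:
    """Replace a directly identifying value with a salted one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def prepare_training_record(record: dict) -> dict:
    """Strip or pseudonymize identifiable fields before training."""
    cleaned = dict(record)
    for field in ("name", "email", "phone"):  # illustrative field names
        if field in cleaned:
            cleaned[field] = pseudonymize(cleaned[field])
    return cleaned

print(prepare_training_record(
    {"name": "Jane Doe", "email": "jane@example.com", "feedback": "Great service"}
))
```

Note that pseudonymized data may still be re-identifiable in combination with other attributes, so this is a complement to, not a substitute for, purpose limitation.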
Legal Aspects:
Enterprises are excited about use cases across a variety of functions and are exploring whether generative AI's fully formatted wealth of information and faster response times can help produce contracts, generate strategic plans, write code, compose performance reviews, and craft internal audit reports. Many enterprises are not completely prohibiting the use of generative AI, but they have warned their employees against inputting confidential information into prompts and about inaccuracies in its responses. On the legal side, some large law firms have banned the use of generative AI until further iterations are more reliable and secure. As legal professionals formulate policy for employees, privacy and security professionals need to collaborate to make security analysts aware that responses from ChatGPT must be verified independently, because they are not always accurate and may even be fabricated.
Key considerations while deploying generative AI securely:
Both business and security leaders need to be attuned to the business and security risks they may be incurring and to how those risks can be mitigated. Generative AI products come with a heightened risk of compromise that requires a well-planned and well-executed security strategy from the start.
Technology company Accenture announced a $3 billion investment over the next three years in its Data & AI practice to help clients across all industries rapidly and responsibly advance and use AI to achieve greater growth, efficiency, and resilience. It is also building an AI security program, internally and with ecosystem partners, to deliver advice and services spanning strategy, design, build, and defending against and responding to AI security events.
Strategy and Governance Framework:
The World Economic Forum has launched the AI Governance Alliance, a dedicated initiative focused on responsible generative artificial intelligence. It will provide guidance on the responsible design, development, and deployment of generative AI systems. Further, the initiative will prioritize three areas: ensuring safe systems and technologies, promoting sustainable applications and transformation, and contributing to resilient governance and regulation.
AI governance frameworks need to evolve and expand to mitigate risks and build trust as generative AI use cases grow, especially into more regulated domains. Organizations are racing to improve the governance of generative AI. This is particularly important in highly regulated sectors, for example financial services, that require data privacy and security along with strict compliance with continually changing rules and regulations. Nearly every organization is considering the use of generative AI to improve business efficiency and effectiveness, but they should take precautionary steps now and start implementing a governance framework that puts the proper guardrails in place to mitigate the risks highlighted here.
Design and Build:
Start with a high-level roadmap to develop a custom generative AI app that serves as the interface for deploying and interacting with generative AI behind security controls, mitigating the key technology risks. The app acts as a proxy between the user and the generative AI service, eliminating direct interaction with it. The app can be customized with a set of prompt templates aligned to the focus use cases, and it can require responses in a predetermined format, which in turn makes it easy for the logic within the app to validate each response before passing it back to the user.
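To make the pattern concrete, here is a minimal Python sketch of such a proxy. The call_model() function is a hypothetical stand-in for your provider's SDK call, and the prompt template and JSON response contract are illustrative assumptions, not a definitive implementation.

```python
import json

# Hypothetical stand-in for your provider's SDK call (for example, a chat
# completion request); wire in the real client here.
def call_model(prompt: str) -> str:
    raise NotImplementedError("connect your generative AI provider here")

# Prompt templates aligned to approved use cases; users never send
# free-form text straight to the model.
PROMPT_TEMPLATES = {
    "summarize_policy": (
        'Summarize the following policy text in plain language. '
        'Respond only as JSON: {{"summary": "..."}}.\n\n{user_input}'
    ),
}

def proxy_request(use_case: str, user_input: str) -> str:
    """Mediate between user and model: template the prompt, call the
    model, and validate the response before returning it."""
    template = PROMPT_TEMPLATES.get(use_case)
    if template is None:
        raise ValueError(f"use case not approved: {use_case}")
    raw = call_model(template.format(user_input=user_input))
    # Enforce the predetermined response format; never relay raw output.
    try:
        return json.loads(raw)["summary"]
    except (json.JSONDecodeError, KeyError):
        raise RuntimeError("model response failed validation; not returned")
```

Because every request flows through proxy_request(), the organization gets a single choke point for use-case approval, response validation, and the data loss prevention and logging controls described next.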
In addition, data loss prevention policies can be built and configured to monitor and detect sensitive business data and personal data, protecting against data loss and addressing privacy and compliance issues. Conducting end-to-end security testing of generative AI systems should be part of overall secure application development practices.
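As a lightweight illustration of prompt-side DLP, the sketch below screens outgoing prompts against simple regular-expression rules before they leave the proxy. The patterns are illustrative stand-ins for a real DLP engine and are far from exhaustive.

```python
import re

# Illustrative patterns only; a production DLP policy would rely on a
# vetted engine with a far richer rule set (classifiers, fingerprints).
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "secret_hint":   re.compile(r"(?i)\b(api[_-]?key|password|secret)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data rules the prompt triggers."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

findings = scan_prompt("my api_key is abc123, card 4111 1111 1111 1111")
if findings:
    print("Blocked: prompt matched DLP rules:", findings)
```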
Trusted Environment to Minimize Data Protection Risk:
The risk is real. Employees looking to save time, ask questions, gain insights, or simply experiment with the technology can easily expose confidential data, whether they mean to or not, through the prompts given to generative AI applications. This risk can be minimized by thinking through specific use cases for enabling access to generative AI applications while assessing the risk based on where data flows. The risks of data leakage lie primarily at the application layer. So, as mentioned above, enterprises can use a custom app that replaces the ChatGPT interface and leverages the generative AI API, with built-in security policies to monitor and detect sensitive data if a user tries to include it in an AI prompt.
Detection, Monitoring and Response:
Continuous monitoring, detection, and response are standard activities of a cybersecurity program: recording and storing system-level behaviors, using various data analytics techniques to detect suspicious system behavior, providing contextual information, blocking malicious activity, and providing remediation suggestions to restore affected systems. Every log and event from AI systems should be integrated with centralized log analytics platforms to monitor and detect unusual activity and to respond quickly to detected threats or vulnerabilities.
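As an illustration, the proxy described above could emit each AI interaction as a structured JSON event that a SIEM or log analytics platform can ingest. The event fields below are an assumed minimum, not a standard schema.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("genai.audit")
logging.basicConfig(level=logging.INFO)

def log_ai_event(user: str, use_case: str, dlp_findings: list[str],
                 response_valid: bool) -> None:
    """Emit one structured audit event per AI interaction for SIEM ingestion."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "use_case": use_case,
        "dlp_findings": dlp_findings,      # DLP rules triggered, if any
        "response_valid": response_valid,  # did output pass format checks?
    }
    logger.info(json.dumps(event))

log_ai_event("jdoe", "summarize_policy", [], True)
```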
MITRE has developed the Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) to raise awareness of threats and vulnerabilities in machine learning: a knowledge base of adversary tactics, techniques, and case studies for machine learning systems, based on real-world observations and demonstrations developed in collaboration with security groups and academic researchers. The model enables researchers to navigate the landscape of threats to machine learning systems.
In summary, develop threat models against generative AI and identify probable attack vectors; a holistic approach to identifying the generative AI attack surface includes all third-party tools, models, and data used to train or fine-tune the models. Review the data protection, privacy, and AI regulations that apply to generative AI, and establish controls to prevent personal data misuse and leakage, minimizing the risk of litigation and compliance issues. Obtain user consent for the use of data in generative AI systems, and ensure data is only used within the limited purpose approved by the user and in accordance with privacy laws and regulatory guidelines.