Artificial Intelligence: A new dimension to the traditional people, process, and technology risk framework.
Santosh Kamane
Cybersecurity and Data Privacy Leader | CISO Coach | Entrepreneur | PECB Certified ISO 42001 Trainer and advisor | Virtual CISO | GRC | DPO as a Service | Empowering Future Cybersecurity Professionals
Traditionally, as we have all learnt, in risk management "People, Process, and Technology" refers to the key components necessary to manage risk effectively within an organization. Most risks today, whether operational or technological, typically fall under one of these areas.
Looking at these fundamental aspects:
People: Typically, this refers to the individuals responsible for identifying, assessing, and mitigating risks. People are critical to risk management as they possess the knowledge, experience, and expertise necessary to identify, evaluate, and, most importantly, respond to risks. "Lack of security awareness among people" has been cited as one of the key weaknesses for most organizations today.
Process: Effective risk management processes, such as change management, are designed to identify, assess, and mitigate risks, and to ensure that the organization can respond effectively in the event of an incident or crisis. In most organizations, audits as well as Risk Control Assessment (RCA) exercises are aimed at identifying gaps in processes. Examples include the maker-checker processes typically deployed in financial institutions, or controls such as segregation of duties used to mitigate change management risks (a minimal code sketch of a maker-checker rule follows at the end of this section).
Technology: This includes risk management software, data analytics tools, and cybersecurity technologies. Technology is an essential component of risk management as it enables organizations to efficiently gather and analyze data, monitor risks in real time, and respond quickly to emerging threats. Data leakage, zero-day vulnerabilities, and unpatched systems are commonly flagged as key weaknesses in most security advisories.
By focusing on people, process, and technology in risk management, organisations can develop a comprehensive risk management strategy that is aligned with their business objectives and risk tolerance.
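To make the process controls above concrete, here is a minimal sketch, in Python with hypothetical names, of how a maker-checker rule (segregation of duties) might be enforced in a change management workflow: the person who raises a change can never be the one who approves it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    """A change raised by a 'maker' that must be approved by a different 'checker'."""
    change_id: str
    maker: str
    description: str
    approved_by: Optional[str] = None

    def approve(self, checker: str) -> None:
        # Segregation of duties: the maker cannot approve their own change.
        if checker == self.maker:
            raise PermissionError(f"{checker} cannot approve a change they raised themselves.")
        self.approved_by = checker

# Usage (illustrative names)
req = ChangeRequest("CHG-1042", maker="alice", description="Update firewall rule")
try:
    req.approve("alice")   # rejected: checker is the same person as the maker
except PermissionError as err:
    print(err)
req.approve("bob")         # accepted: a different person checks the change
print(req.approved_by)     # -> bob
```

In practice this control usually lives in workflow or ticketing tools rather than custom code, but the rule being enforced is the same.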
AI risks and their integration with people, process, and technology.
AI is a new component that is rapidly gaining its own place in organizations and being integrated into key decision-making areas. Like any other technology, AI can present risks to organizations as well as to individuals. The primary concern has been AI's adoption without a full understanding of its capabilities and its impact, particularly on people.
1. Privacy and security risks: As AI algorithms and models (such as OpenAI's) are trained on large datasets, there is a risk that they capture sensitive or personal information, putting the privacy of individuals and organizations at risk. Additionally, AI models may be vulnerable to hacking or cyber attacks, leading to potential data breaches and other security issues.
One recent incident illustrates this: ChatGPT creator OpenAI confirmed a data breach caused by a bug in an open-source library.
2. Bias and fairness: AI models may inherit biases from the datasets or algorithms they are trained on. This can damage the reputation of the organization and may lead to negative consequences for individuals (a simple bias check is sketched after this list).
3. Legal and regulatory risks: As generative AI is a relatively new technology, there are few established regulations governing its use. The lack of a governing body, regulatory guidelines, and oversight is a significant risk for any business.
4. Operational risks: Organizations are adopting AI without assessing the need for the necessary infrastructure, impact assessments, evaluation of business cases, expertise, and skilled resources to operate and maintain the technology. Resiliency is necessary to maintain robust AI infrastructure within organizations.
5. Strategic risks: In today's evolving world, where technology and innovation are changing rapidly, organizations that do not effectively incorporate AI into their operations risk being left behind by competitors who are able to leverage the technology more effectively.
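As a hedged illustration of the bias and fairness point above (risk 2), the snippet below computes a simple demographic parity gap, i.e. the difference in approval rates between two groups in a model's decisions. The records and the 0.10 tolerance are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: measuring a demographic parity gap in model decisions.
# The records and the 0.10 tolerance are made-up values for illustration.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Fraction of records in the given group that received a positive outcome."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

gap = abs(approval_rate(decisions, "A") - approval_rate(decisions, "B"))
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, to be set by the organization
    print("Potential bias: review training data and model before deployment.")
```

A check like this is only one narrow view of fairness, which is part of why bias and ethics risks are hard to mitigate completely.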
AI is now integrated into people, process and technology.
AI can fall under all three components of the “People, Process, and Technology” framework depending on the context in which it is used. Here are some examples:
1. People: AI relies heavily on the expertise and skills of people to develop and train its algorithms and models. People are essential to the success of AI as they possess the knowledge and experience necessary to develop and optimize the technology.
2. Process: Using AI involves the development and testing of algorithms and models, as well as the implementation of best practices for data management and security. Process is critical to the effective use of AI, as it ensures the technology is used in a consistent and controlled manner (a sketch of such a testing gate follows this list).
3. Technology: An AI platform such as OpenAI's is a technological tool that relies on hardware, software, and infrastructure to operate. This includes cloud computing platforms, data storage systems, and specialized hardware such as GPUs.
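To illustrate the testing aspect of the process point above (item 2), here is a minimal, hypothetical sketch of a pre-deployment quality gate: a model release is blocked unless its accuracy on a held-out test set meets an agreed threshold. The function names, the toy model, and the 0.90 threshold are assumptions made for the example.

```python
# Minimal sketch of a pre-deployment quality gate for an AI model.
# evaluate_model, the toy test set, and the 0.90 threshold are illustrative assumptions.

def evaluate_model(model, test_set) -> float:
    """Return the fraction of test cases the model answers correctly."""
    correct = sum(1 for features, expected in test_set if model(features) == expected)
    return correct / len(test_set)

def release_gate(model, test_set, threshold: float = 0.90) -> bool:
    """Block the release unless accuracy on held-out data meets the agreed threshold."""
    accuracy = evaluate_model(model, test_set)
    print(f"Held-out accuracy: {accuracy:.2%} (threshold {threshold:.0%})")
    return accuracy >= threshold

# Usage with a toy stand-in model and test set
toy_model = lambda x: x > 5
test_set = [(3, False), (7, True), (9, True), (2, False)]
if release_gate(toy_model, test_set):
    print("Release approved.")
else:
    print("Release blocked: model does not meet the quality bar.")
```

The point of such a gate is consistency: every model goes through the same documented check before it touches production, which is exactly what the process dimension is meant to guarantee.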
Should AI be categorised as a new dimension?
We need to consider a few things before making this decision:
1. Is AI a mere technology, or can it be categorized under people or process as well?
2. AI, though a technology at its core, has the ability to adopt new ways of learning, make independent decisions, produce human-like responses, exhibit bias in its approach, and spread misinformation when it is fed incorrect data. Does this not make AI more than just a technology?
3. AI impacts the people, process, and technology aspects of the organization, and hence it may need its own place, and its own accountability, in risk management frameworks.
4. Like people and technology, AI is largely an informational asset. It has the ability to retain information forever as part of its learning. People can be held accountable when it comes to risk ownership and mitigation actions. Who is accountable for AI's decision making and actions?
5. AI has a unique set of risks today: lack of governance, lack of regulation, ethical and moral value conflicts, and so on. Should AI not be included as a new dimension in risk assessment?
Final Words
In summary, AI has its own set of risks and challenges. It is too early to assess the complete impact of AI on people and technology, so it is important for risk officers to assess the impact of AI before it is adopted in their organizations. AI's inaccuracy, biased decision-making, and flawed algorithms will affect an organization's business, reputation, people, and more. Data leakage risk is also significant, with AI capturing personal data, organizational data, and so on. AI is doing everything from writing your programs to building your intellectual property to guiding you out of stressful situations, but this comes at a cost.
FREELANCE TRAINER FOR INFORMATION SYSTEMS SECURITY AND AUDIT - CISSP, CISA at FREELANCE INFORMATION SECURITY TRAINER
1y · Extremely timely and thought-provoking article. Thank you.
Sr. Director, Managed Security Services at NTT DATA Services, Adjunct Faculty JSOM, UTD, Board Member NTX Infragard, Blogger,
1y · Great summary Santosh, will share with my contacts.
Cybersecurity and Data Privacy Leader | CISO Coach | Entrepreneur | PECB Certified ISO 42001 Trainer and advisor | Virtual CISO | GRC | DPO as a Service | Empowering Future Cybersecurity Professionals
1y · Hi Malini, yes, I agree with your view that AI security risks have been addressed to a great extent, and over time most of the risks will be mitigated. Though I personally feel the risks around bias and ethics will never be completely mitigated. This article is more about fitting AI into the people, process, technology framework. The underlying question is: should AI be considered just a technology risk? It has human-like traits when it comes to decision making, learning, bias, etc. So the point was whether there is a need to introduce AI as a fourth dimension in risk management.
Cybersecurity& GRC Thought leader| AI Governance & Risk Advisor| Speaker | Mentor | Top Voice| Best Selling Author | Top 10 Global Women in Cybersecurity| Certified Board Member| Top Technology Leader | CISO 100 winner|
1y · OpenAI has already addressed many security and privacy concerns in the latest ChatGPT 4.0 enterprise version, so the risks highlighted here are already taken care of, at least the major ones. Please refer to https://openai.com/security and https://trust.openai.com/. AI as a technology has been in use since 1955 and in major use by industry for a decade now. Because of ChatGPT, which is a generative AI and conversational AI model, AI as a technology is in the limelight; however, the security concerns need to be addressed as you build your AI model within your applications, similar to application security. The best reference guides are the NIST AI Risk Management Framework https://www.nist.gov/itl/ai-risk-management-framework and the OWASP Top 10 for LLM and Machine Learning https://owasp.org/www-project-top-10-for-large-language-model-applications/