Risk-Aware Innovation: Managing Generative AI Threats

Risk-Aware Innovation: Managing Generative AI Threats

On April 17, 2024, I was invited by the Association of Certified Fraud Examiners ACFE Singapore Chapter (Official) to give a presentation on the risks associated with the increasing adoption of generative AI and the role that risk teams could play in enabling this innovation. Below, I have included a version of the slides I used for the presentation, along with some additional commentary on selected slides.

tl;dr: Generative AI presents both benefits and risks that organizations must proactively address. Key risks can be categorized as external, process-related internal, and consumer-facing internal. Effectively quantifying the business value, assessing the associated risks, and ensuring alignment with organizational intent are critical prerequisites for the successful deployment of generative AI. Yet, this remains a significant challenge blocking many AI initiatives today.

AI and its impact

There are key characteristics that differentiate generative AI from what is now called 'traditional AI' (also referred to as 'specialized' or 'applied' AI). Typically, traditional AI focused on understanding data and making accurate conclusions about unforeseen data. Primary use cases were related to predictions, recommendations, anomaly detection, or forecasting.

The focus of generative AI is very different. In the simplest terms, generative AI focuses on creating new data that resembles the training data, aiming for originality and creativity. Generative AI use cases include generation, classification, summarization, semantic search, and information extraction based on text, voice, image or video data.

This different focus changes how we evaluate the performance of generative AI models. Instead of using traditional measures of accuracy, precision, and recall, the task of evaluating generative AI outputs against their originality, consistency, quality, and other expected characteristics is a much more open-ended task, requiring a different approach.

Generative AI and traditional AI also demonstrate differences in the amount and type of data used at various phases of AI system development, and pose different challenges in terms of interpretability and explainability. Additionally, while traditional AI has always been prone to biases, generative AI adds another layer of important ethical considerations, including the possibility of generating harmful or offensive content, as well as potential copyright infringement issues.

While the popular opinion is that the emergence of generative AI is an important step towards autonomous and intelligent AI systems, generative AI still has significant limitations. These include a lack of ability to understand the physical world, lack of ability to remember and retrieve information, lack of persistent memory, and a lack of ability to reason or plan effectively.

We are currently in the stage of exploring business use cases for generative AI solutions. Publicly available applications like ChatGPT have triggered public imagination, but industrial use cases are still in the process of being better articulated, evaluated in practice, and fully understood.

Nevertheless, the technology has significant transformative potential, and we will likely see a strong push from business leaders to understand the relevance of generative AI to their operations. This is accompanied by a concerted effort from major technology companies to monetize this new technology quickly.

While the full impact of generative AI on businesses remains to be seen, it is clear that this emerging field represents an important technological development with the potential to disrupt and reshape various industries in the coming years.

The broad scope of potential use cases for generative AI, combined with the low maturity of the underlying technology and the expected rapid market adoption, creates an environment that is prone to various risks. As the transformation impacting organizations and communities appears to be largely unavoidable, a new set of expectations arises for risk management professionals. They must ensure that the implementation of generative AI systems is sustainable and beneficial for all stakeholders.

This requirement presents a need for the development of new competencies across risk and governance teams, as well as an update to existing risk detection, monitoring, and mitigation solutions. Risk management professionals will need to stay ahead of the curve, proactively identifying and addressing the unique challenges posed by the growing adoption of generative AI.

Moreover, several new concepts are currently being explored to further refine generative AI as a technology. These include:

  • AI agents capable of iterating on task execution through self-reflection, tool use, and improved planning, multi-agent collaboration;
  • Multimodality, leveraging diverse data types;
  • Extensive use of smaller, domain-specific language models;
  • New architectures that combine many small models into larger, more comprehensive solutions;
  • On-edge computing capabilities.

These emerging concepts underscore the truly transformative nature of generative AI technology and its potential to evolve in powerful ways.

In the remainder of this article, we will focus specifically on industrial use cases of generative AI - that is, the use cases and associated risks that may be relevant to your organization. We will avoid delving into the broader societal implications of AI, such as its impact on national security, employment opportunities, and other high-level considerations. While these are all important factors in the holistic analysis of AI's influence, they are outside the scope of this article.

Instead, the goal is to provide a targeted examination of generative AI key risks that risk management teams should be prepared to address within an organizational context.

Generative AI risk landscape

From an organizational perspective, at the highest level of abstraction, we can say that generative AI mainly amplifies existing risks by enabling many tasks to be executed at a much higher rate of automation.

However, generative AI may also introduce risks that are specific to an organization. These risks would depend on factors such as the organization's business model, client base, industry sector, and the particular use case of the generative AI system. In other words, while generative AI can exacerbate general risks faced by organizations, the unique risks it poses will vary greatly depending on the context and specific applications within each individual company or institution.

Overall, we can talk about generative AI impact on the 4 following risk categories: Trust and safety, Fraud, Security, and Data protection.

We can broadly categorize the risks associated with generative AI into two high-level groups:

  1. External Risks: The use of generative AI in an organization's external environment creates risks for the organization, its clients, and its partners. This means that even if an organization is not directly leveraging generative AI solutions, it can still be impacted, as the technology may be used by bad actors.
  2. Internal Risks: An organization that chooses to leverage generative AI systems exposes itself, as well as its clients and partners, to a unique set of risks. These are the risks that arise directly from the organization's own implementation and use of the technology.

By understanding and addressing both the external and internal risk factors, organizations can take a more holistic approach to managing the challenges presented by the growing adoption of generative AI. Let's explore risks falling into each of the two categories in more detail.

A few of the risks mentioned in the slides deserve special attention. According to AI system developers, it is already possible to automatically generate any voice or video recording of a person based on just a few seconds of authentic footage or a limited number of photos. This ease of impersonation has significant consequences for many Know Your Customer (KYC) and Know Your Business (KYB) processes that rely on authorization or authentication using images and voice. These impersonation capabilities create risks for e-commerce platforms, social media, and even internal communication processes within organizations. The ability to easily impersonate individuals and generate misleading content at scale is a significant risk that organizations must be prepared to address as generative AI becomes more widespread.

Generative AI lowers the cost and effort required to organize more sophisticated scam and spam campaigns, develop smarter bots, and reduce the quality of content on many platforms by spreading untrue comments, ratings, misinformation, or harmful content. These threats can directly impact an organization's products, services, clients, and even internal processes.

Overall, some of the risks associated with generative AI are not entirely new concepts. However, this emerging technology significantly increases the scale, effectiveness, and enables the use of novel tactics by bad actors. Additionally, generative AI lowers the barriers to entry, allowing larger groups of individuals to more easily engage in malicious activities.

Mitigating the external risks posed by generative AI is a complex task, as it goes beyond the capabilities of any single impacted organization. Addressing changes to the external risk landscape will require organizations to review and potentially overhaul their existing risk management systems.

Over time, we can expect to see increased efforts from the developers of foundational generative AI solutions to prevent the misuse of their technologies. Additionally, the emergence of new legislative frameworks is likely, which may provide greater clarity around liability for harms committed through the use of generative AI systems.

Effectively managing the external risks of generative AI will necessitate a collaborative, multi-stakeholder approach, involving organizations, technology providers, and policymakers working together to establish the necessary safeguards and oversight mechanisms.

Now, let's examine the landscape of internal risks related to an organization's use of generative AI. These internal risks can be further divided into two main categories:

  1. Process-related Risks: This scenario involves the use of generative AI to augment or automate business processes within the organization. In this case, the use of generative AI is not directly visible to the organization's end clients.
  2. Consumer-facing Risks: Here, the organization is leveraging generative AI for the development of products and services that are provided to clients or users. In this case, the outputs generated by the AI system are visible outside of the organization.

Understanding the distinction between these two risk categories is crucial, as the mitigation strategies and potential impacts will vary depending on whether the use of generative AI is internal or exposed to the organization's customers and stakeholders.

When using generative AI, organizations must thoroughly understand the limitations and vulnerabilities of the technology, and proactively develop methods and techniques to mitigate them. Characteristics of generative AI, such as hallucinations or output inconsistency, can negatively impact the experience of users. Additionally, these solutions may be prone to prompt injections or jailbreaking, exposing the organization to another set of risks.

Recognizing that AI systems can be valuable organizational assets that may encapsulate critical knowledge, companies should protect themselves from model theft or the extraction of sensitive data. Finally, employees may demonstrate automation bias, where they over-rely on the outputs generated by AI systems or ignore internal policies related to data protection.

In addition to the risks outlined on the slides, there are several other factors that contribute to the complexity of generative AI use cases. These include:

  • The ability for AI users to engage in very open-ended and unconstrained interactions with the systems
  • The capacity for AI systems to generate unconstrained content
  • The expectation that AI systems will provide factual and up-to-date information

Furthermore, scenarios involving high-stakes decisions in domains like payments, loans, insurance, where AI-supported determinations directly impact people's eligibility for rights or benefits, significantly increase the complexity of these use cases. Ensuring legal compliance is another significant driver of complexity.

Higher complexity, in turn, increases the probability and severity of risks, raises the costs, and reduces the likelihood of successful AI implementation. Organizations must carefully consider these additional complexity factors when evaluating and deploying generative AI solutions, especially in mission-critical or regulated applications.

Towards generative AI system alignment with human intent

To effectively manage the internal risks of generative AI, organizations must adopt a comprehensive approach that addresses technical vulnerabilities, data security concerns, and human factors. Proactive risk mitigation strategies will be essential as generative AI becomes more pervasive within business processes and customer-facing applications.

Understanding the internal risks related to the limitations and vulnerabilities of generative AI solutions, along with the key drivers of complexity in AI use cases, should lead organizations to develop an internal classification system for their AI applications and systems.

The proposed EU AI Act outlines a framework for categorizing AI applications from minimal/no risk to unacceptable risks. Additionally, the Act distinguishes general-purpose AI systems as a separate category. This type of structured categorization is crucial for effective risk management, as it helps prioritize the efforts of risk teams.

For the prioritized high-risk systems, the risk management team should work on aligning the use of AI with its original intent and implementing mitigation strategies to address the identified risks. This systematic approach to internal risk classification and mitigation will be essential as organizations increasingly integrate generative AI into their business processes and customer-facing offerings.

By proactively managing the internal risks through thoughtful system categorization and targeted risk mitigation, organizations can unlock the transformative potential of generative AI while responsibly addressing the challenges it presents.

Aligning generative AI systems with their intended purpose starts by clearly defining the characteristics that these systems must demonstrate to deliver expected business value. It is crucial to quantify these desired characteristics and translate them into measurable metrics for future evaluation and ongoing monitoring.

These key AI system characteristics can generally be divided into four main categories:

  1. User Utility: Does the system deliver the expected value to the business, clients, users, and partners? Does the nature of interaction with the AI system meet expectations related to user retention, quality, and response flow?
  2. Trustworthiness: Is the system reliable? Is it prone to errors or providing counterfactual information? Are the answers provided by the system interpretable to human users?
  3. Performance: Is the AI system efficient and technically reliable? What is the time to first token? How many requests can it handle per second? How many tokens can it render per second?
  4. Cost: How does the cost of operating the system compare to the value it creates? At a technical level, what is the GPU utilization? Is there any wasted GPU utilization? Do we encounter issues with truncated responses?

By systematically defining and quantifying these four key categories of characteristics, organizations can develop a comprehensive framework for aligning generative AI systems with their intended purpose and effectively managing the associated risks.

In this approach, the notion of 'trustworthiness' encapsulates various risk types associated with the deployment of generative AI systems. This concept is also commonly referred to as 'Responsible AI', with many organizations proposing more detailed characteristics that fall under this broader category.

Organizations and regulators have begun proposing frameworks, principles, and expected characteristics that should guide the development of AI systems to increase their alignment with intended purposes and mitigate potential harms. The principles of Responsible AI often include considerations around fairness, transparency, accountability, privacy, security, and robustness, among others.

By aligning generative AI systems with these Responsible AI characteristics, in addition to the core considerations around user utility, performance, and cost, organizations can work towards deploying these transformative technologies in a more ethical, reliable, and controlled manner. This holistic approach to AI system design and governance will be critical as generative AI becomes more pervasive across industries.

While the work towards aligning generative AI systems with intent and mitigating associated risks starts with the definition and quantification of required characteristics, organizations will need to employ a mix of technical and organizational solutions to achieve these goals.

In the process of AI system development, the organization should ensure alignment at various levels of abstraction, including the model, system, application, and user experience levels. Techniques like model fine-tuning and architecture selection will be executed by technical teams. However, responsibility for other methods, such as red teaming, input and output moderation, ongoing evaluation, and monitoring, may sit across different parts of the organization, requiring collaboration between business, risk, engineering, operations, and data science teams.

Beyond these technical solutions, organizations should also implement internal risk countermeasures, which may include:

  • Regular auditing and compliance checks
  • Limiting access to models and training data
  • Ensuring high-quality training data, including governance around data labeling and validation processes
  • Strategies for carefully selecting vendors and foundational models

By adopting a comprehensive, multi-layered approach that combines technical safeguards and organizational risk management practices, companies can work towards aligning generative AI systems with their intended purpose and mitigating the various risks associated with these transformative technologies.

Case study: e-commerce chat bot

Use of chatbots in context of e-commerce platforms, bring the promise of personalized product recommendations, facilitated transaction and increased conversion. If we plan a development of a user-facing chatbot that would be recommending products to our clients together with aggregated information about product reviews, ratings, compatibility with other products, there is a number of factors an organization must consider.

Conclusions and key aspects of AI governance

While we are still in the early stages of generative AI adoption, the transformative nature of this technology cannot be denied. Even if some of the risks highlighted in this article may seem hypothetical for many organizations currently, there is a clear business case for risk and governance teams to be proactive.

Proactive mitigation of generative AI risks can provide significant benefits, including:

  • Enhancing product and service quality
  • Contributing to better data management, security, and privacy
  • Helping maintain strong trust and brand reputation
  • Ensuring readiness for current and future AI regulations
  • Enabling innovation
  • Strengthening relationships with stakeholders and investors
  • Improving talent acquisition, retention, and engagement

Ultimately, the ability to quantify the business value of generative AI use cases, assess the associated risks and their prevalence, probability, and severity, and ensure alignment with organizational intent is a prerequisite for the effective and responsible use of these transformative technologies. However, this remains one of the key challenges and blockers for many AI projects today.

By addressing these challenges proactively, organizations can unlock the full potential of generative AI while mitigating the unique risks it presents.

Puneet Gambhir

Risk Management for Grab and Wider Digital Ecosystem

7 个月

Very insightful and detailed article. Thank you for sharing Zibi Paszkiewicz, Ph.D.

Anna Tokarska PhD

Medical Science, Neuroscience

7 个月

Complicated but such important issue to address. Very interesting!

Biswanath B.

Chief Data Officer | Data, AI, Product, Transformation | Singapore PEP Holder

7 个月

Very well written Zibi. With Generative AI, we are seeing some significant changes in the success metrics from Traditional AI, e.g. Originality in place of Accuracy, Consistency in place of Precision etc. 'Fact checking' becomes more important (and more difficult) in this paradigm, and this is where multi-agent based workflows to self-check becoming more main-stream. But most importantly, as you rightly mentioned, the importance and evolution of AI Governance and Compliance roles are higher than ever now.

Jonathan Page, MBA

CEO, Sales and Compliance

7 个月

Great presentation Zibi, thank you very much!

Jens Scheerlinck

Create better customer experiences by unlocking your data potential.

7 个月

Thanks for sharing Zibi! Interesting read.

要查看或添加评论,请登录

社区洞察

其他会员也浏览了