The NIST ARIA Program: Navigating Generative AI Risks
by Shantanu Singh, written and edited in creative collaboration with multiple AI Systems.
Introduction: The New AI Frontier and Its Risks
Early last year, I first published my thoughts on NIST's risk management approach to generative AI, centered on its Assessing Risks and Impacts of AI (ARIA) program. The article was later syndicated by one of the major accounting firms for educational purposes. Much has happened since then, so I am supplementing my perspectives on generative AI product development and risk management.
Generative AI's mainstream adoption has handed organizations a double-edged sword. Yes, these powerful systems unlock efficiency and creativity. But novel risks can emerge and spread at scale, while the tools and tests to detect their impact proactively remain nascent.
In August 2024, 39 percent of the U.S. population aged 18 to 64 had used generative AI. More than 24 percent of workers had used it at least once in the week before being surveyed, and nearly one in nine used it every workday.
NIST has emerged as a central player in guiding AI risk management. It released its AI Risk Management Framework (AI RMF 1.0) in early 2023, followed by a draft Generative AI Profile in mid-2024, and it launched the ARIA program to operationalize AI risk evaluation in real-world scenarios.
These efforts matter because organizations deploying generative AI need concrete ways to ensure these systems don't endanger users, violate laws, or erode trust. High-profile incidents have already highlighted potential harms: AI chatbots generating false information, image generators producing deepfakes, and private user data leaking via AI tools. Companies that proactively manage AI risks protect themselves from incidents and may gain a competitive edge in trust.
The Importance of Generative AI Risk Management
Generative AI risk management is crucial because the technology's power to create content, automate decisions, and engage with humans means it can cause unintended harm. Unlike traditional software, generative AI produces unpredictable outputs and learns from complex data that may embed biases or private information.
NIST identified at least ten risk areas unique to or intensified by generative AI, including:
- Confabulation: confidently delivered but false or fabricated content, often called "hallucinations"
- Data privacy: leakage or inference of personal information through training data or user interactions
- Information security: new attack vectors such as prompt injection and lowered barriers to offensive cyber operations
- Information integrity: believable synthetic content, including deepfakes, that enables misinformation at scale
- Harmful bias and homogenization: amplification of societal biases embedded in training data
- Intellectual property: outputs that may infringe copyrighted works or expose trade secrets
- Dangerous, violent, or hateful content: easier generation of or access to harmful material
- Human-AI configuration: over-reliance on, or misplaced trust in, AI outputs
Each risk can lead to real harm—from reputational damage and legal liabilities to physical or societal harm for individuals and communities.
For AI lawyers and regulators, these risks underscore why governance is needed to ensure compliance with laws and uphold ethical principles. Product builders must understand these risks for responsible design, while security professionals need to guard against an expanded "attack surface." Auditors and risk officers need frameworks to assess whether AI systems meet appropriate standards.
NIST's guidance is becoming a de facto benchmark, widely cited by authorities as a preferred solution for addressing AI risks. Implementing these frameworks demonstrates that an organization is taking responsible "trustworthy AI" steps—rapidly becoming an expectation from boards, customers, and regulators alike.
Recognizing Risk: Identifying Issues Early in AI Projects
Identifying AI risks starts with listening for signals that a product feature involves generative AI or high-stakes decisions. It then means mapping the AI system's purpose, scope, data usage, and operating environment, which is what NIST calls the "Map" function.
In practice, identifying risks means asking:
- What is the system's intended purpose, and who will use it?
- What data does it ingest, and what content or decisions does it produce?
- Who could be harmed if the system fails, misleads, or is misused, and how severely?
- In what environment will it operate, and what laws or policies apply?
When building generative AI features, consider the key risk categories above (confabulation, data privacy, information security, harmful bias, and intellectual property, among others) in the context of your specific feature and users.
NIST's ARIA program reinforces proactive identification by focusing on contextual evaluation—testing AI in realistic scenarios to see how risks materialize when humans interact with the system.
Industry feedback to NIST highlighted the challenge of anticipating all failure modes. Clear taxonomies of risk are a necessary starting point, and the Generative AI Profile's risk list helps provide one. Organizations should also listen to front-line product teams, whose intuitions about potential problems should be captured and analyzed.
Hurdles to Adopting AI Risk Management Best Practices
Organizations face several challenges in implementing effective AI risk controls:
- Scarce expertise: few practitioners combine AI, legal, and risk management skills
- Immature tooling: evaluation and monitoring tools for generative AI are still nascent
- Pace of change: models and threats evolve faster than policies can be updated
- Unclear ownership: responsibility for AI risk often falls between legal, security, and product teams
- Resource constraints: risk work competes with pressure to ship features quickly
Overcoming these hurdles starts with organizational commitment to make AI risk management a priority, not an afterthought.
Insights from NIST Workshops and Industry Feedback
NIST has engaged stakeholders through workshops and requests for feedback to shape its AI risk guidance. Key takeaways include:
- Guidance must be actionable, bridging the gap between high-level principles and day-to-day practice
- Evaluation should be contextual, testing systems in realistic scenarios rather than only on benchmarks
- Effective governance requires multi-stakeholder input from legal, technical, business, and user communities
- Shared taxonomies and metrics are needed so organizations can identify and communicate risks consistently
The ARIA program embodies these insights—evaluation-driven, multi-stakeholder, and focused on bridging the gap between principles and practice.
What Teams Can Do Today: Define, Detect, Analyze, Decide, and Act
Breaking AI risk management into clear steps makes it actionable.
Define (Context and Risk Criteria)
Establish what the AI system is for, who its users are, and what level of risk is acceptable. Document the system's purpose, scope, data usage, and operating environment (the "Map" function), and set explicit criteria for when a risk is severe enough to block a release, as in the sketch below.
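As a concrete illustration, a team might capture these definitions in a lightweight, machine-readable risk register. This is a minimal sketch in Python; the field names, values, and thresholds are illustrative assumptions, not anything prescribed by NIST.

```python
from dataclasses import dataclass, field

@dataclass
class RiskCriteria:
    """Illustrative risk-acceptance criteria for one generative AI feature."""
    feature: str               # what the AI system does
    intended_users: list[str]  # who interacts with it
    data_sources: list[str]    # what data it ingests
    risk_areas: list[str]      # e.g., categories from NIST's Generative AI Profile
    max_severity: int          # 1 (minor) to 5 (critical) tolerated at launch
    release_blockers: list[str] = field(default_factory=list)  # risks that always block release

# Hypothetical example: a customer support chatbot.
support_bot = RiskCriteria(
    feature="customer support chatbot",
    intended_users=["retail customers"],
    data_sources=["product manuals", "order history"],
    risk_areas=["confabulation", "data privacy", "information security"],
    max_severity=2,
    release_blockers=["data privacy"],
)
```

Writing the criteria down in a structured form like this makes them reviewable by legal and risk teams and reusable in later Detect and Analyze steps.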
Detect (Monitoring and Testing for Issues)
Continuously test and monitor the system for the risks you defined: red-team prompts before launch, scan outputs for private data or policy violations (see the sketch below), and track user feedback and anomalies in production.
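As one example of an automated detection check, a team might scan model outputs for patterns that suggest private-data leakage before they reach users. This is a minimal sketch; the patterns and function names are illustrative, and production systems would layer more robust classifiers on top.

```python
import re

# Illustrative patterns suggesting private-data leakage (far from exhaustive).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_output(text: str) -> list[str]:
    """Return the names of PII patterns found in a model output."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def guarded_response(text: str) -> str:
    """Pass or block a model output based on the scan (illustrative policy)."""
    findings = scan_output(text)
    if findings:
        # Flag for the risk team and return a safe fallback instead.
        print(f"flagged output, detected: {findings}")
        return "Sorry, I can't share that information."
    return text
```

Even a simple check like this turns an abstract risk ("data privacy") into a measurable signal that can be logged, trended, and escalated.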
Analyze (Understanding and Assessing Detected Risks)
For each detected issue, assess how likely it is to occur and how severe the harm would be, considering who is affected and in what context. Prioritize the risks that combine high likelihood with high impact.
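Many teams summarize this analysis with a simple likelihood-by-severity score. The sketch below assumes a 1-to-5 scale on each axis; the scale and triage thresholds are illustrative conventions, not NIST requirements.

```python
def risk_score(likelihood: int, severity: int) -> int:
    """Combine 1-5 likelihood and severity ratings into a single score."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity  # 1 (negligible) to 25 (critical)

def priority(score: int) -> str:
    """Map a score to an illustrative triage band."""
    if score >= 15:
        return "mitigate before release"
    if score >= 8:
        return "mitigate on a committed timeline"
    return "accept and monitor"

# A likely, high-impact risk lands in the top band.
print(priority(risk_score(likelihood=4, severity=5)))  # mitigate before release
```

Scoring does not replace judgment, but it forces explicit, comparable assessments that the Decide step can act on.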
Decide (Risk Response Decision-Making)
For each prioritized risk, decide whether to mitigate it, accept it with documented rationale, transfer it (for example, contractually), or avoid it by changing or withholding the feature. Record who made each decision and why.
Act (Implement Controls and Improvements)
Put the chosen responses into practice: add guardrails and filters, retrain or fine-tune models, update policies and user disclosures, and feed what you learn back into the Define step for the next iteration.
You don't need to start from scratch—leverage NIST's frameworks, the Generative AI Profile's risk mitigation techniques, and industry best practice guides to choose concrete controls.
Conclusion
The landscape of generative AI is evolving rapidly, but NIST's ARIA program and risk management frameworks provide valuable guidance. AI risk management requires involvement from legal, technical, business, and user communities to be effective.
ARIA signals a future where AI systems are evaluated on their holistic real-world impact, not just technical performance. As the program generates new metrics and testing methodologies, these may become tomorrow's benchmarks for AI assurance.
The five-step approach of Define, Detect, Analyze, Decide, Act offers a pragmatic cycle any team can implement today. Effective AI risk management enables responsible innovation and builds the trust needed for generative AI to reach its full potential.
Organizations that strengthen their AI risk practices now will lead in deploying AI that is not only cutting-edge but worthy of users' and regulators' trust. The tools and knowledge are available—it's up to us to use them.
The content is for educational purposes and not intended as legal advice or to establish an attorney-client relationship. No content represents advice provided to past or current clients.
(The next article is: Spotting and Defining Generative AI Risks in Your Projects)