2024-26 Board, CEO, CAIO Research AI Insights: AI Risk Management: Corporate Playbook Reset
STEVE HAWALD
AI & Innovation Advisor | CEO & Board Member | AI Strategy for Midcap & SME Growth | AI ROI & Risk Expert | Speaker & Executive Researcher | Notable Innovation Award Wins
As AI services like OpenAI's GPT-4 and Microsoft's Copilot revolutionize industries with millions of daily users, managing AI risks is critical. Today's platforms, built on robust generative AI (GAI) models and massive data foundations, demand strategic oversight to ensure ethical, safe, transparent, and practical implementation.
"The narrative is that AI is everywhere, all at once, but the data shows it's harder to do than people seem interested in discussing." - Dr. Kristina McElheran, MIT IDE Visiting Scholar.
AI is reshaping the corporate landscape, offering unprecedented opportunities for growth, efficiency, and innovation. However, these advancements come with significant risks that demand vigilant management. The challenge for Boards, CEOs, and Chief AI Officers is to harness AI's potential while mitigating its threats. This playbook provides a comprehensive roadmap for right-sizing your corporate AI stack, from selecting the right LLMs and applications to optimizing interfaces and domain-specific models. By strategically integrating these components, companies can drive value, foster growth, and achieve a robust ROI.
Below is an ecosystem roadmap for GAI LLMs and applications, both closed and open source, covering resilient and reliable AI stack options. These risk-based options often combine corporate proprietary data, AI LLMs, and open-source applications to mitigate data risk.
Here's what CEOs, CAIOs, CDOs, and CIOs need to know!
Key Takeaways for CEOs, CAIOs, and CXOs
1. Evaluate Foundation Models (LLMs) Adoption, Fit, ROI
Foundation models are neural networks trained on vast datasets without specific domain optimization, making them versatile across tasks such as customer and technical support, code generation, and technical error resolution. These 2024 models include both closed-source and open-source options:
Closed-Source Models:
Open-Source Models:
Note: Tailored business LLMs often require highly customized AI solutions. For example, combining Claude 3 and OpenAI models allows for more finely tuned corporate solutions that better match specific business use cases and performance needs, such as customer service, data analysis, or content generation. Using multiple models can also serve as a risk mitigation strategy.
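The multi-model risk mitigation idea above can be sketched as a simple fallback router: if the primary provider fails (outage, rate limit), the request is retried against the next one. The provider names and the `call_claude` / `call_gpt` functions below are illustrative stubs, not real SDK calls.

```python
# Sketch of a multi-provider fallback router. Each provider is wrapped
# in a callable that raises an exception on failure.

def route_with_fallback(prompt, providers):
    """Try each (name, callable) provider in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # network error, rate limit, outage...
            errors.append((name, exc))
    raise RuntimeError(f"All providers failed: {errors}")

# Stubbed providers standing in for real Claude 3 / OpenAI SDK clients.
def call_claude(prompt):
    raise TimeoutError("simulated outage")

def call_gpt(prompt):
    return f"answer to: {prompt}"

name, answer = route_with_fallback(
    "Summarize Q3 churn drivers",
    [("claude-3", call_claude), ("gpt-4", call_gpt)],
)
print(name, "->", answer)
```

In production, each callable would wrap a vendor SDK, and the router would also log failures so procurement can track per-vendor reliability.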
All these LLMs are based on the transformer architecture. Developing new foundation models is challenging due to the massive amount of data, computing power, cloud data centers, and expertise required, leading to a few high-quality options in the AI market.
Takeaways:
2. Select Corporate Data-Specific LLM Performance Options
Today, the two main approaches for optimizing foundation models (LLMs) for corporate use are Retrieval-Augmented Generation (RAG) and fine-tuning. Foundation models are versatile but may not perform optimally on specific tasks, so corporations can consider both approaches to enhance performance for context-specific applications. Each method has its advantages and disadvantages:
RAG Approach: Retrieves relevant information snippets and appends them to the input prompt for the LLM.
Advantages:
Disadvantages:
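The retrieve-and-append flow that defines RAG can be sketched in a few lines. This is a deliberately minimal stand-in: a production system would score documents with vector embeddings, whereas the keyword-overlap scoring and the sample policy documents here are illustrative assumptions.

```python
# Minimal RAG sketch: retrieve the most relevant corporate snippets,
# then prepend them as context to the prompt sent to the LLM.

def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, documents):
    """Append retrieved snippets to the input prompt, RAG-style."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refund policy: customers may return hardware within 30 days.",
    "Security policy: all data is encrypted at rest.",
    "Holiday schedule: support is closed on public holidays.",
]
print(build_rag_prompt("What is the refund policy for customers?", docs))
```

The key property for risk management: proprietary documents stay in the retrieval store, and only the few retrieved snippets travel with each prompt.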
Fine-Tuning Approach: Retrains the LLM with domain-specific corporate data to optimize it for specific tasks.
Advantages:
Disadvantages:
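Fine-tuning starts with assembling domain-specific training examples. The sketch below shows one common shape for that data: the chat-style JSONL layout used by several hosted fine-tuning APIs. The support transcripts and system prompt are invented examples, not real corporate data.

```python
# Sketch of preparing domain-specific training data for fine-tuning,
# in chat-format JSONL (one JSON object per line).
import json

def to_finetune_record(question, ideal_answer):
    return {
        "messages": [
            {"role": "system", "content": "You are our support assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": ideal_answer},
        ]
    }

examples = [
    ("How do I reset my password?", "Use Settings > Security > Reset."),
    ("Where is my invoice?", "Invoices are under Billing > History."),
]
jsonl = "\n".join(json.dumps(to_finetune_record(q, a)) for q, a in examples)
print(jsonl.splitlines()[0])
```

The resulting file is what gets uploaded to a fine-tuning job; the governance question for executives is which corporate data is allowed into that file in the first place.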
Takeaways:
3. Align and Apply LLM Application Leaders for Interfaces
Corporations often rely on LLM application interfaces known as "GPT Wrappers," built on top of leading LLMs, such as OpenAI, Microsoft Copilot, and Google Gemini. Companies leveraging foundation models for their applications face challenges:
Consider GitHub Copilot, an AI industry leader powered by OpenAI's Codex and GPT-4 and distributed through GitHub. With a massive and growing base of 100M+ developers, GitHub holds a significant distribution-ecosystem advantage and gains insights for model improvement. Yet balancing AI model enhancement with corporate customer and user privacy is crucial!
Big Tech leans toward vertical integration, embedding LLM capabilities into its own platforms: Gemini for Google Workspace and, via Microsoft's OpenAI partnership, Copilot for Microsoft 365.
Domain-specific incumbents lacking proprietary LLMs can succeed by building tailored applications atop third-party LLMs. Leveraging its last-mile access to customers, Big Tech can swiftly offer AI-enabled capabilities, posing challenges for new entrants.
CEOs and CAIOs must strategize AI LLM adoption. Incumbents must capitalize on proprietary data for unique value. Startups lacking data access should focus on agility and outmaneuvering incumbents with speed.
Simply put, differentiation at the user interface, leveraging proprietary data, and agility are crucial in navigating the competitive landscape of generative AI adoption.
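The "GPT Wrapper" pattern discussed in this section is essentially a thin application layer that adds a domain system prompt, proprietary context, and guardrails around a third-party LLM call. A minimal sketch, in which `base_model`, the `SupportAssistant` class, and the credential check are all illustrative assumptions rather than any vendor's actual API:

```python
# Sketch of a "GPT wrapper": differentiation lives in the interface
# layer (domain prompt + output checks), not in the underlying model.

class SupportAssistant:
    SYSTEM = "You are a concise IT-support assistant for Acme Corp."

    def __init__(self, base_model):
        # base_model: callable (system_prompt, user_prompt) -> str,
        # standing in for a real LLM SDK client.
        self.base_model = base_model

    def ask(self, question):
        answer = self.base_model(self.SYSTEM, question)
        # Guardrail: never echo anything that looks like a credential.
        if "password:" in answer.lower():
            return "[redacted for safety]"
        return answer

fake_model = lambda system, user: f"({system[:7]}...) Try rebooting."
print(SupportAssistant(fake_model).ask("My laptop is slow"))
```

Because the wrapper owns the prompt and the guardrails, switching the underlying LLM vendor becomes a one-line change, which is itself a risk mitigation lever.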
4. Selecting Leading LLM Applications for Corporate Use Cases
The highest layer of the AI stack involves building applications on top of foundational or custom fine-tuned LLM models for specific use cases. In 2024, the top 10 LLM applications most widely adopted by corporations for specialized needs, listed alphabetically, are:
These LLM apps are often just SaaS apps, priced on monthly usage fees. The costs are mainly associated with secure cloud hosting and API fees for the foundation models each app utilizes.
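Because those API fees are usually token-metered, a back-of-envelope spend model is straightforward. The per-1K-token prices and traffic figures below are illustrative assumptions, not quotes from any provider's price list.

```python
# Rough sketch of monthly API spend for a token-metered LLM app.

def monthly_api_cost(requests_per_day, avg_in_tokens, avg_out_tokens,
                     price_in_per_1k, price_out_per_1k, days=30):
    """Estimate monthly cost from request volume and token prices."""
    per_request = (avg_in_tokens / 1000) * price_in_per_1k \
                + (avg_out_tokens / 1000) * price_out_per_1k
    return requests_per_day * days * per_request

cost = monthly_api_cost(
    requests_per_day=5_000, avg_in_tokens=800, avg_out_tokens=300,
    price_in_per_1k=0.01, price_out_per_1k=0.03,
)
print(f"${cost:,.2f}/month")  # -> $2,550.00/month with these assumptions
```

Running this model against several vendors' actual price lists is a quick way for a CAIO to sanity-check a SaaS app's markup over raw API cost.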
Since 2023, AI investments from the Magnificent Seven (Mag 7) and VCs across funding stages have fueled the development of numerous new foundation and fine-tuned, task-specific models. This surge has led to thousands of startups creating specialized apps on top of leading LLM models, aiming to gain a competitive advantage. Notably, the Mag 7 usually acquire the VC-backed startups whose technology they need; however, 90% of startups fail in year one, fewer reach IPO, and another 10% falter and crash in years 2-5.
Takeaways:
5. Apply Specialized Domain High-Performance LLMs for Fit/Value/ROI
An LLM's performance depends on its neural network architecture and the overall quality and quantity of training data it receives. Transformer models, like BloombergGPT, excel when trained on vast, high-quality datasets tailored to specific domains. For instance, Bloomberg, the top global financial data service leader, leveraged its financial data access to develop BloombergGPT, a 50 billion parameter model surpassing ChatGPT-3.5 on financial tasks despite its smaller size.
However, competitors can quickly replicate these services, diminishing their competitive advantage. Big Tech, though slower to replicate, can then swiftly dominate the market, usually securing the lion's share within a year.
"Understanding the limitations of AI systems is crucial for businesses to mitigate risks and ensure reliable performance in real-world applications." - Gary Marcus, NYU.
Notably, GPT-4's superior performance in some scenarios highlights the ongoing evolution of generalist models. Corporations in highly regulated domains should "evaluate first" specific proven LLM solutions instead of reinventing the wheel.
Takeaways:
6. Open Source Data Option for AI Experiments/Pilots/Adoption
"Making good use of AI and ML tools often entails having the right data strategy to support them" - Harvard Business Review, 5/2024.
As corporations explore AI's potential to transform their operations, the risk of exposing proprietary data becomes a critical concern. Whether adopting open-source LLMs or closed services such as OpenAI's GPT-4, Microsoft Copilot, and Anthropic's Claude, executives must never forget that these LLMs are notorious for biases and inaccuracies, including those related to gender and race.
According to Stanford's Foundation Model Transparency Index (FMTI), the data behind the primary LLM developers' models remains a significant area of opacity: data access, data impact, and trustworthiness are the least transparent subdomains. Most developers remain insufficiently transparent about their data practices, with persistent gaps, such as Google at 0%, OpenAI and Meta at 20%, Microsoft's Phi-2 at 40%, and only StarCoder at 100%! Despite overall transparency improving from 20% to 34% in the latest May 2024 FMTI, only BigCode, Hugging Face, and ServiceNow provide clear information on data creators, copyright status, licensing, and personal information. The FMTI shows that most Big Tech LLMs still lag considerably and have much work ahead to achieve higher AI and data transparency scores!
LLMs still offer a path forward, but careful risk management is required to ensure data security and build customer trust. Business-centric custom AI LLM solutions can mitigate data risks, offering a more controlled approach to corporate AI safety.
Minimizing Open-Source AI Risks is Crucial
Not all LLMs suit open-source deployment, but models like Bloom, Stable Diffusion 3, Vicuna, and others allow tailored AI solutions without relying on third-party services, keeping sensitive data in-house.
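One way to enforce the "keep sensitive data in-house" policy in practice is an inference gateway that only resolves self-hosted model endpoints and refuses external ones. The hostnames, URL scheme, and model names below are illustrative assumptions for the sketch, not a real deployment.

```python
# Sketch of an in-house inference gateway: requests to self-hosted
# open-source models are allowed; calls to external endpoints are refused.

ALLOWED_HOSTS = {"llm.internal.corp"}  # self-hosted Bloom, Vicuna, etc.

def inference_endpoint(host, model):
    """Return the endpoint URL for an approved in-house model host."""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"External host blocked: {host}")
    return f"https://{host}/v1/models/{model}/generate"

print(inference_endpoint("llm.internal.corp", "vicuna-13b"))
```

Pairing a gateway like this with network egress rules gives auditors a single choke point proving that proprietary prompts never leave corporate infrastructure.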
Takeaways:
Navigate Licensing and Leverage Big Tech for Copyright Risk:
Compliance with AI open-source licensing terms prevents legal and ethical issues, safeguarding the corporation's reputation.
Update and Maintain AI Models:
Regularly updating AI models with the latest data releases ensures accuracy and relevance while maintaining system stability.
Mitigate AI Bias:
Addressing AI biases ensures fair and equitable treatment of all customers, enhancing corporate responsibility and trust.
Build Customer AI Rights and Trust:
Transparency in data usage builds trust and prevents customer backlash over data privacy concerns.
Into The Future of Corporate AI LLM Risk Management Strategy
Executives can leverage AI's transformative potential by adopting leading practices to minimize open-source risks. Ensuring data security, compliance with licensing, regular model updates, bias mitigation, and transparent communication will foster responsible AI adoption. This approach protects proprietary information, builds trust, and positions the corporation for sustainable and ethical AI innovation.
By focusing on these strategies, corporations can enhance their data risk posture and protect against privacy threats, bolstering customer loyalty and trust while mitigating potential lawsuits.
CEOs and executives must continuously adapt their strategies to navigate the evolving GenAI landscape. An annual Corporate Strategic AI Playbook reset will keep AI services, operations, and the workforce aligned with advancements and best-of-breed leading practices. Embracing a culture of experimentation, learning, and innovation will maximize ROI and secure a competitive edge in the AI-driven future.
By focusing on ethical and responsible practices, executives can harness AI's power, minimize risks, including "must-have" AI cybersecurity, and build a foundation of trust and reliability for sustained success.
References, Acknowledgements, and Selected Research
Financial Times: Risk Management, "Why risk managers need to fight AI with AI," Nick Huber, May 2, 2024.
Harvard Business Review: Analytics and Data Science, "External Data and AI Are Making Each Other More Valuable," Adam D. Nahari and Dimitris Bertsimas, February 26, 2024.
MIT SMR: MAGAZINE SPRING 2024 ISSUE, "Who Profits the Most From Generative AI?" Kartik Hosanagar and Ramayya Krishnan, March 12, 2024.
Stanford University: CRFM, The Foundation Model Transparency Index (FMTI) v1.1, Advisory Board, May 1, 2024.
Steve Hawald CEO CIO Advisory LLC: Forward-looking Boards & CEOs Newsletter, "2024-26 Board CEO Research: Chief AI Officer (CAIO) Value," April Issue, April 29, 2024.
The Wharton School; UPenn: Knowledge at Wharton, "Five Myths About Generative AI That Leaders Should Know," Scott A. Snyder and Sophia Velastegui, April 30, 2024.
The Wharton School; UPenn: Knowledge at Wharton, "How Early Adopters of Gen AI Are Gaining Efficiencies," Shankar Parameshwaran, February 20, 2024.
Copyright STEVE HAWALD CEO CIO ADVISORY LLC and Board-CEO Research Insights + Vision Newsletter 2017-2024. Copying articles to share or use in any way breaches the rights of STEVE HAWALD, CEO CIO Advisory LLC. Our research, newsletters, and articles are cited for IP rights covering all copyrights, trademarks, designs, domain names, patents, and all other IP rights worldwide. All other content and IP rights are owned by each researcher and organization cited in the content and references. We reserve all our rights in any IP without prior written approval by the CEO. Disclaimer: These articles are the author's opinion, without financial payments or engagements.