An LLM maturity model
Doug Bryan
Helping B2B CROs Eliminate Guesswork and Drive Growth Fast with AI-Powered, Actionable Insights | Growth Advisor | 25+ Years of Proven Results
Here's a simple LLM maturity model framework along two dimensions: model selection capabilities and model governance. The governance needed largely depends on the capabilities used. An illustrative analogy is that different rules and guardrails are needed to drive different types of cars. Range Rovers are easy to drive, safe, and good in most weather conditions. Sports cars, however, require more skill and are higher risk. Supercars are expensive, are only for skilled drivers, and are inappropriate in inclement weather. Lastly, custom hot rods are high-maintenance and require a team of experts to build and operate. Here's the framework:
Level 1. Model selection: Pre-trained model APIs and UIs. Model governance: Red-teaming. Car analogy: Range Rover.
Level 2. Model selection: Prompt engineering, retrieval-augmented generation (RAG), and few-shot prompting. Model governance: Define and measure accuracy and cost. Car analogy: Sports car. (A minimal code sketch contrasting Levels 1 and 2 follows this list.)
Level 3. Model selection: Fine-tune pre-trained models. Model governance: Define and measure biases. Car analogy: Supercar.
Level 4. Model selection: Full training of LLMs. Model governance: Continuous, real-time measurement. Car analogy: Custom hot rod.
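To make the first two levels concrete, here is a minimal Python sketch. It assumes a hypothetical call_llm() wrapper around whatever pre-trained model API you use and a hypothetical search_docs() retriever; both names are placeholders for illustration, not any specific vendor's SDK.

```python
# Minimal sketch of Level 1 vs. Level 2 usage.
# call_llm() and search_docs() are hypothetical placeholders for your
# model API client and your document retriever, respectively.

from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to a pre-trained model API and return its reply."""
    raise NotImplementedError("Wire this to your model provider's API.")


def search_docs(query: str, k: int = 3) -> List[str]:
    """Placeholder: return the k most relevant passages from your own content."""
    raise NotImplementedError("Wire this to your search index or vector store.")


# Level 1: pre-trained model, the question goes straight to the API.
def answer_level_1(question: str) -> str:
    return call_llm(question)


# Level 2: prompt engineering + retrieval augmentation + a few-shot example.
def answer_level_2(question: str) -> str:
    passages = search_docs(question)  # retrieval step
    few_shot = "Q: What is our refund window?\nA: 30 days from delivery.\n"  # example pair
    context = "\n".join(passages)
    prompt = (
        "Answer using only the context below. If the answer is not in the "
        "context, say you don't know.\n\n"  # prompt engineering
        f"Context:\n{context}\n\n"
        f"{few_shot}"
        f"Q: {question}\nA:"
    )
    return call_llm(prompt)
```

The only change between the two levels is how much work happens before the model is called; the governance burden grows accordingly.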
The levels of model selection capability are: (1) pre-trained model APIs and UIs, (2) prompt engineering, retrieval-augmented generation, and few-shot prompting, (3) fine-tuning pre-trained models, and (4) full training of your own LLM.
The levels of model governance are: (1) red-teaming, (2) defining and measuring accuracy and cost, (3) defining and measuring biases, and (4) continuous, real-time measurement. A sketch of what a Level 2 accuracy-and-cost check might look like follows.
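Here is a small, illustrative evaluation harness for "define and measure accuracy and cost." The test cases, the per-1K-token price, and the answer() function under test are made-up assumptions, not real figures or a specific provider's billing.

```python
# Illustrative Level 2 governance check: measure accuracy and cost on a
# fixed test set before and after every prompt or retrieval change.
# The test cases and the token price below are placeholders.

TEST_SET = [
    {"question": "What is our refund window?", "expected": "30 days"},
    {"question": "Do we ship to Canada?", "expected": "yes"},
]

PRICE_PER_1K_TOKENS = 0.002  # placeholder; set from your provider's rate card


def rough_token_count(text: str) -> int:
    # Crude approximation: about 4 characters per token.
    return max(1, len(text) // 4)


def evaluate(answer) -> dict:
    """Run the answer() function under test over TEST_SET and report metrics."""
    correct = 0
    tokens = 0
    for case in TEST_SET:
        reply = answer(case["question"])
        tokens += rough_token_count(case["question"]) + rough_token_count(reply)
        if case["expected"].lower() in reply.lower():
            correct += 1
    return {
        "accuracy": correct / len(TEST_SET),
        "estimated_cost_usd": tokens / 1000 * PRICE_PER_1K_TOKENS,
    }


# Usage: print(evaluate(answer_level_2)) once the placeholders above are wired up.
```

Tracking these two numbers on every change is enough to notice when a cheaper prompt quietly degrades accuracy, or when an accuracy win doubles the cost.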
You don't need a custom hot rod for a milk run to the store, especially in a snowstorm. Which level is best for a given use case depends on your time, budget, and risk appetite. And one size doesn't fit all: if you have lots of use cases, you'll have lots of combinations. How seamlessly does your AI platform support that?
#ai #genai #LLMs
Helping B2B CROs Eliminate Guesswork and Drive Growth Fast with AI-Powered, Actionable Insights | Growth Advisor | 25+ Years of Proven Results
1y: Dave Orashan, I'd give GDPR as an example of significantly increasing the cost of AI training sets while having little positive effect. Gen AI regulations in the US are TBD. My main point is that companies, as well as governments, can over-regulate in these early days of gen AI.
Principal Sales Engineer, Strategic Accounts at CrowdStrike
1y: Focusing on perhaps the more important prevailing message - I'll happily take the (click)-bait, Doug: where's the over-regulation of AI happening in practice? If anything, I worry that there are far too many brilliant practitioners - the Curies of the world who didn't fully appreciate the harm they were doing to themselves and others in the moment - as well as actual bad actors who readily see AI as the next means to target the weakest link: us. If anything, we need MORE regulation and oversight. It may already be too late, and I'll simply be first against the wall when Skynet gains cohesion, having assayed this post in infinite detail and incorporated it into its human-flaying models.
Data-vangelist helping companies derive value from data
1y: Love the analogy, Doug.
Strategic Account Manager w/ Appian: US Civilian Government/HHS
1y: Brilliant!