Connecting the Dots Between GenAI and Traditional ML (Part 1 of 3)
Illustration created using Leonardo.ai



The Rise of Intelligent Agents

Artificial Intelligence is undergoing a transformation. We're moving beyond single-purpose AI models towards a new era of agentic workflows powered by Generative AI (GenAI). These agents are autonomous software units capable of making decisions, collaborating, and solving complex problems, much like a skilled team within your organization. But what drives the intelligence behind these agents?


The Brain of an Agent: LLMs and Their Limitations

The ideal "brain" for an agent should be highly specialized, possessing deep knowledge of your company's processes, business rules, and historical performance. Think about a new hire at your business; you want them to use their previous experience but adjust their decisions to the reality of your own business. This level of specialization is difficult to achieve with general-purpose LLMs alone, and depending on the nature of the information, it is necessary to use more appropriate models.

At the core of most GenAI agents today lies a Large Language Model (LLM). LLMs such as OpenAI's GPT models, Anthropic's Claude models, or Google's Gemini excel at understanding and generating human-like text, enabling them to simulate intelligent conversation and probabilistic decision-making. However, LLMs have limitations:

  • Trained on Text, Not Structured Data: LLMs are trained on vast amounts of text data. While they are masters of language, they lack the inherent ability to perform tasks like forecasting, regression, or anomaly detection, which rely on structured, categorical and numerical data. This is where traditional Machine Learning (ML) shines (see the sketch after this list).
  • Context Constraints: LLMs can handle contextual input through prompts, but the amount of information you can provide is limited by token restrictions. Larger contexts come with increased cost and latency, making them impractical for many applications. Even if you choose to pack your "training" data into the prompt, LLMs still operate primarily in the realm of text; they do not manipulate historical data or perform feature engineering.
  • Fine-tuning Limitations: Fine-tuning involves adjusting the weights of an LLM by training it on a new dataset. While it can be effective for adapting the model to a specific domain or style, it's not ideal for enhancing its capabilities with numerical data. Fine-tuning for numerical tasks would require extensive retraining and may not yield the desired accuracy compared to purpose-built ML models.
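
To make the first point concrete, here is a minimal sketch of the kind of structured-data task a purpose-built ML model handles natively and an LLM cannot reliably learn from a prompt. The data, features, and thresholds are hypothetical, and the anomaly-detection setup (scikit-learn's IsolationForest) is just one illustrative choice:

```python
# Minimal sketch (hypothetical data and features): a purpose-built ML model
# handling a structured, numerical task rather than a language task.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical table of daily transaction features: [amount, items, hour_of_day]
rng = np.random.default_rng(42)
normal_days = rng.normal(loc=[250.0, 12, 14], scale=[40.0, 3, 2], size=(500, 3))

# Fit an anomaly detector on historical, structured records.
detector = IsolationForest(contamination=0.02, random_state=42)
detector.fit(normal_days)

# Score a new, suspicious-looking day; -1 means "anomalous", 1 means "normal".
new_day = np.array([[1900.0, 4, 3]])
print(detector.predict(new_day))  # e.g. [-1]
```

The point is not the specific algorithm: forecasting, regression, or clustering models play the same role, learning patterns from historical structured data that would never fit usefully into a prompt.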


Bridging the Gap: The Power of Hybrid AI

To truly unlock the potential of GenAI agents, we need to go beyond the hype and augment them with the strengths of "traditional" ML. Here's how:

  • Hybrid Decision-Making: Pair LLMs with "traditional" ML models. Use LLMs for unstructured, language-heavy tasks like making plans, summarizing reports, translating, coding, generating conclusions based on presented data findings, or interacting with users. Delegate structured data analysis, forecasting, and anomaly detection to specialized ML models.
  • ML Models as Tools: Think of ML models as specialized tools within an agent's toolkit (a minimal sketch follows this list). Allow agents to dynamically call upon forecasting models for demand prediction, regression models for pricing optimization, or clustering algorithms for customer segmentation.
  • Agent-Oriented ML Pipelines: Design workflows where GenAI agents seamlessly integrate with "traditional" ML models to leverage the strengths of both. For example, imagine an agent tasked with proactive customer retention. This agent could use an LLM to analyze customer interactions (like emails, chats, and service calls) to understand sentiment and identify potential churn signals. Simultaneously, it could leverage a predictive churn model trained on vast historical data of customer purchases, payments, and service interactions. This model would identify customers at high risk of churn, allowing the agent to proactively engage with them through personalized offers or interventions.
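
As a rough sketch of the "ML models as tools" idea, the snippet below wraps a trained churn model as a named, callable tool that an agent runtime can execute when the LLM decides it is needed. The tool name, features, and training data are hypothetical, and the framework-specific registration (OpenAI function calling, LangChain tools, etc.) is deliberately omitted:

```python
# Minimal sketch: a traditional ML model exposed as an agent tool.
# Names, features, and data below are hypothetical.
from dataclasses import dataclass
from typing import Callable
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical features: [months_active, support_tickets, monthly_spend]
X_hist = np.array([[24, 1, 80.0], [3, 5, 20.0], [36, 0, 120.0], [2, 7, 15.0]])
y_hist = np.array([0, 1, 0, 1])  # 1 = churned

churn_model = LogisticRegression().fit(X_hist, y_hist)

@dataclass
class Tool:
    name: str
    description: str
    func: Callable

def predict_churn_risk(months_active: float, support_tickets: float, monthly_spend: float) -> float:
    """Return the estimated probability that a customer will churn."""
    features = np.array([[months_active, support_tickets, monthly_spend]])
    return float(churn_model.predict_proba(features)[0, 1])

# The agent's toolkit: the LLM plans and converses, the ML model scores risk.
toolkit = {
    "predict_churn_risk": Tool(
        name="predict_churn_risk",
        description="Estimate churn probability from structured account features.",
        func=predict_churn_risk,
    )
}

# When the LLM decides this tool is needed, the agent runtime executes it:
risk = toolkit["predict_churn_risk"].func(months_active=2, support_tickets=6, monthly_spend=18.0)
print(f"Churn risk: {risk:.0%}")
```

In a real deployment the LLM would also receive the tool's description so it can decide when to call it, and the churn model would be trained on far richer historical data than this toy example.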


The Bottom Line

Generative AI and traditional ML are not competitors; they are collaborators. By combining the contextual and global knowledge of LLMs with the analytical rigor of ML models, businesses can create agents that not only think but also act—intelligently, contextually, and effectively. This is the key to unlocking truly intelligent, agentic workflows.


Stay tuned for Part 2 of this series, where we'll explore some use cases of this powerful combination across industries and learn when LLM-based agents alone are enough and when additional models need to be included as agent tools. In Part 3, we'll dive deeper into the future of multi-agent systems, including emerging trends like having a dedicated "data scientist agent" that analyzes your historical data, trains specialized models, and generates actionable insights, along with other advancements that will shape the future of intelligent workflows.

