Why AI Agents Are Forcing Enterprises to Rethink Retrieval Investments

Enterprise search tools lingered in the background for decades, seen as a minor employee-efficiency booster with a low perceived return on investment (ROI). That is changing. Companies are realizing they need to invest in retrieval infrastructure not to save humans time, but because AI agents cannot function without precise access to enterprise data. Humans no longer initiate the search for information; AI issues the queries needed to complete a task in a workflow. Improving retrieval is now far more critical than it was when the case for enterprise search was helping employees find the travel policy document.

Swapping Models Is Easy

It’s clear by now that the competitive landscape for AI adoption in enterprises will, at its core, split into two main categories:

  • Reasoning models: A handful of frontier LLM providers (OpenAI, Anthropic, Google, DeepSeek) with best-in-class reasoning models.
  • Retrieval infra and data tools: The gateways between these models and enterprises’ data. There has always been a moat in data; now that data needs to be made available to AI.

Swapping frontier models (say, OpenAI o1 for DeepSeek R1) is as simple as updating one line of code. Rebuilding the systems that index, rank, and align proprietary data with AI workflows is where the real moat, wide and deep, lies.
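To make the asymmetry concrete, here is a minimal sketch of why the model side is the easy part. The provider names, endpoint URLs, and model identifiers below are illustrative assumptions (check each provider's documentation for current values); the point is that the swap reduces to changing one string, while nothing here touches the hard problem of connecting the model to your data.

```python
# Sketch: provider-agnostic model selection. Many providers expose
# OpenAI-compatible APIs, so "swapping the model" often means swapping
# a base URL and a model name. Values below are illustrative.
PROVIDERS = {
    "openai-o1":   {"base_url": "https://api.openai.com/v1", "model": "o1"},
    "deepseek-r1": {"base_url": "https://api.deepseek.com",  "model": "deepseek-reasoner"},
}

def client_config(name: str) -> dict:
    """Return the connection settings for a named reasoning model."""
    return PROVIDERS[name]

# The one-line swap: change "openai-o1" to "deepseek-r1".
cfg = client_config("deepseek-r1")
print(cfg["model"])
```

Everything downstream of this config (retrieval, indexing, ranking) stays exactly as hard as it was before the swap.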

The Rise of AI Agents

The transformative potential of AI agents in enterprise workflows hinges on one critical factor: retrieval of the enterprise's data. Data is a moat; models are not. While reasoning models like DeepSeek R1, trained on public internet data, excel at generalized tasks, they fail when faced with enterprise challenges that demand access to proprietary, up-to-date, and context-rich data.

Without robust retrieval systems to surface precise information from internal silos, these agents cannot achieve the accuracy or reliability required to get work done.

From Search as a Human Tool to Fuelling AI

The legacy search systems we have worked with for the past three decades focused on helping humans navigate data: a human expressed an information need as a short text query. With AI agents entering the enterprise, AI drives the questions, and the stakes are higher than shaving minutes off finding a reimbursement policy document. The potential return on investment is much greater now that AI agents promise to change how work gets done, and reliable retrieval is the cornerstone of delivering on that promise. A promising direction is combining the power of large reasoning models with retrieval:

To shed light on this topic, our core motivation is to enhance the Large Reasoning Models (LRMs) with o1-like reasoning pattern through autonomous retrieval. We propose Search-o1, which integrates the reasoning process of LRMs with two core components: an agentic retrieval-augmented generation (RAG) mechanism and a knowledge refinement module. This design aims to enable LRMs to incorporate the agentic search workflow into the reasoning process, retrieving external knowledge on demand to support step-wise reasoning while preserving coherence throughout.

Quote from Search-o1: Agentic Search-Enhanced Large Reasoning Models
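The loop the paper describes can be sketched in a few lines. This is not the Search-o1 implementation; the reasoning step and the search backend below are stand-in stubs, and the `SEARCH:`/`ANSWER:` protocol is a hypothetical convention chosen for illustration. What it shows is the shape of agentic retrieval: reasoning emits queries on demand, retrieved passages are folded back into the context, and reasoning continues.

```python
def reason(context: str) -> str:
    """Stub reasoning step: requests a search until the needed fact is in context."""
    if "travel policy:" not in context:
        return "SEARCH: travel policy"
    return "ANSWER: trips over $500 need pre-approval"

def search(query: str) -> str:
    """Stub retrieval backend over a tiny 'enterprise' corpus."""
    corpus = {"travel policy": "travel policy: trips over $500 need pre-approval"}
    return corpus.get(query, "")

def agentic_rag(question: str, max_steps: int = 5) -> str:
    """Interleave reasoning with on-demand retrieval, as in agentic RAG."""
    context = question
    for _ in range(max_steps):
        step = reason(context)
        if step.startswith("SEARCH: "):
            # Retrieve on demand and fold the result back into the context.
            context += "\n" + search(step[len("SEARCH: "):])
        else:
            return step[len("ANSWER: "):]
    return "no answer within step budget"

print(agentic_rag("What is the travel approval threshold?"))
```

Replace the stubs with a real reasoning model and a real retrieval system, and the quality of the answers is bounded by the quality of `search`, which is the article's point.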

Beyond the "Model du Jour": Invest in Retrieval Capabilities

Innovation in frontier models will continue to accelerate, with new releases from major labs pushing the boundaries of reasoning capabilities. However, enterprises focusing solely on adopting the latest models while neglecting their retrieval infrastructure risk building their AI strategy on shaky ground.

The true competitive advantage lies not in which frontier reasoning models you use but in how effectively you can connect those models to your organization's knowledge.

As AI agents become more sophisticated and take on increasingly complex enterprise workflows, the quality of retrieval systems will become the primary determinant of success. Organizations must shift their mindset from viewing retrieval as a mere search utility with low ROI to understanding it as a critical infrastructure that enables AI to function as a true enterprise capability. This means investing in sophisticated indexing, ranking, and context-awareness systems that can surface the correct information at the right time, whether the query comes from a human or an AI agent.
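One small but representative building block of such ranking infrastructure is fusing multiple retrieval signals. The sketch below uses reciprocal rank fusion (RRF), a standard technique for merging a lexical ranking with a vector ranking; the document IDs are hypothetical and the constant `k=60` is a commonly used default, not a tuned value.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: fuse ranked lists; earlier ranks score higher."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from a lexical index and a vector index.
lexical = ["travel-policy", "expense-report", "hr-handbook"]
vector = ["expense-report", "it-security", "travel-policy"]
print(rrf([lexical, vector]))
```

Fusion like this is one of many components (alongside indexing freshness, permission filtering, and chunking strategy) that determine whether an agent sees the right document at the right time.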

The future belongs to enterprises that recognize this fundamental shift and act accordingly. While frontier models will remain important, the companies that build robust, AI-ready retrieval infrastructure today will be best positioned to leverage whatever models emerge tomorrow.

It's about having the most effective systems for connecting AI to your enterprise's most valuable asset: its data.

Roman Grebennikov

Principal Engineer | ML - Search - Recommendations | PhD in CS

1 month ago

I wonder if an AI agent is just a modern alternative to a Python script. Yes, nowadays it's much easier to write complex automations, but in my personal experience you're rarely bound by the code itself, and more by understanding what and why you're building.

More articles by Jo Kristian Bergum