2025 will be the Beginning of Actual AI First Applications
Daivik Goel
Building a New Consumer AI Experience | I also throw events | Ex-Tesla, Cisco Meraki | Podcast Host | uWaterloo Eng Grad
Since the introduction of ChatGPT in late 2022, a whole new paradigm of applications has been possible. Everyone in Silicon Valley rushed to find intuitive ways to implement this technology in their applications.
And the broad result?
Basically the same software applications you are used to, with a chat interface added on top.
We’ve seen this with everything from Notion AI to Microsoft Copilot to virtually every SaaS product rushing to add some form of AI assistant.
However, 2025 is where I think all of this starts to change. The introduction of underlying infrastructure, protocols that help with context and centralization, and a shift in design thinking mean we should see a whole new suite of AI applications unlike anything before. But what does that look like? Let’s talk about it.
The Chat Interface will slowly start to die
In the pursuit of keeping their investors happy, many companies became “AI First” by throwing a basic chat interface over their existing software UI. They could check the box and call it a day.
Look at any major enterprise software product today: they all proudly advertise their AI features, but dig deeper and you’ll find it’s just a chatbot sitting in the corner of the screen.
To me, this is equivalent to software companies porting their desktop applications 1:1 to the cloud, or treating their mobile app as a mere companion to the desktop one.
In theory they are leveraging the technology, but it is massively underutilized, in the same way early iPhone apps were just shrunken versions of desktop software.
AI should mark a complete re-imagination of the way we think about applications. Embedded at the ground level of every software stack should be an LLM that is continuously trained on the user’s actions and interacts with every layer of the stack above it.
Rather than being statically designed, UIs should be dynamically generated within bounds that product designers specify when creating applications. LLMs should draw from a predefined set of UI templates, composing and shifting the interface based on the user’s input.
It can look like an email client that completely reorganizes itself based on your current project, a code editor that morphs its interface depending on the language you’re using, or a CRM that presents different views based on your role and current objectives. The UI becomes a living, breathing entity that evolves with your needs.
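To make the idea concrete, here is a minimal sketch of designer-bounded dynamic UI. The template names and the rule-based `choose_layout` are hypothetical stand-ins; in a real system the selection step would be an LLM call constrained to the registered templates.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical UI templates a product designer registers as the bounds
# within which the model is allowed to compose an interface.
TEMPLATES = {
    "triage": ["inbox_list", "quick_reply", "snooze_bar"],
    "deep_work": ["single_thread", "related_docs", "focus_timer"],
    "review": ["kanban_board", "comment_panel"],
}

@dataclass
class UserContext:
    unread_count: int
    active_project: Optional[str]

def choose_layout(ctx: UserContext) -> list:
    """Stand-in for an LLM call: map the user's current context to one
    of the designer-approved templates instead of a free-form UI."""
    if ctx.unread_count > 50:
        return TEMPLATES["triage"]
    if ctx.active_project:
        return TEMPLATES["deep_work"]
    return TEMPLATES["review"]

print(choose_layout(UserContext(unread_count=120, active_project=None)))
```

The key design point is that the model never invents widgets; it only picks and arranges components the designer has approved, which keeps the "living" UI inside predictable bounds.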
We need to stop thinking of the AI layer as separate from the core software stack and start treating it as the engine underneath it all. There won’t be a chat interface, because the whole application will be an abstraction of it.
The new Vercel AI SDK is a big step in this direction: https://sdk.vercel.ai/docs/introduction
Agents will finally get the necessary context
Today, Claude, ChatGPT, Perplexity, or whatever agent you use is siloed in its own world. You have to feed context into each one by uploading your documents, writing prompts that describe you, or simply through extended use.
This tedious process is necessary to unlock the full power of LLMs: making them deeply personalized to you. It’s like having to tell your story over and over again to different assistants who can’t talk to each other.
However this is all set to change this year.
Perplexity’s recent acquisition of Carbon means it now has a myriad of external data connectors to tap into.
And Anthropic’s introduction of the Model Context Protocol (MCP) is a massive deal that not enough people are talking about. It offers a standardized protocol for AI models to seamlessly connect to any data source, rather than requiring a custom integration layer for each dataset.
MCP is like the HTTP of AI: a standard way for AI models to request and receive information from any source. Just as HTTP revolutionized how web applications communicate, MCP will transform how AI models interact with data sources.
There are already over 30 connectors built, and the list keeps growing every day. From Salesforce to Notion to GitHub, your AI assistant can now reach into these tools and work with your data directly.
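For a sense of what the standardization buys you: MCP is built on JSON-RPC 2.0, so every connector speaks the same message shape. Below is a sketch of a client request to invoke a connector-exposed tool; the tool name and arguments are hypothetical, but the envelope (`jsonrpc`, `id`, `method`, `params`) is the generic JSON-RPC 2.0 form.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request asking an MCP-style server to run a
    tool. The same envelope works regardless of which connector is on
    the other end."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical Slack-connector tool name, for illustration only.
msg = make_tool_call(1, "slack_post_message",
                     {"channel": "#general", "text": "hi team"})
print(msg)
```

Because every data source answers the same wire format, the model-side code never changes when you swap Salesforce for Notion; only the advertised tool names differ.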
So what does this mean in essence? It means agents will no longer be siloed.
One query to your AI agent can take a document from Google Docs, summarize it and send it to your colleague in Slack.
It means giving controls to AI agents so they can act like Jarvis, querying multiple data sources and taking actions on your behalf.
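The Docs-to-Slack flow above can be sketched as a three-step chain. Everything here is a mock: the connector functions and their return values stand in for real MCP connector calls and an LLM summarization step.

```python
def fetch_doc(doc_id: str) -> str:
    """Mock Google Docs connector: returns the document body."""
    return "Q1 plan: ship onboarding, grow retention, hire two engineers."

def summarize(text: str) -> str:
    """Mock LLM call: collapse the doc into a one-line summary."""
    return text.split(":")[0] + " (3 action items)"

def post_to_slack(channel: str, text: str) -> dict:
    """Mock Slack connector: pretend to post and echo the payload."""
    return {"channel": channel, "ok": True, "text": text}

# One "query" becomes a chain across three previously siloed tools.
result = post_to_slack("#team", summarize(fetch_doc("doc-123")))
print(result["text"])
```

The interesting part is not any single step but that the agent, not the user, decides the chain, which is exactly what shared context via connectors makes possible.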
If you have Claude Desktop, try out MCP today.
It is a truly groundbreaking experience.
Agents will abstract away tasks for you
With the rise of new infrastructure layers such as BrowserBase and CrewAI, you will start to see even more tasks abstracted away within software applications.
An app that finds new shoes at or under a certain price and lets you buy them from any website that offers them.
An app that automatically finds and applies to Software Engineering jobs in the Bay Area that are focused on Data Science.
But it goes beyond simple automation. These agents will be able to handle complex, multi-step processes that previously required human intervention at every stage. Imagine an agent that not only finds job postings but also customizes your resume for each position, drafts appropriate cover letters, and manages your interview schedule. Or an agent that handles your entire travel planning, from finding flights and hotels that match your preferences to making restaurant reservations and creating an optimized itinerary.
None of this requires relying on an official API: an AI agent using a headless browser can automate it for you. On top of MCP, we are now making it so every single website can be accessed, interacted with, and scraped, no matter what underlying barriers are imposed on it.
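The shoe-buying example reduces to a small watch-and-decide loop. In this sketch the page HTML is a hard-coded mock; in a real agent it would come from a live headless-browser session (e.g. via BrowserBase), and the threshold would come from the user.

```python
import re

TARGET_PRICE = 120.00  # hypothetical user-set ceiling

def extract_price(html: str) -> float:
    """Pull the first dollar amount out of a product page. Scraping by
    pattern is what lets the agent work on sites with no API."""
    match = re.search(r"\$(\d+(?:\.\d{2})?)", html)
    if match is None:
        raise ValueError("no price found on page")
    return float(match.group(1))

# Mock page body standing in for a live headless-browser fetch.
page = '<div class="price">Now $99.99</div>'
price = extract_price(page)
should_buy = price <= TARGET_PRICE
print(should_buy)
```

A production agent would add retries, a human-approval step before checkout, and per-site selectors, but the core loop of fetch, extract, compare, act is this small.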
This will fundamentally shift the way we think about our interactions with software.
It will also surface new challenges, which will demand interesting solutions.
For example, here is my friend Andre explaining why we will need to give agents a secure embedded wallet so they can execute transactions with approval and act truly autonomously.
The real power comes from combining these capabilities.
An agent could monitor your calendar, notice you’re planning a business trip, automatically book travel arrangements within your company’s policy guidelines, update your expense tracking software, and notify relevant team members, all without you having to orchestrate these actions.
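The business-trip chain above can be mocked as a pipeline with a policy guardrail in the middle. All of the functions and the policy numbers here are hypothetical; the point is where the company's guardrails sit relative to the agent's autonomy.

```python
# Hypothetical company travel policy the agent must stay inside.
POLICY = {"max_hotel_per_night": 250}

def find_trip(event: dict) -> dict:
    """Mock travel search keyed off a calendar event."""
    return {"flight": 480, "hotel_per_night": 220, "nights": 2}

def within_policy(trip: dict) -> bool:
    """Guardrail check: the agent may only book policy-compliant trips."""
    return trip["hotel_per_night"] <= POLICY["max_hotel_per_night"]

def book(trip: dict) -> dict:
    """Mock booking step: totals the cost and confirms."""
    total = trip["flight"] + trip["hotel_per_night"] * trip["nights"]
    return {"booked": True, "total": total}

event = {"title": "Customer onsite", "city": "NYC"}
trip = find_trip(event)
booking = book(trip) if within_policy(trip) else {"booked": False}
print(booking)
```

Downstream steps like updating expense software and notifying teammates would hang off the same `booking` result, which is what makes the whole chain orchestratable without the user in the loop.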
Your Agent will power all your underlying software applications
The way I see agents in the future is as a piece you slot into every software application.
Just as an operating system is the foundational layer every program runs on, an AI agent layer will be the foundation applications are built on top of.
The future I see is one where, when you log into an application, it prompts you to log into Perplexity, Claude, or OpenAI.
Logging in will slide your AI agent into that software layer, essentially applying the fine-tuning the software provides on top of an agent that follows you around.
This isn’t just about convenience, it’s about creating a truly personalized computing experience. Your agent will understand your writing style, your work patterns, your preferences, and your goals. When you switch between applications, you won’t need to rebuild this context from scratch.
Whether you’re writing a document, analyzing data, or managing projects, your agent brings your entire context with it.
This means the context of your interactions and your personalization are no longer siloed per application; they are carried with you across software applications. Apps won’t each maintain their own model of you; instead, your agent slides into the software stack.
Frameworks like LangChain will make it so every application can handle the differences between model providers.
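Provider abstraction is a small idea worth seeing in code. Here the application programs against a single `complete` interface, and a registry maps provider names to implementations; LangChain's chat-model classes play this role in practice, but the providers below are mocks so the sketch stays self-contained.

```python
class MockClaude:
    """Stand-in for an Anthropic-backed client."""
    def complete(self, prompt: str) -> str:
        return "[claude] " + prompt

class MockGPT:
    """Stand-in for an OpenAI-backed client."""
    def complete(self, prompt: str) -> str:
        return "[gpt] " + prompt

# The app picks a provider by name; its own code never changes.
PROVIDERS = {"anthropic": MockClaude(), "openai": MockGPT()}

def run(provider_name: str, prompt: str) -> str:
    return PROVIDERS[provider_name].complete(prompt)

print(run("anthropic", "draft a standup update"))
```

Whichever agent the user logs in with, the application calls the same `run`, which is what lets "your agent" slot into any software stack.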
It is like OAuth on steroids, fundamentally altering the way you interact with software applications.
Your entire digital experience will become seamless and interconnected. Your agent could help you draft an email in Gmail while pulling relevant data from your last Zoom meeting, referencing documents from Dropbox, and checking your availability in Calendar, all while maintaining your personal communication style and professional boundaries.
These are just some of the ways all software applications will fundamentally change to become AI first. Within these massive shifts are multiple billion dollar startups that will fundamentally help usher in this transition.
We are still in the iBeer era of AI applications, with the big shifts still to come.
I think 2025 will be the year we start to see what that truly means. The companies that understand and embrace these changes, the ones moving beyond simple chat interfaces, leveraging universal context, enabling true automation, and creating seamless agent experiences, will be the ones that define the next era of software.
It won’t be about just adding AI features.
It will be about rebuilding software from the ground up with AI at its core.
Featured Events
Events that I host, will be at, or think might be worthwhile to check out. Some of these I plan to attend, so feel free to say hi; others I am hosting, so please feel free to swing by!
San Francisco
01/08 - AI Night @ GitHub ~ w/ Jam.dev - Register
01/08 - San Francisco AI Founder and VC Networking - Register
01/27 - SF Founders Brew Mixer - Register
Toronto
01/08 - 7 & 8 Figure Agency Owners Private Breakfast - Register
01/16 - Automating Your Workflow With Agents - Register
01/27 - TO Founder’s Brew Mixer - Register
New York
01/07 - Everything But SaaS with SPC & USV - Register
01/08 - NYC Data Exploration with Plotly - Register
01/10 - "fun house" - #1 event for vc's, creators + tech founders by tappedX - Register
Want to see your event here? Is there something going on that I should be highlighting? Feel free to reach out and I would be happy to feature them!
Check out this Podcast Episode!
You can also listen on Spotify, Apple Music, and Google Podcasts.
In this episode of Building Blocks, we dive deep with Shrikant Latkar, co-founder of BaseRock.ai and former CMO of InMobi.
With over two decades of experience spanning large enterprises to startups, Shrikant shares his journey from engineering to marketing leadership, and now to revolutionizing software testing through AI.
He offers valuable insights on transitioning from 200,000-person companies to solo entrepreneurship, building and selling Oust Labs (acquired by Betterplace), and his current mission to transform QA with BaseRock.ai.
Thanks for reading,
Daivik Goel