My AI Stack - March 2025

With the first quarter of 2025 behind us, the AI landscape continues to evolve at a dizzying pace. What worked six months ago might already be outdated, and new tools emerge almost weekly. Today (March 14th, 2025), I’m sharing my current AI stack — the collection of tools, services, and workflows that I’ve found most effective for my daily use.

This is very much a snapshot in time. The beauty (and challenge) of working with AI in 2025 is that everything is subject to ongoing change and evolution as the underlying technologies improve and new approaches emerge. Consider this less of a recommendation and more of a field report from someone deep in the AI trenches.

I’ve also published this stack as a GitHub repository for those who want to track changes over time or perhaps fork it as a starting point for documenting their own AI toolkit.

LLM APIs

While self-hosting LLMs is increasingly viable, I still prefer using cloud APIs for most of my needs. This approach spares me the cost and upkeep of local GPU hardware and simplifies deployment, while giving me access to state-of-the-art models.

My current favorites:

  • OpenRouter has become my go-to for consolidated billing and access to a wide variety of APIs. It lets me select models best suited for specific tasks without managing multiple accounts.
  • Google’s Gemini 2.0 Flash serves as my primary model due to its fast inference, large context window, and reasonable pricing. While not the best for complex reasoning, its versatility makes it suitable as the backing model for all my Assistant configurations. More information is available on the Google AI Developers site.
  • Qwen’s models, particularly Qwen Coder, are my preference for non-agentic code generation and debugging. I find these models particularly underrated in the current landscape.
  • Cohere excels at instructional text-based tasks where clarity and precision matter.
  • Anthropic still has its place in my toolkit, especially Claude 3.7 Sonnet for agentic code generation, though I’ve found its recent performance somewhat inconsistent. You can find more details on the Anthropic website.
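Because OpenRouter exposes an OpenAI-compatible chat completions endpoint, switching between the models above is mostly a matter of changing one string. Here is a minimal sketch of building such a request with only the standard library; the model identifier is an example and should be checked against OpenRouter's current model list.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at OpenRouter."""
    payload = {
        "model": model,  # e.g. "google/gemini-2.0-flash-001" -- verify against OpenRouter's list
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("google/gemini-2.0-flash-001", "Summarize this article.", "sk-or-...")
# urllib.request.urlopen(req) would send it; omitted here to avoid a live call.
```

Consolidated billing aside, this one-string model switch is the main reason a router works well with a large library of task-specific assistants.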

LLM Frontend

After testing numerous AI tool frontends, Open Web UI stands out as the most impressive for my needs. I’ve shared some of my configurations on the Open Web UI community platform.

One lesson learned: if you’re planning to use these tools long-term, start with PostgreSQL rather than SQLite. While the container defaults to Chroma DB, you can configure it to use Milvus, Qdrant, or other vector database options.
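To make the "start with PostgreSQL" point concrete: assuming Open Web UI's `DATABASE_URL`, `VECTOR_DB`, and `QDRANT_URI` environment variables (these are the names I believe the project reads — verify against the current documentation), the container environment might look like this, sketched as a Python dict you could render into a compose file:

```python
# Environment for an Open Web UI container, sketched as a dict.
# Variable names are assumptions based on Open Web UI's docs; double-check
# them before deploying, as they may change between releases.
openwebui_env = {
    # Point at PostgreSQL from day one instead of the default SQLite file.
    "DATABASE_URL": "postgresql://openwebui:secret@postgres:5432/openwebui",
    # Swap the default Chroma vector store for Qdrant.
    "VECTOR_DB": "qdrant",
    "QDRANT_URI": "http://qdrant:6333",
}

# Rendered as docker-compose "environment:" lines:
env_lines = [f"{key}={value}" for key, value in openwebui_env.items()]
```

Migrating from SQLite to PostgreSQL after months of chats is far more painful than setting this up front, which is the lesson above in one dict.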

My approach to self-hosting has evolved from experimentation to building for long-term stability, with careful component selection from the outset.

Speech To Text / ASR

Transitioning to speech-to-text has been transformative for my workflow! After unsatisfactory experiences with STT a decade ago, I’ve found that Whisper has transformed its reliability and made it good enough for everyday use.

  • I use Whisper AI as a Chrome extension for speech-to-text and rely on it for many hours each day.
  • For Android, the open source Futo Keyboard project shows promise, though it depends on local hardware capabilities.

While I recognize the use case for local processing, I generally prefer not to run speech-to-text or most AI models locally. On my Linux desktop, I use generative AI tools to create custom notepads that leverage Whisper via API.
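Those custom notepads boil down to POSTing audio to a Whisper endpoint. The transcription API expects a multipart/form-data upload; here is a standard-library sketch of encoding one (the endpoint URL and `whisper-1` model name follow OpenAI's API shape, but any Whisper-compatible server should accept the same form).

```python
import io
import uuid

# Any Whisper-compatible transcription endpoint; OpenAI's shown as an example.
WHISPER_URL = "https://api.openai.com/v1/audio/transcriptions"

def encode_multipart(fields: dict, file_name: str, file_bytes: bytes) -> tuple[bytes, str]:
    """Encode form fields plus one audio file as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(
            f"--{boundary}\r\nContent-Disposition: form-data; "
            f"name=\"{name}\"\r\n\r\n{value}\r\n".encode()
        )
    buf.write(
        f"--{boundary}\r\nContent-Disposition: form-data; "
        f"name=\"file\"; filename=\"{file_name}\"\r\n"
        f"Content-Type: application/octet-stream\r\n\r\n".encode()
    )
    buf.write(file_bytes)
    buf.write(f"\r\n--{boundary}--\r\n".encode())
    return buf.getvalue(), f"multipart/form-data; boundary={boundary}"

body, content_type = encode_multipart({"model": "whisper-1"}, "note.wav", b"\x00\x01")
# POST `body` to WHISPER_URL with headers:
#   {"Authorization": "Bearer ...", "Content-Type": content_type}
```

Keeping the encoding in plain stdlib means the notepad script has no dependencies beyond an API key.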

Vector Storage

I’m currently developing a personal managed context data store for creating personalized AI experiences. This is a long-term project, and my approach will likely evolve significantly over time.

My current focus is on using multi-agent workflows to proactively generate contextual data. One related project involves creating markdown files based on interviews detailing aspects of my life; I’ve also experimented with the inverse approach of running non-contextual data through an LLM pipeline to isolate context data.

For vector storage itself, I avoid OpenAI assistants to prevent vendor lock-in and instead use Qdrant to decouple my personal context data from other parts of the project.
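The mechanism Qdrant provides under the hood is similarity search over embeddings paired with payloads. This toy sketch shows the idea in pure Python — it is not Qdrant's API, and the three-dimensional "embeddings" stand in for vectors a real embedding model would produce.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "collection": id -> (embedding, payload), mirroring how a vector DB
# stores a vector alongside its metadata.
collection = {
    "fact-1": ([1.0, 0.0, 0.0], {"text": "My desktop runs OpenSUSE Linux."}),
    "fact-2": ([0.0, 1.0, 0.0], {"text": "I dictate notes via Whisper."}),
}

def search(query_vec: list[float], top_k: int = 1) -> list[tuple[str, dict]]:
    """Return the top_k payloads whose vectors are most similar to the query."""
    ranked = sorted(
        collection.items(),
        key=lambda kv: cosine(query_vec, kv[1][0]),
        reverse=True,
    )
    return [(doc_id, payload) for doc_id, (_, payload) in ranked[:top_k]]

hits = search([0.9, 0.1, 0.0])
```

Because the store is just (vector, payload) pairs behind a search call, the personal-context data stays portable: swapping Qdrant for another vector database does not touch the rest of the pipeline, which is the decoupling argument above.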

Regular Storage

Contrary to what some might expect, storing AI outputs robustly doesn’t require specialized solutions; regular databases work perfectly well. MongoDB and PostgreSQL are my preferred databases, with PostgreSQL being especially beneficial as it can easily be extended with PGVector for vector capabilities when needed.

Agentic & Orchestration Tools

These tools extend the capabilities of my core components, enabling more complex workflows and creative applications.

Agents & Assistants

I’ve explored many AI agents and assistants, noting that many interesting projects lack well-developed frontends. You can explore some of my AI assistants in my AI Assistants Library.

I’m a strong advocate for simple system prompt-based agents and have open-sourced over 600 system prompts since discovering AI in early 2024. I currently use these in Open Web UI, sharing my library with that community.

While having 600+ assistants might seem excessive, it’s quite manageable when each assistant is highly focused on a small, distinct task. For example, I have assistants for changing the persona of text, formalizing it, informalizing it, and other common writing tasks.
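A system prompt-based assistant is really just a named system prompt plus the usual messages array. This sketch shows the pattern with two illustrative prompts (mine, not excerpts from the published library):

```python
# A tiny registry of single-purpose, system prompt-based assistants.
# The prompts here are illustrative, not the author's published ones.
ASSISTANTS = {
    "formalize": "Rewrite the user's text in a formal register. Return only the rewrite.",
    "informalize": "Rewrite the user's text in a casual register. Return only the rewrite.",
}

def build_messages(assistant: str, user_text: str) -> list[dict]:
    """Build an OpenAI-style messages list for the named assistant."""
    return [
        {"role": "system", "content": ASSISTANTS[assistant]},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("formalize", "hey, can u fix this asap?")
```

Scaling this dict to 600+ entries costs almost nothing, which is why a large library of narrowly scoped assistants stays manageable: each one is a few lines of prompt, not a separate system.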

Other Generative AI

Beyond text generation, I use:

  • Leonardo AI for text-to-image generation, appreciating its diversity of models and configurable parameters.
  • Runway ML for creating animations from frames, though I haven’t explored text-to-video as extensively yet.

Workflows & Orchestration

My main interest in AI systems lies in addressing the challenge of making this rapidly growing technology more effective through tool use, workflow management, and orchestration.

  • I use N8N to provision and orchestrate agents, experimenting with different stack combinations while prioritizing simplicity and fewer components. I also appreciate the pipelines and tools within Open Web UI that enable actions on external services.
  • Langflow provides a user-friendly interface for visually building complex workflows with language models, making it easier to prototype and experiment with different LLM configurations.
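Stripped of the GUI, what tools like N8N and Langflow orchestrate is a chain of steps where each step's output feeds the next. A minimal sketch, with stub functions standing in for the LLM calls a real workflow would make:

```python
from typing import Callable

# Each step takes text in and returns text out.
Step = Callable[[str], str]

def run_pipeline(steps: list[Step], text: str) -> str:
    """Run text through a sequence of agent steps, each output feeding the next."""
    for step in steps:
        text = step(text)
    return text

# Stub agents standing in for LLM calls an orchestrator would dispatch.
def summarize(text: str) -> str:
    return text.split(".")[0] + "."

def shout(text: str) -> str:
    return text.upper()

result = run_pipeline([summarize, shout], "AI moves fast. Tools change weekly.")
```

The appeal of keeping the stack simple is visible even here: fewer components means each step stays a plain function of text, easy to reorder or swap out.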

AI IDEs & Computer Use

I currently subscribe to Windsurf, valuing its integrated experience for agent-driven code generation, despite some recent performance issues. I also use Aider, especially for single-script projects where precise context specification is advantageous.

My daily driver is OpenSUSE Linux, which influences my choice of tools. I’ve found Open Interpreter impressive for running LLMs directly within the terminal and see significant potential in this project, though it requires careful provisioning for debugging and working directly on the computer.

Docker Implementation

My GitHub repository includes a docker-compose.yaml file that encapsulates my AI stack. This setup allows for easy deployment and management of the various components.

Key components include:

  • OpenWebUI: My primary frontend for interacting with LLMs
  • PostgreSQL: The main database for storing application data
  • Qdrant: A vector database essential for semantic search and RAG applications
  • Redis: Used for caching and performance optimization - see Redis.io
  • Langflow: Facilitates workflow management for language models
  • Linkwarden: A bookmark and web content manager - see Linkwarden.app
  • N8N: My chosen workflow automation platform
  • Unstructured: For extracting content from a variety of file formats - see Unstructured.io

The implementation also includes monitoring (Glances - Nicolargo.github.io/glances/) and backup (Duplicati - Duplicati.com) to ensure a robust and maintainable system.

APIs Beyond LLMs

I leverage specialized APIs alongside LLMs to enhance specific tasks:

  • Tavily: This search API provides relevant, up-to-date information, making it ideal for RAG applications and ensuring LLMs have access to current knowledge.
  • Sonar by Perplexity: Delivers powerful search capabilities with built-in summarization and information synthesis, particularly effective for research and gathering comprehensive information on specific topics.
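Feeding search results into a RAG pipeline starts with a small JSON payload. Here is a sketch of a Tavily-style search payload; the field names (`query`, `max_results`, `search_depth`) follow Tavily's documentation as I recall it, so verify them against the current API reference before use.

```python
import json

def build_tavily_payload(query: str, max_results: int = 5, depth: str = "basic") -> str:
    """Build a JSON search payload in the shape Tavily's API appears to expect."""
    return json.dumps({
        "query": query,
        "max_results": max_results,
        "search_depth": depth,  # "basic" or "advanced", per Tavily's docs
    })

payload = build_tavily_payload("Qdrant snapshot backup how-to")
```

The response's result snippets can then be prepended to the LLM prompt, which is how these search APIs keep a model's answers anchored to current information.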

Final Thoughts

This stack represents what works for me right now, but I expect it to continue evolving as new tools emerge and existing ones improve. The AI landscape of 2025 is incredibly dynamic, with new capabilities appearing almost weekly.

If you’re building your own AI stack, I encourage you to experiment and find the combination of tools that best suits your specific needs and workflow. What works for me might not be ideal for you, and that’s perfectly fine.

I’ll continue updating my GitHub repository as my stack evolves, so feel free to check back periodically if you’re interested in tracking changes over time.

What does your AI stack look like in 2025? I’d love to hear about the tools and approaches that are working well for you.

