AI-Oriented Software (AIOS) Mindset
In an era when technology evolves at breakneck speed, traditional approaches to software development are starting to show their limitations. As businesses seek to become more agile and user-focused, AI offers a powerful new paradigm—one where software can adapt, learn, and evolve alongside its users. This shift requires a fundamental change in how product and tech executives think about software, moving from rigid, predefined systems to flexible, AI-driven solutions.
This blog will explore how AI can transform software development by enabling more adaptive, user-responsive systems. We’ll discuss the stages companies go through to adopt AI, from leveraging pre-trained models to fine-tuning and eventually training proprietary models. But this is just the beginning. In future blogs, we’ll also touch upon the implications of this shift, including the need to rethink data privacy, organizational culture, and AI ethics. We will dive deeper into practical applications, challenges in scaling AI, the importance of MLOps for managing AI in production, and emerging trends like multi-agent systems and context-aware AI. Together, these insights will help businesses navigate the complex landscape of AI-driven software and unlock its full potential.
Let’s explore the transformative journey from traditional software development to an AI-oriented software (AIOS) mindset.
How Can Software Benefit from AI?
How can software development leverage AI, and how should product and tech executives rethink their approach to software to harness AI's full potential? These are the key questions I aim to address in this blog.
Traditional Software Development vs. AI-Driven Development
Historically, software development followed either the waterfall model—where requirements are gathered upfront—or the more recent agile model, which gathers requirements iteratively. In both cases, developers implement software strictly based on predefined requirements. These requirements are often detailed, specifying exactly what users should do and outlining various scenarios. While structured, this approach can lead to rigid software limited to its initial design specifications.
However, real-world usage evolves, a phenomenon driven by the concept of affordance—as users interact with software, they discover new ways to use it, leading to changing expectations and opportunities. What if software could adapt to evolving user needs without being limited to its initial definitions? This is where AI steps in, offering the potential for flexible and adaptive software that evolves alongside user behavior.
Transitioning to an AI-First Mindset
Executives and developers accustomed to traditional software development face several challenges in adopting an AI-driven approach. Often, they mistakenly start with the monolithic approach, the most complex and costly stage: training proprietary AI models to replace hand-written logic. However, there are more efficient ways to get started, which we will explore in detail below.
1. AI-as-a-Service (AIaS) Approach
A more mature approach to integrating AI mirrors what experienced architects have been doing for a long time: service-oriented architecture (SOA). Under this approach, AI becomes just a service that can leverage existing pre-trained models, particularly Large Language Models (LLMs). These models are already trained and ready to use, enabling faster deployment. LLMs can make software much more flexible, as product managers no longer need to anticipate every possible user input. Instead of designing numerous form-based features, a single AI-powered interface can handle free-form inputs like speech, document uploads, or Q&A sessions.
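As a minimal sketch of this service-oriented framing, the code below hides the model behind a narrow interface so the rest of the product treats AI as just another service. The `EchoModel` backend and `IntakeService` names are invented for illustration; in practice you would plug in a client for a hosted LLM behind the same interface.

```python
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    """Any pre-trained model exposed behind this interface is 'just a service'."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class EchoModel:
    """Stand-in backend for this sketch; swap in a real LLM client here."""

    prefix: str = "[model] "

    def complete(self, prompt: str) -> str:
        return self.prefix + prompt


class IntakeService:
    """One free-form entry point replacing several form-based features."""

    def __init__(self, model: TextModel) -> None:
        self.model = model

    def handle(self, user_input: str) -> str:
        # A single prompt template absorbs inputs that would otherwise
        # each need their own dedicated form.
        prompt = f"Classify and answer the following request:\n{user_input}"
        return self.model.complete(prompt)


service = IntakeService(EchoModel())
```

Because the model sits behind `TextModel`, the product can move from a zero-shot hosted model to a fine-tuned one later without touching the calling code.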
For example, LLM-powered software can dynamically ask users for more information based on their initial inputs, reducing the need for rigid data-collection forms. This requires enumerating the data the product needs and applying prompt-engineering techniques to elicit the missing details.
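A minimal sketch of that pattern, with hypothetical field names: given the fields the product requires and what the user has supplied so far, build an instruction asking the model to request only what is still missing. In a real system the returned string would be sent to the LLM, which phrases the follow-up conversationally.

```python
from typing import Optional

# Hypothetical required fields for an insurance-claim intake flow.
REQUIRED_FIELDS = {
    "full_name": "your full name",
    "policy_number": "your policy number",
    "incident_date": "the date of the incident",
}


def follow_up_prompt(collected: dict) -> Optional[str]:
    """Build an instruction covering only the fields still missing."""
    missing = [desc for field, desc in REQUIRED_FIELDS.items()
               if not collected.get(field)]
    if not missing:
        return None  # everything collected; no follow-up needed
    return "Politely ask the user to provide: " + "; ".join(missing)
```

The rigid multi-page form collapses into a loop: call the model with this prompt, merge whatever the user answers into `collected`, and stop when `follow_up_prompt` returns `None`.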
While zero-shot models are the easiest way to leverage AI, collaboration between engineers and product teams is still required to identify which parts of the product can benefit from AI integration. A gradual approach—starting with smaller portions of the product and expanding as needed—works best. As the product becomes more AI-driven, advanced concepts like multi-agent systems (MAS) and retrieval-augmented generation (RAG) become necessary.
- Multi-agent systems (MAS) involve multiple AI agents, each managing a specific knowledge domain within the product. These agents collaborate under the guidance of an orchestrator and share data selectively to enhance overall functionality.
- Retrieval-augmented generation (RAG) allows AI models to access and use existing enterprise knowledge, such as databases and internal documents, to generate contextually accurate responses. This approach grounds answers in structured data sources rather than relying solely on the AI's linguistic capabilities.
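To make the two ideas above concrete, here is a deliberately toy sketch: each agent owns a small document set, a word-overlap `retrieve` function stands in for real embedding-based retrieval, and an orchestrator routes queries by keyword. All names, documents, and routing rules are invented for illustration; production systems use LLM-driven routing and a vector store.

```python
def retrieve(query: str, documents: list, k: int = 1) -> list:
    """Toy RAG step: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]


class Agent:
    """One agent per knowledge domain; answers grounded in its documents."""

    def __init__(self, domain: str, documents: list) -> None:
        self.domain = domain
        self.documents = documents

    def answer(self, query: str) -> str:
        context = retrieve(query, self.documents)[0]
        return f"[{self.domain}] based on: {context}"


class Orchestrator:
    """Routes each query to the agent whose domain keyword it mentions."""

    def __init__(self, agents: dict) -> None:
        self.agents = agents  # keyword -> agent

    def route(self, query: str) -> str:
        for keyword, agent in self.agents.items():
            if keyword in query.lower():
                return agent.answer(query)
        return "No agent available for this query."


billing = Agent("billing", ["Invoices are issued on the 1st of each month.",
                            "Refunds take 5 business days."])
support = Agent("support", ["Password resets are done from the login page."])
orchestrator = Orchestrator({"invoice": billing, "refund": billing,
                             "password": support})
```

Even in this toy form, the key property of MAS is visible: each agent sees only its own documents, and the orchestrator decides which domain a query belongs to.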
I am currently researching multi-agent systems. For more insights, check out my ongoing work.
2. Fine-Tuning Approach
While zero-shot models and RAG approaches are quick and straightforward, they can sometimes lead to unpredictable or fragile software behavior. The solution here is fine-tuning: training a pre-existing model on domain-specific data, rather than merely retrieving existing enterprise knowledge at query time as RAG does. For instance, GPT models can be fine-tuned to generate responses tailored to legal, medical, or other specialized fields.
Fine-tuning allows for more predictable and reliable AI behavior while leveraging the power of existing models. However, it requires access to high-quality, domain-specific data and expertise in model training.
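As an illustration of the data-preparation side of fine-tuning, the sketch below assembles domain examples into a chat-style JSONL file. The layout mirrors the `messages` format used by OpenAI's chat fine-tuning API at the time of writing, but the exact schema and upload steps differ by vendor, so treat this as an assumed shape and check your provider's documentation. The legal examples are invented placeholders.

```python
import json

# Hypothetical domain-specific training pairs (prompt -> desired answer).
examples = [
    {"prompt": "Can a tenant terminate a lease early?",
     "completion": "Often yes, subject to the lease terms and local law; "
                   "review the agreement before acting."},
    {"prompt": "What should a basic NDA cover?",
     "completion": "Typically the definition of confidential information, "
                   "its duration, and permitted disclosures."},
]


def to_jsonl(examples: list, system_msg: str) -> str:
    """Serialize examples into one chat-format JSON record per line."""
    lines = []
    for ex in examples:
        record = {"messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": ex["prompt"]},
            {"role": "assistant", "content": ex["completion"]},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)


jsonl = to_jsonl(examples, "You are a cautious legal assistant.")
```

The quality bar noted above applies directly here: the fine-tuned model will reproduce whatever style and accuracy these `assistant` completions exhibit, so curating them is most of the work.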
3. Building Proprietary Models Approach
Many companies mistakenly begin their AI journey by attempting to train proprietary models from scratch. The idea is that, instead of hardcoding static logic for different scenarios, they can gather data on user interactions and train a custom neural network to predict future outcomes. This approach can be useful in specific cases but is often the most challenging and resource-intensive.
Why? Many systems have high-dimensional data involving numerous input and output variables. Training effective models in such contexts requires vast amounts of data, specialized architectures, and creative training techniques.
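To see why dimensionality drives data requirements, consider a deliberately tiny version of the idea: predicting an outcome from logged user interactions. The sketch below trains a two-feature logistic model from scratch on made-up interaction data; with two input variables a handful of examples suffice, but every added input dimension multiplies the data needed, which is exactly the problem described above.

```python
import math

# Toy interaction log: (pages_viewed, minutes_on_site) -> converted (1/0).
# Invented data for illustration only.
data = [((1, 2), 0), ((2, 1), 0), ((8, 30), 1), ((9, 25), 1),
        ((1, 1), 0), ((7, 28), 1), ((2, 3), 0), ((10, 40), 1)]


def sigmoid(z: float) -> float:
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))


def train(data, lr=0.1, epochs=500):
    """Stochastic gradient descent on the log-loss of a logistic model."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b


w, b = train(data)


def predict(x1: float, x2: float) -> bool:
    return sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5
```

Eight examples are enough here only because the problem is two-dimensional and cleanly separable; real products with hundreds of interacting variables need orders of magnitude more data and far more capable architectures.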
For example, developing something as sophisticated as ChatGPT required a substantial share of the text available on the internet, years of research culminating in the transformer architecture, and innovative techniques like token masking and reinforcement learning from human feedback. While training proprietary models can benefit certain specialized areas, it is usually not the best entry point for companies new to AI.
A Gradual Approach to AI Adoption
It’s crucial to progress through these stages sequentially, gradually moving toward more advanced AI capabilities. In many cases, a hybrid approach that combines zero-shot models, fine-tuning, and proprietary training can yield the best results. This way, companies can optimize their AI strategy, leveraging existing models where possible while selectively investing in training their own models for critical, high-impact use cases.
By adopting this mindset, product and tech leaders can unlock AI's true potential in their software development processes, enabling more adaptive, user-centric products that evolve alongside customer needs.