Making AI Accessible to All: Foundation Models
Dominik Krimpmann, PhD
Business & Technology Futurist at Accenture | Helping Companies Reimagine via Disruptive Technology
Artificial intelligence (AI) has advanced in ways that few could have imagined just a few years ago. But while AI now helps us solve many challenging problems, creating and deploying solutions still demands significant time and effort. Before a new dataset can be used to train the underlying model, people must painstakingly find and label huge numbers of images, texts, and graphs to make their meaning clear. Foundation models sidestep this costly, time-consuming process, delivering customizable, ready-to-use models that can help companies of all sizes hone their competitive edge.
Foundation Models versus Conventional AI: What’s the Difference?
Foundation models provide a general basis for AI applications. But that doesn’t mean they’re less sophisticated than traditional models. In fact, foundation models are vast cloud-based neural networks comprising billions of parameters – the best-known examples being OpenAI’s GPT-4, which powers ChatGPT, and Google’s BERT.
What is it, then, that sets these models apart from their more conventional AI counterparts? Traditionally, models are trained for specific tasks. Because these tasks are usually within a particular domain, the training data tends to be highly specific. And, as mentioned above, it must first be manually annotated or labeled to indicate the meaning or classification of its content.
Foundation models, by contrast, are trained for a wide range of tasks across various domains. The underlying data is, therefore, more general. Crucially, no time-consuming labeling is needed. Instead, the model supervises its own training, learning to find patterns, structures, and relationships within the data, without any human assistance. The result: more flexible, reusable AI models.
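To make this a little more concrete, here is a minimal sketch of a single self-supervised training step, written in Python with the open-source Hugging Face transformers library and a BERT checkpoint. The mask positions and the model choice are purely illustrative; real pretraining repeats this over billions of sentences. The key point: the “labels” are just the original tokens the model has to reconstruct, so no human annotation is involved.

```python
# Minimal sketch of one self-supervised (masked-language-modeling) training
# step with Hugging Face transformers. No human labels are needed: the
# targets are simply the original tokens that get hidden behind [MASK].
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "Foundation models learn patterns from raw, unlabeled text."
inputs = tokenizer(text, return_tensors="pt")

# Pick a couple of token positions to mask (arbitrary, for illustration).
mask_positions = torch.tensor([3, 7])

# The labels are the original tokens at the masked positions; everywhere
# else we use -100 so those positions are ignored by the loss.
labels = torch.full_like(inputs["input_ids"], -100)
labels[0, mask_positions] = inputs["input_ids"][0, mask_positions]

# Hide the chosen tokens from the model.
inputs["input_ids"][0, mask_positions] = tokenizer.mask_token_id

# One training step: predict the masked tokens and backpropagate the loss.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
print(f"Self-supervised loss: {outputs.loss.item():.3f}")
```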
Consume or Customize?
When it comes to deploying foundation models, there are two main approaches – consumption and customization. Because these pretrained models work out of the box, organizations can simply consume them via APIs, using prompt-engineering techniques to steer their output toward specific use cases.
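As a rough illustration of the consumption approach, the Python sketch below sends a single HTTPS request to a hosted model, with a system prompt doing all the tailoring. It assumes an OpenAI-style chat-completions endpoint and an API key stored in the OPENAI_API_KEY environment variable; other providers expose very similar APIs.

```python
# Minimal sketch of the "consume" approach: call a hosted foundation model
# through its REST API and steer it purely with the prompt, no retraining.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set beforehand

# Prompt engineering: the system message adapts the general-purpose model
# to a specific use case.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system",
         "content": "You are a support agent for an insurance company. Answer briefly and politely."},
        {"role": "user",
         "content": "How do I file a claim for a damaged laptop?"},
    ],
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```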
Convenient as the consumption option is, it’s unlikely to be the preferred approach. To enhance usability and tap into the real value of foundation models, most organizations will probably opt for more extensive customization – using their own, smaller datasets to fine-tune the models so they exactly match their needs.
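To give a flavor of what that customization involves, the sketch below fine-tunes a small pretrained model on a handful of organization-specific examples using the Hugging Face Trainer. The tiny support-ticket dataset, the label scheme, and the distilbert-base-uncased checkpoint are all illustrative placeholders; a real project would use thousands of in-house examples.

```python
# Minimal sketch of the "customize" approach: fine-tune a pretrained model
# on a small, domain-specific labeled dataset with the Hugging Face Trainer.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# A tiny in-house dataset: support tickets labeled urgent (1) or routine (0).
data = Dataset.from_dict({
    "text": ["Server is down for all customers!",
             "Please update my billing address."],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ticket-classifier",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()  # adapts the general model to the company's own task
```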
Impressive Capabilities and Applications
Let’s now zoom in on what foundation models can do and how they can be applied. One of their key features is the ability to understand and interpret human language. Building on this, these models provide excellent support for text generation. If you’ve tried out ChatGPT, you’ll know just how well it generates coherent, contextually relevant text in response to prompts or queries. But that doesn’t just apply to natural language; foundation models can also generate usable computer code.
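If you’d like to see this generation capability with minimal setup, the open-source transformers pipeline wraps it in a few lines of Python. The sketch below uses the small GPT-2 checkpoint purely for illustration; any causal language model, open or hosted, would work the same way.

```python
# Minimal sketch of text generation with an open foundation model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Foundation models are changing how companies build AI because"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```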
In addition, models of this kind can rapidly summarize the content of documents, enabling users to quickly identify key information, without having to wade through reams of text. Perhaps even more impressive is the ability of foundation models to translate text quickly and accurately from one language to another, breaking down linguistic barriers.
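Both capabilities are available more or less off the shelf. The sketch below, for example, runs summarization and English-to-German translation through ready-made transformers pipelines; the checkpoints named are common public ones, chosen here only for illustration.

```python
# Minimal sketch of summarization and translation with ready-made pipelines.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")

report = (
    "Foundation models are large neural networks pretrained on broad data. "
    "Because they are reusable, organizations can adapt them to many tasks "
    "instead of building separate models from scratch for each one."
)

# Condense the report to its key points.
print(summarizer(report, max_length=30, min_length=10)[0]["summary_text"])

# Translate a sentence into German.
print(translator("Foundation models break down linguistic barriers.")[0]["translation_text"])
```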
From Content Editing Right Through to Content Creation
Another application that draws on the tech’s language capabilities is content editing and creation. When deployed as an intelligent writing assistant, a foundation model can suggest improvements, rephrase sentences, and enhance the overall quality of written content. But these models go further still: They can even be used to generate initial ideas on given topics, which human writers can then adapt and refine, as needed.
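A minimal version of such a writing assistant can be built simply by prompting an instruction-tuned model to rephrase a draft, as in the sketch below. The google/flan-t5-base checkpoint is an illustrative choice; larger models produce noticeably better rewrites.

```python
# Minimal sketch of a writing assistant: ask an instruction-tuned model
# to rewrite a clunky sentence.
from transformers import pipeline

assistant = pipeline("text2text-generation", model="google/flan-t5-base")

draft = ("The meeting which was held yesterday by the team was in regards to "
         "the project that is currently being worked on by us.")
prompt = f"Rewrite this sentence so it is clear and concise: {draft}"

print(assistant(prompt, max_new_tokens=60)[0]["generated_text"])
```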
Because of this ability to understand and interpret human language, foundation models are also ideally suited for use as virtual assistants and chatbots. Here, they not only provide users with relevant information, but also enter into meaningful conversations, making an invaluable contribution to customer service and support. In fact, Accenture considers models of this kind potentially useful in tackling the roughly 70% of non-straightforward customer service communication that calls for a conversational approach of the kind traditionally provided by skilled support staff.
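Under the hood, such a chatbot is often little more than a loop that keeps appending the conversation history and sending it back to the model, so each answer stays in context. The sketch below assumes the same OpenAI-style chat API and environment variable as the earlier consumption example.

```python
# Minimal sketch of a foundation-model-backed support chatbot: each turn,
# the full conversation history is sent to the model so replies stay in context.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

messages = [{"role": "system",
             "content": "You are a friendly customer-support assistant for an online retailer."}]

for _ in range(3):  # three user turns, for illustration
    messages.append({"role": "user", "content": input("Customer: ")})
    reply = requests.post(
        API_URL, headers=HEADERS, timeout=30,
        json={"model": "gpt-4", "messages": messages},
    ).json()["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})  # keep context
    print("Assistant:", reply)
```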
More Speed, Less Effort, Greater Scalability, Lower Costs
Foundation models offer many appealing benefits. As customizable, pretrained solutions, they greatly reduce the time and effort that organizations need to invest in building and training AI systems from scratch. This frees up resources to focus on fine-tuning the general model in line with their specific requirements.
Additionally, the cloud-based architecture of foundation models makes them easy to scale in line with the needs of businesses of all sizes. And finally, these models enable companies to run their AI projects more easily and cost-effectively. Since the heavy lifting has already been done in pretraining, organizations can concentrate on tailoring the model to particular tasks.
Two Downsides: Limited Transparency and Inherent Bias
But for all their obvious appeal, foundation models also have their drawbacks. Like many AI solutions, they are essentially black boxes. The sheer volume of data used to train them, plus the many deep learning layers involved, can make it impossible to understand how the model arrived at its results.
Another issue is that these results hinge on the underlying training data. Consequently, foundation models inherit any biases, distortions, or patterns present in that data, potentially giving rise to incorrect results.
A Wealth of Opportunities
Despite these drawbacks, foundation models hold considerable promise for transforming various industries and domains. Currently, we’re at a stage in the adoption cycle where many organizations are experimenting with creating new applications by consuming off-the-shelf foundation models. As they’re discovering, the tech already offers many opportunities – and, as it evolves, these are likely to become even more diverse.
Over to You
I hope this blog post has given you an initial insight into the world of foundation models. If you have any questions about this tech, feel free to reach out to me. And if you have ideas of your own about foundation models and traditional AI, please share them in the comments.