History Teaches us to Embrace Agility in Technology
Matias Undurraga Breitling
Enterprise Technologist @ AWS | Transformation, Strategic Tech Planning
In the world of tech, it's like we're riding a merry-go-round: we keep seeing the same things, but they look a bit different every time. This is especially true in how we build and design our architecture. We used to build big, bulky systems that did everything, and we have been moving towards breaking those systems into smaller parts. That's the transition from monolithic architecture to a more service-oriented architecture: a move towards faster development, greater flexibility, and agility, all while keeping costs in check and scaling efficiently.
History has a way of repeating itself, and we may find our past decisions reflected in the world of large language models. The recent rush to build colossal models, like Falcon with its staggering 180 billion parameters, or the discussions around ChatGPT's impressive capabilities, reflects our fascination with 'bigger is better'. But is that always the case?
We're heading down the same road again, building one model to rule them all, much like 'monolithic' applications. It's tempting to build a single model that can handle everything, but in doing so we may be forgetting the lessons we learned before. We have to ask ourselves whether we're repeating the same mistake or whether we can find a smarter way forward.
What if, instead of building one huge model, we built a collection of smaller, specialised ones? Imagine a future where, instead of one gigantic model, we have an orchestra of specialised, smaller models. These models, at up to 20 billion parameters each, are not only easier to manage but can be fine-tuned for specific tasks, offering a more tailored and efficient solution. This approach, akin to microservices in software architecture, allows specific services to scale as needed, rather than the entire application. It's about having the right tool for the right job, not a one-size-fits-all bazooka for tasks that require a scalpel.
This mindset shift towards smaller, specialised models offers several advantages. It allows for faster deployment, easier training, and the ability to perform continuous learning and improvement. Flexibility becomes the cornerstone of this approach, enabling us to route tasks to the most suitable models, ensuring efficiency and speed.
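To make the routing idea concrete, here is a minimal sketch of what a task router in front of a fleet of specialised models could look like. All model names and task categories below are hypothetical, chosen purely for illustration; a real system would route on richer signals (intent classification, cost, latency) rather than a simple lookup.

```python
# Hypothetical registry of small, specialised models keyed by task type.
# Names are illustrative only, not real model identifiers.
SPECIALISED_MODELS = {
    "summarisation": "summariser-7b",
    "code": "code-assistant-13b",
    "translation": "translator-3b",
}

# A larger generalist model, used only when no specialist fits the task.
FALLBACK_MODEL = "generalist-20b"


def route(task_type: str) -> str:
    """Return the most suitable model for a task, falling back to a generalist."""
    return SPECIALISED_MODELS.get(task_type, FALLBACK_MODEL)


if __name__ == "__main__":
    print(route("code"))    # routed to the coding specialist
    print(route("poetry"))  # no specialist registered, so the generalist handles it
```

The design choice mirrors the microservices analogy: each specialist can be retrained, scaled, or replaced independently, and the router is the only component that needs to know the full fleet.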
The future, it seems, lies not in building ever larger and more complex models but in fostering a mixture of experts that can work together seamlessly. This approach not only leverages domain-specific knowledge but also ensures that we can scale intelligently without the burden of excessive resource consumption. It's a lesson from our past, and one worth applying again.
As a co-author of the Cloud Adoption Framework for AI, I invite you to explore how we envision the AI transformation in your organisation and what capabilities you should consider developing.