Why Small Language Models are better than LLMs in 90% of the cases

Last week, I was at the GAIAS (Generative AI Application Summit). As a co-chair of the event, I invited Julien Simon, Chief Evangelist of Hugging Face. (Photo of us at the end.)

His keynote on the state of LLMs was insightful. Above all, it opened my eyes to the fact that you don’t use a sledgehammer to crack a nut.

Let’s unpack.

(I recorded an audio version of this newsletter. If you want to listen to it and read the full version, you can check it out.)

Enjoy reading it in 4:30 min.


Why Small Language Models (SLMs) are better than LLMs in 90% of the cases

As a rule of thumb, the larger a model, the better it understands the world and the more emergent capabilities it shows (e.g., emulating a persona, reasoning, etc.).

However, is a large model always the best choice? No. Considering all requirements (performance, latency, costs, etc.), 9 out of 10 times there is a better-fitting model.

Example

If you build an AI that answers your clients’ calls, you would chain three models: 1x speech-to-text (STT), 1x LM, and 1x text-to-speech (TTS).

This means 3 AI models in sequence for each message exchanged.
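
To make the chain concrete, here is a minimal sketch (all three functions are hypothetical stubs, not a specific vendor’s API). The key point: per-turn latency is the sum of three model latencies, so every model in the chain has to be fast.

    # Minimal sketch of the three-model chain; the stubs below are hypothetical
    # placeholders, not a real vendor API.

    def speech_to_text(audio: bytes) -> str:
        # STT stub (in practice: a Whisper-class model)
        return "What are your opening hours?"

    def language_model(prompt: str) -> str:
        # LM stub - this is the slot where an SLM can replace an LLM
        return "We are open Monday to Friday, 9 am to 6 pm."

    def text_to_speech(text: str) -> bytes:
        # TTS stub (in practice: a neural TTS model)
        return text.encode("utf-8")

    def handle_turn(audio: bytes) -> bytes:
        # One message exchange = STT + LM + TTS, strictly in sequence
        return text_to_speech(language_model(speech_to_text(audio)))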

In live client calls, latency is critical for a good interaction experience.

An LLM like GPT-4o, even though it is much faster than GPT-4 Turbo, is A) too large to respond very fast (<1 sec.) and B) can be pretty costly.

Fact: For a client, I once built a solution with GPT-4, and the volume of calls incurred costs of half a million dollars per month. Too much, even for a global corporation.
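
For scale, here is a hedged back-of-envelope. Every number below is an illustrative assumption, not the client’s actual data; I only use GPT-4-era list prices of roughly $0.03 per 1K input tokens and $0.06 per 1K output tokens to show how such a bill adds up.

    # Illustrative back-of-envelope only - all volumes and prices are assumptions.
    price_in, price_out = 0.03 / 1000, 0.06 / 1000  # $ per token (GPT-4-era list price)
    in_tok, out_tok = 2500, 500       # tokens per turn (context grows with history)
    turns_per_call = 10               # assumed dialogue length
    calls_per_month = 500_000         # assumed call volume

    cost_per_turn = in_tok * price_in + out_tok * price_out   # $0.105 per turn
    monthly = calls_per_month * turns_per_call * cost_per_turn
    print(f"${monthly:,.0f} per month")   # ~ $525,000 at these assumptions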

SLMs

Meet SLMs. These are models with roughly 3B parameters - about a 100th the size of an LLM.

As always, the first question is: Which is the best?

No big surprise: you can find the answer on the Open LLM Leaderboard, filtered for 3B models. Keep Mixture-of-Experts (MoE) models visible in the filter.

What is an MoE?
It is an architecture that takes a divide-and-conquer approach: multiple specialized sub-models, known as experts, handle different parts of a task, and a router (gate) decides which expert processes each input, so only a fraction of the total parameters is active per token.
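
As an illustration (a toy example, not any production model’s actual implementation), here is top-1 MoE routing in a few lines: the gate scores the experts per input, and only the winning expert runs, so compute per input stays small even if total parameters are large.

    import numpy as np

    # Toy top-1 Mixture-of-Experts layer: many parameters, little compute per input.
    rng = np.random.default_rng(0)
    d, n_experts = 8, 4
    experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # each expert: a linear map
    gate = rng.normal(size=(d, n_experts))                         # router weights

    def moe_forward(x: np.ndarray) -> np.ndarray:
        scores = x @ gate              # router assigns one score per expert
        best = int(np.argmax(scores))  # top-1 routing: pick a single expert
        return x @ experts[best]       # only the chosen expert does any work

    y = moe_forward(rng.normal(size=d))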

Today, you will find that Microsoft’s new Phi-3 model is the best - and it is not even an MoE. Let’s use this model going forward.

It has 3.8B parameters, a context window of up to 128k tokens, and you can fine-tune it.

This model is so tiny that you can download it and host it on your laptop.

SHORT DEMO: how to run an SLM on your laptop
    1. Download Ollama at Ollama.com
    2. Open a terminal and type: "ollama run phi3"
    3. After installation, you can use it even offline
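
Once the model is pulled, Ollama also exposes a local REST API (by default on port 11434), so your own scripts can call Phi-3 with no cloud dependency. A minimal sketch using only the Python standard library:

    import json
    import urllib.request

    # Call the locally running Ollama server; assumes step 2 above has pulled phi3.
    payload = json.dumps({
        "model": "phi3",
        "prompt": "In one sentence: why can a small model beat an LLM on latency?",
        "stream": False,
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])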

When you specialize Phi-3 for your task (here: talking to clients between an STT and a TTS model) through prompt engineering or fine-tuning (now an affordable and quick option for a 3B model), you reach performance very comparable to a 100x bigger model.
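
For the prompt-engineering route, specialization can be as simple as a system message. A sketch against Ollama’s local chat endpoint (the persona prompt below is a made-up example for the call-center scenario):

    import json
    import urllib.request

    # Specialize Phi-3 with a system prompt via Ollama's chat endpoint.
    payload = json.dumps({
        "model": "phi3",
        "stream": False,
        "messages": [
            {"role": "system", "content": (
                "You are a phone agent for ACME Corp. Answer in at most two short "
                "sentences so the TTS reply stays fast and natural."  # hypothetical persona
            )},
            {"role": "user", "content": "When do you open tomorrow?"},
        ],
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["message"]["content"])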

Fine-tuning example from the Generative AI Book (figure in the original newsletter).
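
In the same spirit, here is a minimal LoRA fine-tuning sketch with Hugging Face PEFT. This is not the book’s exact example: the model ID, the dataset file "calls.jsonl", and all hyperparameters are assumptions.

    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "microsoft/Phi-3-mini-4k-instruct"   # assumed model ID on the HF Hub
    tok = AutoTokenizer.from_pretrained(base)
    if tok.pad_token is None:
        tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA: train small low-rank adapters instead of all 3.8B weights.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, task_type="CAUSAL_LM",
        target_modules=["qkv_proj", "o_proj"],   # attention projections in Phi-3
    ))

    # Hypothetical dataset: one transcribed client dialogue per line, "text" field.
    ds = load_dataset("json", data_files="calls.jsonl")["train"]
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=ds.column_names)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="phi3-lora", num_train_epochs=1,
                               per_device_train_batch_size=2, learning_rate=2e-4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()
    model.save_pretrained("phi3-lora")   # saves only the adapters, a few MB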
I only recommend fine-tuning when you need a specific linguistic style, domain specialization, or some task refinement.

Because SLMs, like Phi-3...



Read the full newsletter here.

Carsten Kraus

AI & Data Science Expert | Forbes Technology Council | Founder Omikron, FACT-Finder, casablanca.ai, ... #Multipreneur | Business Punk Top100 | Innovator | Business Angel | Keynote Speaker

2 weeks

Yes, Small LLMs are on the rise! Just as I said in my February interview at #WAICF24: https://youtu.be/1HNAZogG95Y?si=BYCQCKpH3O1vGv13 (see the statement on Small LLMs at 03:26). I also explain why we are currently seeing the "Electromotor Moment of AI".

Microsoft proved with Phi-3 that the quality of the data is much more important than the number of parameters.

Kausik Kumar

Advanced Analytics Leader | Driving Business Impact | AI/ML Expert | 18+ Years in Financial Services | Data-Driven Decision Making Champion

4 months

While small language models are efficient and effective in the majority of cases, there are specific scenarios where larger language models (LLMs) excel and small models may fall short:
    1. Complex language understanding
    2. Multilingual capabilities
    3. Creative tasks
    4. Long-form content generation
    5. Complex problem solving
    6. Specialized knowledge
    7. Contextual awareness
    8. Rare and ambiguous queries
    9. Adaptability to new data
    10. Integration with advanced systems
While small language models are efficient for many tasks, these scenarios highlight the unique strengths of larger models in handling more complex and specialized requirements.

R B

Automotive Performance Engineer (Android/Linux) | If you’ve got an Android/iOS system issue, I am your guy! Product Engineer | Software Developer | Open-source LLM Enthusiast

4 months

Phi-3 indeed is awesome and works like a charm for most of my personal apps.
