Instruction Pretraining LLMs

A lot happened last month: Apple announced the integration of on-device LLMs, Nvidia shared their large Nemotron model, FlashAttention-3 was announced, Google's Gemma 2 came out, and much more.

You've probably already read about it all in various news outlets. So, in this article, I want to focus on recent research centered on instruction finetuning, a fundamental technique for training LLMs.

What I am going to cover in this article:

  1. A new, cost-effective method for generating data for instruction finetuning
  2. Instruction finetuning from scratch
  3. Pretraining LLMs with instruction data
  4. An overview of what's new in Gemma 2
  5. An overview of all the other interesting research papers that came out in June

Happy reading!

1. Creating Alignment Data from Scratch

The paper "Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing" shares a fascinating hack for generating a high-quality dataset for LLM instruction finetuning. While it doesn't offer any particularly new research insights, it's one of those interesting, practical exploits that seems super useful.

1.1 Generating An Instruction Dataset From Nothing

What distinguishes this instruction-data-generating method from others is that it can be fully automated and doesn't require any initial questions or instructions. As the paper title suggests, it enables the creation of an instruction dataset from "Nothing" – the only thing we need is a locally running Llama 3 8B Instruct model. The figure below summarizes how this method works.

Annotated illustration of the Magpie method for generating a synthetic dataset for instruction finetuning. The figure is based on illustrations from the Magpie paper.

Essentially, as shown in the figure above, we just have to prompt the Llama 3 8B Instruct model with a pre-query template, and it will generate an instruction for us. Then, we feed that instruction back to the LLM, and it will generate a response. If we repeat this procedure a couple of thousand times, we obtain a dataset for instruction finetuning. (Optionally, we can apply an LLM to filter the instruction-response pairs by quality.)

1.2 Dataset quality

What's fascinating is that with the resulting instruction dataset, the authors found that finetuning a Llama 3 8B base model with just instruction finetuning (no preference finetuning via RLHF or DPO) beats the original Llama 3 8B Instruct model by Meta AI, as shown in the figure below.

A Llama 3 8B base model finetuned on the Magpie-generated instruction dataset beats the original Llama 3 8B Instruct model. Based on an annotated illustration from the Magpie paper.


The Magpie results shown in the figure above were achieved with only 300 thousand samples. In comparison, the original Llama 3 Instruct model was finetuned and aligned on 100 million samples!

1.3 Running the Dataset Generation Locally

I was skeptical at first, so I tried to implement this myself. It really works! You can find my reimplementation using Ollama here: https://github.com/rasbt/LLMs-from-scratch/blob/main/ch07/05_dataset-generation/llama3-ollama.ipynb. It even runs fine locally on a MacBook Air.

Code screenshot from a reimplementation of the Magpie method that runs locally. The code is available in the notebook linked above.
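For readers who prefer text over a screenshot, here is a minimal sketch of the core loop. It assumes a local Ollama server with a Llama 3 instruct model pulled under the name "llama3"; the template string follows the Llama 3 chat format, and the helper names are my own, not the notebook's exact code.

```python
import requests

URL = "http://localhost:11434/api/generate"
MODEL = "llama3"

# The "pre-query template" ends exactly where a user turn would begin,
# so the model completes it with a plausible instruction.
PRE_QUERY_TEMPLATE = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
)

def generate(prompt, raw=False):
    payload = {"model": MODEL, "prompt": prompt, "raw": raw, "stream": False}
    return requests.post(URL, json=payload).json()["response"]

def magpie_pair():
    # Step 1: raw=True bypasses Ollama's own chat templating, so the
    # model sees only the pre-query template and invents an instruction.
    instruction = generate(PRE_QUERY_TEMPLATE, raw=True)
    instruction = instruction.split("<|eot_id|>")[0].strip()
    # Step 2: feed the generated instruction back as a normal prompt
    # to obtain the corresponding response.
    response = generate(instruction)
    return instruction, response

for _ in range(3):  # repeat a few thousand times for a full dataset
    print(magpie_pair())
```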



1.4 Additional Details

The authors created two sets of datasets: A "Pro" version using the Llama 3 70B Instruct model and an "Air" version using the Llama 3 8B Instruct model. As an earlier figure showed, the Magpie-Pro-generated dataset results in slightly stronger models compared to the Magpie-Air dataset when using it to instruction-finetune a Llama 3 8B base model.

The figure below shows an additional comparison of the dataset qualities and difficulties as rated via an LLM.

Annotated plots from the Magpie paper showing the dataset quality and difficulty of the Air and Pro datasets relative to each other.


As the figure above shows, the quality of the Air and Pro datasets is roughly on par. In addition, it would have been interesting to see how the Alpaca dataset compares to these. (The assumption is that the Magpie data is of much higher quality than Alpaca, but a reference point would be interesting.)

Furthermore, the paper contains an analysis showing that the breadth or diversity in this dataset is much larger than that of other popular datasets for instruction finetuning, such as Alpaca, Evol Instruct, and UltraChat. In addition, when compared to models trained with other instruction finetuning datasets, the Magpie-Pro finetuned model also compares very favorably.

1.5 Conclusion

Overall, I think that Magpie is an interesting exploit that is, on the one hand, fascinating in its effectiveness and, on the other hand, has a lot of practical utility. I will certainly consider it as an interesting, simple, and cost-effective candidate for constructing general-purpose instruction datasets in the future.

2. Instruction Finetuning from Scratch

If you are looking for a resource to understand the instruction finetuning process in LLMs, I am happy to share that Chapter 7 on instruction finetuning LLMs is now finally live on the Manning website.

This is the longest chapter in the book and takes a from-scratch approach to implementing the instruction finetuning pipeline. This includes everything from input formatting to batching with a custom collate function, masking padding tokens, the training loop itself, and scoring the response quality of the finetuned LLM on a custom test set.

(The exercises include changing prompt styles, instruction masking, and adding LoRA.)
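To give a flavor of one of these steps, here is a condensed sketch of the padding-and-masking collate idea (simplified relative to the chapter's version; the token IDs are illustrative, and ignore_index=-100 matches PyTorch's cross_entropy default):

```python
import torch

def collate_fn(batch, pad_token_id=50256, ignore_index=-100):
    # Pad all sequences in a batch to the same length and mask the
    # padding positions in the targets so the loss ignores them.
    max_len = max(len(seq) for seq in batch)
    inputs, targets = [], []
    for seq in batch:
        padded = seq + [pad_token_id] * (max_len - len(seq))
        inputs.append(padded[:-1])   # model input
        shifted = padded[1:]         # next-token targets
        # Replace targets that correspond to padding with ignore_index
        shifted = [tok if i < len(seq) - 1 else ignore_index
                   for i, tok in enumerate(shifted)]
        targets.append(shifted)
    return torch.tensor(inputs), torch.tensor(targets)

inputs, targets = collate_fn([[1, 2, 3, 4], [5, 6]])
print(inputs)   # tensor([[1, 2, 3], [5, 6, 50256]])
print(targets)  # tensor([[2, 3, 4], [6, -100, -100]])
```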

Happy coding!

An overview of chapter 7 in my book, Build a Large Language Model (From Scratch).


PS: it's also the last chapter, and the publisher is currently preparing the layouts for the print version.

3. Instruction Pretraining LLMs

In the paper "Instruction Pre-Training: Language Models are Supervised Multitask Learners" (https://arxiv.org/abs/2406.14491), researchers investigate whether LLM pretraining can be made more efficient by including synthetic instruction-response pairs instead of just raw text. (Here, "raw text" means text from books, websites, papers, and so forth that has not been reprocessed into a specific format.)

A comparison between regular pretraining (top) and the proposed instruction pretraining approach (bottom) via an annotated figure from the Instruction Pre-Training paper.



Specifically, the researchers experiment with generating instruction-response data from the raw training corpus itself via an "instruction synthesizer," an LLM specifically finetuned for this task.

(Note that this is not the first paper proposing the formatting of raw text as instruction data. Another work that comes to mind is "Genie: Achieving Human Parity in Content-Grounded Datasets Generation" (https://arxiv.org/abs/2401.14367). I also recall seeing another paper or blog post using instruction data during pretraining a few months ago—I discussed this method with some of my colleagues—but unfortunately, I couldn't find the reference. Nonetheless, the paper discussed here is particularly intriguing since it builds on openly available LLMs that run locally and covers both pretraining and continual pretraining.)

3.1 Instruction Synthesizer

Before we dive into the pretraining and continual pretraining results, let's talk about the core component of this method: the instruction synthesizer. This is an openly available Mistral 7B v0.1 LLM (which I wrote about last year here: https://magazine.sebastianraschka.com/i/138555764/mistral-b) that has been finetuned to generate instruction-response pairs from raw text.

To finetune this synthesizer, the researchers use datasets such as HotpotQA (https://arxiv.org/abs/1809.09600), which consists of passages from Wikipedia associated with questions and answers. For this, the authors also ensure that a variety of tasks, like commonsense reasoning, sentiment analysis, math problems, etc., are covered.

The input and output data of the instruction synthesizer via an annotated figure from the Instruction Pre-Training paper.


Once this instruction synthesizer is developed (i.e., finetuned), it can be used to generate the input data for pretraining the target LLMs.

One last noteworthy detail regarding the instruction synthesizer is that multiple raw texts (Tn) and instruction-response pairs (In ⊕ Rn) are concatenated as few-shot examples, as shown in the figure below.

The formatting of the instruction data for finetuning (and using) the instruction synthesizer via an annotated figure from the Instruction Pre-Training paper.
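To make the few-shot concatenation scheme more concrete, here is a hypothetical sketch; the <INS>/<RES> delimiters are made up for illustration, as the paper uses its own template:

```python
def build_synthesizer_prompt(few_shot_examples, new_raw_text):
    # Earlier (raw text, instruction, response) triplets are concatenated
    # before the new raw text, so the synthesizer completes the pattern
    # with an instruction-response pair grounded in the final text.
    parts = []
    for raw_text, instruction, response in few_shot_examples:
        parts.append(f"{raw_text}\n<INS> {instruction}\n<RES> {response}")
    parts.append(new_raw_text)  # synthesizer continues: <INS> ... <RES> ...
    return "\n\n".join(parts)
```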



3.2 Pretraining with Instruction Data

Now that we have discussed the method to generate the instruction-response pairs, let's get to the interesting part: how well do models train on this augmented dataset? The first set of results looks at two small models trained from scratch: one with 500M parameters and one with 1.3B parameters (both based on the Mistral architecture).

A comparison of 3 different pretraining approaches used to train models from scratch (annotated table from the Instruction Pre-Training paper).



As we can see in the table above, the model trained via the proposed instruction pretraining approach (Instruct PT) performs best on most benchmark tasks (higher values are better).

Note, though, that it has seen more tokens than the Vanilla PT approach, since the synthesized instruction-response pairs were added on top of the raw text. Hence, the authors included the Mix PT comparison: a model trained on a data mix containing both the raw text and the instruction data used to train the synthesizer. This comparison shows that it's not simply more data that makes the difference. The fact that Instruct PT performs better than Mix PT on most tasks suggests that the nature of the instruction-response data (i.e., instruction-response data related to the raw text) is what matters.

In addition, it's worth noting that the Instruct PT pretrained models have another advantage: they improve more when they are instruction-finetuned afterwards, as the figure below shows.

Finetuning LLMs that have been pretrained with either the traditional pretraining paradigm (Vanilla PT) or instruction pretraining (annotated figure from https://arxiv.org/abs/2406.14491).


3.3 Continual Pretraining with Instruction Data

Pretraining from scratch is interesting because that's how LLMs are created in the first place. However, I'd say that practitioners care more about continual pretraining and finetuning.

Continual pretraining here means that we take an existing pretrained model and pretrain it further on new domain data. For instance, think of a Llama 3 8B base model that has been trained on a general text corpus and that you want to adapt for finance, medical, legal, or other domains.

The table below summarizes the results the researchers obtained when applying the instruction pretraining method to a pretrained Llama 3 8B base model. Specifically, they conducted continual pretraining with both biomedical texts and finance texts.

A comparison of 3 different pretraining approaches used for continual pretraining (annotated table from the Instruction Pre-Training paper).


Looking at the table above, we can see that the instruction pretraining approach (Instruct PT) clearly outperforms the vanilla pretraining (Vanilla PT) approach (here, this means regular continual pretraining of the base model).

The Llama 3 70B base model is included as a reference, presumably to showcase that small specialized models can beat larger general-purpose models.

3.4 Conclusion

Almost every time I explain the LLM pretraining pipeline to someone, they are surprised by its simplicity and the fact that this is still what's commonly used to train LLMs today. The instruction pretraining approach is quite refreshing in that sense.

One caveat is that for large pretraining corpora, it might still be expensive to create the instruction-augmented corpora. However, the nice thing about generated data is that it can be reused in many different projects once created.

4. Gemma 2

I cannot write this article without mentioning Google's new Gemma 2 models, which are arguably the biggest model release last month. However, when it comes to pure size, Nvidia's Nemotron-4 340B takes the crown (https://arxiv.org/abs/2406.11704). The Gemma 2 models come in 2.6B, 9B, and 27B parameter versions.

Since this article is already quite lengthy, and you're likely familiar with Gemma 2 from other sources, let's cut to the chase. What are the main highlights and noteworthy updates in Google's newly released Gemma 2 LLMs? The main theme is exploring techniques without necessarily increasing the size of training datasets but rather focusing on developing relatively small and efficient LLMs.

Specifically, they blend three main architectural and training choices to create the 2.6B and 9B parameter models: sliding window attention, grouped-query attention, and knowledge distillation.

4.1 Sliding window attention

Sliding window attention (e.g., as popularized by Mistral) is a technique that uses a fixed-size attention window, allowing the current token to attend to only a specific number of previous tokens instead of all previous tokens, as illustrated in the figure below.

An illustration of sliding window attention (annotated figure from the Gemma 2 paper).


In the case of Gemma 2, the authors alternated between regular attention and sliding window attention layers. The sliding window attention block size was 4096 tokens, spanning a total context of 8192 tokens.

Sliding window attention is mainly used to improve computational performance, and the researchers also included a small ablation study showing that there's a barely noticeable difference in perplexity when shrinking the block size during inference.

An ablation study from the Gemma 2 paper.



(It would have been interesting to see the GPU memory improvement side-by-side.)
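To make the mechanism concrete, here is a minimal sketch of a sliding-window causal attention mask; the sizes are tiny for illustration and unrelated to Gemma 2's actual implementation:

```python
import torch

def sliding_window_mask(seq_len, window_size):
    pos = torch.arange(seq_len)
    i, j = pos.unsqueeze(1), pos.unsqueeze(0)
    # Token i may attend to token j if j is not in the future (causal)
    # and lies within the last `window_size` positions.
    return (j <= i) & (i - j < window_size)

# Each row shows which previous tokens one position can attend to.
print(sliding_window_mask(seq_len=6, window_size=3).int())
```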

4.2 Grouped-query attention

Grouped-query attention (as used in Llama 2 and 3) can be regarded as a more generalized form of multi-query attention. The motivation behind it is to reduce the number of trainable parameters by sharing the same key and value heads across multiple query heads, thereby lowering computational requirements.

Annotated figure illustrating grouped-query attention.
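The following shape-level sketch illustrates the idea; the head counts and dimensions are made-up examples, not Gemma 2's configuration:

```python
import torch

# 8 query heads share 2 key/value heads (4 query heads per KV head).
batch, n_q_heads, n_kv_heads, seq_len, head_dim = 1, 8, 2, 10, 64
group_size = n_q_heads // n_kv_heads  # 4

q = torch.randn(batch, n_q_heads, seq_len, head_dim)
k = torch.randn(batch, n_kv_heads, seq_len, head_dim)  # fewer KV heads
v = torch.randn(batch, n_kv_heads, seq_len, head_dim)

# Expand K and V so each group of 4 query heads reuses the same KV head.
k = k.repeat_interleave(group_size, dim=1)  # -> (1, 8, 10, 64)
v = v.repeat_interleave(group_size, dim=1)

scores = q @ k.transpose(-2, -1) / head_dim**0.5
out = torch.softmax(scores, dim=-1) @ v  # (1, 8, 10, 64)
```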



4.3 Knowledge distillation

The general idea of knowledge distillation (as in MiniLLM, https://arxiv.org/abs/2306.08543) is to transfer knowledge from a larger model (the teacher) to a smaller model (the student). Here, the 27B model was trained from scratch, without knowledge distillation, and then served as the teacher for the smaller 2.6B and 9B student models, which were trained on its outputs.

An overview of the knowledge distillation process.
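Below is a generic sketch of a distillation loss to illustrate the principle; this is a common formulation, not necessarily Gemma 2's exact recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=1.0):
    # The student is trained to match the teacher's next-token
    # distribution via a KL-divergence term; T is the softmax temperature.
    student_logprobs = F.log_softmax(student_logits / T, dim=-1)
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(student_logprobs, teacher_probs,
                    reduction="batchmean") * T**2

student_logits = torch.randn(4, 32000)  # (batch, vocab), illustrative
teacher_logits = torch.randn(4, 32000)
print(distillation_loss(student_logits, teacher_logits))
```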



4.4 Other interesting architecture details

The paper contains many other interesting tidbits. For instance, one hallmark of Gemma 2 is its relatively large vocabulary size: 256,000 tokens. This is similar to the first Gemma model, but it's still worth noting since it's twice the size of the Llama 3 vocabulary (128,000) and eight times the size of the Phi-3 vocabulary (32,000).

The vocabulary size of an LLM refers to the number of unique tokens (words, subwords, or characters) that the model can recognize and generate.

A large vocabulary size in LLMs allows for better coverage of words and concepts, improved handling of multilingual content, and reduced tokenization artifacts. However, a large vocabulary also comes with trade-offs, such as increased model size and potentially slower inference due to the larger embedding and output layers. (That's where techniques like the sliding window attention and grouped-query attention mechanisms discussed above become important to offset these costs.)
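A quick back-of-the-envelope calculation illustrates the embedding cost; the hidden size below is a hypothetical round number, not Gemma 2's actual value:

```python
vocab_size = 256_000
hidden_size = 4_096  # hypothetical, for illustration only
embedding_params = vocab_size * hidden_size
print(f"{embedding_params / 1e9:.2f}B")  # ~1.05B parameters in the embedding layer alone
# An untied output layer of the same shape would roughly double this.
```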

There's also an interesting section on "logit capping," a technique I haven't seen used before. Essentially, it is a form of min-max normalizing and clipping of the logit values to keep them within a certain range. I presume this is to improve stability and gradient flow during training.

logits ← soft_cap · tanh(logits / soft_cap)
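In code, this amounts to a one-liner; the cap value below is a placeholder I chose for illustration, not necessarily the value used in Gemma 2:

```python
import torch

def soft_cap_logits(logits, soft_cap=30.0):
    # tanh keeps the logits smoothly within (-soft_cap, soft_cap)
    return soft_cap * torch.tanh(logits / soft_cap)

logits = torch.tensor([-100.0, -5.0, 0.0, 5.0, 100.0])
print(soft_cap_logits(logits))  # extreme values are squashed toward ±30
```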

Additionally, they leverage model merging techniques to combine models from multiple runs with different hyperparameters, although the paper doesn't provide much detail about that. (Interested readers can read more in the WARP: On the Benefits of Weight Averaged Rewarded Policies paper, which Gemma 2 uses for this.)

In terms of modeling performance, Gemma 2 is almost as good as the 3x larger Llama 3 70B, and it beats the old Qwen 1.5 32B model. It would be interesting to see a comparison with the more recent Qwen 2 model.

A comparison between Gemma 2 and two other popular models with openly available weights: Llama 3 and Qwen 1.5. (Annotated table from the Gemma 2 paper.)



Personally, a highlight is that the Gemma 2 report includes ablation studies for some of its architectural choices. This was once a given in academic research but is increasingly rare for LLM research.

An example of one of the ablation studies included in the Gemma 2 paper.



4.5 Conclusion

It's refreshing to see such a relatively detailed technical report from Google. When it comes to the model itself, based on public consensus, Gemma 2 is likely the most capable model for single-GPU use cases today. For larger models, Llama 3 70B and Qwen 2 72B remain strong contenders.



Supporting Ahead of AI

This magazine is a personal passion project that does not offer direct compensation. However, for those who wish to support me, please consider purchasing a copy of one of my books. If you find them insightful and beneficial, please feel free to recommend them to your friends and colleagues. (Sharing your feedback with others via a review on Amazon helps a lot, too!)

Machine Learning with PyTorch and Scikit-Learn


Your support means a great deal! Thank you!
