LoRA
Why do we need this?
When we look at the wide world of machine learning techniques, full fine-tuning has a scaling problem. Even if we only want to adapt a small part of the model's behavior, like the classification head, standard fine-tuning ends up updating every parameter! This might be okay for smaller models, but it doesn't hold up as models get bigger: each fine-tuned copy is as large as the original, and the compute and memory required approach those of training the model from scratch.
This issue isn't new. Researchers have tried to address it by training small external adapter modules or by updating only a selected subset of the parameters.
What is it?
LoRA strikes a nice balance between file size and training capability. Its files are small (typically 2–200 MB), yet they offer real training power. Stable Diffusion users who experiment with many models know how quickly full checkpoints fill up local storage; maintaining a collection on a personal computer becomes challenging, and LoRA is a practical answer to that problem. Like textual inversion, a LoRA file cannot be used on its own: it requires a base model checkpoint. Rather than shipping a whole new model, a LoRA stores small low-rank weight updates that are applied on top of the base model's weights to steer its style.
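To make that concrete, here is a minimal PyTorch sketch of the idea (the class name `LoRALinear` and the initialization constants are my own illustration, not from any library): the pretrained weight matrix is frozen, and training only touches two small matrices A and B whose product forms the low-rank update.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay fixed
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # small random init
        self.B = nn.Parameter(torch.zeros(d_out, r))        # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only A and B receive gradients during training.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Because B starts at zero, the wrapped layer initially behaves exactly like the original model; only the two small matrices are trained and saved, which is why LoRA files stay in the megabyte range while full checkpoints run to gigabytes.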
In summary, LoRA provides a flexible way to personalize AI art models without overwhelming local storage.
For details, see the LoRA paper: https://arxiv.org/pdf/2106.09685
Check out Hugging Face's PEFT library, which implements LoRA, to get started.
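As a rough sketch of what getting started with PEFT looks like (the base model name and target module names below are illustrative assumptions; the right target modules depend on the architecture you adapt):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any pretrained model works as the base; "gpt2" is just an example.
base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor (alpha / r multiplies the update)
    target_modules=["c_attn"],  # which submodules to wrap; model-dependent
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Train `model` as usual, then save only the small adapter weights:
model.save_pretrained("my-lora-adapter")
```

Saving with `save_pretrained` writes just the adapter, not the base checkpoint, which is exactly the storage win described above.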