Maximizing AI Potential: Fine-Tuning Large LLMs Locally for Cost-Effective and Superior Performance
Timo Laine
AI Consultant | Results-driven hands-on leader | Generative AI | Data Science | Machine Learning | Certified in 8xAzure, 4xAWS, 4xGoogle, 4xOracle, 4xNVIDIA | Passion for learning | PhD
We make one more attempt at fine-tuning an LLM on the EU AI Act in three languages: English, Finnish, and Swedish, this time using the larger 9.24-billion-parameter Gemma-2-9b-it. The model demonstrates a strong understanding of the general principles of EU AI law and markedly better accuracy than the smaller 2.61-billion-parameter Gemma-2-2b-it model we presented earlier.
For European companies seeking adaptable AI solutions, fine-tuning LLMs like Gemma-2-9b-it is a powerful strategy. Its stronger generalization and learning capacity allow for precise customization to specific business challenges. This adaptability translates into valuable insights, streamlined processes, and real gains in productivity and cost-effectiveness. The ability to fine-tune on readily available hardware further lowers the barrier to entry, letting businesses leverage advanced AI without substantial infrastructure investments.
Gemma models are particularly strong in European languages, where many other models fall short. They come in several sizes, offering flexible training options for efficient fine-tuning. Smaller models train quickly, providing fast feedback on training parameter adjustments. Additionally, because most Gemma models run on a local laptop, it is easy to test and compare their accuracy, maximizing performance gains while maintaining cost-efficiency and flexibility.
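Why does a 9.24B-parameter model fit on readily available hardware for fine-tuning? With parameter-efficient methods such as LoRA, only small low-rank adapter matrices are trained while the base weights stay frozen. The back-of-the-envelope sketch below illustrates the idea; the hidden dimension, layer count, and square attention projections are simplifying assumptions for illustration (Gemma 2 uses grouped-query attention, so its k/v projections are actually smaller), not the model's exact published architecture.

```python
# Back-of-the-envelope LoRA parameter math: why a ~9B model can be
# fine-tuned locally. All architecture numbers below are illustrative
# assumptions, not Gemma-2-9b-it's exact specification.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters LoRA adds to one d_in x d_out weight matrix:
    two low-rank factors, A (rank x d_in) and B (d_out x rank)."""
    return rank * (d_in + d_out)

hidden = 3584     # assumed hidden dimension
n_layers = 42     # assumed number of transformer layers
rank = 16         # a typical LoRA rank

# Adapters on the q/k/v/o projections of every layer,
# treated as square matrices for simplicity.
per_layer = 4 * lora_params(hidden, hidden, rank)
total_trainable = n_layers * per_layer

base_params = 9_240_000_000  # 9.24B, as stated above
print(f"LoRA trainable params: {total_trainable:,}")
print(f"Fraction of base model: {total_trainable / base_params:.4%}")
```

Under these assumptions only a fraction of a percent of the weights receive gradients, which is why optimizer state and gradient memory shrink enough for a single consumer GPU, especially when combined with 4-bit quantization of the frozen base model.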
Read the full story on Medium. The fine-tuned LLM is available on Hugging Face.