Build a RAG App in Python Using Llama 3.2

Welcome to the latest AI in 5 newsletter with Clarifai!

Every week we bring you new models, tools, and tips to build production-ready AI!

Here's a summary of what we will be covering this week:

  • Notebook: RAG using Llama 3.2
  • Guide: Image captioning using the Llama 3.2 Vision model
  • Tutorial: Auto annotation
  • Tip of the week: App template for Image moderation

RAG using Llama 3.2

Use the latest Llama 3.2 model to chat with your files.

Meta has released Llama 3.2, one of the most capable openly available LLMs to date. Both the base and instruction-tuned versions have been open-sourced by Meta.

The 3B model outperforms the Gemma 2 (2.6B) and Phi 3.5-mini models in tasks such as instruction-following, summarization, and tool use.

RAG provides LLMs with the context needed to generate more accurate and context-aware responses.

The Colab Notebook below will guide you through building a RAG system using the Llama 3.2 3B instruct model with the Clarifai Python SDK.
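Before opening the notebook, here is the retrieve-then-generate pattern it implements, sketched in plain Python. The chunking, keyword-overlap retrieval, and prompt-building functions below are hypothetical stand-ins for illustration only; the notebook itself uses the Clarifai Python SDK and a vector index rather than this toy retriever.

```python
# Minimal RAG sketch: retrieve relevant chunks, then build an augmented prompt.
# Illustrative only -- the notebook uses the Clarifai Python SDK and the
# Llama 3.2 3B instruct model for the real retrieval and generation steps.

def chunk_text(text, chunk_size=200):
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def retrieve(query, chunks, top_k=2):
    """Rank chunks by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query, context_chunks):
    """Prepend the retrieved context so the LLM can ground its answer."""
    context = "\n---\n".join(context_chunks)
    return (f"Use the context below to answer.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

chunks = chunk_text("Llama 3.2 is a family of open models from Meta. "
                    "The 3B instruct model is tuned for instruction following.")
prompt = build_prompt("What is the 3B model tuned for?",
                      retrieve("3B instruct model", chunks))
```

The augmented prompt is what gets sent to the LLM; swapping the toy retriever for a real embedding index changes only `retrieve`, not the overall flow.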

Try it out!

RAG using Llama 3.2

Image Captioning using Llama 3.2 11B Vision Instruct

Meta AI has also announced the release of its first multimodal models in the Llama 3.2 series.

The 11B and 90B parameter multimodal models are designed for tasks such as visual reasoning, image captioning, and visual question answering (VQA). These models can now process and understand both text and images.

Below is an example of how you can access the Llama 3.2 11B Vision Instruct model via an API on the Clarifai Platform for image captioning tasks. Check out the code here.
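As a rough sketch of what such an API call looks like, the helper below builds the headers and JSON body in the general shape of Clarifai's v2 model-outputs HTTP API (an image URL plus a text prompt, authorized with a personal access token). The exact endpoint path and payload fields should be verified against the linked code sample; the network call itself is left commented out.

```python
# Hedged sketch of calling a vision model over Clarifai's HTTP API.
# The endpoint shape and JSON layout follow the general v2 "model outputs"
# pattern; confirm both against the official code sample before use.

import json

def build_caption_request(pat, image_url, prompt="Caption this image."):
    """Build headers and JSON body for an image + text-prompt prediction."""
    headers = {
        "Authorization": f"Key {pat}",        # personal access token
        "Content-Type": "application/json",
    }
    body = {
        "inputs": [{
            "data": {
                "image": {"url": image_url},  # the image to caption
                "text": {"raw": prompt},      # the captioning instruction
            }
        }]
    }
    return headers, json.dumps(body)

headers, body = build_caption_request("MY_PAT", "https://example.com/cat.jpg")
# requests.post("https://api.clarifai.com/v2/models/<model-id>/outputs",
#               headers=headers, data=body)   # actual call omitted here
```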

Auto Annotation

Auto-annotate all image inputs in your dataset with a single click, regardless of size.

We’ve added the Auto Annotation feature to the platform, enabling you to leverage machine learning models and workflows to automatically annotate your data.

Here’s what you can do with Auto Annotation:

  • Annotate with a single click: Automatically annotate all inputs in your dataset with just one click.
  • Fully automated: Go beyond the semi-automated, human-in-the-loop approach of AI-assist with a fully automated workflow.
  • Set confidence thresholds: Choose which concepts to annotate automatically and which to review manually based on confidence levels.
  • Targeted review: Review only the inputs that fall below your specified accuracy thresholds.
  • Customizable model selection: Select the model or workflow that best suits your dataset when creating labeling tasks.
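The confidence-threshold routing described above can be sketched in a few lines of plain Python. The tuple shape and function below are illustrative only, not the platform's API: predictions at or above the threshold are annotated automatically, and the rest are queued for manual review.

```python
# Route model predictions: auto-annotate above the confidence threshold,
# queue the rest for human review. Illustrative sketch only -- the Clarifai
# platform handles this routing internally when you set thresholds.

def route_annotations(predictions, threshold=0.9):
    """Split (input_id, concept, confidence) predictions into two buckets."""
    auto, review = [], []
    for input_id, concept, confidence in predictions:
        if confidence >= threshold:
            auto.append((input_id, concept))    # accepted automatically
        else:
            review.append((input_id, concept))  # sent to a human reviewer
    return auto, review

preds = [("img1", "cat", 0.97), ("img2", "dog", 0.62), ("img3", "cat", 0.91)]
auto, review = route_annotations(preds, threshold=0.9)
# auto   -> [("img1", "cat"), ("img3", "cat")]
# review -> [("img2", "dog")]
```

Raising the threshold shrinks the auto-annotated set and grows the review queue, which is the trade-off the "targeted review" bullet describes.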

See Auto Annotation in action!

Tip of the Week: App Template for Image Moderation

Image moderation involves reviewing and filtering images to prevent inappropriate or harmful content.

The Image Moderation Template covers several use cases and includes ready-to-use workflows powered by Clarifai’s vision models for image moderation.
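Conceptually, a moderation workflow scores each image against a set of unsafe concepts and flags it when any score crosses a per-concept threshold. The concept names and threshold values below are illustrative assumptions, not the template's actual configuration:

```python
# Flag images whose moderation-concept scores exceed per-concept thresholds.
# Concept names and limits here are made up for illustration; a real workflow
# would use the scores returned by Clarifai's moderation models.

def is_flagged(scores, thresholds):
    """Return True if any moderation concept crosses its threshold."""
    return any(scores.get(concept, 0.0) >= limit
               for concept, limit in thresholds.items())

thresholds = {"explicit": 0.5, "gore": 0.5, "drug": 0.7}
safe = {"explicit": 0.02, "gore": 0.01}
risky = {"explicit": 0.88, "gore": 0.10}
# is_flagged(safe, thresholds)  -> False
# is_flagged(risky, thresholds) -> True
```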

Check out the template here.

Want to learn more from Clarifai? “Subscribe” to make sure you don’t miss the latest news, tutorials, educational materials, and tips. Thanks for reading!

