Hugging Face reposted
Big Update for Local LLMs! Excited to share that you can now easily run any GGUF model on Hugging Face directly with Ollama! Just point to the Hugging Face repository and run it. Here is how to run Meta Llama 3.2 3B:

1. Find your GGUF weights on the Hub, e.g. Llama 3.2 3B
2. `ollama run hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF`
3. Chat with your model locally using the power of llama.cpp

Docs: https://lnkd.in/eswCBq5q

Big kudos to Adrien Carreira, Omar Sanseviero, Xuan Son NGUYEN, Julien Chaumond, and Vaibhav Srivastav for making this happen!
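The steps above can be sketched as a short shell snippet. This is a minimal sketch, assuming a local Ollama install and the `hf.co/{username}/{repository}` addressing described in the linked docs; the snippet only builds and prints the command so you can inspect it before running, and the actual `ollama run` invocation is left commented out.

```shell
# Sketch: build the Ollama command for a GGUF repo on the Hugging Face Hub.
# REPO is the example repository from the post.
REPO="hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF"
CMD="ollama run ${REPO}"

# Show the command we would run.
echo "${CMD}"

# To actually start chatting (requires Ollama installed locally):
# ollama run hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q4_K_M-GGUF
```

Any GGUF repo on the Hub can be substituted for `REPO`; Ollama pulls the weights and serves them through llama.cpp.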