ollama run qwq: an experimental 32B model by the Qwen team that is competitive with o1-mini and o1-preview in some cases. https://lnkd.in/gBbgP2MZ
Ollama
Technology, Information and Internet
61,999 followers
Get up and running with Llama 3 and other large language models locally.
About us
Get up and running with large language models.
- Website
- https://github.com/ollama/ollama
- Industry
- Technology, Information and Internet
- Company size
- 1 employee
- Headquarters
- Ollama, Ollama
- Type
- Educational
- Founded
- 2023
- Specialties
- ollama
Locations
- Primary
- US, Ollama, Ollama
Employees at Ollama
Posts
-
Happy 100th Ollama release! In Ollama 0.4.5, we are updating Ollama's Python library!
- Python functions can now be provided as tools to models
- Strong typing for improved reliability and type safety
- New and updated examples for Ollama
Learn more about what you can do with Ollama's Python library 0.4: https://lnkd.in/gYF6Uqjy
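The function-as-tool flow above can be sketched as follows, assuming the `ollama` Python package (0.4+) and a locally running Ollama server with a tool-capable model such as `llama3.1` pulled; `add_two_numbers` and `answer_with_tool` are illustrative names, not part of the library:

```python
def add_two_numbers(a: int, b: int) -> int:
    """Add two integers.

    With ollama-python 0.4+, the type hints and docstring of a plain
    function like this are turned into the tool schema automatically.
    """
    return a + b

def answer_with_tool(question: str) -> int:
    # Requires a running Ollama server; `pip install ollama` first.
    import ollama  # imported here so the pure helper above has no dependency

    response = ollama.chat(
        model="llama3.1",  # any tool-capable model
        messages=[{"role": "user", "content": question}],
        tools=[add_two_numbers],  # pass the Python function directly
    )
    # Execute whichever tool calls the model requested.
    for call in response.message.tool_calls or []:
        if call.function.name == "add_two_numbers":
            return add_two_numbers(**call.function.arguments)
    raise RuntimeError("model did not call the tool")
```

A production loop would feed the tool's result back to the model as a follow-up message; this sketch just returns the computed value.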
-
Ollama reposted this
Ever wondered what it’s like to have an AI assistant that writes code, debugs, and handles documentation, all locally on your machine? AI Engineer Kelly Abuelsaad shows you how to use a local open source AI code assistant to develop applications, powered by the Granite Code model. If you're looking to integrate AI into your development process while maintaining privacy, control, and open source compatibility, this guide shows you how to get started! https://lnkd.in/gRCSVvnx #AI #LLM #opensource
-
Ollama reposted this
We at Qualcomm are taking edge AI to the next level by empowering developers with Ollama, an open-source solution for seamless AI inference on Snapdragon X Series devices. By open-sourcing models and focusing on efficiency, developers can now:
- Run state-of-the-art models locally on edge devices
- Build faster, more efficient AI-powered applications
- Optimize performance while reducing latency and power consumption
Read our blog to learn how we are bringing the power of LLMs on-device with Ollama: https://lnkd.in/gUq4ayrh #ollama #AI #EdgeAI #MachineLearning #Snapdragon #Qualcomm #Developers #Innovation #OpenSource
Ollama simplifies inference with open-source models on Snapdragon X series devices
qualcomm.com
-
Ollama reposted this
Ollama now available on Snapdragon X Series

Our mission to deliver the ultimate developer experience on Snapdragon X Series platforms begins by partnering with leading AI tool providers, making AI accessible to all and empowering everyone to run large language models (LLMs) directly on their devices. Ollama is one such open-source project that serves as a powerful and user-friendly platform for running LLMs on-device. It gives developers the capabilities to easily run these cutting-edge models on-device, along with the right tools to create their own customizable AI experiences.

With the recent announcements from AI at Meta around Llama 3.2, we have worked closely with Michael Chiang and the Ollama team to bring Llama 3.2 support on our Qualcomm Snapdragon X Series platforms to all classes of developers.

Get started by downloading and using Ollama with the Llama 3.2 models directly on your Snapdragon X Series devices here: https://lnkd.in/g3WPbEXx
Learn more about the Llama 3.2 support on Snapdragon X Series devices here: https://lnkd.in/g6cgUeku
#AI #deeplearning #qualcomm #ollama #snapdragon
Manish Sirdeshmukh Manoj Khilnani FRANCISCO CHENG Chun-Po Chang
-
Bespoke Labs released Bespoke-Minicheck, a 7B fact-checking model that is now available in Ollama! It answers with Yes / No, and you can use it to fact-check claims against your own documents. How to use the model, with examples: https://lnkd.in/gD9_9mCw
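A minimal sketch of driving the model from Python, assuming the `ollama` package and a server with `bespoke-minicheck` pulled; the Document/Claim prompt shape and the `fact_check` helper name are assumptions here, so check the linked examples for the exact format:

```python
def minicheck_prompt(document: str, claim: str) -> str:
    # Assumed prompt shape: the model grades whether `claim` is
    # supported by `document` and replies "Yes" or "No".
    return f"Document: {document}\nClaim: {claim}"

def fact_check(document: str, claim: str) -> bool:
    # Requires a running Ollama server (`ollama pull bespoke-minicheck`).
    import ollama  # imported here so the prompt helper has no dependency

    resp = ollama.generate(model="bespoke-minicheck",
                           prompt=minicheck_prompt(document, claim))
    # The model answers "Yes" when the claim is grounded in the document.
    return resp["response"].strip().lower().startswith("yes")
```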
-
Ollama reposted this
Software Engineer | Author of "Cloud Native Spring in Action" | CNCF Ambassador | Conference Speaker | Oracle ACE Pro | Java, Cloud Native, Kubernetes, AI
I like my new keychain. Thanks, Ollama! If you’d like to learn more about building LLM-powered applications with Ollama and Java, check out this repository with tons of examples and use cases: https://lnkd.in/dH6QbUC4
-
Ollama reposted this
Ollama now supports Meta's Llama 3.1. I'm implementing it for Coding, Sentiment Analysis, and Question Answering. Check out my GitHub for the latest developments and see how you can use Llama 3.1 for your projects! https://lnkd.in/gBfcY3J3
ollama run llama3.1:405b
This is running on TensorWave with AMD's MI300X. Get started with Ollama on your cluster: https://lnkd.in/ePsqqsUm
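For the sentiment-analysis use case mentioned above, a call through the Ollama Python client might look like the following sketch; the prompt wording and the `classify_sentiment` helper are illustrative assumptions, not taken from the linked repository:

```python
SENTIMENT_PROMPT = (
    "Classify the sentiment of the text below as exactly one word: "
    "Positive, Negative, or Neutral.\n\nText: {text}\nSentiment:"
)

def build_prompt(text: str) -> str:
    # Pure helper: fills the classification template with the input text.
    return SENTIMENT_PROMPT.format(text=text)

def classify_sentiment(text: str) -> str:
    # Requires a running Ollama server with a Llama 3.1 model pulled.
    import ollama  # imported here so build_prompt stays dependency-free

    resp = ollama.generate(model="llama3.1", prompt=build_prompt(text))
    # Take the first word so any trailing explanation text is ignored.
    return resp["response"].strip().split()[0].rstrip(".")
```

The same pattern extends to the coding and question-answering use cases by swapping the prompt template.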