Running LLM locally using Java...

There are several options, but it looks like #Ollama is getting a lot of traction!

Ollama chat

Ollama allows you to download models from a registry and use them locally. The project is open source and available @ https://github.com/jmorganca/ollama

There's also a Java wrapper named Ollama4J (supporting Java 11 and above) which allows you to integrate it into your Java apps.

Code available @ https://github.com/amithkoujalgi/ollama4j
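Even without a wrapper library, talking to a local Ollama server from Java is straightforward, since Ollama exposes a plain REST API. Here's a minimal sketch using only the JDK's built-in HTTP client, assuming Ollama is running on its default port 11434; the model name "mistral" is just an example and must already be pulled locally:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaClientSketch {

    // Build the JSON body for Ollama's /api/generate endpoint.
    // Note: the prompt is not JSON-escaped here; a real client
    // should use a proper JSON library.
    static String buildGenerateRequest(String model, String prompt) {
        return String.format(
            "{\"model\":\"%s\",\"prompt\":\"%s\",\"stream\":false}",
            model, prompt);
    }

    public static void main(String[] args) throws Exception {
        String body = buildGenerateRequest("mistral", "Why is the sky blue?");

        // POST the request to the local Ollama server and print the raw
        // JSON response (the generated text is in the "response" field).
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:11434/api/generate"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

Wrappers like Ollama4J package this kind of plumbing behind a friendlier API, but it's nice to know the underlying protocol is this simple.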

Ollama4J

Looks like we can even deploy it to Clever Cloud!

https://clever-cloud.com/blog/engineering/2023/11/27/deploy-llama-mistral-openchat-or-your-own-model-on-clever-cloud/

Feedback always welcome.

-Stephan


Graham Lynch

Senior Solutions Architect with a passion for Machine Learning and Data Science. Author of JOllama, a fluent Java API for Ollama

9 months

I have released a new Java Ollama API called JOllama. All Ollama REST endpoints are supported: https://www.dhirubhai.net/posts/grahamlynch_jollama-ollama-llm-activity-7212512400149733376-Qb7E?utm_source=share&utm_medium=member_ios

Amith Koujalgi

Product Dev | Generalist | Maintainer of Ollama4j

1 year

Thanks for sharing this, Stephan Janssen!

Alexey Titov

Software Architect at adesso SE, langchain4j Committer

1 year

The latest version of #langchain4j supports #Ollama too.
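For reference, langchain4j's Ollama support lives in a separate module. A minimal Maven dependency sketch (the version is intentionally left as a placeholder, check the project for the current release):

```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-ollama</artifactId>
    <version><!-- use the current langchain4j release --></version>
</dependency>
```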

Is it a Java process or a native api wrapped in Java?


