Streamlining Your IDE with a Local LLM AI Assistant: A Quick Guide
OpenAI DALL·E 3

The current "AI Assistant" plugin for IntelliJ works exclusively online, as it leverages a cloud-based GPT-4 service. However, emerging initiatives such as the Continue plugin (currently at version 0.0.20) are striving to broaden this scope by letting your IDE (IntelliJ & VS Code) connect to any Large Language Model (LLM), whether hosted locally or in the cloud.

This is the way!

Working with local models resolves any privacy concerns you might have as a company or developer, and it even lets you keep a coding assistant when you're not connected to the internet!

Let's do this...

Running open-source LLMs on local systems is an increasingly popular trend. Powerful tools like Ollama and Llama.cpp are now accessible, offering significant capabilities for a wide range of applications.

For example, once you have Ollama installed, you can easily pull an LLM locally using the following command:

ollama pull mistral        
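To verify the download, you can list the locally installed models and send a quick prompt straight from the terminal. A minimal sketch using the standard Ollama CLI (the prompt text is just an illustration):

# Show all models Ollama has pulled locally
ollama list

# Run a one-off prompt against the model to confirm it responds
ollama run mistral "Write a Java method that reverses a string."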

There's even a CodeLlama model available which Phind has fine-tuned.

ollama pull phind-codellama        

Now we can run a ChatBot Docker image so we can interact with the downloaded models through a web UI.

docker run -p 3000:3000 ghcr.io/ivanfioravanti/chatbot-ollama:main        

Once the container is running, open your browser at http://localhost:3000 and start chatting with your local models!
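The ChatBot UI is just a front end for Ollama's local REST API, which you can also query directly. A minimal sketch against Ollama's default endpoint on port 11434 (the prompt is illustrative):

# Send a single non-streaming completion request to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Explain what a HashMap is in one sentence.",
  "stream": false
}'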

The CodeLlama ChatBot handles chat queries efficiently: after a brief initial processing period, the answer appears on screen quickly. It's a good illustration of how developers can benefit from the immediate, targeted assistance that AI-assisted coding provides.

How to integrate this in your IDE?

This is where the "Continue" plugin comes into play.

The aim of this project is to bring local & remote LLMs into your favourite IDE. It's still a very young project (v0.0.20) and can be buggy.

I'm hoping this project will inspire JetBrains to follow its example and support multiple models (local or remote)!

You can download the plugin zip file and install it manually in IntelliJ IDEA.

Integrating AI capabilities into your IDE is made possible by the "Continue" plugin, a promising tool that bridges the gap between local and remote Large Language Models (LLMs) within your preferred Integrated Development Environment.

Here's a brief guide on how to integrate it:

Download the Plugin:

The "Continue" plugin is available for both IntelliJ and VS Code. Download the zip file and store it locally on your machine.

Manual Installation in Your IDE:

Open IntelliJ IDEA and navigate to the settings or preferences section. Find the plugins section and choose the option to install a plugin from disk. Select the downloaded .zip file of the "Continue" plugin and install it.

Configure the Plugin:

After installation, you might need to configure the plugin to specify which models you want to use (local, remote, or both). Adjust any settings or preferences as per your requirements; see the configuration sketch below.
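As an illustration only: recent versions of Continue read their model list from ~/.continue/config.json (early releases such as v0.0.20 used a different, Python-based configuration, so the exact format may differ). A minimal sketch pointing the plugin at the two Ollama models pulled earlier might look like this:

{
  "models": [
    { "title": "Mistral", "provider": "ollama", "model": "mistral" },
    { "title": "Phind CodeLlama", "provider": "ollama", "model": "phind-codellama" }
  ]
}

With both entries in place, you can switch between the models from the plugin's model dropdown.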

The available model providers

Once you select a model provider, the plugin shows that provider's available models.

Ollama models

I've selected both mistral and Phind CodeLlama for my IDE.

Restart the IDE:

Restart your IDE to ensure the plugin is properly loaded and configured.

IDE connecting to Continue Server

Start Using the Plugin:

Once set up, you can start using the plugin's features to enhance your coding experience, whether it's for writing code, debugging, or getting documentation help.

Use CMD-J to select a piece of code

You can see the Continue plugin in action in this marketing video.

Local Models, Privacy and Cost

For companies concerned with privacy and intellectual property, utilizing local models instead of a cloud service like ChatGPT becomes a compelling option. This approach not only addresses confidentiality concerns but also offers a cost-effective solution, since it runs on your own infrastructure. Cloud-based solutions typically deliver faster responses and are generally more reliable, barring any service outages; the trade-off with local deployment is increased control over data security and potential cost savings.

Yishai Rasowsky

Release the hostages now | Data Scientist | Converting data insights into business solutions

9 months ago

How would you efficiently design the question-answering system so that one can seamlessly toggle between different large language models, including those not on Hugging Face, e.g. ChatGPT and Gemini, which seem to be the leaders nowadays? Thanks!

Jesse Daniel Brown

Senior Software Engineer | AI Development

10 months ago

This is good. I am building a HIVE-AI community now, but have run into the problem of training the localized AI subunits like Phi2, OpenChat, MAMBA SSM6, along with others. If there is a free cloud compute service that can train the AI, please let me know. This is amazing work, btw. Would love to chat about it; message me if possible.

Lize Raes

Software Engineer and Product Manager

12 months ago

Awesome, this is exactly what is needed for companies with stringent privacy norms for their customers. Will try this out with our team!
