Run Large Language Model in docker — AI on Your Local Environment.

Introduction

Last year, at the Collision Conference, I had a thought-provoking discussion with a friend from the Czech Republic about the future of AI. We explored how AI would revolutionize various industries and aspects of our lives. Recently, I stumbled upon a fascinating TEDx talk by Mustafa Suleyman, in which he described AI as a new technological species that could serve as a personalized digital assistant to enhance our lives.

What are Large Language Models?

Large Language Models (LLMs) are a type of artificial intelligence designed to process and generate human-like language. These models are trained on vast amounts of text data, enabling them to learn patterns, relationships, and context. LLMs have numerous applications, including language translation, text summarization, and conversational AI.

Setting up the Local Environment

To run LLMs in your local environment, you'll need to set up a Docker container using the ollama/ollama image. Here's a step-by-step guide:

Docker Compose Configuration

Create a docker-compose.yml file with the following configuration:

networks:
  ollama-docker:
    external: false
services:
  llama3:
    image: ollama/ollama:latest
    ports:
      - 7869:11434
    volumes:
      - .:/code
      - ./ollama/ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
    networks:
      - ollama-docker
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: ollama-webui
    volumes:
      - ./ollama/ollama-webui:/app/backend/data
    depends_on:
      - llama3
    ports:
      - 8080:8080
    environment: # https://docs.openwebui.com/getting-started/env-configuration#default_models
      - OLLAMA_BASE_URLS=http://host.docker.internal:7869 # comma-separated ollama hosts
      - ENV=dev
      - WEBUI_AUTH=False
      - WEBUI_NAME=valiantlynx AI
      - WEBUI_URL=http://localhost:8080
      - WEBUI_SECRET_KEY=t0p-s3cr3t
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
    networks:
      - ollama-docker        
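
A few notes on this configuration: the Ollama API listens on port 11434 inside the container and is mapped to port 7869 on the host, so it is reachable from your machine at http://localhost:7869. The ./ollama/ollama volume keeps downloaded models on the host so they survive container restarts, and OLLAMA_KEEP_ALIVE=24h keeps a loaded model in memory for up to a day. Open WebUI reaches Ollama through host.docker.internal (mapped to the host gateway via extra_hosts) and serves its interface on port 8080; since both services share the ollama-docker network, pointing OLLAMA_BASE_URLS at http://llama3:11434 should also work.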

Running the Container

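With the docker-compose.yml in place, a typical workflow is to bring the stack up with docker compose up -d, pull a model into the Ollama container, for example docker exec -it ollama ollama pull llama3, and then open the Open WebUI interface at http://localhost:8080 in your browser. The model name llama3 is only an example; any model from the Ollama library can be pulled the same way.

You can also talk to the Ollama API directly from the host. The short Python sketch below assumes the stack is running and that the llama3 model has already been pulled; it sends a single prompt to the /api/generate endpoint exposed on host port 7869.

# Minimal sketch: query the Ollama API exposed by the compose file above.
# Assumes the stack is up and a model named "llama3" has already been pulled;
# port 7869 comes from the host-side port mapping in docker-compose.yml.
import json
import urllib.request

OLLAMA_URL = "http://localhost:7869/api/generate"

payload = {
    "model": "llama3",   # example model name; pull it first with `ollama pull llama3`
    "prompt": "Explain what a large language model is in one sentence.",
    "stream": False,     # request a single JSON response instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read().decode("utf-8"))
    print(body.get("response", ""))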
