Revolutionizing Enterprise AI: The Open Platform for Enterprise AI (OPEA)

The world of Generative AI is rapidly evolving, presenting incredible opportunities for businesses. However, successfully integrating GenAI into enterprise environments requires a standardized, secure, and scalable approach. That's where the Open Platform for Enterprise AI (OPEA) comes in.

What is OPEA?

OPEA is an open-source initiative under the LF AI & Data Foundation that aims to provide a detailed framework of composable building blocks for state-of-the-art generative AI systems. This includes everything from Large Language Models (LLMs) and data stores to prompt engines and architectural blueprints for Retrieval-Augmented Generation (RAG) workflows.

Key Benefits of OPEA

  • Efficiency: OPEA harnesses existing infrastructure, including AI accelerators and other hardware.
  • Seamless Integration: It integrates smoothly with enterprise software, providing heterogeneous support and stability across systems and networks.
  • Openness: OPEA brings together the best innovations and is free from proprietary vendor lock-in.
  • Ubiquity: Its flexible architecture is built to run everywhere – cloud, data center, edge, and PC.
  • Trust: OPEA features a secure, enterprise-ready pipeline with tools for responsibility, transparency, and traceability.
  • Scalability: Access to a vibrant ecosystem of partners helps build and scale your AI solutions.


OPEA-COMPs

A core component of OPEA is the comps directory, found within the GenAIComps repository. This directory contains a rich collection of pre-built, modular components designed to accelerate the development and deployment of GenAI applications.

The comps directory is organized into a series of subdirectories, each focusing on a specific aspect of GenAI application development:

  • agent: Components for building intelligent agents that can interact with users and systems.
  • animation: Tools and resources for generating animations using AI.
  • asr: Automatic Speech Recognition components for converting audio to text.
  • chathistory: Components for managing and utilizing chat history in conversational AI applications.
  • cores: Fundamental building blocks and utilities used across different components.
  • dataprep: Data preparation and preprocessing tools for cleaning and transforming data for AI models.
  • embeddings: Components for generating and working with vector embeddings of text and other data.
  • feedback_management: Tools for collecting and managing user feedback to improve AI models.
  • finetuning: Components for fine-tuning pre-trained models on specific datasets.
  • guardrails: Components for implementing safety and security measures in AI applications.
  • image2image: Components for image-to-image translation and manipulation.
  • image2video: Tools for creating videos from images using AI.
  • llms: Components for working with Large Language Models (LLMs).
  • lvms: Components for working with Large Vision Models (LVMs).
  • prompt_registry: Components for managing and organizing prompts for LLMs.
  • rerankings: Components for re-ranking search results or other lists using AI.
  • retrievers: Components for retrieving relevant information from data sources.
  • text2image: Components for generating images from text descriptions.
  • text2sql: Components for converting natural language queries into SQL queries.
  • third_parties: Integrations with third-party AI services and tools.
  • tts: Text-to-Speech components for converting text to audio.
  • web_retrievers: Components for retrieving information from the web.

This modular structure allows developers to easily select and combine the components they need to build custom GenAI applications, significantly reducing development time and effort.
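If you want to explore these building blocks directly, one simple way (assuming the repository lives under the opea-project organization on GitHub) is to clone GenAIComps and browse the comps directory:

git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
ls comps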


How to run OPEA in a Local Environment?

To run OPEA, we will follow the steps shown in the image below -

OPEA Process

A. Configure Docker -

Ensure you have Docker installed on your machine. You can also use Docker Desktop.

Below is the Docker Compose file that runs Ollama in a container. In this file -

LLM_MODEL_ID = llama3.2:1b
host_ip = IP of your machine


docker-compose.yml
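The compose file itself appears as an image in the original post; below is a minimal sketch of what such a file might look like, assuming the official ollama/ollama image and mapping host port 9000 (the port used by the curl commands later in this post) to Ollama's default port 11434:

services:
  ollama-server:
    image: ollama/ollama
    container_name: ollama-server
    ports:
      - "9000:11434"
    environment:
      LLM_MODEL_ID: ${LLM_MODEL_ID}   # e.g. llama3.2:1b
      host_ip: ${host_ip}             # IP of your machine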


B. Start Ollama Service Container -

Run the command below to start the Ollama container in Docker -

docker compose up -d        

This will start the container on your machine -

Docker Running Ollama Container
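You can confirm the container is up using the standard Docker CLI:

docker ps
# the Ollama container should be listed, with host port 9000 mapped to 11434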

C. Pull Model-

Pull the model, Meta's Llama 3.2 1B (llama3.2:1b) -

curl http://localhost:9000/api/pull -d '{
  "model": "llama3.2:1b"
}'
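To verify the model has been downloaded, you can list the models available locally through Ollama's /api/tags endpoint:

curl http://localhost:9000/api/tags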

D. Process Request-

Once the model is pulled, you can send a request, for example -

curl http://localhost:9000/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Tell me about Mount Everest?"
}'

and the LLM will return a response.

Response from LLM
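Note that /api/generate streams the answer back as a series of JSON chunks by default. If you prefer a single JSON response, disable streaming in the request body, as the Python example below does:

curl http://localhost:9000/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Tell me about Mount Everest?",
  "stream": false
}'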


Getting a response with Code -

You can write Python or JavaScript code to send the request and process the LLM's response, as shown in the example below -

import requests

def generate_response(prompt, my_model="llama3.2:1b"):
    # Ollama's generate endpoint, exposed on port 9000 by the Docker Compose file above
    url = "http://localhost:9000/api/generate"

    data = {
        "model": my_model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a token stream
    }

    response = requests.post(url, json=data)
    response.raise_for_status()
    return response.json()

print(generate_response("Tell me about Mount Everest."))
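The JSON returned by /api/generate includes the generated text in its "response" field, along with metadata such as token counts and timings, so you can print just the answer:

result = generate_response("Tell me about Mount Everest.")
print(result["response"])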

This post explains the advantages of OPEA and walks through an example of running it in a local environment. Because OPEA runs Ollama in a container, it is easy to ship and migrate to different environments. OPEA supports deployment across various environments, including cloud, data center, edge, and PC. This flexibility is crucial for cloud-native applications that may need to run in different environments.

Hope you have liked this post.

Do Like and Stay tuned!

#genai #opea #enterprise #ai
