Revolutionizing Enterprise AI: The Open Platform for Enterprise AI (OPEA)
Prashant Sharma
IBM Champion 2025-2024-2023-2022 | GenAI | IBM Maximo (M4/5/6/7.X & MAS) - CMMS & APM & Mobile | 1xGCP-1xAzure-1xOCI | XLRI | CSM
The world of Generative AI is rapidly evolving, presenting incredible opportunities for businesses. However, successfully integrating GenAI into enterprise environments requires a standardized, secure, and scalable approach. That's where the Open Platform for Enterprise AI (OPEA) comes in.
What is OPEA?
OPEA is an open-source initiative under the LF AI & Data Foundation that aims to provide a detailed framework of composable building blocks for state-of-the-art generative AI systems. This includes everything from Large Language Models (LLMs) and data stores to prompt engines and architectural blueprints for Retrieval-Augmented Generation (RAG) workflows.
Key Benefits of OPEA
OPEA-COMPs
A core component of OPEA is the comps directory, found within the GenAIComps repository. This directory contains a rich collection of pre-built, modular components designed to accelerate the development and deployment of GenAI applications.
The comps directory is organized into a series of subdirectories, each focusing on a specific aspect of GenAI application development:
This modular structure allows developers to easily select and combine the components they need to build custom GenAI applications, significantly reducing development time and effort.
How to Run OPEA in a Local Environment?
To run OPEA locally, we will follow the steps below-
A. Configure Docker -
Ensure you have Docker installed on your machine. You can also use Docker Desktop.
Below is the Docker Compose file that will run Ollama in a container. In this file-
LLM_MODEL_ID = llama3.2:1b
host_ip = IP of your machine
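The original compose file was shared as an image, so here is a minimal sketch of what such a file could look like. The service name, image tag, and the 9000-to-11434 port mapping are assumptions chosen to match the curl commands used later; the actual file in the OPEA repository may differ.
services:
  ollama-server:
    image: ollama/ollama
    container_name: ollama-server
    ports:
      - "9000:11434"   # expose Ollama's API (11434 inside the container) on port 9000 of the host
    environment:
      - LLM_MODEL_ID=${LLM_MODEL_ID}   # e.g. llama3.2:1b
      - host_ip=${host_ip}             # IP of your machine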
B. Start Ollama Service Container -
Run the command below to start the Ollama container in Docker-
docker compose up -d
This will start the container on your machine.
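To confirm the container is up before pulling a model, you can check its status with a standard Docker Compose command, run from the same directory as the compose file:
docker compose ps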
C. Pull Model-
Pull the model, which is Meta's Llama 3.2 1B (llama3.2:1b).
curl http://localhost:9000/api/pull -d '{
  "model": "llama3.2:1b"
}'
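Once the pull completes, you can verify the model is available by listing the models Ollama has stored locally. This uses Ollama's standard /api/tags endpoint, assuming the same port mapping as above:
curl http://localhost:9000/api/tags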
D. Process Request-
Once the model is pulled, you can send a request, for example-
curl http://localhost:9000/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Tell me about Mount Everest?"
}'
and the LLM will return a response.
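By default, Ollama's /api/generate endpoint streams the answer back as a sequence of JSON objects, one per line. If you would rather receive a single JSON object, you can set "stream": false in the request body, which is what the Python example below also does:
curl http://localhost:9000/api/generate -d '{
  "model": "llama3.2:1b",
  "prompt": "Tell me about Mount Everest?",
  "stream": false
}'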
Getting a Response with Code-
You can write Python or JavaScript code to send the request and process the LLM's response, as shown in the example below-
import requests

def generate_response(prompt, my_model="llama3.2:1b"):
    # Ollama's generate endpoint, exposed on port 9000 of the host
    url = "http://localhost:9000/api/generate"
    data = {
        "model": my_model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a stream
    }
    response = requests.post(url, json=data)
    return response.json()

print(generate_response("Tell me about Mount Everest."))
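The JSON returned by /api/generate contains the generated text in a response field, along with metadata such as the model name and timing, so to print only the answer you can do:
result = generate_response("Tell me about Mount Everest.")
print(result["response"])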
This post explains the advantages of OPEA and shows an example of how to use it in a local environment. Because OPEA runs Ollama in a container, it is easy to ship and migrate to different environments. OPEA supports deployment across cloud, data center, edge, and PC, and this flexibility is crucial for cloud-native applications that may need to run in different environments.
Hope you liked this post.
Do Like and Stay tuned!
#genai #opea #enterprise #ai