Unlocking the Power of Generative AI Meta Llama Instruct and Vision Models with Oracle Database 23ai
Introduction
This article explores the capabilities of Meta Llama models on Oracle Cloud and Oracle Database 23ai, demonstrating how to access and use these AI tools for a range of applications.
By following the steps outlined in this article, developers and users can unlock the full potential of Meta Llama models and Oracle Database 23ai on Oracle Cloud, enabling them to build innovative applications and solutions that harness the power of generative AI.
Demo video for this article: https://www.youtube.com/watch?v=neK53OHcOUo
Oracle and Meta have a partnership under which Meta uses Oracle Cloud Infrastructure (OCI) to train and deploy its Llama large language models (LLMs). Oracle also helps Meta develop AI agents based on Llama models.
OCI Generative AI now supports the pretrained Meta Llama 3.1 70-billion-parameter and 405-billion-parameter large language models. These models support eight languages: English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai, and they have a context length of 128,000 tokens, 16 times that of the previous Meta Llama 3 models.
According to Meta, Llama 3.1 405B is the first openly available model that rivals the top AI models in state-of-the-art capabilities such as general knowledge, steerability, math, tool use, and multilingual translation. With the release of the 405B model, Meta expects the latest generation of Llama to ignite new applications and modeling paradigms, including synthetic data generation to enable the improvement and training of smaller models, as well as model distillation at a scale never before achieved in open source.
Access Meta.llama-3.1-70b-instruct on Oracle Cloud
Prompt the OCI Generative AI chat models to generate text. You can ask questions in natural language and optionally submit text such as documents, emails, and product reviews to the chat models. Each model reasons over the text and provides intelligent answers.
Log in to cloud.oracle.com, choose a region where OCI Generative AI is available (for example, the Chicago region), and from the top navigation select Analytics & AI > AI Services > Generative AI.
Click Chat to see all the available models. This list may vary depending on the models, versions, and services available in your region.
Generate a Job Description, Product Pitch and Email
Select Meta Llama 3.1 Instruct. From the examples provided, select the job description example.
Generate a Job Description:
Input Prompt
Generate a job description for an Oracle APEX developer with the following three qualifications only:
1) At least 5 years of experience in web or mobile application development
2) Knowledge of Oracle APEX, PL/SQL, Oracle Database, DevOps
3) Ability to learn, innovate and develop next-generation applications
Output Response
Generate a Product Pitch
Input Prompt:
Our product helps detect money laundering and money mules, monitor real-time financial transactions, raise fraud alerts, and automatically block cards while transaction anomalies are detected. We are a next-generation financial services company with a core team of experts in the finance and banking industry, and we have over 100 customer implementations globally.
That was a great pitch for my product :-)
View Code and Develop Application
You can view the generated code in both Java and Python and run it in your own environment, or invoke the service over its REST API from PL/SQL or any other technology you are familiar with. To see the auto-generated code, click the View code button.
import oci
# Setup basic variables
# Auth Config
# TODO: Update the config profile name and use a compartment OCID whose policies grant permission to use the Generative AI service
compartment_id = "<Your-Compartment-OCID>"
CONFIG_PROFILE = "DEFAULT"
config = oci.config.from_file('~/.oci/config', CONFIG_PROFILE)
# Service endpoint
endpoint = "https://inference.generativeai.us-chicago-1.oci.oraclecloud.com"
generative_ai_inference_client = oci.generative_ai_inference.GenerativeAiInferenceClient(config=config, service_endpoint=endpoint, retry_strategy=oci.retry.NoneRetryStrategy(), timeout=(10,240))
chat_detail = oci.generative_ai_inference.models.ChatDetails()
content = oci.generative_ai_inference.models.TextContent()
content.text = "Generate a product pitch for a USB connected compact microphone that can record surround sound. The microphone is most useful in recording music or conversations. The microphone can also be useful for recording podcasts."
message = oci.generative_ai_inference.models.Message()
message.role = "USER"
message.content = [content]
chat_request = oci.generative_ai_inference.models.GenericChatRequest()
chat_request.api_format = oci.generative_ai_inference.models.BaseChatRequest.API_FORMAT_GENERIC
chat_request.messages = [message]
chat_request.max_tokens = 600
chat_request.temperature = 0.25
chat_request.frequency_penalty = 1
chat_request.presence_penalty = 0
chat_request.top_p = 0.75
chat_request.top_k = -1
chat_detail.serving_mode = oci.generative_ai_inference.models.OnDemandServingMode(model_id="ocid1.generativeaimodel.oc1.us-chicago-1.<model-id>")
chat_detail.chat_request = chat_request
chat_detail.compartment_id = compartment_id
chat_response = generative_ai_inference_client.chat(chat_detail)
# Print result
print("**************************Chat Result**************************")
print(vars(chat_response))
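The `print(vars(chat_response))` call dumps the entire response object. In practice you usually want only the generated text. Assuming the GENERIC-format response shape exposed by the OCI Python SDK (`data.chat_response.choices[*].message.content[*].text`), a minimal extraction sketch, shown here against a mocked response object with invented text:

```python
from types import SimpleNamespace

def extract_text(chat_response):
    """Collect the generated text from a GENERIC-format chat response.
    The attribute path is assumed from the OCI Python SDK response model."""
    parts = []
    for choice in chat_response.data.chat_response.choices:
        for item in choice.message.content:
            parts.append(item.text)
    return "".join(parts)

# Mocked object standing in for generative_ai_inference_client.chat(chat_detail)
mock = SimpleNamespace(data=SimpleNamespace(chat_response=SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(
        content=[SimpleNamespace(text="Introducing our compact USB microphone...")]))])))

print(extract_text(mock))
```

In your own environment, pass the real `chat_response` from the SDK call instead of the mock.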
Access Meta.llama-3.2-90b-vision-instruct on Oracle Cloud
For some models, you can submit images and ask questions about them. Select the Meta.llama-3.2-90b-vision-instruct model and drag and drop an image of an apple into the Generative AI playground.
Meta Llama Vision Instruct model response
You can always view the generated Java and Python code by clicking the View code button.
Read the contents of an Image
Prompt Image and Text
Prompt: What is the information provided in this cheque?
Meta Llama Vision Instruct model response
Prompt: What is the card number?
Meta Llama Vision Instruct model response
The card number is: 1234 4568 1234 4568.
Leveraging Oracle Database with Meta Llama Models
Now that we know how to get information from an image, we can use that extracted text to query Oracle Autonomous Database. As an example, let us find customer information based on the card number read by the Meta Llama Vision model.
Create a simple cards table to hold card and customer information:
CREATE TABLE "CC_FD"
( "ID" NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 9999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE NOKEEP NOSCALE NOT NULL ENABLE,
"CUST_ID" NUMBER, -- Customer Id
"CC_NO" NUMBER, -- Card Number
"STATUS" VARCHAR2(50),
"VALIDITY" DATE,
"FIRST_NAME" VARCHAR2(50),
"LAST_NAME" VARCHAR2(50),
"BANK_NAME" VARCHAR2(50),
PRIMARY KEY ("ID")
USING INDEX ENABLE
) ;
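The lookup that the PL/SQL block below performs (card number to customer name and status) can be sanity-checked offline. Here is a hedged Python sketch using sqlite3 as a stand-in for the `CC_FD` table; the sample rows and card numbers are invented for illustration:

```python
import sqlite3

# In-memory stand-in for the Oracle CC_FD table (columns trimmed for brevity)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE cc_fd (
    id INTEGER PRIMARY KEY,
    cust_id INTEGER, cc_no INTEGER, status TEXT,
    first_name TEXT, last_name TEXT, bank_name TEXT)""")
conn.executemany(
    "INSERT INTO cc_fd (cust_id, cc_no, status, first_name, last_name, bank_name) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    [(101, 1234456812344568, "Active", "Asha", "Rao", "Demo Bank"),
     (102, 9999000099990000, "Blocked", "Ravi", "Kumar", "Demo Bank")])

def lookup(card_no):
    """Mirror the branch logic of the PL/SQL block: greet active customers,
    warn on blocked cards."""
    row = conn.execute(
        "SELECT first_name, status FROM cc_fd WHERE cc_no = ?",
        (card_no,)).fetchone()
    if row is None:
        return "Card not found"
    first_name, status = row
    return "Card has been Blocked.." if status == "Blocked" else f"Welcome {first_name}"

print(lookup(1234456812344568))  # → Welcome Asha
print(lookup(9999000099990000))  # → Card has been Blocked..
```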
The following PL/SQL reads the card information from an uploaded image, fetches the customer details from a database table, and optionally generates an audio response:
declare
l_blob_content blob;
l_mime_type varchar2(200);
l_file_name varchar2(200);
l_base64_content clob;
l_response_text clob;
l_request_body clob;
l_text varchar2(32000);
l_api_url varchar2(2000) := 'https://inference.generativeai.us-chicago-1.oci.oraclecloud.com/20231130/actions/chat';
l_compartment_id varchar2(1000) := '<Your compartment OCID>';
l_model_id varchar2(100) := 'meta.llama-3.2-90b-vision-instruct';
l_id number;
l_cardno number;
l_first_name varchar2(50);
l_status varchar2(20);
l_filename varchar2(50);
begin
select blob_content, mime_type, filename, id
into l_blob_content,l_mime_type, l_file_name, l_id
from apex_application_temp_files
where name = :P48_IMAGE_UPLOAD;
-- Set the Image ID to a page item, here we are using page item P48_ID
:P48_ID := l_id;
dbms_lob.createtemporary(l_base64_content, true);
select replace(replace(apex_web_service.blob2clobbase64(l_blob_content), chr(10), ''), chr(13), '')
into l_base64_content
from dual;
-- Build JSON request body for meta.llama-3.2-90b-vision-instruct
l_request_body := '
{
"compartmentId": l_compartment_id,
"servingMode": {
"servingType": "ON_DEMAND",
"modelId": "meta.llama-3.2-90b-vision-instruct"
},
"chatRequest": {
"messages": [
{
"role": "USER",
"content": [
{
"type": "TEXT",
"text": "what is the card number"
},
{
"type": "IMAGE",
"imageUrl": {
"url": "data:image/png;base64,'||l_base64_content||'"
}
}
]
}
],
"maxTokens": 2500,
"isStream": false,
"apiFormat": "GENERIC",
"temperature": 0.75,
"frequencyPenalty": 1,
"presencePenalty": 0,
"topP": 0.7,
"topK": 1
}
}';
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/json';
-- Make the API call
l_response_text := apex_web_service.make_rest_request(
p_url => l_api_url,
p_http_method => 'POST',
p_body => l_request_body,
p_credential_static_id => 'Ind_OCI_WebCred'--'credentials_for_ociai'
);
SELECT jt.text INTO l_text
FROM JSON_TABLE(
l_response_text,
'$.chatResponse.choices[*].message.content[*]'
COLUMNS (
text CLOB PATH '$.text'
)
) jt;
-- get card number from image
select regexp_replace(l_text, '[^[:digit:]]', '') into l_cardno from dual;
-- get customer details from a table based on card number uploaded
select first_name, status into l_first_name, l_status from cc_fd where cc_no = l_cardno and rownum = 1;
-- Optional Generate Speech AI Output
--l_filename := card2speech (l_cardno );
:P48_CARDNO := l_cardno;
:P48_STATUS := l_status;
:P48_FILENAME := l_filename;
if (l_status = 'Blocked') then
:P48_CUSTNAME := 'Card has been Blocked..';
else
:P48_CUSTNAME := 'Welcome '||l_first_name;
end if;
-- Add success message
apex_application.g_notification := 'API called successfully!';
end;
Create an Oracle APEX dynamic action that runs the above code after the image upload and grabs the uploaded file.
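The JSON path used in the JSON_TABLE extraction above (`$.chatResponse.choices[*].message.content[*]`) and the `REGEXP_REPLACE` digit-extraction step can be checked offline. A minimal Python sketch, with the response body mocked in the shape assumed by the GENERIC chat API:

```python
import json
import re

# Mocked response body with the shape the JSON_TABLE query expects
mock_response = json.dumps({
    "chatResponse": {
        "choices": [{
            "message": {
                "content": [{"type": "TEXT",
                             "text": "The card number is: 1234 4568 1234 4568."}]
            }
        }]
    }
})

# Equivalent of the JSON_TABLE path $.chatResponse.choices[*].message.content[*].text
doc = json.loads(mock_response)
text = "".join(part["text"]
               for choice in doc["chatResponse"]["choices"]
               for part in choice["message"]["content"])

# Same step as REGEXP_REPLACE(l_text, '[^[:digit:]]', ''): keep digits only
card_no = re.sub(r"\D", "", text)
print(card_no)  # → 1234456812344568
```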
Acknowledgement:
Thanks to Karthik Sukumar for helping with the PL/SQL to read Meta Llama vision content.
How do you set up Oracle APEX OCI web credentials and call generative AI services?
How to Convert Text Input to Audio Output with OCI Speech AI (Text To Speech)?
Troubleshooting (Updated Mar 13th 2025)
Error Message: "Image is corrupted or unreadable" or "Object not found"
In the request JSON, replace the line below
"url": "data:image/png;base64, '||l_base64_content||'"
with the following (note: no space between "base64," and the image CLOB)
"url": "data:image/png;base64,'||l_base64_content||'"
Conclusion
The ability to read image content, query the database on that content extracted, create alerts and push notifications, detect fraud in real time, make instant decisions, and run it in a multi-cloud environment opens tremendous opportunities in the world of enterprise business and AI itself. Looking forward to more of these emerging AI and Database innovations.
Thanks for reading, liking and sharing
Regards, Madhusudhan Rao
Demo video: https://www.youtube.com/watch?v=neK53OHcOUo