Supercharge Your AI with Gemini: Step-by-Step Guide to RAG and Search Integration


Summary:

This guide sets up an environment for using Google's Gemini model with LangChain for tasks involving Retrieval-Augmented Generation (RAG) and search. The setup integrates a powerful language model with document retrieval and embedding functionality, making your AI applications more effective and versatile.


Step-by-Step Tasks:

  1. Install Required Packages: Ensure the following Python packages are installed:

Sample Code:

!pip -q install langchain_experimental langchain_core

!pip -q install google-generativeai==0.3.1

!pip -q install google-ai-generativelanguage==0.4.0

!pip -q install langchain-google-genai

!pip -q install wikipedia

!pip -q install langchain[docarray]

!pip -q install docarray

!pip install --upgrade langchain_community docarray

!pip -q install --upgrade protobuf
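
Optional sanity check: since the SDK versions above are pinned, it can be worth confirming which builds are actually active before continuing. This is a minimal sketch using only the standard library; the distribution names are the ones installed above:

from importlib.metadata import version

# Confirm the pinned packages installed above are the ones in the environment
for pkg in ["google-generativeai", "langchain-google-genai", "langchain-core"]:
    print(pkg, version(pkg))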

  2. Configure Google API Key: Retrieve and configure the Google API key for authentication:

Sample Code:

import os

import google.generativeai as genai

key_name = !gcloud services api-keys list --filter="gemini-api-key" --format="value(name)"

key_name = key_name[0]

api_key = !gcloud services api-keys get-key-string $key_name --location="us-central1" --format="value(keyString)"

api_key = api_key[0]

os.environ["GOOGLE_API_KEY"] = api_key

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
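
The gcloud lookup above assumes an API key resource named gemini-api-key in your current Google Cloud project. If you are running outside that environment, a simple alternative is to paste the key interactively; this sketch uses the standard library's getpass so the key never appears in the notebook:

import os
from getpass import getpass

import google.generativeai as genai

# Prompt for the key instead of fetching it with gcloud
os.environ["GOOGLE_API_KEY"] = getpass("Enter your Google API key: ")
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])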

  3. Initialize Gemini Model: List available models and choose one (e.g., gemini-pro):

Sample Code:

models = [m for m in genai.list_models()]

model = genai.GenerativeModel('gemini-pro')
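
Not every model returned by list_models() can generate text; in the google-generativeai SDK, each model advertises which methods it supports, so you can filter before choosing one:

# Show only the models that can serve generate_content() calls
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)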

  4. Generate Text Using Gemini: Generate a text response from the Gemini model:

Sample Code:

from IPython.display import Markdown


prompt = 'Who are you and what can you do?'

response = model.generate_content(prompt)

display(Markdown(response.candidates[0].content.parts[0].text))
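
Indexing into response.candidates works, but the SDK also exposes response.text as a shorthand for the first candidate's text:

# Equivalent to response.candidates[0].content.parts[0].text
display(Markdown(response.text))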

  5. Integrate with LangChain: Set up LangChain with the Gemini model:

Sample Code:

from langchain_core.messages import HumanMessage

from langchain_google_genai import ChatGoogleGenerativeAI


llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.7)

result = llm.invoke("What is a LLM?")

display(Markdown(result.content))
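
Because ChatGoogleGenerativeAI implements LangChain's Runnable interface, the same object also supports batched calls; the prompts below are just illustrative:

# invoke() has batch() and stream() counterparts on every LangChain Runnable
results = llm.batch(["Define embeddings in one sentence.", "Define a vector store in one sentence."])
for r in results:
    print(r.content)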

  6. Create Basic Chains: Example of streaming a haiku:

Sample Code:

for chunk in llm.stream("Write a haiku about LLMs."):

    print(chunk.content)

  7. Create a Joke Generation Chain:

Sample Code:

from langchain.prompts import ChatPromptTemplate

from langchain.schema.output_parser import StrOutputParser


prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")

output_parser = StrOutputParser()

chain = prompt | llm | output_parser  # pipe the LangChain llm, not the raw genai model

chain.invoke({"topic": "machine learning"})
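
The pipe (|) syntax is LangChain Expression Language (LCEL): the prompt's output feeds the model, whose output feeds the parser. The resulting chain is reusable for any topic and, like any Runnable, can be batched:

# The same chain works for any value of {topic}
jokes = chain.batch([{"topic": "neural networks"}, {"topic": "vector stores"}])
for joke in jokes:
    print(joke)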

  8. Set Up RAG (Retrieval-Augmented Generation): Load documents using the Wikipedia loader:

Sample Code:

from langchain.document_loaders import WikipediaLoader


docs = WikipediaLoader(query="Machine Learning", load_max_docs=10).load()

docs += WikipediaLoader(query="Deep Learning", load_max_docs=10).load()

docs += WikipediaLoader(query="Neural Networks", load_max_docs=10).load()
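
Before indexing, it is worth checking what the loader actually returned; each Document carries the page text plus metadata fields such as the article title and source URL (field names as provided by WikipediaLoader):

# Quick look at how many pages were fetched and where they came from
print(f"Loaded {len(docs)} documents")
for d in docs[:3]:
    print(d.metadata["title"], "->", d.metadata["source"])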

  9. Set Up Embeddings and Vector Store:

Sample Code:

from langchain_google_genai import GoogleGenerativeAIEmbeddings

from langchain.vectorstores import DocArrayInMemorySearch


embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")

vectorstore = DocArrayInMemorySearch.from_documents(docs, embedding=embeddings)

retriever = vectorstore.as_retriever()
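
A quick retrieval test confirms the index works before wiring it into a chain; the sample question here is arbitrary:

# Fetch the documents most similar to a sample question
hits = retriever.get_relevant_documents("What is backpropagation?")
for h in hits:
    print(h.metadata["title"], "-", h.page_content[:80])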

  10. Create RAG Chains: Retrieve relevant documents and generate answers:

Sample Code:

from langchain.schema.runnable import RunnableMap

from langchain.prompts import ChatPromptTemplate


template = """Answer the question in a full sentence, based only on the following context:

{context}

Return your answer wrapped in triple backticks.

Question: {question}"""


prompt = ChatPromptTemplate.from_template(template)

chain = RunnableMap({
    "context": lambda x: retriever.get_relevant_documents(x["question"]),
    "question": lambda x: x["question"]
}) | prompt | llm | output_parser  # again, use the LangChain llm, not the raw genai model


chain.invoke({"question": "What is machine learning?"})
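
If you also want to cite sources alongside the answer, one option is to run the retrieval map on its own and reuse its output for both answering and attribution; a minimal sketch building on the chain above:

# Run retrieval once, then reuse the documents for the answer and for attribution
inputs = RunnableMap({
    "context": lambda x: retriever.get_relevant_documents(x["question"]),
    "question": lambda x: x["question"]
}).invoke({"question": "What is machine learning?"})

print((prompt | llm | output_parser).invoke(inputs))
print("Sources:", sorted({d.metadata["title"] for d in inputs["context"]}))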

By following these steps, you can integrate Gemini's advanced language model with document retrieval and embedding capabilities, creating a robust system for various AI tasks.
