Supercharge Your AI with Gemini: Step-by-Step Guide to RAG and Search Integration
Summary:
The code below sets up an environment for using Google's Gemini model with LangChain for tasks involving Retrieval-Augmented Generation (RAG) and search. This setup lets you integrate a powerful language model with document retrieval and embedding functionality, making your AI applications more effective and versatile.
Step-by-Step Tasks:
1. Install the required packages.
2. Configure your Gemini API key.
3. Test the model with a simple prompt.
4. Use Gemini through LangChain's ChatGoogleGenerativeAI wrapper.
5. Stream a response chunk by chunk.
6. Build a simple prompt-to-output chain.
7. Load documents from Wikipedia.
8. Embed the documents into an in-memory vector store.
9. Assemble and run a RAG chain.
Sample Code:
!pip -q install langchain_experimental langchain_core
!pip -q install google-generativeai==0.3.1
!pip -q install google-ai-generativelanguage==0.4.0
!pip -q install langchain-google-genai
!pip -q install wikipedia
!pip -q install "langchain[docarray]"
!pip -q install docarray
!pip -q install --upgrade langchain_community docarray
!pip -q install --upgrade protobuf
Sample Code:
import os
import google.generativeai as genai

# Look up the Gemini API key stored in Google Cloud (requires the gcloud CLI).
key_name = !gcloud services api-keys list --filter="gemini-api-key" --format="value(name)"
key_name = key_name[0]
api_key = !gcloud services api-keys get-key-string $key_name --location="us-central1" --format="value(keyString)"
api_key = api_key[0]

os.environ["GOOGLE_API_KEY"] = api_key
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
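If you are not working inside a Google Cloud environment, a minimal alternative (assuming you already have a key from Google AI Studio) is to paste it in at runtime:
import os
from getpass import getpass

import google.generativeai as genai

# Prompt for the key so it never lands in the notebook's output or history.
os.environ["GOOGLE_API_KEY"] = getpass("Enter your Gemini API key: ")
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])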
Sample Code:
models = [m for m in genai.list_models()]  # inspect which models are available
model = genai.GenerativeModel('gemini-pro')

from IPython.display import Markdown, display

prompt = 'Who are you and what can you do?'
response = model.generate_content(prompt)
display(Markdown(response.candidates[0].content.parts[0].text))
Sample Code:
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.7)
result = llm.invoke("What is an LLM?")
display(Markdown(result.content))
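The HumanMessage import above also lets you pass explicit chat messages instead of a bare string, which is useful once prompts become multi-turn; a quick sketch:
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.7)

# invoke() also accepts a list of messages rather than a plain string.
messages = [HumanMessage(content="Explain tokenization in one sentence.")]
result = llm.invoke(messages)
print(result.content)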
Sample Code:
for chunk in llm.stream("Write a haiku about LLMs."):
    print(chunk.content, end="", flush=True)  # print chunks as they arrive
Sample Code:
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
output_parser = StrOutputParser()

# Note: the chain needs the LangChain wrapper (llm), not the raw genai model.
chain = prompt | llm | output_parser
chain.invoke({"topic": "machine learning"})
Sample Code:
from langchain.document_loaders import WikipediaLoader

# Pull a small corpus of related Wikipedia articles.
docs = WikipediaLoader(query="Machine Learning", load_max_docs=10).load()
docs += WikipediaLoader(query="Deep Learning", load_max_docs=10).load()
docs += WikipediaLoader(query="Neural Networks", load_max_docs=10).load()
Sample Code:
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

# Embed the documents and index them in an in-memory vector store.
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vectorstore = DocArrayInMemorySearch.from_documents(docs, embedding=embeddings)
retriever = vectorstore.as_retriever()
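Before wiring the retriever into a chain, it's worth a quick sanity check that relevant passages come back for a test query:
# Fetch the top matches and peek at the first one.
hits = retriever.get_relevant_documents("What is deep learning?")
print(len(hits), "documents retrieved")
print(hits[0].page_content[:200])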
Sample Code:
from langchain.schema.runnable import RunnableMap
from langchain.prompts import ChatPromptTemplate

template = """Answer the question in a full sentence, based only on the following context:
{context}
Return your answer inside triple backticks.
Question: {question}"""

prompt = ChatPromptTemplate.from_template(template)

# Map the incoming question to retrieved context, then prompt the model.
chain = RunnableMap({
    "context": lambda x: retriever.get_relevant_documents(x["question"]),
    "question": lambda x: x["question"]
}) | prompt | llm | output_parser

chain.invoke({"question": "What is machine learning?"})
By following these steps, you can integrate Gemini's advanced language model with document retrieval and embedding capabilities, creating a robust system for various AI tasks.