LangChain Models

In LangChain, there are two types of models: LLMs (Large Language Models) and Chat Models. Let’s explore each of them with some examples.

Note: This article is an extension of a previous Medium article.

Note: Before starting with the examples, make sure you have prepared your machine, installed Python, installed the OpenAI package, and obtained an API key, as explained in the following articles:

Preparing Your Machine

You will need to install Python on your machine before we start. The following articles can help with that.

OpenAI APIs

The following articles will get you started with OpenAI APIs that we will need to continue with LangChain.


Note: We may use Google Colaboratory Python notebooks to avoid setup and environment delays. The focus of this article is to get you up and running with Machine Learning in Python, and we can do everything we need there. The following article can help you get started.

Installing LangChain for Python

  • On Google Colab, you can run the following:

!pip install langchain[all]

  • On a zsh terminal, you can run the following (the quotes stop zsh from treating the square brackets as a glob pattern):

pip install 'langchain[all]'

Let's upgrade as well, to make sure all packages are compatible:

pip install --upgrade langchain        

1. LLMs (Large Language Models):

LLMs are models that take text as input and generate text as output. They are primarily used for tasks like text generation, completion, and summarization. In LangChain, you can use LLMs through the llms module.

LangChain LLM Example

  1. Import OpenAI from langchain.llms as follows:

from langchain.llms import OpenAI        
Note: we will use the OpenAI API key we stored in an environment variable, as explained in the following article.


import os
import openai

openai.api_key = os.environ.get("OPENAI_API_KEY")
llm = OpenAI(openai_api_key=openai.api_key)        

Now, let’s use the text completion feature by providing some text, and the model will respond with a completion, as follows:

print(llm('Here is a fun fact about Dinosaurs:'))        
Dinosaurs had a great sense of smell and could detect scents from up to a mile away!        

You can also ask for multiple prompts at once as follows:

print(llm.generate(['fact about Dinosaurs', 'fact about Ninja']))        
generations=[[Generation(text='\n\nThe first dinosaur fossil was discovered in 1824 in England.', generation_info={'finish_reason': 'stop', 'logprobs': None})], 

[Generation(text='\n\nNinja were trained to be able to jump over walls as high as 8 feet, and to run across rooftops with ease.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'prompt_tokens': 7, 'completion_tokens': 42, 'total_tokens': 49}, 'model_name': 'text-davinci-003'} run=[RunInfo(run_id=UUID('b9c64034-67b2-466c-bded-6f3aee5f0744')), RunInfo(run_id=UUID('aa0ccf40-5b40-452d-a0d9-22618f36eaee'))]        

Let’s store the response in a variable:

response = llm.generate(['fact about Dinosaurs', 'fact about Ninja'])        

Now, let’s check the schema:

response.schema()        
{'title': 'LLMResult',
 'description': 'Class that contains all results for a batched LLM call.',
 'type': 'object',
 'properties': {'generations': {'title': 'Generations',
   'type': 'array',
   'items': {'type': 'array', 'items': {'$ref': '#/definitions/Generation'}}},
  'llm_output': {'title': 'Llm Output', 'type': 'object'},
  'run': {'title': 'Run',
   'type': 'array',
   'items': {'$ref': '#/definitions/RunInfo'}}},
 'required': ['generations'],
 'definitions': {'Generation': {'title': 'Generation',
   'description': 'A single text generation output.',
   'type': 'object',
   'properties': {'text': {'title': 'Text', 'type': 'string'},
    'generation_info': {'title': 'Generation Info', 'type': 'object'}},
   'required': ['text']},
  'RunInfo': {'title': 'RunInfo',
   'description': 'Class that contains metadata for a single execution of a Chain or model.',
   'type': 'object',
   'properties': {'run_id': {'title': 'Run Id',
     'type': 'string',
     'format': 'uuid'}},
   'required': ['run_id']}}}        

Let’s print just the second fact, the one about Ninjas:

print(response.generations[1][0].text)        
Ninjas were trained to use a range of weapons including swords, nunchucks, 
spiked clubs, and even throwing stars.        


2. Chat Models:

In the ever-evolving field of artificial intelligence, chat models have emerged as a transformative technology, paving the way for more human-like interactions with machines. Unlike traditional language models that simply process raw text input and output, chat models are designed to engage in conversations, making them a vital component of the conversational AI landscape. In this article, we'll explore what chat models are, their key features, and the benefits they offer, with a focus on LangChain's impressive ChatOpenAI model.

Understanding Chat Models

Chat models represent a specialized variation of language models. They excel in understanding and generating responses in a conversational context, mimicking human-like interactions. These models operate based on chat messages, consisting of different types of messages, such as HumanMessage, AIMessage, and SystemMessage. Here's a breakdown of some core features of chat models:

1. Conversational Input

Chat models take a list of message objects as input, which typically includes HumanMessages representing user input, AIMessages conveying AI responses, and SystemMessages for context or system-generated content. This structure allows the model to comprehend and respond to multi-turn conversations effectively. Here is how you send the Human Message in code:

import os

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(openai_api_key=os.environ.get("OPENAI_API_KEY"))

chat([HumanMessage(content="Hello!")])
        

In this case, it will respond with an AI Message:

AIMessage(content='Hi there! How can I assist you today?')
        


2. Single Response Output

The output of a chat model is a single AIMessage that encapsulates the AI's response to the conversation. This simplicity ensures that developers can seamlessly integrate chat models into their applications.
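To make that shape concrete, here is a toy stand-in (plain Python, no API call) for the returned message object; the real AIMessage class comes from langchain.schema:

```python
from dataclasses import dataclass

# Toy stand-in for langchain.schema.AIMessage, for illustration only.
@dataclass
class AIMessage:
    content: str

# A chat model call returns one such message; you read the reply
# from its .content attribute.
response = AIMessage(content="Hi there! How can I assist you today?")
print(response.content)
# Output: "Hi there! How can I assist you today?"
```

In real code, you would read `chat([...]).content` in exactly the same way.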

3. Behavior (SystemMessage)

The SystemMessage class in LangChain is used to initialize the behavior of a model, typically as the first message in a sequence of input messages. It acts as an instruction or context for the AI model.

For example, if we add a "very rude" behavior to the previous example, the response will change:


from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(openai_api_key=secret_key)

response = chat(
    [
        SystemMessage(
            content="You are a very rude person who does not like to respond to greetings and if he has to respond, he responds in a very rude way"
        ),
        HumanMessage(content="Hello!"),
    ]
)        

The response will change as follows:

What do you want?        

Here is the full code:

secret_key = "Put Your API Key Here"

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(openai_api_key=secret_key)

response = chat(
    [
        SystemMessage(
            content="You are a very rude person who does not like to respond to greetings and if he has to respond, he responds in a very rude way"
        ),
        HumanMessage(content="Hello!"),
    ]
)

print(response.content)
# Output: "What do you want?"
        

Let's understand the important parameters of the SystemMessage:

  • content (Required): This is a string that represents the main content of the system message. It should specify the behavior you want the model to adopt.
  • additional_kwargs (Optional): This is a dictionary that can contain any additional information.
  • is_chunk (Optional): This parameter should always be False for SystemMessage.
  • type (Optional): The type of message. For SystemMessage, this should always be 'system'.

Now, let's consider a scenario where we have an AI model that we want to behave as a teaching assistant. The SystemMessage can be utilized as follows:

from langchain.schema.messages import SystemMessage

# Create a system message
system_message = SystemMessage(content="You are a teaching assistant who helps students in learning Python.")

# The 'content' sets the behavior of the AI model.
        

The SystemMessage class also includes various methods like copy(), dict(), and json(), which allow you to duplicate the message or generate a dictionary or JSON representation of it.
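As a rough standard-library analogy (a toy stand-in, not LangChain's actual implementation), a dataclass offers the same kinds of duplication and dict/JSON conversion:

```python
import json
from dataclasses import dataclass, asdict, replace

# Toy stand-in for a LangChain message class, for illustration only.
@dataclass
class Message:
    content: str
    type: str = "system"

m = Message(content="You are a teaching assistant.")

copy_of_m = replace(m)          # analogous to SystemMessage.copy()
as_dict = asdict(m)             # analogous to .dict()
as_json = json.dumps(as_dict)   # analogous to .json()

print(as_json)
# Output: {"content": "You are a teaching assistant.", "type": "system"}
```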

Another example of using SystemMessage in the context of an agent:

from langchain.chat_models import ChatOpenAI
from langchain.agents import OpenAIFunctionsAgent
from langchain.schema import SystemMessage

llm = ChatOpenAI(temperature=0)
system_message = SystemMessage(content="You are a web researcher who uses search engines to look up information.")
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
agent = OpenAIFunctionsAgent(llm=llm, prompt=prompt)

# Now the agent has been primed with the system message, dictating its behavior as a web researcher.
        

The output for these examples would not be visible as these are setup actions, telling the AI how to behave in subsequent interactions. You would see the effect of these in the responses when actual queries are passed to the model or agent.

4. Multiple Prompts and Responses

You can batch multiple conversations, each with its own SystemMessage and HumanMessage, as follows:

response = chat.generate(
    [
        [
            SystemMessage(
                content="You are a very rude person who does not like to respond to greetings and if he has to respond, he responds in a very rude way"
            ),
            HumanMessage(content="Hello!"),
        ],
        [
            SystemMessage(
                content="You are a very nice person who welcomes people and likes to return long welcoming greetings"
            ),
            HumanMessage(content="Hello!"),
        ],
    ]
)        

You will get a result object back; here are the two ChatGeneration responses it contains:

ChatGeneration(text='What do you want?')

ChatGeneration(text="Hello there! Welcome to our conversation! I hope you're doing well today. It's a pleasure to meet you. How can I assist you or just have a friendly chat?")        

Here is the full code:

secret_key = "---"

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(openai_api_key=secret_key)

response = chat.generate(
    [
        [
            SystemMessage(
                content="You are a very rude person who does not like to respond to greetings and if he has to respond, he responds in a very rude way"
            ),
            HumanMessage(content="Hello!"),
        ],
        [
            SystemMessage(
                content="You are a very nice person who welcomes people and likes to return long welcoming greetings"
            ),
            HumanMessage(content="Hello!"),
        ],
    ]
)

print(f"The Rude Response: {response.generations[0][0].text}")
print(f"The Nice Response: {response.generations[1][0].text}")
        
The Rude Response: What do you want?
The Nice Response: Hello! Welcome to our conversation! I'm delighted to have you here. How can I assist you today?
        

The full response looks as follows:

generations=[[ChatGeneration(text='What do you want?', generation_info={'finish_reason': 'stop'}, message=AIMessage(content='What do you want?'))], [ChatGeneration(text="Hello there! Welcome to our conversation! I hope you're doing well today. It's a pleasure to meet you. How can I assist you or just have a friendly chat?", generation_info={'finish_reason': 'stop'}, message=AIMessage(content="Hello there! Welcome to our conversation! I hope you're doing well today. It's a pleasure to meet you. How can I assist you or just have a friendly chat?"))]] llm_output={'token_usage': {'prompt_tokens': 70, 'completion_tokens': 41, 'total_tokens': 111}, 'model_name': 'gpt-3.5-turbo'} run=[RunInfo(run_id=UUID('8d5e1e4b-2c78-4386-9562-16b939293def')), RunInfo(run_id=UUID('ec04910f-6264-430d-be36-af1b4304bc0b'))]        

As you can see, the result contains a list called generations, and generations is a list of lists: one inner list per input conversation.
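To see why two indices are needed, here is a plain-Python mimic of that nested structure, with strings standing in for the ChatGeneration objects: the outer index selects the conversation, the inner index selects the generation within it.

```python
# Mimic of response.generations: one inner list per input conversation.
generations = [
    ["What do you want?"],                          # generations[0]: first prompt
    ["Hello there! Welcome to our conversation!"],  # generations[1]: second prompt
]

rude = generations[0][0]  # first conversation, first generation
nice = generations[1][0]  # second conversation, first generation
print(rude)
print(nice)
```

With the real object, you would additionally reach through the `.text` attribute, as in `response.generations[1][0].text`.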

Extra Parameters and Args

You can add additional parameters and arguments such as temperature, presence_penalty, and max_tokens. With a low temperature such as 0, the model will always give you the same response.

result = chat([HumanMessage(content='Can you tell me a joke?')],
                 temperature=0)        

No matter how many times you run this command, you will get the same joke.
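As a toy illustration of why this happens (not LangChain code): sampling weights are proportional to exp(logit / temperature), so as the temperature approaches 0 the distribution collapses onto the highest-scoring token and sampling becomes effectively deterministic.

```python
import math

def sampling_weights(logits, temperature):
    # Softmax with temperature scaling; lower temperature sharpens the
    # distribution toward the largest logit.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(sampling_weights(logits, 1.0))   # noticeably spread across tokens
print(sampling_weights(logits, 0.01))  # almost all mass on the top token
```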


Caching

LangChain allows you to cache responses so that you do not have to hit the API, which costs money, on every call. Caching makes the most sense with temperature 0, where you expect the same response anyway.

import os

import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(openai_api_key=os.environ['OPENAI_API_KEY'])

# Register a global in-memory cache for LLM calls
langchain.llm_cache = InMemoryCache()

# The first call is not cached yet, so it should take longer
llm.predict("Tell me a joke")

# An identical second call is served from the cache almost instantly
llm.predict("Tell me a joke")        

Note the difference in response time between the two calls.
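The idea behind InMemoryCache can be sketched with a plain dictionary (a toy stand-in, not LangChain's implementation): identical prompts are answered from memory instead of triggering a new, billable API call.

```python
# Toy prompt cache: maps prompt -> stored response.
cache = {}
api_calls = 0

def fake_llm(prompt):
    # Stand-in for a slow, paid API call.
    global api_calls
    api_calls += 1
    return f"response to: {prompt}"

def cached_llm(prompt):
    if prompt not in cache:
        cache[prompt] = fake_llm(prompt)  # slow path: hit the "API"
    return cache[prompt]                  # fast path: served from memory

cached_llm("Tell me a joke")  # first call: goes to the API
cached_llm("Tell me a joke")  # second call: served from the cache
print(api_calls)
# Output: 1
```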

Batch Processing

Chat models are not limited to single conversations. They can process multiple conversations simultaneously in batch mode, making them highly efficient for applications requiring scalability.


Support for Context

Chat models retain information from previous messages within the conversation, enabling them to maintain context throughout the interaction. This contextual awareness is crucial for delivering coherent and relevant responses.
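Under the hood, the chat API itself is stateless; context is maintained by resending the accumulated message list on every turn. Here is a plain-Python sketch of that bookkeeping, with (role, content) tuples standing in for the message classes and hard-coded replies standing in for real model output:

```python
# Conversation history as (role, content) pairs; a real app would use
# HumanMessage / AIMessage objects and a real model call instead.
history = []

def take_turn(user_text, ai_reply):
    # Append the user's turn, send the FULL history to the model,
    # then append the model's reply so the next turn can see it.
    history.append(("human", user_text))
    history.append(("ai", ai_reply))

take_turn("My name is Sam.", "Nice to meet you, Sam!")
take_turn("What is my name?", "Your name is Sam.")

# Every prior turn is resent each time, which is how the model "remembers".
print(len(history))
# Output: 4
```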

Built-in Integrations

One of the remarkable features of LangChain's chat models, such as ChatOpenAI, is their extensive list of built-in integrations with various chat model providers. These integrations allow developers to leverage the power of different AI models effortlessly. Some notable providers include:

  • OpenAI: Renowned for its cutting-edge language models, OpenAI offers state-of-the-art chat capabilities that can be seamlessly integrated into your applications.
  • Anthropic: Anthropic's chat model integration provides an alternative approach to conversational AI, offering unique capabilities and perspectives.
  • Cohere: Cohere's integration adds versatility to chat models, allowing developers to explore different conversational AI options.
  • Google Vertex AI: Google's Vertex AI is known for its robust AI solutions, and integrating it with LangChain's chat models brings a wealth of AI expertise to your projects.
  • Jina AI: Jina AI's integration provides an open-source, cloud-native solution for developing and deploying conversational AI applications.

Example Usage

Let's take a glimpse into how LangChain's ChatOpenAI can be used to facilitate a conversation:

secret_key = "Your OpenAI API Key"

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat = ChatOpenAI(openai_api_key=secret_key)

response = chat([HumanMessage(content="Hello!")])

print(response.content)
# Output: "Hi there! How can I help you today?"        


Key Benefits of Chat Models

Chat models offer several key advantages, making them an indispensable tool for developers and businesses:

1. Consistent Interface

Chat models provide a consistent interface across various providers, simplifying the development and integration process. This ensures a unified user experience, regardless of the underlying AI model.

2. Support for Advanced Features

They support advanced features like asynchronous communication, real-time streaming, and batch processing, catering to diverse application needs.

3. Context Handling

With their ability to manage conversation context, chat models deliver more meaningful and context-aware responses, enhancing the user experience.

4. Built-in Integrations

LangChain's chat models offer built-in integrations with multiple providers, giving developers the flexibility to choose the best AI model for their specific use cases.

In conclusion, chat models represent a significant leap forward in the field of conversational AI. Their ability to comprehend and generate human-like responses in a conversational context is transforming how we interact with AI-powered systems. With LangChain's ChatOpenAI and similar models, developers can harness the potential of these technologies to create more engaging and intelligent applications. As the conversational AI landscape continues to evolve, chat models will undoubtedly play a pivotal role in shaping the future of human-machine interactions.



