Introducing D-AI: Unleashing Creativity with Amazon Bedrock

Welcome to the heart of innovation, where Amazon Bedrock sets the stage for a symphony of creativity.

Our project is all about experiencing the incredible power of Generative AI. We're orchestrating AI models provided by AWS Bedrock with our custom Knowledge Base. But that's not all – we're also connecting Agents that can trigger Lambda functions to grab real-time info. And to top it all off, we've designed a user-friendly Python-based UI that resembles a chat interface, making it easy and enjoyable to interact with.

What is Amazon Bedrock?

AWS Bedrock is a groundbreaking platform designed to drive innovation in the rapidly evolving field of artificial intelligence. It serves as a powerhouse for creativity, providing developers with a suite of powerful tools and resources to unlock the full potential of Gen-AI. From Foundation Models to customized experiences, AWS Bedrock offers a comprehensive solution for crafting innovative solutions.

Let's dive deeper into the features and capabilities of this revolutionary platform:

  • The Core of AWS Bedrock: At its core, AWS Bedrock is built on the vision of empowering creators, developers, and businesses to bring their AI-integrated ideas to life, with a strong emphasis on security, privacy, and responsible AI practices, and without any worry about setting up infrastructure or configurations. AWS Bedrock lays the foundation for a new era of innovation.
  • Foundation Models and Customization ~ Creativity to Perfection: In AWS Bedrock, Foundation Models (FMs) from industry leaders like AI21 Labs, Anthropic, and Meta act as the base for innovation. These high-performing models allow seamless experimentation and fine-tuning to achieve personalized user experiences, emphasizing customization to perfection.
  • Model Orchestration ~ RAGs & Agents of Change: AWS Bedrock goes beyond traditional AI models with its integration of Retrieval Augmented Generation (RAG). By leveraging a custom knowledge base provided by the user, RAG enriches responses with proprietary insights, taking productivity to a new level and ensuring relevance and accuracy in the generated content. To empower RAG further, we have Agents that can retrieve data or perform actions in the real world with the help of custom code.
  • The Power of Choice ~ What Would You Like to Have? AWS Bedrock offers a diverse array of AI models, each with its own unique strengths and capabilities. Whether it's image processing, dialogue generation, or conversational AI, there's a model to suit every vision. With the ability to cherry-pick and fine-tune models, developers have the flexibility to curate solutions tailored to their specific generative AI applications.

Top Options Available in Bedrock

Getting Started with the Project:

Project Outline

  1. Getting Started with Bedrock: Understanding the basic usage of Bedrock.
  2. IAM Role Creation & Setting Up S3: Establishing IAM roles and configuring S3 for the custom knowledge base.
  3. Developing the Lambda Function: Creating Lambda functions to facilitate agent functionality.
  4. The Core ~ Orchestrating the Model: Integrating knowledge base and agent to orchestrate AI models effectively.
  5. Developing D-AI Python Chat App: Enhancing user experience by developing a user-friendly Python-based chat application.


The Overall Project Architecture

Getting Started with Bedrock:

To get started, open the AWS Console and navigate to the Amazon Bedrock service. Bedrock is available in only a few regions; here we will use the N. Virginia (us-east-1) region.

Amazon Bedrock

Before using any model, we need to gain access to it. To request access, click "Model Access" in the sidebar, and on the model access page click the "Manage Model Access" button to submit an access request for the required models.

Model Access List
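If you prefer to explore what's available programmatically, here is a minimal boto3 sketch (my own addition, not part of the original console walkthrough) that lists the foundation models offered in a region. It assumes AWS credentials are already configured, which we do later in this article.

import boto3

# List the foundation models visible in this region. Access still has to
# be granted per model from the "Model Access" page before invocation.
bedrock = boto3.client("bedrock", region_name="us-east-1")
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"], "-", model["providerName"])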


Once the model access request is complete, we can move to the chat playground using the sidebar menu.

Chat Playground

In the chat playground, we can select the model we requested earlier and then chat with it or tune its inference parameters.

Jurassic-2 Ultra Chat Playground

Example Chat:

Asking Capital of India

Now, if I ask it to give a punch line for a computer business, it will use the context related to Delhi, and you can witness the results:

Context is preserved

I didn't mention Delhi in the punch-line prompt, but since I had earlier asked for India's capital, the model took Delhi as the base context. Interesting, right?

Further down on the same page, we can see the model metrics, which also include real-time billing details.

Model Metrics


Similarly, we can request access to the image-based models and generate cool images. Here is a masterpiece by me:

It is a game of better prompts after all!


Great till now! Now let's move on to the next part.


IAM Role Creation & Setting Up S3:

To orchestrate models effectively, it's essential to use an IAM user when accessing S3. The steps are as follows:

Navigate to the IAM Dashboard

IAM Dashboard

Using the sidebar menu, select "Users" and add a new user.

Add a new user

Give the user a name, grant access to the AWS Management Console, and set a password.

Adding User

Once done, it's time to give powers to the new IAM user: click "Attach policies directly" and grant AdministratorAccess. [It's dangerous!]

Give Admin Access!

Review and create; then we should be able to see the IAM user details.

IAM User Details


Then, log in to the AWS Console using the IAM user.

Logged-In as an IAM user

From now on, we will do everything in the IAM user account itself.


Let's now set up S3 for the custom knowledge base and agents.

Navigate to the Amazon S3 Storage Page

S3 Page

Create a new bucket in the same region and give it a unique name.

Creating an S3 Bucket

Then upload the objects (files) to the S3 bucket: click the "Upload" button and proceed.

Upload Objects

Then click on "add files" and upload the custom Knowledge base and open API schema to it.

Adding Files

A brief note about the data: we have two datasets here. worldcities.csv is a collection of various cities around the world, with latitude and longitude info and many other details; I trimmed the dataset down because generating embeddings took too long with the larger version. The other dataset, DLabs_internal_data, is just a dummy dataset to test the LLM's capabilities. And we have one more important file, the OpenAPI schema, which will help the agent and Lambda communicate in JSON format.

Download Datasets from here.

Download OpenAPI Schema from here.
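If you prefer code over the console, a hedged boto3 sketch of the same upload might look like this; the bucket name and file names below are placeholders, so substitute your own.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# upload_file(local_path, bucket_name, object_key) - names are placeholders
s3.upload_file("worldcities.csv", "my-unique-bucket-name", "worldcities.csv")
s3.upload_file("dlabs_internal_data.csv", "my-unique-bucket-name", "dlabs_internal_data.csv")
s3.upload_file("openapi_schema.json", "my-unique-bucket-name", "openapi_schema.json")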


Great going! Now let's move on to the next part.


Developing the Lambda Function

To allow agents to connect to the real world, we will use AWS Lambda. The programming language will be Python, and we will write simple code to fetch the weather from the OpenWeatherMap API based on the latitude and longitude provided by the user.

OpenWeatherMap

One prerequisite: we need to generate an OpenWeatherMap API key.


Now, let's head to the Lambda functions page.

AWS Lambda

Start creating the Lambda function: give it a name, select the runtime as Python 3.x, and leave the architecture as x86_64.

AWS Python Lambda

Make a basic Lambda first; the code shown here is auto-generated.

Basic Lambda

Test and Deploy the function.

Test Output

Now, verify the run event in CloudWatch: click on "Monitor", and then go to CloudWatch.

Go to CloudWatch

Confirm the run event; we can see the Lambda was triggered two times.

Lambda Run Event

Download the Lambda Py Code from here.

Now, use that code to fetch the weather, and make sure to update the API key value.
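For reference, here is a minimal sketch of what such a handler could look like. The downloadable code above is the authoritative version; the parameter names (lat, lon) are assumptions based on this walkthrough rather than the exact schema, and the response envelope follows the format Bedrock agents expect from OpenAPI action groups.

import json
import urllib.request

API_KEY = "YOUR_OPENWEATHERMAP_API_KEY"  # replace with your own key

def lambda_handler(event, context):
    # Bedrock agents pass OpenAPI parameters as a list of name/value pairs
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    lat, lon = params["lat"], params["lon"]

    # Current-weather endpoint, metric units
    url = (
        "https://api.openweathermap.org/data/2.5/weather"
        f"?lat={lat}&lon={lon}&appid={API_KEY}&units=metric"
    )
    with urllib.request.urlopen(url) as resp:
        weather = json.load(resp)

    body = {
        "temperature_c": weather["main"]["temp"],
        "conditions": weather["weather"][0]["description"],
    }

    # Response envelope Bedrock agents expect from an OpenAPI action group
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }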


Great, now let's move towards the Core Part.


The Core ~ Orchestrating the Model

This is the most interesting part. First, let's start with the Dlabs dataset for the custom knowledge base.

Navigate to AWS Bedrock and, using the sidebar, go to the Knowledge base section.

Custom Knowledge Base

Click on "Create Knowledge Base", and make sure you are logged in as the IAM user.

Give it a name and leave everything else as it is.

Name of Knowledge base

Now, set up the data source: use the "Browse S3" option and load the Dlabs dataset.

Set Up Data Source

Then select the embeddings model. This is required because, speaking in layman's terms, an LLM can't directly understand raw text; it needs to be converted into number-based data (vectors).


Select the Embedding model

Select Titan G1 by Amazon; it produces embeddings with 1536 dimensions, which is quite high.
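To get a feel for what this model produces, here is a hedged sketch that invokes the Titan text-embedding model directly through the bedrock-runtime API; the knowledge base does this for us behind the scenes, so this is purely illustrative.

import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Titan G1 text embeddings; the response carries one 1536-dimensional vector
response = runtime.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "The capital of India is New Delhi"}),
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # 1536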

Once we have selected the embeddings generator, we need a vector DB to store the embeddings. We can create our own, or Bedrock can provision one using the OpenSearch Serverless service.

Vector DB

Review and create; it will then take a few minutes, depending on how big or small the provided dataset is.

Preparing the Vector DB

Once the DB is ready, click the "Sync" button to sync the embeddings and use the model with the custom info.

Sync the embeddings
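Under the hood, the "Sync" button starts an ingestion job on the data source. A rough programmatic equivalent with boto3 would be the sketch below; the IDs are placeholders taken from the knowledge base and data source detail pages.

import boto3

agent_client = boto3.client("bedrock-agent", region_name="us-east-1")

# Kicks off embedding generation and indexing for the data source
agent_client.start_ingestion_job(
    knowledgeBaseId="KB_ID",
    dataSourceId="DATA_SOURCE_ID",
)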

Then our data source is ready.

Ready!

The data source is a simple one: we have the employee ID, name, telephone number, pay per month, and the department each employee looks after.

Simple dataset

Now it is time to test; select the model.

Select Model

Select the model of your choice, and make sure it's pocket-friendly.

Select Model
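The console test pane corresponds roughly to the RetrieveAndGenerate API, so the same questions can also be asked from code. Here is a hedged sketch; the knowledge base ID and the model ARN are placeholders you would fill in with your own values.

import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Retrieve relevant chunks from the knowledge base, then generate an answer
resp = runtime.retrieve_and_generate(
    input={"text": "What are the names of all employees at Dlabs?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB_ID",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
)
print(resp["output"]["text"])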

Let's ask some questions:

A. I asked for the names of all the employees at Dlabs, and it answered perfectly.

Question 1

B. I asked it to give the monthly salary of Ddhruv and then also to calculate the yearly salary, and both worked.

Few more questions

Now, one incorrect answer I received:

C. I asked it to calculate the overall salary that the company pays each month. It should be 40 lakhs as per the data, but somehow the model says it is 41 lakhs. It's possible my prompt wasn't precise, or maybe it's just a digital hallucination.

Incorrect Response

Great so far! Now let's use the more detailed worldcities dataset: follow the process above to add it, and remove the Dlabs dataset.

World Cities Dataset

Now, let's ask some questions about it:

Works just fine

Great! Now let us move on to configuring the agent.

Remember the Lambda function we made earlier?

Python Lambda Function

Now, back on the Bedrock page, select "Agents" from the sidebar menu, provide the agent details (give the agent a name and a description), and leave everything else the same.

Configure Agent

Select the model and give the instructions for the agent. Make sure they are accurate, as the agent uses them to decide when it should be triggered.

Agent Prompt

Select the knowledge base from the drop-down.

Select the knowledge base from the drop-down

Then select the OpenAPI schema uploaded to S3 earlier.

OpenAPI Schema

Finally, click on Create Agent, then copy the Agent ARN.

Move back to our Lambda function and add a resource-based policy so that the Bedrock agent can invoke it. Select "AWS service"; as Bedrock is relatively new, choose the "Other" option, give the statement a unique ID, set the principal to bedrock.amazonaws.com, provide the copied agent ARN as the source, and set the action to Invoke Function.
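The same permission can also be attached with boto3. Here is a hedged sketch; the function name, statement ID, and agent ARN are placeholders.

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Resource-based policy statement letting the Bedrock agent invoke the function
lambda_client.add_permission(
    FunctionName="weather-fetcher",                                     # your Lambda's name
    StatementId="allow-bedrock-agent",                                  # any unique id
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    SourceArn="arn:aws:bedrock:us-east-1:123456789012:agent/AGENT_ID",  # the copied agent ARN
)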

Lambda Configurations

The resulting resource-based policy looks like this:

Resource-based Policy

Finally, we are all set, let's test it.

Able to fetch Weather


It is able to fetch the weather in real-time using the API.

Now we need to create an agent alias. Unfortunately, I forgot to take a screenshot, but it's easy: just click "Create Alias" under the deployment option, provide the alias name, and that's it.

Note: The OpenAPI schema and the Lambda return format are very important; if either is incorrect, the agent will start experiencing issues.


Finally, let's move to the last step: building the Python UI.


Developing D-AI Python Chat App

Now, it's time to connect to the orchestrated model and use it in real time via a custom UI.

D-AI Chat App


First, we need to download the AWS CLI. Once downloaded, install it, then reopen the terminal and type the following command:

aws --version        

The output should look like this:

AWS CLI

Great, AWS CLI is now installed.


Now, head to the AWS Console, click on the user name, and select "Security credentials".

Click Security Credentials


Then, in security credentials, scroll down to "Access keys" and generate a new one.

Generate Access Key

Note: We are creating a root access key, which can be dangerous if the key is not handled securely.

Now, move back to the AWS CLI and type the following command:

aws configure        

Provide the Key details and it should be ready to use.

AWS CLI - Key config

Then install the boto3 library using the command:

pip install boto3==1.34.59        

Now, start a new Jupyter notebook to obtain a few details; you can find the Jupyter notebook that I used here.

Now, import boto3 and initialize a client in the us-east-1 region with the service set to bedrock-agent:

import boto3

client = boto3.client('bedrock-agent', region_name="us-east-1")        

Then, get the list of agents using the code:

client.list_agents()  # get the agents list; I ran this again after deleting an agent to confirm its id is removed

This provides the list of available agents; from the output, we need to note the agent ID returned.

Agent List
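If you would rather pull just the IDs out of that response instead of reading the raw output, a small loop like this should work (a hedged sketch, reusing the client from above):

# Each entry in agentSummaries carries the agent's id, name, and status
for agent in client.list_agents()["agentSummaries"]:
    print(agent["agentId"], agent["agentName"])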

Now it is time to fetch the agent aliases; use the code:

client.list_agent_aliases(agentId='Agent Id from last step')        

From the output, copy the agentAliasId.

Copy the Alias ID


Great, now let's quickly build the D-AI Python app.

First, install Streamlit using the command:

pip install streamlit        

Then open the code editor of your choice and import the required libraries:

import boto3
import streamlit as st
from dataclasses import dataclass

Then initialize the boto3 client again in the same region, but this time with the service

bedrock-agent-runtime        

The code:

client = boto3.client('bedrock-agent-runtime', region_name="us-east-1")        

Now let's understand the part of the code that invokes the agent:


resp = client.invoke_agent(
    sessionId="Give a Unique ID for session",
    # refer to the Jupyter notebook for the agentAliasId
    agentAliasId='Agent-Alias-Id',
    enableTrace=False,
    endSession=False,
    # refer to the Jupyter notebook for the agentId
    agentId='Agent-Id',
    inputText=prompt,
)

# The response streams back as events; collect the text chunks
response_data = resp['completion']
ai_response = ""
for event in response_data:
    if 'chunk' in event:
        ai_response += event['chunk']['bytes'].decode('utf-8')

# Print the extracted text
print(ai_response)

So, here we need to use the invoke_agent function: provide a unique ID string for sessionId, give the agentId and agentAliasId copied in the previous steps, and pass the prompt as inputText.

Now, the agent's response comes back under a key named 'completion', which is an event stream (essentially an iterable). To get the output, we iterate over it with a for loop, decode the byte chunks, and after a little data cleaning the output is ready!

The logic of the Streamlit chat app is relatively simple and self-explanatory. We use Streamlit's session state so it presents the output as a chat; otherwise, Streamlit would clear the page after every prompt. The complete code is here, and a minimal sketch of the pattern follows below.
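Here is a minimal, hedged sketch of that session-based pattern using Streamlit's chat elements. The ask_agent function is a hypothetical stand-in for the invoke_agent call shown earlier; the full app from the download link is the real implementation.

import streamlit as st

def ask_agent(prompt: str) -> str:
    # Hypothetical stand-in: wire this to client.invoke_agent(...) as shown earlier
    return f"(agent response to: {prompt})"

st.title("D-AI Chat")

# Streamlit re-runs the whole script on every interaction, so the chat
# history must live in session_state or the page resets after each prompt
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# Accept a new prompt, show it, and append the agent's reply
if prompt := st.chat_input("Ask D-AI..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    reply = ask_agent(prompt)
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)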

The Outputs and Capabilities:

First chat, the simple one: I asked about the weather in Delhi and its coordinates. The coordinates were available in the dataset we provided earlier, and using the agent it was able to fetch the current weather.

Simple Chat

Now, let's make it a little more difficult. I asked about the weather in Jaipur, which is not in the dataset, so it asked me for the coordinates of Jaipur; once I provided them, it invoked the agent, and the temperature was right there. So powerful, right?

Temp of Jaipur


Finally, to make it complex, I asked about the weather in Shillong. Again, its coordinates were not in the dataset, but this time I provided the coordinates in shorthand, unlike in the previous prompt.

Temp of Shillong

And, to my surprise, it worked! So powerful. I checked the temperatures on Google, just to be sure, and they were correct.

Congratulations! We've successfully developed the D-AI application.


Disclaimer: This project involves costs associated with AWS cloud services and is not free. Users should be aware that usage of AWS resources, including but not limited to compute, storage, and data transfer, may result in charges to their AWS account. The actual cost may vary depending on usage patterns and resource configurations. The project creator (I, myself) incurred a cost of $9.48 during development. Users are responsible for monitoring their AWS usage and managing costs accordingly.

Cost and Billing

Download Jupyter Notebook from here.

Download the D-AI App from here.

GitHub Repo:


A Word of Thanks

Thank you to all the readers for taking the time to read this article. A special thanks to Vimal Daga sir for his invaluable guidance and education on this topic. Your support and mentorship have been truly appreciated.
