Over the past year, there has been a significant increase in the development and use of Large Language Models (LLMs) and related embedding models in the field of Retrieval Augmented Generation (RAG). This area combines LLMs, embeddings, vector databases, and agent chains (automation agents) to build advanced applications. In a previous blog post, we discussed building a RAG application using Qdrant and Huggingface. In this article, we focus on developing a similar application using the Amazon Bedrock platform, with a specific emphasis on "abstractive models" designed for text generation.