Course: Learn Databricks GenAI


Use Vector Search index result

- [Instructor] Now that we've seen how to generate embeddings and query against them, how does that fit into building an augmented LLM application? Well, let's return to our documentation and learn a little bit more. So in the documentation here, we talk about RAG agents. A RAG agent is the part of your RAG app that enhances the capabilities of your LLM by integrating external data for retrieval: it processes user queries, retrieves the relevant data from the vector database, and then passes that data to the LLM. The step that is, again, often underestimated is how you take this retrieved embedding data and get it into a format you can pass to the LLM. Of course, when we communicate with an LLM, we're using some sort of prompt, and we've done our prompt engineering, hopefully. Well, there are a number of open-source libraries, and LangChain is probably the one I use the most here. So tools like LangChain or Pyfunc will link these steps by connecting the inputs and outputs. Basically…
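The retrieve-then-prompt flow described above can be sketched in plain Python. This is a minimal illustration only, not the course's actual notebook code: `embed`, `vector_search`, and the `llm` callable are hypothetical stand-ins for your embedding model, Databricks Vector Search index, and LLM endpoint, and the toy similarity math is just a placeholder so the pipeline runs end to end.

```python
# Hedged sketch of a RAG agent's query path:
# 1) embed the user question, 2) retrieve similar docs, 3) format a prompt, 4) call the LLM.
# All names here (embed, vector_search, build_prompt, rag_answer) are illustrative
# assumptions, not a real Databricks or LangChain API.

def embed(text: str) -> list[float]:
    # Stand-in embedding: real code would call an embedding model endpoint.
    return [float(ord(c) % 7) for c in text[:8]]

def vector_search(query_vector: list[float], docs: list[str], k: int = 2) -> list[str]:
    # Stand-in retrieval: rank documents by a toy distance to the query vector.
    def score(doc: str) -> float:
        v = embed(doc)
        return -sum(abs(a - b) for a, b in zip(query_vector, v))
    return sorted(docs, key=score, reverse=True)[:k]

def build_prompt(question: str, retrieved_docs: list[str]) -> str:
    # The often-underestimated step: format retrieved chunks into the LLM prompt.
    context = "\n".join(f"- {d}" for d in retrieved_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def rag_answer(question: str, docs: list[str], llm) -> str:
    # Chain the steps by connecting each step's output to the next one's input,
    # which is what tools like LangChain automate.
    query_vector = embed(question)
    hits = vector_search(query_vector, docs)
    return llm(build_prompt(question, hits))

docs = ["Vector Search indexes embeddings.", "RAG agents retrieve relevant data."]
captured = {}
answer = rag_answer(
    "What does a RAG agent do?",
    docs,
    llm=lambda prompt: captured.setdefault("prompt", prompt) and "stub answer",
)
```

In a real application, a framework like LangChain replaces the hand-wired `rag_answer` function: you declare the retriever and the prompt template, and the library connects the inputs and outputs for you.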
