Azure AI Studio - Prompt Flow & RAG Copilot

This article focuses on Azure AI Studio, its Prompt Flow capability, and building a RAG-based copilot.

In the rapidly advancing field of artificial intelligence (AI), developers are often tasked with creating complex AI solutions that integrate machine learning models, AI services, prompt engineering, and custom code. Microsoft Azure has been instrumental in this area, providing a variety of services for building AI solutions. However, the challenge has been managing multiple tools and web portals for a single project.

Azure AI Studio is a breakthrough in this domain. It consolidates the features of Azure Machine Learning, Azure OpenAI, and other Azure AI services into a single workspace. This collaborative platform enables developers to work in tandem with data scientists and other professionals to construct AI solutions.

This article provides an overview of Azure AI Studio and its application in creating and managing AI development projects.


Azure AI Studio

You need an Azure AI Hub in your Azure subscription to host projects.

mslearn-ai-studio (microsoftlearning.github.io)

https://ai.azure.com/

Azure AI Studio is a one-stop-shop for AI development, merging various Azure AI services into a single platform. It combines the model catalog and Prompt Flow from Azure Machine Learning with the generative AI capabilities of the Azure OpenAI service, and integrates with Azure AI Services for additional AI functionality.

It provides collaborative workspaces, called Azure AI Hubs, where data scientists and developers can work together. It also supports project creation, scalable compute, integration with data sources and other cloud services, web-based code development environments, and automation libraries.

With Azure AI Studio, teams can efficiently work on AI projects, deploy models, test generative AI models, integrate data for prompt engineering, define workflows, and integrate content security filters. It’s a powerful tool for expanding AI solutions with multiple functions.

How Does It Work?

An AI Hub serves as a collaborative workspace for the development and management of AI solutions. To utilize the features and capabilities of AI Studio for solution development, at least one Azure AI Hub is required.

Hub Overview

An Azure AI Hub can host one or more projects. Each project encapsulates the tools and resources used to create a specific AI solution. For instance, you can create a project to facilitate collaboration between data scientists and developers in building a custom Copilot business application or process.

An Azure AI Hub is the cornerstone for AI development projects on Azure, allowing you to define shared resources that can be used across multiple projects. With AI Studio, you can manage members, compute instances, connections to resources, and policies for behavior management.

Project Overview

All AI development in Azure AI Studio takes place within a project. You can create a new project and use it to:

  • Deploy large language models (LLMs)
  • Test models
  • Add your own data to expand prompts
  • Define flows that combine models, prompts, and custom code
  • Evaluate model responses to prompts
  • Manage indices and datasets for custom data
  • Define content filters to avoid potentially harmful responses
  • Use Visual Studio Code in the browser to create custom code
  • Deploy solutions as web apps and containerized services

You can use Azure AI Studio to create an Azure AI Hub, or you can create a hub while creating a new project. This creates an AI Hub resource in your Azure subscription in the resource group you specify, providing a workspace for collaborative AI development.

In addition to the central AI Hub resource, additional Azure resources are created to provide supporting services. These include a storage account, a key vault, a container registry, an Application Insights resource, and an Azure OpenAI Service resource.

Azure AI Studio serves as an integration point for other AI services, such as speech, language, and vision. Adding these services extends your solution with even more capabilities.

When to use Azure AI Studio

Azure AI Studio is a comprehensive platform designed to empower developers and data scientists in creating bespoke copilots and advanced, market-ready, responsible generative AI applications. Here’s a succinct summary of its key features:

  • Project Management: Azure AI Studio serves as a unified hub for all your AI endeavors, facilitating resource management, team collaboration, and workflow optimization.
  • Generative AI Development: For those aiming to create applications capable of content generation or to construct their own prompt flow, Azure AI Studio’s generative AI functionalities are indispensable.
  • AI Model Exploration: Azure AI Studio’s model catalog allows for experimentation with a variety of AI models from leading providers such as OpenAI, Microsoft, and Hugging Face.
  • Retrieval Augmented Generation (RAG): Projects necessitating the amalgamation of retrieval and generation can benefit from Azure AI Studio’s RAG features, which enhance the quality and relevance of the generated content.
  • AI Model Evaluation: Azure AI Studio offers robust tools for the assessment and monitoring of your prompt flows and AI models, ensuring they meet the desired performance metrics.
  • Integration with Azure Services: For AI applications that need to integrate seamlessly with other Azure services, Azure AI Studio provides easy integration, making it a versatile choice for complex projects.
  • Responsible AI Development: Azure AI Studio underscores the ethical use of AI, offering guidance and tools to ensure your applications comply with ethical standards and best practices.


Prompt Flow

Prompt Flow in Azure AI Studio is a powerful tool for harnessing the capabilities of Large Language Models (LLMs). It’s a one-stop solution for managing, developing, and deploying LLM applications. Here’s a brief rundown:

  • Flow Authoring: Author executable flows that link LLMs, prompts, and custom Python code.
  • Built-in Tools: LLM, Prompt, and Python tools serve as the building blocks of each flow.
  • Prompt Variants: Create and compare prompt variants to find the wording that grounds responses best.
  • Evaluation: Assess flow quality and performance against larger datasets.
  • Deployment: Deploy flows to endpoints and integrate them with applications.
  • Collaboration: Develop flows together with your team inside an Azure AI Studio project.

Development lifecycle of a Large Language Model (LLM) application

The development lifecycle of a Large Language Model (LLM) application is a comprehensive process that includes several key stages:

  • Initialization: This is the stage where you define the use case and design the solution. For instance, if you’re developing an LLM application for classifying news articles, you need to define the output categories, understand the structure of a typical news article, and determine how the application will generate the desired output.
  • Experimentation: Once you have a clear understanding of your application’s requirements, you develop a flow and test it with a small dataset. This iterative process involves running the flow, evaluating its performance, and making necessary adjustments until you’re satisfied with the results.
  • Evaluation and Refinement: After successful experimentation, you evaluate the flow with a larger dataset. This allows you to assess how well the LLM application generalizes to new data and identify potential areas for optimization or refinement.
  • Production: Once your LLM application has proven to be robust and reliable, it’s ready for production. This involves optimizing the flow for efficiency and effectiveness, deploying the flow on an endpoint, and monitoring the performance of your solution by collecting usage data and end-user feedback.

Creating a Large Language Model (LLM) application with Prompt Flow involves understanding its core components:

  • Flow: A feature within Azure AI Studio that allows you to author executable workflows, often consisting of three parts: Inputs (data passed into the flow), Nodes (tools that perform data processing or task execution), and Outputs (data produced by the flow).
  • Tools: These are executable units with specific functions. Common tools include the LLM tool for custom prompt creation, Python tool for executing custom Python scripts, and Prompt tool for preparing prompts as strings for complex scenarios or integration with other tools.
  • Types of Flows: There are three types of flows you can create with Prompt Flow: Standard Flow for general LLM-based application development, Chat Flow designed for conversational applications, and Evaluation Flow focused on performance evaluation.

Once you understand how a flow is structured and what you can use it for, you can start creating one. This involves adding nodes (tools) to your flow, defining the expected inputs and outputs, and linking the nodes together, which lets you build flows that support LLM applications for a wide range of purposes.
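
For illustration, here is a minimal sketch of a custom Python tool node using the promptflow package's @tool decorator. The function name and logic are hypothetical; in a real flow, its input and output are wired up in the flow definition:

```python
# A minimal custom tool node for Prompt Flow; assumes the promptflow
# package is installed. The function name and logic are illustrative only.
from promptflow import tool

@tool
def classify_article(article_text: str) -> str:
    # A node receives flow inputs (or upstream node outputs) as arguments
    # and returns a value that downstream nodes or the flow output can use.
    word_count = len(article_text.split())
    return "long-form" if word_count > 500 else "short-form"
```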

Creating a Large Language Model (LLM) application with Prompt Flow involves two crucial steps: configuring connections and setting up runtimes.

  • Connections: These are secure links between Prompt Flow and external services, ensuring seamless and safe data communication. Connections securely store the endpoint, API key, or credentials that Prompt Flow needs to communicate with the external service. They automate API credential management and enable secure data transfer from various sources, which is crucial for maintaining data integrity and privacy across different environments (see the sketch after this list).

  • Runtimes: These provide the necessary compute resources to run your flow. Runtimes are a combination of a compute instance and an environment that specifies the necessary packages and libraries that need to be installed before running the flow. They offer a controlled environment where flows can be run and validated, ensuring that everything works as intended in a stable setting.
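
As a sketch, the local promptflow SDK can register a connection like this. All names, endpoints, and keys below are placeholders, and the import path can vary by SDK version; in Azure AI Studio you would typically create connections through the portal instead:

```python
# A minimal sketch using the promptflow local SDK; all values shown are
# placeholders, not real credentials.
from promptflow import PFClient  # in newer versions: from promptflow.client import PFClient
from promptflow.entities import AzureOpenAIConnection

pf = PFClient()

connection = AzureOpenAIConnection(
    name="my_azure_openai",  # hypothetical connection name
    api_key="<your-api-key>",
    api_base="https://<your-resource>.openai.azure.com/",
)
pf.connections.create_or_update(connection)  # stores the credentials securely
```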

Get started with prompt flow in Azure AI Studio

It's best to go through the whole thing yourself and test it:

mslearn-ai-studio (microsoftlearning.github.io)


RAG-based copilot solution

Language models, particularly when used in chat interfaces, offer an intuitive way to deliver coherent and impressive responses to user queries. However, a key challenge in implementing these models is ensuring “groundedness” - that is, making sure the model’s responses are rooted in factual information or a specific context.

Ungrounded Prompts and Responses: When a language model generates a response to a prompt, it bases its answer on the data it was trained on, which often consists of large amounts of uncontextualized text from the internet or other sources. While the resulting response may be grammatically coherent and logical, it may not be grounded in relevant, factual data. This can lead to uncontextualized or even inaccurate responses that may include invented information.


Grounded Prompts and Responses: To address this, you can ground the prompt with relevant, factual context from a data source. The prompt, along with this grounding data, can then be submitted to the language model to generate a contextualized, relevant, and accurate response. The data source can be any repository of relevant data. For instance, data from a product catalog database could be used to ground a prompt about product recommendations, ensuring the response includes details of actual products in the catalog.

Understanding Language Models and Copilots

Language models are great at generating engaging text, making them perfect for building copilots - chat-based applications that assist users. However, to ensure the language model provides factual and relevant information, a technique called Retrieval Augmented Generation (RAG) is used.

Retrieval Augmented Generation (RAG)

RAG is a process that retrieves relevant information for a user’s initial prompt. It involves three steps:

  1. Retrieve: Gather grounding data based on the user’s prompt.
  2. Augment: Enhance the prompt with the grounding data.
  3. Generate: Use a language model to create a grounded response.

This ensures the language model uses relevant information when responding, rather than just its training data.
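
To make the three steps concrete, here is a minimal, self-contained Python sketch with a toy in-memory "index" and a stubbed model call. The documents and logic are invented for illustration; a real copilot would use Azure AI Search for retrieval and an Azure OpenAI deployment for generation:

```python
# A toy end-to-end RAG pipeline: retrieve -> augment -> generate.

DOCUMENTS = [
    "The TrailWalker hiking boots are waterproof and cost $129.",
    "The SummitClimb tent sleeps four people and weighs 5 kg.",
]

def retrieve(query: str) -> list[str]:
    # Step 1: gather grounding data based on the user's prompt
    # (here: a naive keyword match instead of a real search index).
    words = query.lower().split()
    return [d for d in DOCUMENTS if any(w in d.lower() for w in words)]

def augment(query: str, passages: list[str]) -> str:
    # Step 2: enhance the prompt with the grounding data.
    context = "\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Step 3: a real flow would send the grounded prompt to a language
    # model here; this stub just echoes it.
    return f"[model response based on]\n{prompt}"

print(generate(augment("How much do the boots cost?", retrieve("boots cost"))))
```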

Grounding Data in Azure AI Project

Azure AI Studio allows you to build a custom copilot using your own data to ground prompts. It supports various data connections like Azure Blob Storage, Azure Data Lake Storage Gen2, and Microsoft OneLake, ensuring your copilot’s responses are grounded in reality and specific context.

You can also upload files or folders to the storage used by your AI Studio project.

Building a Grounded Copilot with Azure AI Studio

When creating a copilot that uses your own data to generate accurate responses, efficient data search is crucial. Azure AI Studio, integrated with Azure AI Search, allows you to retrieve relevant context in your chat flow.

Azure AI Search

Azure AI Search is a retriever you can include when building a language model application with Prompt Flow. It allows you to bring your own data, index it, and query the index to retrieve any needed information.

Using a Vector Index

While a text-based index improves search efficiency, a better data retrieval solution can often be achieved using a vector-based index. This index contains embeddings representing the text tokens in your data source.

Embeddings are special data representations that a search engine can use to easily find relevant information. Specifically, an embedding is a vector of floating-point numbers.

For instance, consider two documents:

  1. “The children played joyfully in the park.”
  2. “Kids happily ran around the playground.”

These documents contain semantically related text, even though the words used differ. By creating vector embeddings for the text in each document, the relationship between them can be calculated mathematically.

The similarity between two vectors is measured by the cosine of the angle between them, known as cosine similarity. This gives a measure of the semantic similarity between documents, or between a document and a query.
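
A small Python example of cosine similarity; the embedding values below are invented for illustration, while real embedding models produce vectors with hundreds or thousands of dimensions:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # cos(theta) = (a . b) / (|a| * |b|)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional embeddings with made-up values.
doc1 = np.array([0.91, 0.12, 0.40])  # "The children played joyfully in the park."
doc2 = np.array([0.88, 0.15, 0.43])  # "Kids happily ran around the playground."
print(cosine_similarity(doc1, doc2))  # close to 1.0 -> semantically similar
```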

By representing words and their meanings with vectors, you can extract relevant context from your data source, even when your data is stored in different formats (text or image) and languages.

To use vector search over your data, you need to create embeddings when building your search index. For this, you can use an Azure OpenAI embedding model available in Azure AI Studio.
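
As a sketch, this is how an embedding can be created with the Azure OpenAI Python SDK; the endpoint, key, API version, and deployment name are placeholders:

```python
# A sketch using the Azure OpenAI Python SDK (openai >= 1.0).
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.embeddings.create(
    model="text-embedding-ada-002",  # your embedding model deployment name
    input="The children played joyfully in the park.",
)
vector = response.data[0].embedding  # a list of floats (1536 dims for ada-002)
print(len(vector))
```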

Azure OpenAI Service embeddings - Azure OpenAI - embeddings and cosine similarity | Microsoft Learn

Creating a Search Index with Azure AI Search

In Azure AI Search, a search index is a way to organize your content to make it searchable. Think of it as a catalog in a library that contains relevant data about books, making any book easy to find.

The integration of Azure AI Search in Azure AI Studio simplifies the process of creating an index suitable for language models. You can add your data to Azure AI Studio and then use Azure AI Search to create an index using an embedding model. This index asset is stored in Azure AI Search and queried by Azure AI Studio when used in a chat flow.

Configuring Your Search Index

The configuration of your search index depends on your data and the context you want your language model to use. For instance, keyword search enables you to retrieve information that exactly matches the search query. Semantic search goes a step further by retrieving information that matches the meaning of the query instead of the exact keyword, using semantic models. The most advanced technique currently is vector search, which creates embeddings to represent your data. This technique allows for more nuanced and contextually relevant search results.

Vector search - Azure AI Search | Microsoft Learn

Searching

There are several methods to query information in an index:

  • Keyword Search: This method identifies relevant documents or passages based on specific keywords or terms provided as input.
  • Semantic Search: This approach retrieves documents or passages by understanding the meaning of the query and matching it with semantically related content, rather than relying solely on exact keyword matches.
  • Vector Search: This technique uses mathematical representations of text (vectors) to find similar documents or passages based on their semantic meaning or context.
  • Hybrid Search: This method combines any or all of the other search techniques. The queries are executed in parallel and the results are returned in a unified result set.

When you create a search index in Azure AI Studio, you’re guided to configure an index that is most suitable to use in combination with a language model. When your search results are used in a generative AI application, hybrid search provides the most accurate results.

Hybrid search combines keyword (full-text) search and vector search, with the optional addition of semantic ranking. When you create an index compatible with hybrid search, the retrieved information is precise when exact matches are available (using keywords) and still relevant when only conceptually similar information can be found (using vector search).
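
Here is a sketch of a hybrid query using the azure-search-documents Python SDK (version 11.4+). The endpoint, index name, field names, and the embed() helper are assumptions for illustration, not values from a real index:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="product-index",  # hypothetical index name
    credential=AzureKeyCredential("<your-query-key>"),
)

query_text = "waterproof hiking boots"
query_vector = embed(query_text)  # e.g. the embedding call shown earlier, wrapped in a helper

# Passing both search_text and vector_queries runs the keyword and vector
# searches in parallel and merges them into one ranked result set.
results = search_client.search(
    search_text=query_text,
    vector_queries=[VectorizedQuery(
        vector=query_vector,
        k_nearest_neighbors=3,
        fields="contentVector",  # hypothetical vector field in the index
    )],
    top=3,
)
for doc in results:
    print(doc["content"])  # hypothetical text field in the index
```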

Hybrid search - Azure AI Search | Microsoft Learn

Prompt Flow and Large Language Models (LLMs)

Prompt Flow is a development framework for defining flows that orchestrate interactions with an LLM. A flow begins with one or more inputs, usually a question or prompt entered by a user. The flow is then defined as a series of connected tools, each performing a specific operation on the inputs and other environmental variables. Finally, the flow has one or more outputs, typically to return the generated results from an LLM.

Using RAG in a Prompt Flow

The key to using the RAG pattern in a prompt flow is to use an Index Lookup tool to retrieve data from an index. This allows subsequent tools in the flow to use the results to augment the prompt used to generate output from an LLM.

Creating a Chat Flow

Prompt Flow provides various samples you can use as a starting point to create an application. When you want to combine RAG and a language model in your application, you can clone the "Multi-round Q&A on your data" sample. This sample contains the necessary elements to include RAG and a language model.

Modifying Query with History

The first step in the flow is a Large Language Model (LLM) node that takes the chat history and the user's last question and generates a new, standalone question that includes all the necessary information. This produces more succinct input for the rest of the flow; an illustrative template is sketched below.
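
The sample flow implements this step as a Jinja2 prompt template; the Python string below is an invented stand-in for it, not the sample's actual wording:

```python
# An illustrative stand-in for the question-rewriting prompt.
REWRITE_PROMPT = """Given the chat history and the user's latest question,
rewrite the question so that it can be understood without the history.

Chat history:
{chat_history}

Latest question: {question}

Standalone question:"""

def build_rewrite_prompt(chat_history: str, question: str) -> str:
    return REWRITE_PROMPT.format(chat_history=chat_history, question=question)
```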

Looking Up Relevant Information: You use the Index Lookup tool to query the search index you created with Azure AI Search, finding the relevant information from your data source.

Index lookup tool for flows in Azure Machine Learning - Azure Machine Learning | Microsoft Learn

Generating Prompt Context: The output of the Index Lookup tool is the retrieved context you want to use when generating a response to the user. You parse this output into a suitable format to be used in a prompt sent to a language model.
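
A sketch of such a parsing step as a Python tool node. The shape of the Index Lookup output is assumed here to be a list of dicts with a "text" field; check your tool's actual output format:

```python
from promptflow import tool

@tool
def generate_prompt_context(search_results: list) -> str:
    # Number the retrieved passages so the model can refer to its sources.
    chunks = []
    for i, item in enumerate(search_results, start=1):
        chunks.append(f"[Source {i}]\n{item.get('text', '')}")
    return "\n\n".join(chunks)
```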

Defining Prompt Variants: When constructing the prompt to send to your language model, you can use variants to represent different prompt contents. This helps ground the chatbot’s responses and explore which content provides the most groundedness.

Chatting with Context: Finally, you use an LLM node to send the prompt to a language model to generate a response using the relevant context retrieved from your data source. The response from this node is also the output of the entire flow.
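
Putting it together, the final call might look like this sketch with the Azure OpenAI SDK. Here, client is an AzureOpenAI client as in the embedding example above, context and question come from the previous steps, and the deployment name is a placeholder:

```python
# The retrieved context goes in the system message; the (rewritten)
# user question goes in the user message.
messages = [
    {"role": "system",
     "content": f"You are a helpful copilot. Answer using only this context:\n{context}"},
    {"role": "user", "content": question},
]

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your chat model deployment name
    messages=messages,
)
print(response.choices[0].message.content)
```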

After configuring the sample chat flow to use your indexed data and the language model of your choosing, you can deploy the flow and integrate it with an application to offer users a copilot experience.

Create a custom copilot that uses your own data

Launch the exercise and follow the instructions.

mslearn-ai-studio (microsoftlearning.github.io)


Thank you for taking the time to read this article on Azure AI Studio - Prompt Flow & RAG Copilot. I hope you found it informative and helpful. As we continue to explore the exciting world of AI, remember that the journey of learning is ongoing. Stay curious, keep asking questions, and never stop learning. Your engagement and feedback are greatly appreciated. Until next time, happy reading!


其他会员也浏览了