Azure AI Engineer Associate Guide
Aritra Ghosh
Founder at Vidyutva | EV | Solutions Architect | Azure & AI Expert | Ex-Infosys | Passionate about innovating for a sustainable future in Electric Vehicle infrastructure.
The Azure AI Engineer Associate exam measures the skills and knowledge required to design and implement AI solutions that leverage Azure Cognitive Services and Azure Cognitive Search. The exam covers topics including the Speech service, video analysis, language understanding, and the Bot Framework.
This guide has been created to help candidates prepare for the Azure AI Engineer Associate renewal exam. It provides a collection of sample questions, answers, and explanations for each of the exam's domains, which have been grouped into the following categories:
Group 1: Create speech-enabled apps with the Speech service
Group 2: Translate speech with the Speech service
Group 3: Analyze video
Group 4: Build a Language Understanding model
Group 5: Publish and use a Language Understanding app
Group 6: Extract insights from text with the Language service
Group 7: Create a bot with the Bot Framework SDK
Group 8: Create a Bot with the Bot Framework Composer
Group 9: Create an Azure Cognitive Search solution
Group 10: Create a custom skill for Azure Cognitive Search
Group 11: Create a knowledge store with Azure Cognitive Search
Below, we walk through each group in detail and the kinds of questions you might face.
Group 1: Create speech-enabled apps with the Speech service
1. What is the Speech service in Azure?
Answer: The Speech service is an Azure service that provides speech-to-text and text-to-speech capabilities.
Explanation: The Speech service provides two primary functions: speech-to-text and text-to-speech. With speech-to-text, the service transcribes audio input into text. With text-to-speech, the service synthesizes natural-sounding speech from text input.
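A minimal sketch of both directions with the Speech SDK for Python, assuming the azure-cognitiveservices-speech package and placeholder key/region values:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- replace with your Speech resource's key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Speech-to-text: transcribe one utterance from the default microphone.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print("Recognized:", result.text)

# Text-to-speech: synthesize natural-sounding speech to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("Hello from the Speech service.").get()
```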
2. What is a Speech resource in Azure?
Answer: A Speech resource is an Azure resource that provides access to the Speech service.
Explanation: A Speech resource is an Azure resource that can be provisioned through the Azure portal or programmatically through the Azure REST API. The Speech resource provides access to the Speech service and is required to use the service.
3. How can you create a Speech resource in Azure?
Answer: You can create a Speech resource in Azure by provisioning it through the Azure portal or programmatically through the Azure REST API.
Explanation: To create a Speech resource through the Azure portal, search for "Speech" in the marketplace, select "Create", and specify a name, subscription, resource group, region, and pricing tier for the resource. To create a Speech resource programmatically, send a PUT request for a Microsoft.CognitiveServices account to the Azure Resource Manager REST API.
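A sketch of the programmatic path, assuming the azure-identity and requests packages; the api-version and body shape below are assumptions to verify against the current Azure Resource Manager reference:

```python
import requests
from azure.identity import DefaultAzureCredential

# Hypothetical identifiers -- substitute your own subscription, group, and name.
sub, rg, name = "<subscription-id>", "<resource-group>", "<speech-resource-name>"
url = (f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
       f"/providers/Microsoft.CognitiveServices/accounts/{name}")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
body = {
    "kind": "SpeechServices",   # resource kind for the Speech service
    "sku": {"name": "S0"},      # pricing tier
    "location": "eastus",
    "properties": {},
}
# ARM creates or updates resources with PUT; the api-version is an assumption.
resp = requests.put(url, json=body, params={"api-version": "2023-05-01"},
                    headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.json())
```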
Group 2: Translate speech with the Speech service
1. What is the Translator service in Azure?
Answer: The Translator service is an Azure service that provides real-time language translation capabilities.
Explanation: The Translator service provides real-time translation for both text and speech scenarios. It can translate text input into multiple target languages at once and, in combination with the Speech service, can transcribe speech input and translate it into another language.
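For text translation, the Translator REST API can be called directly. A minimal sketch using requests, with placeholder key and region values:

```python
import requests
import uuid

# Placeholder key/region for a Translator resource.
key, region = "<your-key>", "<your-region>"
endpoint = "https://api.cognitive.microsofttranslator.com/translate"

resp = requests.post(
    endpoint,
    # Translate from English into French and German in a single call.
    params={"api-version": "3.0", "from": "en", "to": ["fr", "de"]},
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),
    },
    json=[{"text": "Hello, how are you?"}],
)
for item in resp.json():
    for translation in item["translations"]:
        print(translation["to"], "->", translation["text"])
```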
2. How can you create a Translator resource in Azure?
Answer: You can create a Translator resource in Azure by provisioning it through the Azure portal or programmatically through the Azure REST API.
Explanation: To create a Translator resource through the Azure portal, search for "Translator" in the marketplace, select "Create", and specify a name, subscription, resource group, region, and pricing tier for the resource. To create the resource programmatically, send a PUT request to the Microsoft.CognitiveServices resource provider, as in the Speech example above, using the resource kind "TextTranslation".
3. What is the Translator Speech API in Azure?
Answer: The Translator Speech API was an Azure API that provided real-time speech translation; its capabilities are now part of the Speech service.
Explanation: The standalone Translator Speech API has been retired, and real-time speech translation is now provided through the Speech service's speech translation feature and the Speech SDK. The service can transcribe speech input into text, translate the text into another language, and synthesize natural-sounding speech in the target language.
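A minimal sketch of that pipeline using the Speech SDK for Python, with placeholder key/region values and one utterance taken from the default microphone:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials for a Speech resource.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<your-key>", region="<your-region>")
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("fr")

# Recognize one utterance from the microphone and translate it to French.
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config)
result = recognizer.recognize_once()
print("Heard:     ", result.text)
print("Translated:", result.translations["fr"])
```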
Group 3: Analyze video
1. What is the Video Indexer service in Azure?
Answer: The Video Indexer service is an Azure service that provides intelligent video analysis capabilities.
Explanation: The Video Indexer service provides intelligent video analysis capabilities that can automatically extract metadata from video content. The service can detect scene changes, identify objects and people, transcribe speech, and perform face and emotion recognition.
2. What is the Video Indexer API in Azure?
Answer: The Video Indexer API is an Azure API that provides programmatic access to the Video Indexer service.
Explanation: The Video Indexer API provides programmatic access to the Video Indexer service. The API can be used to upload videos for analysis, retrieve video metadata, and search for specific video content.
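A sketch of the two typical first calls, exchanging the key for an access token and uploading a video, with placeholder account details; verify the endpoint paths against the current Video Indexer REST reference:

```python
import requests

# Placeholder account details for a Video Indexer (trial or ARM) account.
location, account_id, api_key = "trial", "<account-id>", "<api-key>"

# 1. Exchange the subscription key for a short-lived access token.
token = requests.get(
    f"https://api.videoindexer.ai/Auth/{location}/Accounts/{account_id}/AccessToken",
    params={"allowEdit": "true"},
    headers={"Ocp-Apim-Subscription-Key": api_key},
).json()

# 2. Submit a publicly reachable video URL for indexing.
upload = requests.post(
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos",
    params={"accessToken": token, "name": "demo",
            "videoUrl": "https://example.com/demo.mp4"},
).json()
print("Video id:", upload["id"])
```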
3. What is the Azure Media Services v3 API in Azure?
Answer: The Azure Media Services v3 API is an Azure API that provides programmatic access to Azure Media Services.
Explanation: Azure Media Services is an Azure service that provides cloud-based video processing, encoding, and delivery capabilities. The v3 API is the current REST API (with matching SDKs) for managing Media Services entities such as assets, transforms, jobs, and streaming endpoints.
Group 4: Build a Language Understanding model
1. You need to build a Language Understanding model. What is the first step in building the model?
Answer: The first step in building a Language Understanding model is to define the intents and entities that the model needs to recognize.
Explanation: To build a Language Understanding model, you need to first define the intents and entities that the model needs to recognize. An intent represents a user goal or action, and entities represent important pieces of information related to the intent.
2. You are building a Language Understanding model. What is the purpose of the example utterances?
Answer: The purpose of the example utterances is to train the Language Understanding model to recognize user intents and entities.
Explanation: Example utterances are used to train the Language Understanding model to recognize user intents and entities. The more example utterances you provide, the better the model will be at recognizing different variations of user inputs.
3. You are building a Language Understanding model. What is an entity in the context of a Language Understanding model?
Answer: In the context of a Language Understanding model, an entity is a piece of information that is relevant to an intent.
Explanation: In the context of a Language Understanding model, an entity is a piece of information that is relevant to an intent. For example, if the intent is to book a flight, the entities might include the departure airport, destination airport, and date of travel.
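To make the relationship between intents, entities, and example utterances concrete, here is a purely illustrative sketch of a flight-booking model's schema; this is plain data for explanation, not the authoring API's actual JSON format:

```python
# Illustrative only -- a conceptual sketch of a flight-booking model's schema.
book_flight_model = {
    "intents": ["BookFlight", "CancelFlight", "None"],
    "entities": ["DepartureAirport", "DestinationAirport", "TravelDate"],
    "example_utterances": [
        {
            "text": "book a flight from JFK to LHR on Friday",
            "intent": "BookFlight",
            "entity_labels": [
                {"entity": "DepartureAirport", "span": "JFK"},
                {"entity": "DestinationAirport", "span": "LHR"},
                {"entity": "TravelDate", "span": "Friday"},
            ],
        },
        # More varied phrasings of the same intent improve recognition.
        {
            "text": "I need to fly to Paris tomorrow",
            "intent": "BookFlight",
            "entity_labels": [
                {"entity": "DestinationAirport", "span": "Paris"},
                {"entity": "TravelDate", "span": "tomorrow"},
            ],
        },
    ],
}
```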
Group 5: Publish and use a Language Understanding app
1. You have built a Language Understanding model. What is the next step in publishing the model as a Language Understanding app?
Answer: The next step is to train the model and then publish the app to a prediction endpoint.
Explanation: After you have built a Language Understanding model, you train it and then publish it to an endpoint (a production or staging slot). Publishing enables the app to receive user input and return the appropriate response based on the model's understanding of the user's intent.
2. You have created a Language Understanding app. How do you use the app to recognize intents and entities in user input?
Answer: To use a Language Understanding app to recognize intents and entities in user input, you need to send the user input to the app's endpoint and receive the app's response.
Explanation: To use a Language Understanding app to recognize intents and entities in user input, you need to send the user input to the app's endpoint and receive the app's response. The app's response will include the recognized intents and entities, which can then be used to determine the appropriate response to return to the user.
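A minimal sketch of querying a published app's prediction endpoint with the v3 REST API; the host name varies by resource (regional or custom subdomain), so the endpoint below is a placeholder:

```python
import requests

# Placeholder endpoint, app id, and key for a published LUIS app.
endpoint = "https://<resource>.cognitiveservices.azure.com"
app_id, key = "<app-id>", "<your-key>"

resp = requests.get(
    f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
    params={
        "query": "book a flight to London on Friday",
        "subscription-key": key,
        "show-all-intents": "false",
    },
).json()

# The response carries the recognized intents and entities.
prediction = resp["prediction"]
print("Top intent:", prediction["topIntent"])
print("Entities:  ", prediction["entities"])
```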
Group 6: Extract insights from text with the Language service
1. You need to extract key phrases from a text document. Which Azure service should you use?
Answer: You should use the Language service to extract key phrases from a text document.
Explanation: The Language service provides a key phrase extraction API that can be used to extract key phrases from a text document. The API uses machine learning to identify the most relevant phrases in the document.
2. You have a large set of text documents that you need to analyze for sentiment. Which Azure service should you use?
Answer: You should use the Text Analytics service to analyze the sentiment of a large set of text documents.
Explanation: The Text Analytics service provides a sentiment analysis API that can be used to analyze the sentiment of a large set of text documents. The API uses machine learning to identify the sentiment of each document as positive, negative, or neutral.
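Both operations are exposed by the azure-ai-textanalytics client library. A minimal sketch with placeholder endpoint and key:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint/key for a Language (Text Analytics) resource.
client = TextAnalyticsClient(
    endpoint="https://<resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

docs = ["The hotel was lovely, but check-in took far too long."]

# Key phrase extraction: the main topics in each document.
for doc in client.extract_key_phrases(docs):
    print("Key phrases:", doc.key_phrases)

# Sentiment analysis: positive, negative, neutral, or mixed, with scores.
for doc in client.analyze_sentiment(docs):
    print("Sentiment:", doc.sentiment, doc.confidence_scores)
```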
3. You have a language understanding model for a customer service chatbot. Which service should you use to determine the sentiment of a customer's message?
Answer: Text Analytics
Explanation: Text Analytics is an Azure service that allows you to determine the sentiment of a customer's message, among other things. It is commonly used in language understanding models for chatbots to help them understand and respond appropriately to customer messages.
4. You are building a language understanding model to identify entities in customer support tickets. Which Azure service should you use to extract entities from text?
Answer: Language Understanding (LUIS)
Explanation: Language Understanding (LUIS) is an Azure service that enables you to build natural language understanding into your apps. It uses machine learning to extract entities from text and understand user intentions. In this case, you would use LUIS to extract entities from customer support tickets.
5. You have an application that needs to extract key phrases from text. Which Azure service should you use?
Answer: Text Analytics
Explanation: Text Analytics is an Azure service that can extract key phrases from text. It is commonly used in language understanding models to help identify the main topics or themes in customer messages or other forms of text.
Group 7: Create a bot with the Bot Framework SDK
1. Question: Which language can you use to build a bot with the Bot Framework SDK?
Answer: C# or Node.js
Explanation: The Bot Framework SDK supports building bots using either C# or Node.js. Both languages are widely used in web development and have strong communities with lots of resources available.
2. Question: What is the primary purpose of the Bot Framework SDK?
Answer: To provide tools and libraries for building conversational bots.
Explanation: The Bot Framework SDK is a set of tools and libraries that developers can use to build conversational bots. It includes features such as natural language processing, dialogs, and messaging APIs to make it easier to create bots that can carry on complex conversations with users.
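For example, a minimal echo bot in Python, assuming the botbuilder-core package (the adapter and hosting wiring are omitted):

```python
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext

class EchoBot(ActivityHandler):
    """A minimal bot: echo each message and greet new members."""

    async def on_message_activity(self, turn_context: TurnContext):
        # Echo the user's message back to the conversation.
        await turn_context.send_activity(
            MessageFactory.text(f"You said: {turn_context.activity.text}"))

    async def on_members_added_activity(self, members_added,
                                        turn_context: TurnContext):
        # Greet everyone who joins, except the bot itself.
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hello and welcome!")
```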
3. Question: Which Azure service provides a web chat control that can be embedded into a website to allow users to chat with a bot?
Answer: Bot Service
Explanation: The Bot Service is an Azure service that provides a web chat control that can be embedded into a website to allow users to chat with a bot. This control supports a wide range of features, including rich cards, buttons, and other elements that can be used to create engaging conversations with users.
Group 8: Create a Bot with the Bot Framework Composer
1. Question: What is the Bot Framework Composer?
Answer: A visual authoring tool for building conversational bots.
Explanation: The Bot Framework Composer is a visual authoring tool for building conversational bots. It provides a visual interface for building dialogs, testing bots, and integrating them with other services. It can be used with the Bot Framework SDK to create bots using either C# or Node.js.
2. Question: What types of conversational interfaces can you build with the Bot Framework Composer?
Answer: Text-based bots, voice-based bots, and bots with a combination of text and voice.
Explanation: The Bot Framework Composer can be used to create a wide range of conversational interfaces, including text-based bots, voice-based bots, and bots with a combination of text and voice. It provides a visual interface for building dialogs and can be used to integrate with a variety of services, including language understanding models and custom APIs.
3. Question: Which Azure service provides hosting and deployment capabilities for bots built with the Bot Framework Composer?
Answer: Bot Service
Explanation: The Bot Service is an Azure service that provides hosting and deployment capabilities for bots built with the Bot Framework Composer. It supports a variety of channels, including web chat, Facebook Messenger, and Skype, and provides features such as telemetry and logging.
Group 9: Create an Azure Cognitive Search solution
1. Question: You are planning to build an Azure Cognitive Search solution that will search documents. You need to create a skillset that can extract entities from text fields. Which skill should you use?
Answer: The entity recognition skill should be used to extract entities from text fields.
Explanation: The entity recognition skill is used to extract named entities from unstructured text. It can identify people, places, organizations, dates, and more. This skill can be used to enhance search results by allowing users to filter and refine their searches based on specific entities.
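A sketch of how such a skill appears inside a skillset definition, written here as the REST payload in Python dict form; the targetName values are placeholders for your own index fields:

```python
# A skillset fragment (the REST payload, expressed as a Python dict).
entity_skill = {
    "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
    "context": "/document",
    # Limit extraction to the categories the solution cares about.
    "categories": ["Person", "Location", "Organization", "DateTime"],
    "inputs": [
        {"name": "text", "source": "/document/content"},
    ],
    # targetName values below are placeholders for your own field names.
    "outputs": [
        {"name": "persons", "targetName": "people"},
        {"name": "locations", "targetName": "places"},
        {"name": "organizations", "targetName": "orgs"},
    ],
}
```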
2. Question: You are building an Azure Cognitive Search solution for a dataset that contains the following fields: Date, Title, Summary, Category. You need to ensure that users can filter based on a prespecified set of categories and can order the results by category and date. Which two properties should you use?
Answer: The filterable and sortable properties should be used.
Explanation: The filterable property allows users to filter search results based on a specific field. In this case, the category field should be marked as filterable to allow users to filter based on a preset list of categories. The sortable property allows users to sort search results based on a specific field. In this case, both the category and date fields should be marked as sortable to allow users to order the results by category and date.
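A minimal sketch of such an index using the azure-search-documents client library, with placeholder service endpoint and admin key:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex, SimpleField, SearchableField, SearchFieldDataType)

# Placeholder service endpoint and admin key.
client = SearchIndexClient(
    endpoint="https://<service>.search.windows.net",
    credential=AzureKeyCredential("<admin-key>"))

index = SearchIndex(
    name="articles",
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SimpleField(name="Date", type=SearchFieldDataType.DateTimeOffset,
                    sortable=True),
        SearchableField(name="Title", type=SearchFieldDataType.String),
        SearchableField(name="Summary", type=SearchFieldDataType.String),
        # Filterable for the preset category list; sortable for ordering.
        SimpleField(name="Category", type=SearchFieldDataType.String,
                    filterable=True, sortable=True),
    ],
)
client.create_index(index)
```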
3. Question: You have an enrichment pipeline that uses Azure Cognitive Search for images. You need to persist the normalized images. Which three strings should you include in the knowledge store definition?
Answer: The storageConnectionString, storageContainer, and generatedKeyName strings should be included in the knowledge store definition.
Explanation: The storageConnectionString is the connection string of the Azure Storage account in which the normalized images will be persisted. The storageContainer is the name of the blob container that will hold the images. The generatedKeyName is the name given to the unique key that Azure Cognitive Search generates for each projected item.
4. You have an Azure Cognitive Search solution. You have a custom skillset that receives a complex set of fields in a response. What should you use to simplify and restructure the fields?
Answer: A shaper skill should be used to simplify and restructure the fields.
Explanation: A shaper skill can be used to transform and restructure data received from another skill. It can be used to simplify complex data structures and extract only the relevant information for the search index.
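A sketch of a Shaper skill definition (REST payload as a Python dict); the input source paths are placeholders for the outputs of upstream skills:

```python
# A Shaper skill definition (the REST payload, expressed as a Python dict).
shaper_skill = {
    "@odata.type": "#Microsoft.Skills.Util.ShaperSkill",
    "context": "/document",
    # Source paths are placeholders for your upstream skill outputs.
    "inputs": [
        {"name": "title", "source": "/document/Title"},
        {"name": "keyPhrases", "source": "/document/content/keyphrases"},
        {"name": "sentiment", "source": "/document/content/sentiment"},
    ],
    # The Shaper skill emits a single output named "output", which holds
    # the restructured object under the chosen targetName.
    "outputs": [
        {"name": "output", "targetName": "analyzedDocument"},
    ],
}
```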
Group 10: Create a custom skill for Azure Cognitive Search
1. You are building an Azure Cognitive Search solution that will search images and PDF documents. Users of the solution will search for text, captions, key phrases, identified objects, or named entities. Each day, you will need to index and enrich 5,000 images and 10,000 PDFs. Which type of resource should you create for the planned solution?
Answer: Azure Cognitive Services
Explanation: Enriching images and PDFs with built-in skills (OCR, image analysis, key phrase extraction, entity recognition) is billed against a Cognitive Services resource. For this volume of daily indexing, you create a multi-service Azure Cognitive Services resource in the same region as the search service and attach it to the skillset.
2. You have an Azure Cognitive Search solution. You have 2,000 customer service calls stored in the WAV format. You plan to receive approximately 100 more service calls each day for several years. You need to include the contents of the calls in searches by using a custom skill. Which Azure service should you use?
Answer: Speech
Explanation: To include the contents of the calls in searches by using a custom skill, you can use Azure Speech. Azure Speech includes speech-to-text and text-to-speech capabilities. You can use the speech-to-text capability to transcribe the WAV files into text, which can then be indexed and searched.
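A minimal sketch of what such a custom skill could look like, assuming Flask and the Speech SDK; the route, the transcribe_wav helper, and the audioPath/transcript field names are illustrative, while the "values" request/response shape is the custom Web API skill contract:

```python
import azure.cognitiveservices.speech as speechsdk
from flask import Flask, jsonify, request

app = Flask(__name__)
# Placeholder credentials for a Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")

def transcribe_wav(path: str) -> str:
    """Transcribe a single short WAV file with the Speech service."""
    audio_config = speechsdk.audio.AudioConfig(filename=path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                            audio_config=audio_config)
    return recognizer.recognize_once().text

@app.route("/api/transcribe", methods=["POST"])
def transcribe_skill():
    # Custom skills receive and return batches in the {"values": [...]} shape.
    results = []
    for record in request.get_json()["values"]:
        text = transcribe_wav(record["data"]["audioPath"])
        results.append({"recordId": record["recordId"],
                        "data": {"transcript": text},
                        "errors": None, "warnings": None})
    return jsonify({"values": results})
```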
Group 11: Create a knowledge store with Azure Cognitive Search
1. You have an enrichment pipeline that uses Azure Cognitive Search for images. You need to persist the normalized images. Which three strings should you include in the knowledge store definition?
Answer: storageConnectionString, storageContainer, and generatedKeyName
Explanation: To persist the normalized images, the knowledge store definition needs the connection string of the storage account (storageConnectionString), the blob container that will hold the image files (storageContainer), and the name to give each projected item's generated key (generatedKeyName).
2. You have an enrichment pipeline that uses Azure Cognitive Search. The pipeline ingests Azure Cosmos DB data and identifies key phrases. The application that generates the data must be able to retrieve key phrases and confidence by querying an API. What should you use to persist the data?
Answer: an object store (object projections)
Explanation: Object projections persist each enriched document as a JSON blob in Azure Storage. Because the key phrases and their confidence scores are stored as structured JSON objects, the application that generated the data can retrieve them by querying the Blob service API.
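A sketch of a knowledge store definition covering both questions above (the "knowledgeStore" section of a skillset payload, written as a Python dict); the container names and source paths are placeholders:

```python
# A knowledge store definition, expressed as a Python dict.
knowledge_store = {
    "storageConnectionString": "<storage-account-connection-string>",
    "projections": [
        {
            "tables": [],
            # Object projections persist enriched documents as JSON blobs
            # that an application can read back through the Blob service API.
            "objects": [
                {
                    "storageContainer": "key-phrases",
                    "generatedKeyName": "phraseKey",
                    "source": "/document/analyzedDocument",
                }
            ],
            # File projections persist binary content such as normalized images.
            "files": [
                {
                    "storageContainer": "normalized-images",
                    "generatedKeyName": "imageKey",
                    "source": "/document/normalized_images/*",
                }
            ],
        }
    ],
}
```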