Azure OpenAI API Governance

Azure OpenAI API Governance

?

What is Generative AI?

Generative AI is a groundbreaking form of artificial intelligence that swiftly creates content in response to prompts. It offers enormous productivity benefits for individuals and organizations, and while it also presents very real challenges and risks, businesses are forging ahead, exploring how the technology can improve their internal workflows and enrich their products and services.

Generative AI is extremely important for several reasons:

1.? Innovation and Creativity: Generative AI is fundamentally reshaping our approach to innovation and creativity3. It goes beyond the traditional roles of AI, venturing into realms where it can autonomously generate novel content, from intricate designs to sophisticated code algorithms.

2.? Speed and Efficiency: Generative AI brings unprecedented speed and creativity to areas like design research and copy generation. It takes business process automation to a transformative new level, catalyzing a new era of efficiency in both the back and front offices.

3.? Product Development: Generative AI can lead to faster product development.

4.? Enhanced Customer Experience: It can enhance customer experience.

5.? Improved Employee Productivity: Generative AI can improve employee productivity.

6.? Deep Learning Models: Generative AI relies on sophisticated machine learning models called deep learning models. These models work by identifying and encoding the patterns and relationships in huge amounts of data, and then using that information to understand users' natural language requests or questions and respond with relevant new content.

?

Generative AI Providers

There are many companies focus on developing and providing generative AI technologies and make them available to other developers, businesses, and the general public through APIs or commercial licenses. They offer access to commercial or open-source foundation models such as Large Language Models (LLMs) and other types of generative algorithms.

There are several leading providers of Generative AI technologies. Here are some of them:

1.? OpenAI: OpenAI is a leading provider of Generative AI models and platforms. They have developed advanced AI models such as GPT-4, ChatGPT, DALL-E 3, and Sora. OpenAI leads in the share of the foundational model and platform vendors market with 39%.

2.? Microsoft: Microsoft is another major player in the Generative AI space. They offer products like Microsoft Copilot, Copilot for Microsoft 365, Microsoft Copilot Studio, and Microsoft Copilot in Bing. Microsoft holds a 30% market share in this sector.

3.? Amazon (AWS): Amazon Web Services (AWS) provides Generative AI services through products like Amazon Bedrock, Amazon Q, Amazon CodeWhisperer, and Amazon SageMaker. AWS has an 8% share of this market.

4.? Google (Alphabet): Google, under its parent company Alphabet, offers Generative AI solutions through products like Gemini, Vertex AI, and Gemini for Google Workspace.

5.? NVIDIA: NVIDIA is known for its AI capabilities, including NVIDIA AI, NVIDIA NeMo, NVIDIA BioNeMo, NVIDIA Picasso, and various chips and GPUs.

6.? Anthropic: Anthropic is a newer player in the Generative AI space, offering products like Claude 3 and Claude API.

7.? Cohere: Cohere offers a range of Generative AI products including Command, Embed, Chat, Generate, and Semantic Search.

?

What is Azure OpenAI Service?

Azure OpenAI Service is a product offered by Microsoft Azure that provides access to advanced language models such as GPT-4 and DALL·E. It's designed to help developers build applications with powerful AI capabilities.

Azure OpenAI Service has the following key features:

1.? Generative AI Models: It includes a diverse set of models from OpenAI, Meta, and beyond, allowing developers to build custom copilots and generative AI experiences.

2.? Conversational AI: It can be used to create AI-generated bots, questions and answers, and contact center solutions to streamline and improve customer service.

3.? Content Creation: It provides AI tools such as GPT-4 and DALL·E for creative ideation, design, and content writing.

4.? Data Grounding: It can run models on your data for greater accuracy and insights.

5.? Security and Compliance: Azure OpenAI Service ensures data security and privacy with built-in responsible AI tools1. Microsoft is investing $20 billion in cybersecurity and has 8,500 security and threat intelligence experts.

6.? Flexible Pricing: It offers the flexibility of both Pay-As-You-Go (PAYG) and Provisioned Throughput Units (PTUs) pricing.

Azure OpenAI Service is the result of a partnership between Microsoft and OpenAI that aims to accelerate breakthroughs in AI. It's enabling customers across industries from health care to financial services to manufacturing to quickly perform an array of tasks. Innovations include generating unique content for customers, summarizing and classifying customer feedback, and extracting text from medical records to streamline billing.

Azure OpenAI Service is a powerful tool that combines the advanced AI capabilities of OpenAI with the security and enterprise promise of Azure.

?

Azure OpenAI Service Access Governance

Azure OpenAI Service has a robust access governance system in place to ensure secure and controlled access to its resources3. Here are some key aspects of it:

1.? Azure Role-Based Access Control (RBAC): Azure OpenAI Service supports Azure RBAC, an authorization system for managing individual access to Azure resources. Using Azure RBAC, you can assign different team members different levels of permissions based on their needs for a given project.

2.? Data Privacy and Security: Azure OpenAI Service processes, uses, and stores data in a manner that ensures data privacy and security. Your prompts (inputs) and completions (outputs), your embeddings, and your training data are not available to other customers, OpenAI, or used to improve any Microsoft or 3rd party products or services1. The Azure OpenAI Service is fully controlled by Microsoft and does not interact with any services operated by OpenAI.

3.? Limited Access Framework: As part of the Limited Access Framework, developers are required to apply for access, describing their intended use case or application before they are given access to the service. However, all Azure customers are eligible for access to Azure OpenAI models, and all uses consistent with the Product Terms and Code of Conduct are permitted.

4.? Responsible AI Practices: Azure OpenAI Service follows Microsoft's Responsible AI practices, which include identifying, measuring, mitigating potential harms, and planning for how to operate the AI system.

Azure OpenAI Service has a comprehensive access governance system that ensures secure, controlled, and responsible use of its resources.

?

Azure OpenAI Access Governance Known Limitations

Azure OpenAI Access Governance has a few known limitations:

1.? Custom API Headers: Azure OpenAI currently allows up to 10 custom headers, which are passed through the pipeline and returned. Some customers exceed this header count, resulting in HTTP 431 errors. There is no solution for this error, other than to reduce header volume. In future API versions, Azure OpenAI will no longer pass through custom headers.

2.? Python Library Compatibility: Azure OpenAI is supported by the latest release of the OpenAI Python library (version>=1.0). However, migration of your codebase using “openai migrate” is not supported and will not work with code that targets Azure OpenAI.

3.? Limited Access Framework: Azure OpenAI is a Limited Access service, and access and use is subject to eligibility criteria determined by Microsoft. Customers are not required to submit a registration form unless they are requesting approval to modify content filters and/or abuse monitoring.

4.? Resource Instances Limit: There is a limit of 30 Azure OpenAI resource instances per region.

5.? Image Input: GPT-4 is designed by OpenAI to be multimodal, but currently only text input and output are supported.

6.? Data Usage: Azure OpenAI doesn't use customer data to retrain models. Your embeddings and training data are not available to other customers, not available to OpenAI, not used to improve any Microsoft or third-party products or services, and not used to fine-tune Azure OpenAI models for your use in your resource unless you chose to fine-tune models with your own training data.

7.? Token Limitations: With Azure OpenAI, there are general limits, Tokens per Request (TPR) limits per model, and Tokens per Minute (TPM) limits per model and region availability.

?

These limitations are part of the current design and operation of Azure OpenAI Service and are subject to change as the service evolves.

?

What are System Messages?

?

System messages, also known as metaprompts or system prompts, are extremely important in the field of AI for several reasons:

1.? Guiding AI Behavior: System messages serve as a guiding light that directs the AI’s actions, shapes its output, and facilitates more meaningful and controlled interactions. They can provide context, guidance, and directions to the AI model, greatly impacting responses2.

2.? Tailoring AI Responses: Their importance lies in their ability to help users tailor AI responses to their needs. They can be used to define the AI's profile, capabilities, and limitations for a specific scenario.

3.? Improving System Performance: System messages can be used to guide an AI system’s behavior and improve system performance. They can help increase the accuracy and grounding of responses generated with a Large Language Model (LLM).

4.? Prompt Security: System messages also contribute to prompt security. They can be used to reinforce boundaries and guidelines, reducing the risk of undesired outputs or behavior from the AI system.

5.? Communication Effectiveness: System messages play a crucial role in shaping the AI's responses and facilitating more effective communication.

In essence, system messages are a powerful tool for guiding AI behavior, tailoring AI responses, improving system performance, ensuring prompt security, and enhancing communication effectiveness.

?

System Messages Known Limitations

While system messages are a powerful tool for guiding AI behavior, they do have limitations and must be used thoughtfully and strategically.

System messages have some known limitations:

1.? Effectiveness: The effectiveness of system messages can vary across different models and scenarios1. A carefully crafted system message that works well for a particular scenario doesn't necessarily mean it will work more broadly across other scenarios.

2.? Security: System messages are typically used to guide the AI's behavior with a specific persona or focus. However, in some cases, a user may bypass these guidelines (intentionally or unintentionally). To mitigate this, you can append a system message at the end of the conversation to reiterate your most important constraints.

3.? Context Sensitivity: Providing context via a system message can significantly impact the output of a prompt. However, the impact can vary depending on the specific context and the AI model used.

4.? Resistance to Jailbreaks: Different system messages have distinct resistances to jailbreaks3. Jailbreaks are attempts to make the AI model generate outputs that violate its safety constraints.

5.? Understanding Limitations: Understanding the limitations of Large Language Models (LLMs) and the mechanisms for evaluating and mitigating those limitations is just as important as understanding how to leverage their strengths.

?

?

Resolving Azure OpenAI Known Limitations

To resolve Azure OpenAI Known Limitations the Enterprise Application Governance must be put in place:

1.? Azure OpenAI API Governance: The systematic management of OpenAI APIs through defined protocols and policies to ensure consistency, foster collaboration, comprehensive set of standards, policies, and practices that direct how an organization develop and deploys applications which are using Azure OpenAI API.

2.? System Messages Governance: Create comprehensive set of standards, policies, and practices that define the format and content of System Messages for various applications which are using Azure OpenAI APIs.

3.? Enforce System Messages Injection: Create set of policies to enforce System Message injection for each call the applications make to Azure OpenAI APIs.

4.? Rate Limiting and Throttling: Create set of policies to control Azure OpenAI Tokens Usage.

?

Azure OpenAI API Governance

?

Azure API Manager can be used to implement Azure OpenAI API Governance natively in Azure Public Cloud. The following these steps must be followed in Azure Public Cloud:

1.???? Azure OpenAI Service must have the following configuration:

-?????? No Public Access

-?????? No Application Keys

-?????? Private Endpoint defined in VNET/Subnet reserved to host Private Endpoints

-?????? Azure APIM Managed Identity must be configured to access OpenAI API

2.???? Azure API Manager must have the following configuration:

- ?VNET Integrated in Internal Mode

- Defined in VNET/Subnet reserved for APIM

- Enabled Managed Identity

- APIM OpenAI Endpoints policies must inject System Messages for each call to Azure OpenAI API

3. Azure Application Gateway must have the following configuration:

- Configured in VNET/Subnet dedicated to Application Gateway

- Defined Public and Private Listeners

- Defined Public and Private Routings

- DNS Configuration for Application Gateway Public IP address

4. Azure Firewall must have the following configuration:

- permit traffic from Azure Application Gateway subnet to Azure API Manager Subnet

- block any traffic from outside to Azure API Manager subnet

- permit traffic from Azure APIM Subnet to Private Endpoint Subnet

- block any traffic from outside to Private Endpoint Subnet

?5. Azure Services Integrated with VNET must be configured to call Azure Application Gateway Private IP for OpenAI APIs

6. Azure Services hosted in Azure Public Cloud must be configured to call Azure Application Gateway hostname for OpenAI APIs

7. Azure Front Door must be configured to route the calls from the Internet to Azure Application Gateway hostname for OpenAI APIs

8. Internet Applications must be configured to call Azure Front Door for OpenAI APIs

?

?

?

?

?

?

要查看或添加评论,请登录

Vitaly Scherban的更多文章

  • API Governance - Brief Description

    API Governance - Brief Description

    API Governance 1. Introduction to API Governance API Governance is a critical aspect of managing and controlling APIs…

  • Decoding AI: Markov Chain

    Decoding AI: Markov Chain

    What is Markov Chain? A Markov Chain is a discrete stochastic process (such as a random walk) in which the…

社区洞察

其他会员也浏览了