Azure VNet Integration

Azure services fall into two categories, based on how they are exposed on the network:

Public Services: Open doors, ready for anyone! These services, like Azure SQL Database or Storage, have public IP addresses and are directly accessible from the internet. Think convenience for everyday tasks.

Private Services: Behind guarded gates, only invited guests allowed. Deploy these services in your own Virtual Networks (VNets), private, isolated spaces inside Azure. Think secure havens for sensitive data.

But what if you want a public PaaS service like Azure SQL Database, but behind the secure walls of a VNet? Enter VNet integration patterns – special bridges that keep the internet out while letting authorized VNet users in.

There are three main patterns, catering to different PaaS service architectures:

  1. VNet injection: The service's dedicated resources are deployed (injected) straight into your VNet, so the service answers on private IP addresses from your own address space.
  2. VNet service endpoints: Your subnets get an identified route to the service over the Azure backbone, and the service's own firewall can then be locked down so only those subnets get in.
  3. Private Link: The ultimate fortress - the service is projected into your VNet as a private endpoint with a private IP address, and, combined with the service's firewall, it can be hidden from the internet entirely while traffic rides Azure's private network backbone.

Choosing the right pattern depends on how the PaaS service is built. This article unravels the three options and helps you pick the perfect bridge for your private Azure oasis.

Cloud Scale: Dedicated vs. Shared Services and VNet Access

Cloud services scale like crazy! They use big resource pools managed by a control plane. Like magic, you request a service, and BAM, it spins up an instance with its own resources. But how that magic happens defines how you can secure it in a VNet.

Two Players:

  • Dedicated Service: Each instance gets its own private room of resources - no sharing! These are your secure fortresses.
  • Shared Service: Resources are like a communal apartment - multiple instances share the space. Great for efficiency, but less private.

VNet Gatekeepers:

Now, let's control who can access these services:

  • Dedicated Services: These fortresses have a special bridge:
      • VNet injection: The instance's dedicated resources are deployed (injected) directly into your VNet, so the service is reachable on private IPs from your own address space.
  • Shared Services: These communal apartments have different locks:
      • VNet service endpoints: Your subnets get an identified route to the service over the Azure backbone, and the service's firewall can be restricted to those subnets - no anonymous public access.
      • Private Link: The ultimate moat - the service is projected into your VNet as a private endpoint, and its public endpoint can be closed to the internet entirely.

Choosing the Lock:

The service's architecture (dedicated vs. shared) determines which bridges and locks you can use. This article unveils the details and helps you pick the perfect VNet gatekeeper for your secure cloud oasis.

VNet injection

Azure Cache for Redis, for example, has a dedicated architecture: each instance runs on its own dedicated set of resources.

Azure PaaS services with a dedicated architecture can be made private by deploying the resources dedicated to an instance into a VNet belonging to the owner of that instance. This integration pattern is referred to as VNet injection.

VNet injection is the VNet integration pattern for services whose architecture is based on dedicated resources that can be deployed (aka “injected”) into the instance owner’s VNet.

  • VNet-injected services are usually deployed to a dedicated subnet that cannot contain any other resources (such as Virtual Machines) deployed by the user. This is referred to in Azure documentation as subnet delegation; each VNet-injected service requires its own delegated subnet (a provisioning sketch follows below this list). The minimum number of IP addresses required in the subnet depends on the service; please refer to the official documentation of each specific service for details.
  • VNet-injected services are exposed over IP addresses that belong to the VNet’s address space. Therefore, at the network layer, these services behave just like Virtual Machines. More specifically, they can:
      • Initiate connections to, and receive connections from, Virtual Machines in the same VNet (or in other VNets connected to it via VNet peering or VNet-to-VNet IPSec tunnels);
      • Initiate connections to, and receive connections from, on-prem hosts over VPN tunnels or ExpressRoute;
      • Initiate connections to internet-routable IP addresses outside the VNet’s address space by leveraging the default Source-NAT functionality provided by the VNet;
      • Receive inbound connections from routable IP addresses outside the VNet’s address space when exposed behind a public IP address. The public IP address used by the service is usually a front-end IP address of an Azure External Load Balancer.

Please note that a specific service might not leverage all the capabilities listed above.
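
To make the delegated-subnet requirement concrete, here is a minimal sketch using the Azure SDK for Python (azure-mgmt-network). All resource names, the address prefix, and the delegation string are placeholders; "Microsoft.Web/serverFarms" is used purely as an example, and the delegation name required by each VNet-injected service is listed in that service's documentation.

```python
# Minimal sketch: create a dedicated, delegated subnet for a VNet-injected service.
# Assumes the azure-identity and azure-mgmt-network packages and an existing VNet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Subnet, Delegation

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

subnet = Subnet(
    address_prefix="10.57.2.0/26",  # minimum size depends on the injected service
    delegations=[
        Delegation(
            name="svc-delegation",
            # Service-specific delegation string; this one is only an example.
            service_name="Microsoft.Web/serverFarms",
        )
    ],
)

# Note: create_or_update replaces the subnet definition, so include any
# existing settings you want to keep.
result = client.subnets.begin_create_or_update(
    "rg-network", "hub-vnet", "injected-svc-subnet", subnet
).result()
print(result.id)
```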

Implications of VNet injection on network security groups and route tables

The control plane of dedicated services does not have a dedicated architecture itself: it manages resources allocated to multiple instances and users. Therefore, it cannot be injected into any VNet and must rely on endpoints with public IP addresses to interact with the VNet-injected resources it manages. As a result, VNet-injected resources must be able to:

  • Initiate outbound connections to public IP addresses associated to control plane endpoints, for example to send notifications to the management layer or to download configuration updates from a storage facility, such as an Azure SQL Database or an Azure Storage account;
  • Receive inbound connections from public IP addresses associated to their control plane, for example to be notified about events in their lifecycle (provision, deprovision, apply configuration, etc.).

VNet-injected services require inbound and outbound connections from/to platform-managed public IP addresses to interact with their control plane.

The two conditions above are met when the network security groups (NSGs) and the user-defined routes (UDRs) applied to subnets that host VNet-injected resources adhere to the following configuration guidelines:

  • Allow inbound connections from the set of public IP addresses/ports used by the control plane. These IPs depend on the service type and on the region where it is deployed. Furthermore, due to the dynamic nature of the cloud, they may vary over time;
  • Allow outbound connections to the set of public IP addresses/ports used by the service’s dependencies. These IPs depend on the service type and on the region where it is deployed. Furthermore, due to the dynamic nature of the cloud, they may vary over time;
  • Do not change the next hop in the VNet’s system route table for traffic destined to the service’s dependencies. Doing so may cause the connections to be source-NATted behind IP addresses different than the ones expected by the service’s dependencies and, therefore, dropped.
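
As a rough illustration of the first two guidelines, the sketch below defines an NSG with one inbound and one outbound rule. Every prefix, port, and name is a placeholder: the real control-plane and dependency ranges (or, where the service supports them, Azure service tags such as "Storage" or "Sql") must be taken from the specific service's documentation.

```python
# Minimal sketch: NSG for a subnet hosting a VNet-injected service.
# Prefixes, ports, and names are placeholders; real values are service-specific.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import NetworkSecurityGroup, SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

nsg = NetworkSecurityGroup(
    location="westeurope",
    security_rules=[
        # Let the service's control plane reach the injected resources.
        SecurityRule(
            name="allow-control-plane-inbound",
            priority=100,
            direction="Inbound",
            access="Allow",
            protocol="Tcp",
            source_address_prefix="<control-plane-prefix-or-service-tag>",
            source_port_range="*",
            destination_address_prefix="10.57.2.0/26",  # the delegated subnet
            destination_port_range="8443",              # placeholder port
        ),
        # Let the injected resources reach their platform dependencies.
        SecurityRule(
            name="allow-dependencies-outbound",
            priority=110,
            direction="Outbound",
            access="Allow",
            protocol="Tcp",
            source_address_prefix="10.57.2.0/26",
            source_port_range="*",
            destination_address_prefix="Storage",       # service tag, as an example
            destination_port_range="443",
        ),
    ],
)

client.network_security_groups.begin_create_or_update(
    "rg-network", "nsg-injected-svc", nsg
).result()
```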

Not all the above constraints necessarily apply to all services. For example, some VNet-injected services are compatible with custom routing configurations whereby all traffic destined to public IP addresses outside the VNet's address range is force-tunneled to on-prem over Site-to-Site VPN or ExpressRoute connections. Please refer to each service’s official documentation for details.

VNet service endpoint policies

When VNet service endpoints for a specific service type are enabled for a subnet, then clients in that subnet have network-level access to all instances of that service (in that region or in multiple regions, depending on the specific service) – including instances belonging to other users. This introduces the risk of data exfiltration incidents, whereby malicious actors with access to an organization’s VNet can copy that organization’s data from service instances controlled by the organization to service instances not controlled by the organization (see the figure below, left).

VNet service endpoint policies have been introduced to address the data exfiltration issue. Service endpoint policies allow VNet owners to control exactly which instances of a service type (identified by their Azure resource ID) can be accessed from their VNet via service endpoints, as illustrated in the figure below.

(Left) VNet service endpoints alone do not prevent data exfiltration attacks: a malicious actor with access to User B’s VNet can read data from User B’s storage account, copy it to a rogue storage account accessible from the public internet, and download it from outside User B’s organization’s network. (Right) VNet service endpoint policies allow VNet owners to specify which service instances can be accessed from their VNet via service endpoints.

VNet service endpoint policies are available only for Azure Storage and only in specific regions. Please refer to the official documentation for the most up-to-date information.

Please note that the use of service endpoint policies for Azure Storage provides the additional benefit of supporting connectivity between PaaS service instances and the authorized VNet(s) across subscriptions in different Azure Active Directory (AAD) tenants. Please refer to the official documentation for additional details.
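
The sketch below shows the two pieces together: enabling the Microsoft.Storage service endpoint on a subnet and attaching a service endpoint policy that allows only one specific storage account. All names, IDs, and the address prefix are placeholders, and (as noted above) the policy part applies to Azure Storage only.

```python
# Minimal sketch: Storage service endpoint on a subnet, scoped by a
# service endpoint policy. Names and IDs are illustrative.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ServiceEndpointPolicy,
    ServiceEndpointPolicyDefinition,
    ServiceEndpointPropertiesFormat,
    Subnet,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg = "rg-network"

# 1. Policy allowing access to a single storage account via service endpoints.
policy = client.service_endpoint_policies.begin_create_or_update(
    rg,
    "sep-allow-my-storage",
    ServiceEndpointPolicy(
        location="westeurope",
        service_endpoint_policy_definitions=[
            ServiceEndpointPolicyDefinition(
                name="allow-my-storage-only",
                service="Microsoft.Storage",
                service_resources=[
                    "/subscriptions/<sub-id>/resourceGroups/rg-data/providers"
                    "/Microsoft.Storage/storageAccounts/<account-name>"
                ],
            )
        ],
    ),
).result()

# 2. Enable the service endpoint on the client subnet and attach the policy.
#    (create_or_update replaces the subnet definition, so keep existing settings.)
client.subnets.begin_create_or_update(
    rg,
    "hub-vnet",
    "app-subnet",
    Subnet(
        address_prefix="10.57.3.0/24",
        service_endpoints=[ServiceEndpointPropertiesFormat(service="Microsoft.Storage")],
        service_endpoint_policies=[ServiceEndpointPolicy(id=policy.id)],
    ),
).result()
```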

Private Link

Private Link is the latest integration pattern for services with a shared architecture. It addresses the limitations of VNet service endpoints by exposing select PaaS services with a shared architecture via private IP addresses belonging to a user VNet’s address space. Private Link, in itself, does not prevent internet clients from accessing the service through its public endpoint. However, all services that support Private Link provide a firewall feature that can be configured to block all connections from the internet (or to only accept connections from known, trusted public IPs). This feature, combined with Private Link, enables users to make their PaaS service instance completely private, i.e. exclusively accessible from authorized VNets over private IP addresses belonging to the VNets’ address spaces.
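
As a sketch of that "firewall plus Private Link" combination for Azure Storage, the snippet below sets the account-level network default action to Deny, so that only private endpoints (and any explicitly allowed networks or trusted services) can reach the account. It assumes the azure-mgmt-storage package; the resource group and account name are placeholders.

```python
# Minimal sketch: block public network access to a storage account so that,
# combined with a private endpoint, it is reachable only from authorized VNets.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import NetworkRuleSet, StorageAccountUpdateParameters

storage_client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

storage_client.storage_accounts.update(
    "rg-data",
    "userastorageaccount",            # placeholder account name
    StorageAccountUpdateParameters(
        network_rule_set=NetworkRuleSet(
            default_action="Deny",    # drop traffic from the public internet
            bypass="AzureServices",   # optionally keep trusted Azure services working
        )
    ),
)
```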

Please refer to the official documentation for up-to-date information about the services that support Private Link.

Private Link is a technology that allows clients in a VNet to consume PaaS services through “private endpoints” with IP addresses belonging to the VNet’s address space. In this example, a VM in User A’s VNet with private IP address 10.57.0.15 accesses User A’s storage account through the private address 10.57.1.5, which belongs to User A VNet’s address space.

The key advantages offered by Private Link over VNet service endpoints are the following:

  • Private Link exposes PaaS instances over private IP addresses belonging to a VNet’s address space, thus removing the need to manage routes and NSG rules for public address ranges;
  • Private Link makes it possible for on-premises clients connected via ExpressRoute private peering or VPN to consume PaaS service instances via a private IP address;
  • Private Link allows VNet owners to expose select service instances, as opposed to all instances of a service type in a region, to the VNet, thus addressing by design the data exfiltration issue (see paragraph “VNet service endpoint policies”);
  • Private Link does not introduce any limitations on the relative location of the PaaS service instance and the VNet(s) where it is exposed (i.e. PaaS instances and VNets can be in different Azure regions or geographies);
  • Private Link can be used to privately expose to a VNet not only first-party Microsoft PaaS services, but also 3rd party services running on Azure.

Private Link is a platform-wide functionality, but each PaaS service to which it is applicable must be onboarded. Therefore, while Private Link does provide a more effective approach to securing PaaS services than VNet service endpoints, the two integration patterns are expected to coexist in the short and medium term.

Private Link is exposed to users through two new Azure resource types:

  • Private Endpoints (Microsoft.Network/PrivateEndpoints)
  • Private Link Services (Microsoft.Network/PrivateLinkServices)

They are covered in the following paragraphs.

Private Endpoints

To expose a public service instance (such as an Azure storage account or an Azure SQL Database) in a VNet with Private Link, a private endpoint resource (of type Microsoft.Network/PrivateEndpoints) must be provisioned. A private endpoint resource represents a logical relationship between the public service instance and a NIC attached to the VNet where the service is exposed. The NIC resource is automatically created when the private endpoint is provisioned. Just like any other NIC, the NIC associated with a private endpoint gets an IP address in the address range of the subnet it is attached to. That address becomes the address of the service instance for clients in the VNet (or in remote networks connected via VPN or ExpressRoute private peering).
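
A minimal provisioning sketch follows, again using azure-mgmt-network: it creates a private endpoint in an existing subnet that points at an existing storage account's blob sub-resource. The resource IDs and names are placeholders, and the group ID ("blob") depends on which sub-resource of the target service you want to expose.

```python
# Minimal sketch: expose an existing storage account in a VNet through a
# private endpoint. All IDs and names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    PrivateEndpoint,
    PrivateLinkServiceConnection,
    Subnet,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

subnet_id = (
    "/subscriptions/<sub-id>/resourceGroups/rg-network/providers"
    "/Microsoft.Network/virtualNetworks/hub-vnet/subnets/pe-subnet"
)
storage_id = (
    "/subscriptions/<sub-id>/resourceGroups/rg-data/providers"
    "/Microsoft.Storage/storageAccounts/userastorageaccount"
)

pe = client.private_endpoints.begin_create_or_update(
    "rg-network",
    "pe-userastorage-blob",
    PrivateEndpoint(
        location="westeurope",
        subnet=Subnet(id=subnet_id),
        private_link_service_connections=[
            PrivateLinkServiceConnection(
                name="blob-connection",
                private_link_service_id=storage_id,
                group_ids=["blob"],  # sub-resource to expose (blob, file, table, ...)
            )
        ],
    ),
).result()

# The platform creates a NIC for the endpoint; its private IP in the subnet
# becomes the address clients in the VNet use to reach the storage account.
print(pe.provisioning_state)
```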

The same public service instance can be referenced by multiple private endpoints in different VNets/subnets, even if they belong to different users/subscriptions (including across different Azure Active Directory (AAD) tenants) or if the VNets have overlapping address spaces, as shown in the figure below.

Multiple users can define private endpoints in their own VNets. Each user can only consume their own instances via the private endpoint.

Private Link Services

Private Link also supports exposing 3rd party services deployed on the platform through private endpoints. A 3rd party service exposed with Private Link is referred to as a private link service. Any custom service running in a VNet behind a Standard SKU Azure Internal Load Balancer (ILB) can be exposed through a private endpoint in another VNet. The custom service and the VNet can reside in different regions and belong to different subscriptions that trust different Azure Active Directory tenants.

To create a private link service, a resource of type Microsoft.Network/PrivateLinkServices must be provisioned. It represents a logical relationship between the Azure ILB that exposes the 3rd party service and a NIC attached to the same VNet as the ILB. The NIC resource (of type Microsoft.Network/networkInterfaces) is automatically created as part of the private link service provisioning process. From the service’s perspective, client connections originate from this NIC.
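
A rough provisioning sketch of that relationship is shown below: it creates a private link service bound to the frontend IP configuration of an existing Standard internal load balancer. The load balancer and subnet IDs are placeholders, and visibility/approval settings are omitted for brevity.

```python
# Minimal sketch: publish a custom service running behind a Standard internal
# load balancer as a private link service. IDs and names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    FrontendIPConfiguration,
    PrivateLinkService,
    PrivateLinkServiceIpConfiguration,
    Subnet,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

ilb_frontend_id = (
    "/subscriptions/<sub-id>/resourceGroups/rg-provider/providers"
    "/Microsoft.Network/loadBalancers/ilb-myservice"
    "/frontendIPConfigurations/frontend1"
)
provider_subnet_id = (
    "/subscriptions/<sub-id>/resourceGroups/rg-provider/providers"
    "/Microsoft.Network/virtualNetworks/provider-vnet/subnets/pls-subnet"
)

client.private_link_services.begin_create_or_update(
    "rg-provider",
    "pls-myservice",
    PrivateLinkService(
        location="westeurope",
        # The ILB frontend that actually receives client traffic.
        load_balancer_frontend_ip_configurations=[
            FrontendIPConfiguration(id=ilb_frontend_id)
        ],
        # NAT IP configuration: from the service's perspective, client
        # connections originate from this NIC/address.
        ip_configurations=[
            PrivateLinkServiceIpConfiguration(
                name="pls-nat-ipconfig",
                subnet=Subnet(id=provider_subnet_id),
                private_ip_allocation_method="Dynamic",
            )
        ],
    ),
).result()
```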

In the consumer VNet, private link services are exposed in the same way as 1st party ones, i.e. as private endpoints (resources of type Microsoft.Network/PrivateEndpoints, see previous paragraph, “Private Endpoints”).

Private Link allows service providers to control their services’ exposure. Access to a private link service can be granted to all platform users, restricted to a set of trusted subscriptions, or controlled via Azure RBAC. Also, when an authorized user creates a private endpoint to consume a private link service, the service provider must approve the request before traffic from the private endpoint is accepted. Please refer to the official documentation for additional information. The figure below provides an example of a private link service exposed via a private endpoint.

Private Link can be used to expose custom services running behind an Azure ILB to other VNets. On the service side, the services to be exposed are represented as “private link services”. On the consumer side, they are represented as “private endpoints”. Clients in the service consumer’s VNet connect to the service using the private endpoint’s address (10.57.1.8 in this example). In the service provider’s VNet, traffic from clients originates from the private link service’s IP address (172.16.4.4 in this example). The platform takes care of the required network address translations.

Implications of Private Link on network security groups and route tables

One of the key benefits of Private Link is that traffic to PaaS services goes to IP addresses within the address space of the VNets where private endpoints are defined. Therefore, Private Link completely removes the management overhead for NSG rules and/or UDRs for public IP address ranges that exist with VNet service endpoints.

There are however two caveats that must be considered. Both will be addressed in future releases.

  • Private Link does not currently support NSGs. While subnets containing private endpoints can have NSGs associated with them, the rules will not be effective on traffic processed by the private endpoints. Please refer to the official documentation for additional details.
  • When a private endpoint is created in a VNet, a platform-managed /32 route (with next hop type = “InterfaceEndpoint”) for the private endpoint’s IP address is added to the VNet’s route table (see the figure caption below). Traffic that matches this route is encapsulated by the Azure SDN stack and routed to the target PaaS service instance in outer packets. Just like any system route, the /32 route can be overridden by UDRs. Therefore, the next hop for traffic destined to private endpoints can be customized: for example, it can be routed to network virtual appliances that inspect traffic between PaaS services and their clients.

Effective routes for a NIC attached to a subnet where a private endpoint has been defined. A platform managed /32 route with next hop type = “InterfaceEndpoint” is added to the VNet’s route table.
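
The platform-managed route can be inspected programmatically. The sketch below lists the effective routes of a NIC attached to such a VNet; the output should include the /32 prefix with next hop type "InterfaceEndpoint" described above. The resource group and NIC name are placeholders.

```python
# Minimal sketch: list effective routes of a NIC in a VNet that contains a
# private endpoint. Names are illustrative.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

routes = client.network_interfaces.begin_get_effective_route_table(
    "rg-network", "vm1-nic"
).result()

for route in routes.value:
    # A private endpoint appears as a /32 prefix with next hop "InterfaceEndpoint".
    print(route.address_prefix, route.next_hop_type, route.next_hop_ip_address)
```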

Please note that return traffic from the PaaS service (egressing from the private endpoint) bypasses any UDR configured on that subnet. Therefore, even though traffic to the PaaS service can be steered to a Network Virtual Appliance (NVA) via UDRs, the return traffic goes directly to the originating Virtual Machine. This asymmetry may create a requirement to bypass TCP state-based security checks on the firewall for flows destined to private endpoints.
