Accelerated Networking in Azure Virtual Machines

Accelerated Networking provides consistently ultra-low network latency via Azure's in-house programmable hardware and technologies such as single root I/O virtualization (SR-IOV). This high-performance path removes the host from the datapath, reducing latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types.

Without accelerated networking, all network traffic in and out of the VM must traverse the host and the virtual switch. The virtual switch provides all policy enforcement, such as network security groups, access control lists, isolation, and other virtualized network services, for that traffic.

With accelerated networking, network traffic arrives at the virtual machine's network interface (NIC) and is then forwarded to the VM. All network policies that the virtual switch would otherwise apply are offloaded and applied in hardware. Applying policy in hardware enables the NIC to forward network traffic directly to the VM, bypassing the host and the virtual switch, while maintaining all the policy that was previously enforced in the host.

The benefits of accelerated networking apply only to the VM on which it is enabled. For best results, enable this feature on at least two VMs connected to the same Azure virtual network (VNet). When communicating across VNets or connecting on-premises, this feature has minimal impact on overall latency.
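One rough way to observe the latency benefit is to compare round-trip times between two VMs in the same VNet with the feature disabled and then enabled. A minimal sketch, assuming 10.0.0.5 is a placeholder for the peer VM's private IP:

    # Measure average round-trip latency to a peer VM in the same VNet.
    # Run once before and once after enabling Accelerated Networking on
    # both VMs, then compare the avg figure in the summary line.
    # 10.0.0.5 is a hypothetical private IP address.
    ping -c 100 -q 10.0.0.5

For more rigorous numbers, a dedicated tool such as sockperf or ntttcp reports latency percentiles rather than a single average.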


Benefits

  • Lower latency / higher packets per second (pps): Removing the virtual switch from the datapath eliminates the time packets spend in the host for policy processing and increases the number of packets that can be processed inside the VM.
  • Reduced jitter: Virtual switch processing time depends on how much policy needs to be applied and on the workload of the CPU doing the processing. Offloading policy enforcement to hardware removes that variability by delivering packets directly to the VM, eliminating the host-to-VM communication along with all the software interrupts and context switches it entails.
  • Decreased CPU utilization: Bypassing the virtual switch in the host reduces the CPU cycles spent processing network traffic.

Supported operating systems

The following distributions are supported out of the box from the Azure Gallery:

  • Ubuntu 14.04 with the linux-azure kernel
  • Ubuntu 16.04 or later
  • SLES12 SP3 or later
  • RHEL 7.4 or later
  • CentOS 7.4 or later
  • CoreOS Linux
  • Debian "Stretch" with backports kernel, Debian "Buster" or later
  • Oracle Linux 7.4 and later with Red Hat Compatible Kernel (RHCK)
  • Oracle Linux 7.5 and later with UEK version 5
  • FreeBSD 10.4, 11.1, and 12.0 or later

Limitations and Constraints

Supported VM instances

Accelerated Networking is supported on most general purpose and compute-optimized instance sizes with 2 or more vCPUs. On instances that support hyperthreading, Accelerated Networking is supported on VM instances with 4 or more vCPUs.

Support for Accelerated Networking can be found in the individual virtual machine sizes documentation.
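The Azure CLI can also surface this capability per size. A minimal sketch, assuming the westus2 region and that the compute SKUs API reports the capability under the name AcceleratedNetworkingEnabled:

    # List VM sizes in a region that report support for Accelerated Networking.
    az vm list-skus --location westus2 --resource-type virtualMachines --all \
      --query "[?capabilities[?name=='AcceleratedNetworkingEnabled' && value=='True']].name" \
      --output table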


Custom Images

If you are using a custom image and your image supports Accelerated Networking, make sure you have the drivers required to work with the Mellanox ConnectX-3 and ConnectX-4 Lx NICs used on Azure.
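On a Linux custom image, one quick check is to ask the kernel about the Mellanox modules; mlx4 serves ConnectX-3 and mlx5 serves ConnectX-4 Lx. A sketch, assuming a distribution kernel that ships these as loadable modules:

    # Verify the Mellanox drivers are present in the image.
    # mlx4_core covers ConnectX-3; mlx5_core covers ConnectX-4 Lx.
    modinfo mlx4_core mlx5_core

    # On a running VM with the feature enabled, confirm a module is loaded:
    lsmod | grep -E 'mlx4|mlx5'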


Regions

Accelerated Networking is available in all public Azure regions as well as the Azure Government clouds.


Enabling Accelerated Networking on a running VM

On a supported VM size, accelerated networking can be enabled only while the VM is stopped and deallocated.


Deployment through Azure Resource Manager

Virtual machines (classic) cannot be deployed with Accelerated Networking.
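For Resource Manager deployments, the feature can instead be requested at creation time. A minimal sketch, assuming the resource group, VNet, subnet, and image alias below are placeholders and that Standard_D4s_v3 is available in your region:

    # Create a NIC with Accelerated Networking enabled, then attach it to a new VM.
    az network nic create \
      --resource-group myRG \
      --name myNic \
      --vnet-name myVnet \
      --subnet mySubnet \
      --accelerated-networking true

    az vm create \
      --resource-group myRG \
      --name myVM \
      --image Ubuntu2204 \
      --size Standard_D4s_v3 \
      --nics myNic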

How to enable Accelerated Networking on an existing VM


  1. First, stop/deallocate the VM:

  • az vm deallocate --resource-group myRG --name myVM

  2. Once stopped, enable Accelerated Networking on the NIC of your VM:

  • az network nic update --name myNic --resource-group myRG --accelerated-networking true

  3. Restart your VM:

  • az vm start --resource-group myRG --name myVM

  4. Confirm the Mellanox VF device is exposed to the VM with the lspci command, as shown below.
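On a typical Linux VM the VF shows up as an additional PCI device. A sketch of what to look for (the exact model string varies by hardware generation):

    # Look for the Mellanox virtual function (VF) on the PCI bus.
    lspci | grep -i mellanox
    # Example output (varies by hardware generation):
    # 0002:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520
    #              Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

    # Optionally confirm traffic is actually flowing over the VF path;
    # non-zero vf_rx/vf_tx counters indicate the accelerated path is in use.
    ethtool -S eth0 | grep vf_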


