Dynamic Multipoint VPN
- DMVPN is a highly scalable ‘Hub and Spoke’ topology model that leverages multipoint GRE (mGRE) tunnels and the ‘Next Hop Resolution Protocol’ (NHRP) to allow remote sites to dynamically build VPN tunnels between each other on demand
- Where traditional Hub and Spoke networks such as Frame Relay don’t natively support Broadcast or Multicast traffic, the GRE Tunnels used in the DMVPN model can carry Broadcast and Multicast traffic natively. Also, the real beauty of DMVPN is that it’s independent of any specific type of Service Provider access: as long as you have Layer 3 (IP) reachability between remote sites, a Dynamic Tunnel can be formed.
- Pretty much everybody is connected to the global internet these days, so the internet is typically used as the “Underlay” network that provides site-to-site reachability in the DMVPN model. GRE is used as the “Overlay”, and a Routing Protocol (such as EIGRP, OSPF or even iBGP) is used to advertise and learn private routes within the organization so that resources can be shared. The Next Hop Resolution Protocol facilitates the Public (Internet) IP to Private (Internal) IP mappings, much like what ARP does for Ethernet or Inverse ARP does for Frame Relay.
- DMVPN is a Client/Server model: the Hub is the NHRP Server and all of the Spokes (the Clients) are configured with the Hub’s public and private IPs. Once a Spoke comes online, it advertises its unique Public-to-Private IP mapping to the Hub, which registers it in its NHRP Database. When a Spoke wants to communicate with another Spoke, it already knows that Spoke’s Private IP from the Routing Protocol; to find out the Spoke’s Public Address on the Internet, it queries the Hub. Once it gets the answer, it stores that mapping in its own NHRP Database and can then form a GRE Tunnel with the other Spoke on demand.
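As a rough sketch, the Hub/Spoke registration described above could look like the following on IOS. All IP addresses, the interface names, and the network-id are hypothetical placeholders; 203.0.113.1 stands in for the Hub’s Public (Internet) address and 10.0.0.x for the Private tunnel addressing:

```
! Hub (NHRP Server) - mGRE tunnel; Spoke mappings are learned dynamically
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0       ! Private (tunnel) IP
 ip nhrp network-id 1
 ip nhrp map multicast dynamic           ! replicate multicast to registered Spokes
 tunnel source GigabitEthernet0/0        ! Public-facing interface
 tunnel mode gre multipoint

! Spoke (NHRP Client) - statically pointed at the Hub, registers on boot
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp network-id 1
 ip nhrp map 10.0.0.1 203.0.113.1        ! Hub Private -> Public mapping
 ip nhrp map multicast 203.0.113.1       ! send multicast (routing updates) to Hub
 ip nhrp nhs 10.0.0.1                    ! the Hub is our Next Hop Server
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```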
- One of the cons of GRE Tunnels is that all the data is sent in the clear, without any encryption. Since private data will be traversing the internet, the tunnels are typically encrypted using IPSec. IPSec on its own only supports Unicast traffic, while GRE Tunnels also carry the Broadcasts and Multicasts required for control traffic (Routing Protocol updates). So we can use GRE over IPSec to protect all the data flowing inside each VPN Tunnel.
- DMVPN also supports CEF, so received packets are CEF-switched rather than process switched.
DMVPN can be deployed in any of these three distinct Phases. Here’s a high-level look:
1. Phase I – a unique Point to Point GRE Tunnel is formed between the Hub and each spoke, which means all traffic would have to flow through the hub.
2. Phase II – The Hub and Spokes are configured with Multipoint GRE, which allows for direct Spoke-to-Spoke communication, bypassing the Hub once the Spokes learn each other’s Public Internet Addresses.
3. Phase III – Multipoint GRE is still used, but we can add an option for the Hub to send a ‘Redirect’ to the Spoke initiating communication, pointing it at the Spoke it’s trying to reach. After the first few packets go through the Hub, the initiating Spoke learns that it can take a ‘Shortcut’ to the other Spoke without going through the Hub at all. This lets us use a Default Static Route to the Hub, so the Spokes don’t need to keep a lot of routing information in their Routing Tables: all initial traffic flows through the Hub, and once the Hub’s “Redirect” message tells the Spokes how to reach each other directly, they take that path from then on.
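The Phase III behavior above boils down to two extra NHRP knobs layered on top of the mGRE configuration, plus a default route on the Spokes. A hedged sketch (tunnel/next-hop addresses are hypothetical placeholders):

```
! Hub - send an NHRP Redirect when Spoke-to-Spoke traffic hairpins through it
interface Tunnel0
 ip nhrp redirect

! Spoke - install shortcut routes learned from the Hub's Redirects
interface Tunnel0
 ip nhrp shortcut
!
! With shortcuts available, the Spoke can get by with just a default
! route toward the Hub's tunnel address instead of full routing info
ip route 0.0.0.0 0.0.0.0 10.0.0.1
```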
In the following slides:
- We can see that the Hub (NYC) has formed GRE over IPsec tunnels with its 2 peers (LA and ATL) and they’ve registered their Public to Private IP mappings in the Hub’s NHRP database.
- Taking a look at LA’s Routing Table, we can see that the network is fully converged. Currently, LA has formed a site-to-site VPN tunnel with the Hub only, as the output of “show ip nhrp” and “show dmvpn” confirms:
- When we do a Traceroute to a LAN over in ATL (10.123.3.1), we have to hop through the Hub first. After LA queries NYC for ATL’s Public IP (172.16.1.1), receives it and stores it, the next Traceroute shows that LA dynamically built a VPN tunnel with ATL and went there directly, bypassing NYC:
(ATL before the VPN Tunnel with LA was built) Notice there’s only one peering, with the Hub:
- We can confirm this by looking at the output on ATL: we’ve now dynamically formed a tunnel with the LA site and have its Public/Private IP mappings in our NHRP Database for direct communication, and the tunnel has been up for almost 4 minutes, since the Traceroute triggered the tunnel build.
- We can also see the IPSec Tunnel Protection Profile applied to our GRE Interface. Tunnel Profiles are typically built from an ISAKMP Policy and a Transform Set using ESP with the Encryption/Authentication of your choice; instead of being applied to a Crypto Map, the profile gets applied to the Tunnel Interface. We can see the active ISAKMP Security Associations and the IPSec Security Associations as well. Note that the interesting traffic is GRE (IP protocol 47), as shown in our “show crypto ipsec sa” output.
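A minimal sketch of such a Tunnel Protection Profile is shown below. The policy numbers, key, and all names (DMVPN-TSET, DMVPN-PROFILE) are hypothetical, and the cipher/hash choices are just one reasonable combination:

```
! ISAKMP (Phase 1) policy with a wildcard pre-shared key
crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
crypto isakmp key DMVPN-KEY address 0.0.0.0 0.0.0.0
!
! Transform Set (Phase 2) using ESP for encryption/authentication
crypto ipsec transform-set DMVPN-TSET esp-aes 256 esp-sha256-hmac
 mode transport                  ! transport mode saves overhead on GRE
!
! The IPSec Profile replaces a Crypto Map for tunnel interfaces
crypto ipsec profile DMVPN-PROFILE
 set transform-set DMVPN-TSET
!
interface Tunnel0
 tunnel protection ipsec profile DMVPN-PROFILE
```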