Azure: Deploying Palo Alto Networks VM-series Part-3
In this article, we'll explore how to set up the VM-series firewall to protect your Azure resources. We will configure the VM-series to perform destination NAT, allowing internet access to the web server (or any other service you would like to expose) behind the VM-series firewall. Note that this setup is intended for multiple demonstrations and is not advisable as a production design due to security concerns.
High Level Network Diagram
Refer to Figure 1. Each virtual machine (including the VM-series) has its own virtual network (vnet). One VM is configured as the Active Directory Domain Services and Certificate Services resource, while another VM is configured to function as the web hosting resource.
VMs Deployment
For general VM deployment, refer to Azure: Creating a VM. In this demo, a Windows Server 2019 VM is used. Steps to enable Active Directory Domain Services, Active Directory Certificate Services, and Web Server (IIS) can be found on the worldwide web (meaning, plenty of resources are available when you Google the keywords), so they will not be repeated here.
For VM-series deployment, refer to Azure: Deploying Palo Alto Networks VM-series Part-1 and Part-2. These two articles set up the VM-series to allow outbound connections to the internet.
To facilitate communication between the virtual networks, they need to be peered. See Azure: Creating a vnet Peering.
This demo is set up for multiple demonstrations and does not adhere to best practices for a production environment. For this demo, all the vnets are peered to each other. The DC vnet peering to the web server vnet enables a separate demo (not covered in this article) where the Web Server VM becomes a domain-joined VM.
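The peering can also be scripted. Below is a minimal sketch using the Azure Python SDK (azure-identity and azure-mgmt-network); the subscription ID, resource group, and vnet names are hypothetical placeholders for this demo, not values taken from the referenced articles.

```
# Sketch: peering two vnets with the Azure Python SDK.
# Resource group and vnet names below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

def peer(rg, local_vnet, remote_vnet_id, peering_name):
    """Create a one-way peering; repeat in the opposite direction for full connectivity."""
    return client.virtual_network_peerings.begin_create_or_update(
        rg,
        local_vnet,
        peering_name,
        {
            "remote_virtual_network": {"id": remote_vnet_id},
            "allow_virtual_network_access": True,
            # Allow traffic forwarded by the VM-series to be accepted on the peer.
            "allow_forwarded_traffic": True,
        },
    ).result()

# Example: peer the web server vnet and the firewall vnet in both directions.
fw_vnet_id = ("/subscriptions/" + subscription_id +
              "/resourceGroups/rg-demo/providers/Microsoft.Network/virtualNetworks/vnet-fw")
web_vnet_id = ("/subscriptions/" + subscription_id +
               "/resourceGroups/rg-demo/providers/Microsoft.Network/virtualNetworks/vnet-web")
peer("rg-demo", "vnet-web", fw_vnet_id, "web-to-fw")
peer("rg-demo", "vnet-fw", web_vnet_id, "fw-to-web")
```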
Azure Route Table
To ensure that all VM traffic passes through the VM-series firewall, each desired vnet needs a route table directing traffic through the VM-series. Instructions for creating a route table can be found in Azure: Creating Route Table. For this demo only, in addition to the web server vnet, we are also directing the DC vnet traffic through the VM-series to reach the internet. Therefore, the default route's next hop is the VM-series inside (trust) vNIC IP address, 10.2.2.4 (see Figure 2).
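As a sketch of what that route table looks like in code, the snippet below uses the Azure Python SDK to create a route table whose default route points at the trust IP 10.2.2.4 and then associates it with a subnet. The resource group, region, vnet, and subnet names are assumptions for illustration.

```
# Sketch: route table sending all traffic to the VM-series trust interface (10.2.2.4),
# then associating it with a subnet. Names are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, location = "rg-demo", "eastus"

# Default route (0.0.0.0/0) with next hop type VirtualAppliance pointing at the trust IP.
route_table = client.route_tables.begin_create_or_update(
    rg,
    "rt-via-vmseries",
    {
        "location": location,
        "routes": [{
            "name": "default-via-fw",
            "address_prefix": "0.0.0.0/0",
            "next_hop_type": "VirtualAppliance",
            "next_hop_ip_address": "10.2.2.4",
        }],
    },
).result()

# Associate the route table with the web server subnet by re-submitting the subnet
# with the route table reference added.
subnet = client.subnets.get(rg, "vnet-web", "subnet-web")
subnet.route_table = route_table
client.subnets.begin_create_or_update(rg, "vnet-web", "subnet-web", subnet).result()
```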
Detailed Network Diagram
Figure 3 provides the final illustration of this setup. For this demo, the VM-series has two public IP addresses attached to the same untrust vNIC. Additionally, the untrust vNIC is configured with two private IP addresses, one corresponding to each public IP address. This demo is also set up to demonstrate GlobalProtect configuration; hence the need for two public IP addresses, one for GlobalProtect and the other for the web server. The GlobalProtect configuration is not discussed here.
Attaching Public IP Address to vNIC
For attaching a public IP address to a vNIC, see Azure: Attaching Public IP Address to vNIC. Figure 4 shows the result of attaching an additional public IP address to the same vNIC.
For this demo, GlobalProtect will use the primary IP address on this vNIC, and the web server will use the secondary IP address.
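If you prefer to script the attachment, the sketch below adds a secondary IP configuration (static private IP 10.2.1.5 plus the second public IP) to the untrust vNIC with the Azure Python SDK. The NIC and public IP resource names are assumptions made for illustration.

```
# Sketch: adding a secondary ipconfig (private IP 10.2.1.5 + second public IP) to the
# untrust vNIC. NIC and public IP resource names are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    NetworkInterfaceIPConfiguration, PublicIPAddress, Subnet,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg = "rg-demo"

nic = client.network_interfaces.get(rg, "vmseries-untrust-nic")
pip2 = client.public_ip_addresses.get(rg, "vmseries-untrust-pip2")
untrust_subnet_id = nic.ip_configurations[0].subnet.id

# Append a secondary IP configuration; the existing primary ipconfig is left untouched.
nic.ip_configurations.append(NetworkInterfaceIPConfiguration(
    name="ipconfig2",
    primary=False,
    subnet=Subnet(id=untrust_subnet_id),
    private_ip_allocation_method="Static",
    private_ip_address="10.2.1.5",
    public_ip_address=PublicIPAddress(id=pip2.id),
))
client.network_interfaces.begin_create_or_update(rg, nic.name, nic).result()
```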
Interface Configuration on the VM-series:
Figure 5 shows the untrust vNIC (ethernet1/1) configuration on the VM-series. The secondary private IP address occupies the second position in the interface's IP address list.
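The same interface configuration could be pushed with the pan-os-python SDK, as sketched below. The management hostname, credentials, zone name, and the /24 masks are assumptions; only the two untrust private IPs and their order come from the setup above.

```
# Sketch: configuring ethernet1/1 with the primary and secondary untrust private IPs
# using pan-os-python. Hostname, credentials, zone name, and masks are assumptions.
from panos.firewall import Firewall
from panos.network import EthernetInterface, Zone

fw = Firewall("vmseries-mgmt.example.com", api_username="admin", api_password="***")

# Static layer-3 interface carrying both untrust private IPs; the secondary
# address (10.2.1.5) sits in the second position, as in Figure 5.
eth1_1 = EthernetInterface(
    name="ethernet1/1",
    mode="layer3",
    ip=("10.2.1.4/24", "10.2.1.5/24"),
)
fw.add(eth1_1)
eth1_1.create()

# Place the interface in the untrust zone (virtual router assignment omitted for brevity).
untrust = Zone(name="untrust", mode="layer3", interface=["ethernet1/1"])
fw.add(untrust)
untrust.create()

fw.commit(sync=True)
```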
Destination NAT on the VM-series:
Traffic directed to the secondary public IP address of the untrust vNIC is translated by Azure to the secondary untrust private IP address. Likewise, traffic directed to the primary public IP address is translated by Azure to the primary untrust private IP address.
Figure 6 shows the destination NAT. Here, the rules translate the untrust private IP address (10.2.1.5) to the IP addresses situated behind the VM-series. Rule 1 takes any HTTP (port 80) traffic and translates its destination to 10.150.1.4, the host running the web server. Similarly, rule 2 performs the same translation for HTTPS (port 443) traffic. Rule 3, however, translates all remaining traffic (because NAT rules are evaluated top down) destined to the secondary public IP address to 10.250.1.4, the host running the domain controller services.
These NAT rules are set up for demonstration purposes only. Do not do this in production.
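For illustration only, here is how the three NAT rules from Figure 6 might look if defined with the pan-os-python SDK. The zone names and firewall connection details are assumptions; the addresses and predefined service objects mirror Figure 6.

```
# Sketch: the three destination NAT rules from Figure 6, expressed with pan-os-python.
# Zone names and the firewall connection details are assumptions.
from panos.firewall import Firewall
from panos.policies import Rulebase, NatRule

fw = Firewall("vmseries-mgmt.example.com", api_username="admin", api_password="***")
rulebase = fw.add(Rulebase())

# Rule 1: HTTP to the secondary untrust private IP -> web server.
rulebase.add(NatRule(
    name="dnat-http-web",
    fromzone=["untrust"],
    tozone=["untrust"],          # pre-NAT destination zone
    service="service-http",
    destination=["10.2.1.5"],
    destination_translated_address="10.150.1.4",
))

# Rule 2: HTTPS to the secondary untrust private IP -> web server.
rulebase.add(NatRule(
    name="dnat-https-web",
    fromzone=["untrust"],
    tozone=["untrust"],
    service="service-https",
    destination=["10.2.1.5"],
    destination_translated_address="10.150.1.4",
))

# Rule 3: everything else to the secondary untrust private IP -> domain controller.
rulebase.add(NatRule(
    name="dnat-any-dc",
    fromzone=["untrust"],
    tozone=["untrust"],
    service="any",
    destination=["10.2.1.5"],
    destination_translated_address="10.250.1.4",
))

for rule in rulebase.children:
    rule.create()
fw.commit(sync=True)
```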
Security Rule
For demo purposes, the security rule(s) are kept pretty open: for example, Outside to Inside for all networks to all destinations on any port. Again, you would not do this in a production environment.
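A sketch of that wide-open demo rule in pan-os-python form is shown below; the zone names are assumptions, and, again, this is not a rule you would deploy in production.

```
# Sketch: the intentionally open demo security rule (outside to inside, any/any).
# Zone names are assumptions; do not use a rule like this in production.
from panos.firewall import Firewall
from panos.policies import Rulebase, SecurityRule

fw = Firewall("vmseries-mgmt.example.com", api_username="admin", api_password="***")
rulebase = fw.add(Rulebase())

allow_inbound = SecurityRule(
    name="demo-outside-to-inside",
    fromzone=["untrust"],
    tozone=["trust"],
    source=["any"],
    destination=["any"],
    application=["any"],
    service=["any"],      # any port, matching the deliberately open demo posture
    action="allow",
)
rulebase.add(allow_inbound)
allow_inbound.create()
fw.commit(sync=True)
```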
Validation
Figure 7 shows the outcome of an nmap scan conducted against the secondary public IP address. The scan revealed information from both the web service hosted at 10.150.1.4 and the domain controller services hosted at 10.250.1.4. Notably, the scan results did not differentiate between the open ports belonging to each service, which shows that the NAT rules work. Note that the web server was not configured to use TLS (port 443).
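If nmap is not handy, a rough equivalent of this check can be scripted with Python's standard socket module, as sketched below. The public IP is a placeholder and the port list is only a small sample of what the scan covered.

```
# Sketch: a quick TCP reachability check against the secondary public IP, a lightweight
# stand-in for the nmap scan in Figure 7. The public IP below is a placeholder.
import socket

PUBLIC_IP = "203.0.113.10"          # placeholder for the secondary public IP
PORTS = {80: "HTTP (web server)", 443: "HTTPS (web server)",
         389: "LDAP (domain controller)", 88: "Kerberos (domain controller)"}

for port, label in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        state = "open" if s.connect_ex((PUBLIC_IP, port)) == 0 else "closed/filtered"
        print(f"{PUBLIC_IP}:{port:<5} {state:<15} {label}")
```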
Cybersecurity Note
As demonstrated by the nmap scan results, open ports are automatically exposed when services are enabled. Implementing security rules at the network firewall level to control which hosts and ports are accessible from the internet serves as one layer of defense. A defense-in-depth strategy, applied to this demo, would require extending firewalling measures to the host level, followed by disabling unnecessary services on each host to mitigate the risk of exposing unused ports. This comprehensive approach ensures a robust defense against potential security threats by reducing the attack surface as much as possible.