A VRF, or Virtual Routing and Forwarding, is a technology that allows multiple instances of a routing table to coexist within the same router. This enables network segmentation and ensures that data paths are isolated, thus providing enhanced security and flexibility in managing network traffic.
Every VRF can be seen as an “isolated” routing domain, which means the same IP ranges can overlap across different VRFs.
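To make the concept concrete, here is a minimal sketch of two VRFs on a Linux router, built with iproute2 and driven from Python. Interface names, table IDs and prefixes are purely illustrative assumptions, not part of any of the Azure scenarios below.

```python
import subprocess

def run(cmd: str) -> None:
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(cmd.split(), check=True)

# Two VRF devices, each bound to its own kernel routing table.
run("ip link add vrf-red type vrf table 100")
run("ip link add vrf-blue type vrf table 200")
run("ip link set vrf-red up")
run("ip link set vrf-blue up")

# Enslave one interface to each VRF (interface names are hypothetical).
run("ip link set eth1 master vrf-red")
run("ip link set eth2 master vrf-blue")
run("ip addr add 192.168.1.10/24 dev eth1")
run("ip addr add 192.168.2.10/24 dev eth2")

# The same prefix can live in both tables with different next hops:
# this is the IP overlapping that isolated routing domains allow.
run("ip route add 10.10.0.0/16 via 192.168.1.1 table 100")
run("ip route add 10.10.0.0/16 via 192.168.2.1 table 200")
```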
In the contemporary landscape of digital transformation, the integration and management of multiple distinct VRFs within Azure infrastructures have become a common request from network administrators.
This article delves into the methodologies and best practices for effectively managing access to multiple VRFs from Azure.
SCENARIO 1: AZURE VMs NEEDING TO ACCESS A SINGLE SPECIFIC VRF
The challenge:
While it is feasible to connect multiple ExpressRoute circuits (and thus multiple VRFs) to a single Hub, once traffic reaches the ExpressRoute Gateway, the concept of multi-VRF is nullified; everything merges into the single large VRF that constitutes our Hub and Spoke (H&S) environment.
If Azure VMs need access to VLANs associated with a specific VRF, a straightforward approach is to create separate, independent H&S "islands":
In the proposed diagram, each ExpressRoute circuit represents an isolated VRF. Each circuit connects to a Hub dedicated to that particular VRF. This setup ensures complete isolation and supports IP overlapping of Azure workloads, though Azure VMs will only access VLANs within their designated VRF.
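As a rough sketch of this per-VRF wiring, each Hub's ExpressRoute Gateway can be connected to its dedicated circuit with the Azure SDK for Python. All resource names and IDs below are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One Hub (resource group + ExpressRoute gateway) per VRF, each with its own circuit.
hubs = {
    "rg-hub-vrf1": ("ergw-vrf1", "/subscriptions/.../expressRouteCircuits/er-vrf1"),
    "rg-hub-vrf2": ("ergw-vrf2", "/subscriptions/.../expressRouteCircuits/er-vrf2"),
}

for rg, (gw_name, circuit_id) in hubs.items():
    gw = client.virtual_network_gateways.get(rg, gw_name)
    client.virtual_network_gateway_connections.begin_create_or_update(
        rg,
        f"conn-{gw_name}",
        {
            "location": gw.location,
            "connection_type": "ExpressRoute",        # circuit-to-gateway connection
            "virtual_network_gateway1": {"id": gw.id},
            "peer": {"id": circuit_id},               # the ExpressRoute circuit for this VRF
        },
    ).result()
```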
SCENARIO 2: AZURE VMs NEEDING ACCESS TO MULTIPLE VRFs, NO IP OVERLAP IN AZURE
Key elements of this setup include:
- One Hub per VRF
- One ExpressRoute circuit per VRF
- Spoke VNETs connected to more than one Hub (Note: Gateway transit must be disabled at VNET peering level)
- VMs in spoke VNETs can have multiple NICs, each linked to different subnets, with each subnet associated with a different Route Table (UDR) redirecting traffic from that NIC to the Firewall/Router of the pertinent Hub/VRF (a configuration sketch follows this list).
- In the example provided, each VM must access two different VRFs.
- A Firewall/Router must be deployed in each Hub.
- An Azure Route Server must be deployed in each Hub.
- The Firewall/Router in each Hub will propagate relevant Azure VLAN routes to the local Route Server for on-premises propagation, since Gateway transit is disabled on VNET peering.
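A minimal sketch of the UDR and peering settings listed above, using the Azure SDK for Python; the NVA IP, address prefixes and resource names are hypothetical, and the association of the route table with the VM subnet is omitted for brevity.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, LOCATION = "rg-spoke1", "westeurope"              # hypothetical names

# Route table for the NIC/subnet that must reach VRF1: send the VRF1
# on-premises ranges to the Firewall/Router deployed in the VRF1 Hub.
client.route_tables.begin_create_or_update(
    RG, "rt-vrf1",
    {
        "location": LOCATION,
        "routes": [{
            "name": "to-vrf1-onprem",
            "address_prefix": "10.100.0.0/16",        # hypothetical VRF1 range
            "next_hop_type": "VirtualAppliance",
            "next_hop_ip_address": "10.1.0.4",        # hypothetical Hub1 NVA IP
        }],
    },
).result()

# Peer the spoke to the VRF1 Hub without using the Hub's ExpressRoute gateway,
# as required when Gateway transit is disabled at the peering level.
client.virtual_network_peerings.begin_create_or_update(
    RG, "vnet-spoke1", "peer-to-hub-vrf1",
    {
        "remote_virtual_network": {"id": "/subscriptions/.../virtualNetworks/vnet-hub-vrf1"},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": False,
        "use_remote_gateways": False,
    },
).result()
```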
Azure VMs must implement source-based routing decisions at the guest OS level to forward traffic through VRF1 or VRF2.
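On a Linux guest, that source-based selection is typically done with policy routing. Below is an illustrative sketch, driven from Python for consistency with the other examples; addresses, gateways, devices and table numbers are hypothetical.

```python
import subprocess

def run(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)

# eth0 (10.1.1.0/24) is the leg towards VRF1, eth1 (10.1.2.0/24) towards VRF2.
# Each NIC gets its own routing table with a default route towards its Hub NVA.
run("ip route add default via 10.1.1.1 dev eth0 table 101")   # VRF1 path (UDR -> Hub1 NVA)
run("ip route add default via 10.1.2.1 dev eth1 table 102")   # VRF2 path (UDR -> Hub2 NVA)

# Policy rules: traffic sourced from a NIC's address uses that NIC's table.
run("ip rule add from 10.1.1.4/32 table 101")
run("ip rule add from 10.1.2.4/32 table 102")
```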
This setup cannot accommodate spoke VNETs with overlapping address ranges connected to the same Hubs.
SCENARIO 3: AZURE VMs NEEDING ACCESS TO MULTIPLE VRFs, NO OVERLAP IN AZURE, SINGLE HUB
With the support of third-party appliances (supporting VRF split) deployed in Azure, it's possible to consolidate the previous option into a "single HUB" solution.
In our previous scenario, managing 10 VRFs would entail 10 Hubs, each with its own ExpressRoute Gateway, Network Virtual Appliance (NVA), and Route Server, significantly increasing implementation complexity and costs. Consolidating this into a single-Hub setup is achievable, with some added complexity to take into consideration.
Key differences here include:
- Utilizing a single ExpressRoute circuit as an underlay for trunking multiple VRFs.
- Leveraging third-party NVAs/routers/firewalls with multi-VRF capabilities to manage the overlay needed to support multi-VRF connectivity.
The central Hub VNET hosts a router/firewall supporting VRF split, linking each logical VRF to two NICs (internal and external). A twin router on-premises terminates the VRFs on that side. The internal NICs build VXLAN/IPSEC tunnels with their on-premises counterparts. VMs in Azure spokes connect to the main Hub via VNET peering with gateway transit disabled. Each VM NIC sits on a different subnet/VLAN and, through the configured Route Tables, forwards traffic to the external NIC of the central router/firewall that corresponds to the intended VRF.
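The overlay configuration is vendor-specific; as a neutral illustration, the sketch below uses Linux iproute2 (driven from Python) to terminate one VXLAN tunnel per VRF over the single ExpressRoute underlay and bind it to the corresponding VRF. All IDs, addresses and interface names are hypothetical.

```python
import subprocess

def run(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)

# One VRF (and kernel routing table) per customer routing domain on the central NVA.
run("ip link add vrf1 type vrf table 101")
run("ip link set vrf1 up")

# One VXLAN tunnel per VRF, carried over the single ExpressRoute underlay
# (local/remote are the underlay addresses of the Azure and on-premises routers).
run("ip link add vxlan101 type vxlan id 101 local 10.0.0.4 remote 172.16.0.1 dev eth0 dstport 4789")
run("ip link set vxlan101 master vrf1")   # bind the tunnel to its VRF
run("ip link set vxlan101 up")
run("ip addr add 192.168.101.1/30 dev vxlan101")
```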
This configuration, like the previous one, does not support IP overlap for Azure-hosted spoke VNETs. Azure VMs must implement source-based routing decisions at the guest OS level to forward traffic through VRF1 or VRF2.
SCENARIO 4: AZURE VMs NEEDING ACCESS TO MULTIPLE VRFs, IP OVERLAP IN AZURE, SINGLE HUB
This scenario modifies Scenario 3 to support overlapping IP ranges within Azure.
Here:
- No VNET peering exists between the central Hub and spoke VNETs due to potential IP range overlaps.
- Connectivity is achieved by leveraging IPSEC connections between the central Router in the Hub and twin routers in each spoke VNET (let's call them "local" routers).
For example, VM1 in Spoke1 must access both VRF1 and VRF2. VM1 is configured with two NICs aligned with different subnets, each with a Route Table redirecting traffic to the relevant NIC of the local router. The local router has pairs of internal and external NICs for each VRF, creating IPSEC tunnels with the central Hub's router using public IP addresses.
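On the Azure side, each local router's tunnel endpoint simply needs a public IP on its external NIC. A hedged sketch with the Azure SDK for Python (all names are hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG, LOCATION = "rg-spoke1", "westeurope"          # hypothetical

# Static Standard public IP for the local router's IPSEC endpoint.
pip = client.public_ip_addresses.begin_create_or_update(
    RG, "pip-spoke1-router",
    {
        "location": LOCATION,
        "sku": {"name": "Standard"},
        "public_ip_allocation_method": "Static",
    },
).result()

# Attach it to the external NIC of the local router (NIC name is hypothetical).
nic = client.network_interfaces.get(RG, "nic-spoke1-router-external")
nic.ip_configurations[0].public_ip_address = pip
client.network_interfaces.begin_create_or_update(RG, "nic-spoke1-router-external", nic).result()
```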
The rest of the scenario is identical to the previous one.
BONUS SCENARIO: AZURE VMs NEEDING ACCESS TO MULTIPLE VRFs, IP OVERLAP IN AZURE, NO HUB AT ALL!!
Wouldn't it be advantageous to avoid a Hub altogether?
With some adjustments, this is possible.
We have to consider moving from ExpressRoute Private Peering to Microsoft Peering.
Microsoft peering allows for maintaining ExpressRoute-based connectivity benefits while eliminating the central Hub for scenarios with potential IP overlap in Azure.
By setting up Microsoft Peering, you can leverage Route Filters to receive a subset of Azure public IP ranges advertised through the circuit, facilitating direct IPSEC connectivity between on-premises routers handling VRFs and their counterparts in Azure spoke VNETs.
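As an illustrative sketch with the Azure SDK for Python, a Route Filter can be created and attached to the circuit's Microsoft Peering. The resource names and the BGP community value below are hypothetical and must be replaced with the ones matching your regions/services.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SubResource

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
RG = "rg-connectivity"                                 # hypothetical

# Route Filter allowing only selected BGP communities on Microsoft Peering,
# so the circuit advertises just the Azure public ranges you actually need.
route_filter = client.route_filters.begin_create_or_update(
    RG, "rf-azure-region",
    {
        "location": "westeurope",
        "rules": [{
            "name": "allow-region",
            "access": "Allow",
            "route_filter_rule_type": "Community",
            # Hypothetical value: use the BGP community of the Azure region(s)
            # hosting your spoke VNETs' public endpoints.
            "communities": ["12076:51006"],
        }],
    },
).result()

# Attach the filter to the circuit's Microsoft Peering.
peering = client.express_route_circuit_peerings.get(RG, "er-circuit", "MicrosoftPeering")
peering.route_filter = SubResource(id=route_filter.id)
client.express_route_circuit_peerings.begin_create_or_update(
    RG, "er-circuit", "MicrosoftPeering", peering
).result()
```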
The Azure VMs will access VRFs similarly to the previous scenario. The only difference is that each spoke's local router connects directly to the on-premises gateway via public IP.
EXTRA COMPLEXITY: Make it reliable!
For simplicity, this article depicts a single router for each scenario where virtual appliances manage routing (all except Scenario 1). In reality, you would need clusters of appliances for high availability (HA), either Active/Passive or Active/Active depending on the use case.
Routing decisions would rely on BGP attributes, and special attention is needed for traffic asymmetry management when using stateful appliances like firewalls.
Conclusion
In conclusion, managing access to multiple VRFs from Azure VNETs requires careful planning and implementation of various scenarios to ensure optimal performance, security, and flexibility. The methodologies discussed in this document provide comprehensive solutions for different use cases, including single VRF access, multiple VRFs without IP overlap, multiple VRFs with a single Hub, and multiple VRFs with IP overlap.
Each scenario presents unique challenges and requires specific configurations to achieve the desired outcomes. By leveraging Azure's capabilities, such as ExpressRoute circuits, Network Virtual Appliances (NVAs), and IPSEC/VXLAN connections, organizations can effectively manage their network traffic and maintain isolation between different VRFs, while still providing Azure VMs with access to multiple VRFs when needed.
The key to successful implementation lies in understanding the specific requirements of each scenario and applying the appropriate solutions to meet those needs.