Today I would like to present a common issue. Let’s assume the following scenario:
- The customer has an on-premises, load-balanced application,
- The application consists of several modules,
- Every node in the load-balancing pool hosts a set of modules,
- Modules operate independently: they do not rely on any single host for the other modules they need, but instead reach those modules through the load balancer (see the sketch after this list),
- Some modules use services provided by other modules,
- Communication is TCP-based, and the modules use more than just HTTPS,
- The customer wants to lift and shift the application’s infrastructure to Azure.
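To illustrate the communication pattern, here is a minimal sketch in plain Python (the frontend address, port, and payload are hypothetical): a module reaches a service hosted by another module by opening a raw TCP connection to the load balancer’s frontend rather than to a specific node.

```python
import socket

LB_FRONTEND = "10.0.1.100"  # hypothetical load balancer frontend address
SERVICE_PORT = 9000         # hypothetical non-HTTPS, TCP-based module port

# A module never addresses a specific node; it always talks to the load
# balancer frontend, which forwards the flow to one node in the pool.
with socket.create_connection((LB_FRONTEND, SERVICE_PORT), timeout=5) as conn:
    conn.sendall(b"PING\n")          # placeholder for the module's own protocol
    reply = conn.recv(1024)
    print("reply from some node in the pool:", reply)
```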
A good example of this scenario is outlined in the following CyberArk KB article.
Microsoft describes this case in the following Azure Troubleshooting guide.
Microsoft references this issue by stating the following:
If an internal load balancer is configured inside a virtual network, and one of the participant backend VMs is trying to access the internal load balancer frontend, failures can occur when the flow is mapped to the originating VM. This scenario isn’t supported.
And of course, this is true. Let’s evaluate the root cause of this issue.
As stated before, our customer wants to perform a lift and shift of their CyberArk infrastructure. In other words, they want to use as many built-in cloud services as possible. An Azure Load Balancer is the proposed solution.
An example of the architectural design by CyberArk is shown below:
First, let me present common Azure Load Balancer features and limitations:
- An Azure Load Balancer works only at Layer 4 of the ISO/OSI model
- An Azure Load Balancer distributes the workload using the standard method: source and destination IP addresses and TCP ports
- An internal Azure Load Balancer uses a virtual IP (VIP) from the customer’s vNet, which keeps the traffic internal
- An Azure VM can be part of only one internal and one external load balancer
- Only the primary network interface (vNIC) of a given Azure VM can be part of the load balancer backend pool, and only this interface can have the default gateway configured
- An Azure Load Balancer does not NAT outbound traffic. Simply put, it has no outbound IP address, only an inbound one. The Azure Load Balancer changes only the destination IP, from its own VIP to the virtual machine’s primary IP, when processing inbound NAT or load-balancing rules. From an Azure VM’s perspective, the ALB is completely transparent (the sketch after this list illustrates that transparency).
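To make that transparency concrete, here is a minimal sketch using plain Python sockets (not Azure-specific; the port number is an arbitrary example): a backend VM that accepts a load-balanced connection sees the original client’s source IP, not the load balancer’s, because only the destination IP was rewritten.

```python
import socket

# Minimal TCP listener a backend VM in the pool might run.
# Because the Azure Load Balancer rewrites only the destination IP
# (VIP -> this VM's primary IP), the peer address reported here is the
# original client's IP, not an address owned by the load balancer.
LISTEN_PORT = 8080  # arbitrary example port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    conn, peer = srv.accept()
    with conn:
        print(f"Connection from {peer[0]}:{peer[1]}")  # the real client; the ALB is invisible
```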
The last limitation is the root cause of our problem. Microsoft states (let’s repeat it, as it is crucial):
failures can occur when the flow is mapped to the originating VM.
This is because, when the flow is mapped back to the originating VM, the load balancer rewrites the destination IP from its VIP to that same VM’s primary IP, so after the IP packet leaves the Azure Load Balancer it has the same source and destination IP address. According to RFC 791, section 3.2, this kind of situation should not happen, and the Azure backbone drops such IP packets.
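Here is a conceptual sketch of the hairpin in Python. This is a simplified model for illustration only, not Azure’s actual hashing algorithm, and all IP addresses and ports are hypothetical: when the flow hash happens to select the originating VM as the backend, the destination rewrite produces a packet whose source and destination IPs are identical.

```python
import hashlib

VIP = "10.0.1.100"                    # internal load balancer frontend (hypothetical)
BACKENDS = ["10.0.1.4", "10.0.1.5"]   # backend pool: the VMs' primary IPs (hypothetical)

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Toy stand-in for a flow-hash-based backend selection."""
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    return BACKENDS[int(hashlib.sha256(key).hexdigest(), 16) % len(BACKENDS)]

def load_balance(src_ip, src_port, dst_port):
    backend = pick_backend(src_ip, src_port, VIP, dst_port)
    # Only the destination IP is rewritten (VIP -> backend); the source stays unchanged.
    return (src_ip, src_port, backend, dst_port)

# VM 10.0.1.4 calls another module through the VIP; some flows may map back to itself:
for src_port in (50000, 50001, 50002, 50003):
    src, sport, dst, dport = load_balance("10.0.1.4", src_port, 9000)
    note = "HAIRPIN: source == destination, packet is dropped" if src == dst else "ok"
    print(f"{src}:{sport} -> {dst}:{dport}  {note}")
```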
So, knowing that, we can try to work around it. From an architectural point of view, we need to create conditions in which the source IP differs from the destination IP. This means that our VM must have an additional IP address.
Additional IP addresses on an Azure vNIC are supported, but only the primary IP of a given interface can be used as the source of an IP packet. This means that we cannot leverage this feature in our scenario.
Another possibility is to add a second vNIC to the Azure VM. This is also supported: an Azure VM can have one primary vNIC and up to eight secondary vNICs (depending on the VM size). A secondary vNIC can be attached to any subnet within the primary vNIC’s vNet, so an Azure VM cannot span more than one vNet. That is enough for our purposes: we create an additional subnet and attach a secondary vNIC to it.
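A minimal sketch of that step with the Azure SDK for Python, assuming the azure-identity and azure-mgmt-network packages; the subscription ID, resource names, location, and address prefix are all hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # hypothetical
RESOURCE_GROUP = "rg-cyberark"          # hypothetical
VNET_NAME = "vnet-cyberark"             # hypothetical
LOCATION = "westeurope"                 # hypothetical

network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# 1. Create an additional subnet in the existing vNet.
secondary_subnet = network_client.subnets.begin_create_or_update(
    RESOURCE_GROUP, VNET_NAME, "snet-secondary",
    {"address_prefix": "10.0.2.0/24"},
).result()

# 2. Create a secondary NIC in that subnet. Attaching it to an existing VM is
#    done through the compute API and typically requires the VM to be deallocated.
secondary_nic = network_client.network_interfaces.begin_create_or_update(
    RESOURCE_GROUP, "nic-vault01-secondary",
    {
        "location": LOCATION,
        "ip_configurations": [{
            "name": "ipconfig1",
            "subnet": {"id": secondary_subnet.id},
            "private_ip_allocation_method": "Dynamic",
        }],
    },
).result()

print(secondary_nic.ip_configurations[0].private_ip_address)
```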
A secondary vNIC has one more constraint: it cannot have a default route. In fact, by default it has no routes at all. A secondary vNIC can receive traffic, but it is hard to send anything out through it. So this naive approach will not work: the source IP will still be taken from the primary vNIC’s IP address.
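A quick way to see which source address the guest OS would use for a given destination is the connected-UDP-socket trick, sketched below in Python; no packet is actually sent, and the load balancer VIP shown is hypothetical.

```python
import socket

ILB_VIP = "10.0.1.100"  # hypothetical internal load balancer frontend IP

# Connecting a UDP socket does not send any packet; it only asks the kernel
# to resolve the route, which also fixes the source address it would use.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect((ILB_VIP, 9))   # the port number is irrelevant here
    print("source IP the OS would use:", s.getsockname()[0])

# When the VIP is reached via the default route on the primary vNIC, this
# prints the primary vNIC's address, which is exactly the hairpin source
# we need to avoid.
```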
The trick is to change the VIP of the Azure Internal Load Balancer.
This is the only way to force the IP packet out through the secondary network adapter. When the Azure Internal Load Balancer’s VIP comes from the same subnet as the secondary vNIC, we can leverage default IP behavior: on-link addresses are reached directly through the local segment (Ethernet or another medium), so the OS sends through the secondary vNIC and uses its IP as the source. Normally this is just an optimization, but in this scenario it is our only choice.
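A minimal sketch of that frontend configuration with the Azure SDK for Python, again assuming azure-identity and azure-mgmt-network; the names, IDs, and addresses are hypothetical, and rules, probes, and backend pool membership are omitted for brevity:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # hypothetical
RESOURCE_GROUP = "rg-cyberark"          # hypothetical
SECONDARY_SUBNET_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Network/virtualNetworks/vnet-cyberark/subnets/snet-secondary"
)

network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Internal load balancer whose frontend (VIP) lives in the *secondary* subnet,
# while the backend pool still contains the VMs' primary vNICs.
ilb = network_client.load_balancers.begin_create_or_update(
    RESOURCE_GROUP, "ilb-cyberark",
    {
        "location": "westeurope",                # hypothetical
        "sku": {"name": "Standard"},
        "frontend_ip_configurations": [{
            "name": "frontend-secondary-subnet",
            "subnet": {"id": SECONDARY_SUBNET_ID},
            "private_ip_allocation_method": "Static",
            "private_ip_address": "10.0.2.100",  # VIP inside the secondary subnet
        }],
        "backend_address_pools": [{"name": "backend-pool"}],
    },
).result()

print(ilb.frontend_ip_configurations[0].private_ip_address)
```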
The final architecture is shown in the following picture:
Happy Load Balancing!