VMware NSX supports two types of load balancers: the one-armed load balancer and the inline load balancer. You can add both load balancer types to the design canvas of the converged blueprint designer in vRealize Automation and include a load balancer in your service design. In this article I will give you a high-level overview of these load balancer types (there are numerous articles explaining how an NSX load balancer works) and then dive deeper into how to configure both load balancer options in vRealize Automation. Especially the configuration of the inline load balancer can be a little confusing.
But first, let’s have a look at the architectural differences between the one-armed and inline load balancer.
One-armed load balancer
A one-armed load balancer (also called a proxy load balancer) is connected to the network with only one network interface. Network address translation is performed on the traffic that flows through the load balancer. The traffic flow and configuration are detailed in the following diagram:
The load balancer is placed in the same network as the VMs, although NSX itself does not strictly require this: traffic can also be routed from the load balancer network to the VMs. In an environment with NSX and vRA, however, the one-armed load balancer has to be in the same network as the pool members.
In the end, a one-armed load balancer is simply an Edge Services Gateway (ESG) with the load balancing service enabled on it.
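If you want to peek under the hood after deployment, you can read the load balancer configuration of that ESG through the NSX Manager REST API. Below is a minimal Python sketch, assuming the NSX-v endpoint /api/4.0/edges/{edgeId}/loadbalancer/config; the hostname, credentials and edge ID are placeholders you will have to replace with your own values.

```python
# Minimal sketch: read the load balancer configuration of an ESG via the
# NSX-v REST API. Hostname, credentials and edge ID are placeholders, and the
# endpoint path is an assumption based on the NSX-v API documentation.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsx-manager.example.local"   # hypothetical NSX Manager address
EDGE_ID = "edge-42"                          # ID of the ESG deployed by vRA
AUTH = ("admin", "password")                 # placeholder credentials

url = f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config"
response = requests.get(url, auth=AUTH, verify=False)  # lab only: no cert check
response.raise_for_status()

config = ET.fromstring(response.text)
print("Load balancer enabled:", config.findtext("enabled"))
for vs in config.findall("virtualServer"):
    print("Virtual server:", vs.findtext("name"), "VIP:", vs.findtext("ipAddress"))
for pool in config.findall("pool"):
    members = [m.findtext("ipAddress") for m in pool.findall("member")]
    print("Pool:", pool.findtext("name"), "members:", members)
```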
Inline load balancer
An inline load balancer (also called a transparent load balancer) is connected to the network with two network interfaces. In this scenario the load balancer has an external (user-facing) network and an internal network that is not directly accessible from the external network. An inline load balancer acts as a NAT gateway for the VMs on the internal network. The traffic flow is shown in the following diagram:
An inline load balancer is also an ESG, but with two network interfaces configured.
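Because both types boil down to an ESG, the number of connected interfaces is an easy way to tell them apart after deployment. Here is a similar minimal sketch, assuming the NSX-v /api/4.0/edges/{edgeId}/vnics endpoint and the same placeholder hostname, credentials and edge ID as above:

```python
# Minimal sketch: list the vNICs of an ESG to see whether it is a one-armed
# (one connected interface) or inline (two connected interfaces) load balancer.
# The endpoint path is an assumption based on the NSX-v API; names are placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsx-manager.example.local"
EDGE_ID = "edge-42"
AUTH = ("admin", "password")

url = f"https://{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/vnics"
response = requests.get(url, auth=AUTH, verify=False)  # lab only: no cert check
response.raise_for_status()

vnics = ET.fromstring(response.text)
connected = [v for v in vnics.findall("vnic") if v.findtext("isConnected") == "true"]
for vnic in connected:
    print(vnic.findtext("name"), vnic.findtext("type"))  # type: internal or uplink
print(len(connected), "connected interface(s):",
      "inline" if len(connected) >= 2 else "one-armed")
```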
More details on the different NSX load balancing types are published here.
Load balancers in the design canvas
Let's first evaluate some rules that apply when using load balancers in the design canvas (source here):
- If the pool network profile is NAT, the VIP network profile can be part of the NAT network profile.
- If the pool network profile is routed, the VIP network profile can only be on the same routed network.
- If the pool network profile is external (existing), the VIP network profile can only be the same external network profile (existing).
The pool network is the network where the VMs (the load balancing pool members) live; the VIP network is where the virtual IP address of the load balancer lives. The virtual IP address is the (user-)accessible IP address for the application/service.
A one-armed load balancer requires that the pool and VIP network are the same, while the inline load balancer requires different networks for the pool and VIP. Thus, option one can be used to configure an inline load balancer, while options two and three describe a one-armed load balancer.
The prerequisites for the two different load balancer types from a network profile perspective are:
- A one-armed load balancer can be configured on a NAT, routed and/or external network. Both the pool and VIP network must be connected to the same network;
- An inline load balancer requires a NAT network profile for the pool network and an external (existing) network profile for the VIP network (see the short sketch below).
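To make the mapping between network profiles and load balancer types explicit, here is a tiny illustration in Python. It only encodes the rules above; it is not a vRA API.

```python
# Illustration only: which load balancer type results from a given combination
# of pool and VIP network profiles, following the prerequisites above.
def load_balancer_type(pool_profile: str, vip_profile: str) -> str:
    if pool_profile == "nat" and vip_profile == "external":
        return "inline (ESG as NAT router with load balancing)"
    if pool_profile == vip_profile and pool_profile in ("nat", "routed", "external"):
        return "one-armed (ESG on the same network as the pool members)"
    return "unsupported combination"

print(load_balancer_type("nat", "external"))    # inline
print(load_balancer_type("routed", "routed"))   # one-armed
print(load_balancer_type("external", "nat"))    # unsupported
```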
So in the vRA design canvas this would look like:
One-armed load balancer
The one-armed load balancer VIP and pool network are connected to the same existing, routed or NAT network. This configuration will deploy one ESG to take care of the load balancing. The ESG is connected to the same network as the pool members.
Inline load balancer
The inline load balancer requires an existing network for the VIP network and a NAT network for the pool network. This configuration will deploy one ESG that is configured as a NAT router with load balancing functionality. The provided VIP address will load balance the traffic to the pool members.
Note that each load balancing configuration requires a static IP configuration for the members in the pool. vRA can take care of the IP address management, or you can configure an IP address for the pool members yourself. My advice is to let vRA take care of this by setting the “assignment type” to Static IP and leaving the “address” field empty.
Configuring multiple load balancers in one canvas
If you're planning to use more than one load balancer in the vRA design canvas, and these load balancers use the same network profiles, vRA/NSX will deploy only one ESG and configure both load balancer configurations on that single ESG.
The following canvas, which contains the well-known WordPress-with-external-database setup, includes two WordPress instances: one for production and one for test. Both instances are load balanced. Because these WordPress instances use the same network profiles, only one ESG is deployed:
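If you want to verify this from the NSX side, you can list the edges and check how many virtual servers the shared ESG carries. Again a minimal sketch, assuming the NSX-v /api/4.0/edges and .../loadbalancer/config endpoints, with placeholder hostname, credentials and a hypothetical name filter:

```python
# Minimal sketch: list the NSX edges and check how many virtual servers the
# shared ESG carries. Endpoint paths are assumptions based on the NSX-v API;
# hostname, credentials and the name filter are placeholders.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "nsx-manager.example.local"
AUTH = ("admin", "password")

# List all edges and filter on the name vRA gave the deployed ESG.
edges = ET.fromstring(requests.get(
    f"https://{NSX_MANAGER}/api/4.0/edges", auth=AUTH, verify=False).text)
for summary in edges.iter("edgeSummary"):
    edge_id, name = summary.findtext("id"), summary.findtext("name")
    if "wordpress" not in (name or "").lower():   # placeholder name filter
        continue
    # The single shared ESG should carry the virtual servers of both instances.
    lb = ET.fromstring(requests.get(
        f"https://{NSX_MANAGER}/api/4.0/edges/{edge_id}/loadbalancer/config",
        auth=AUTH, verify=False).text)
    vips = [vs.findtext("name") for vs in lb.findall("virtualServer")]
    print(f"{name} ({edge_id}) hosts {len(vips)} virtual server(s): {vips}")
```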
Want to learn more?
I hope this was helpful for you. If you want to learn more, you're invited to join my colleague Ronald de Jong and me on March 20 at the Dutch NLVMUG Usercon in Den Bosch, The Netherlands. Ronald and I will talk about how you can build a private cloud using vRealize Automation for automation, orchestration and self-service, in combination with NSX for network virtualization. Our session is scheduled from 9:55-10:40, right after the keynote of Pat Gelsinger, VMware's CEO. The session is titled "vRA + NSX…and it all comes together" and will be presented in Dutch. Hope to see you there!