In part 1 of this series I introduced the Tanzu Basic (vSphere with Kubernetes) offering. In part 2 we will deploy HAProxy, and part 3 walks through the Enable Workload Management wizard. A running HAProxy instance is required before you can deploy Tanzu Basic in combination with the vCenter Server Network option.
Before we get started, you need to set up the VLANs/portgroups you want to use for Tanzu Basic. The default setup requires a management network and a single workload network, or you can add an extra frontend network, in which case you will have a management network, a frontend network and a workload network. So you need at least two networks, or three if you also want to include the frontend network, all depending on your specific requirements:
Do you need layer 2 isolation between the Supervisor Cluster and Tanzu Kubernetes clusters?
- No: the simplest topology with one Workload Network serving all components.
- Yes: the isolated Workload Network topology with separate Primary and Workload Networks.
Do you need further layer 2 isolation between your Tanzu Kubernetes clusters?
- No: the isolated Workload Network topology with separate Primary and Workload Networks.
- Yes: the multiple Workload Networks topology with a separate Workload Network for each namespace and a dedicated Primary Workload Network.
Do you want to prevent your DevOps users and external services from directly routing to Kubernetes control plane VMs and Tanzu Kubernetes cluster nodes?
- No: two NIC HAProxy configuration.
- Yes: three NIC HAProxy configuration.
So the only reason to select a three NIC configuration is to prevent DevOps users from directly routing to the Kubernetes control plane VMs and Tanzu Kubernetes cluster nodes. The above comes from the official documentation; check out this chapter for more information about the different topologies.
In this article I’m deploying the three NIC configuration, which requires three different VLANs and DVS portgroups. The topology looks like this:
In this example I’m routing all the networks. In a real world scenario you would isolate the Workload (WL) network from the Frontend (FE) network, because you want it to be inaccessible to your DevOps users.
Create the L2 segments and the portgroups. DVS-VLAN0 is configured as a network trunk because of some nested ESXi servers I’m running; this has nothing to do with the setup discussed in this article (so network trunking is not required).
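If you want to script this step instead of clicking through the vSphere Client, a minimal pyVmomi sketch could look like the one below: it creates the three portgroups on an existing distributed switch. The vCenter address, credentials, DVS name, portgroup names and VLAN IDs are assumptions for this example, so adjust them to your environment.

```python
# Minimal pyVmomi sketch: create the management, frontend and workload
# portgroups on an existing DVS. All names, VLAN IDs and credentials below
# are placeholders (assumptions), not values from this article.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_dvs(content, name):
    """Walk the inventory and return the distributed switch with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    try:
        return next(d for d in view.view if d.name == name)
    finally:
        view.DestroyView()

def portgroup_spec(name, vlan_id):
    """Build an early-binding portgroup spec tagged with a single VLAN ID."""
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = name
    spec.type = "earlyBinding"
    spec.numPorts = 32
    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=vlan_id, inherited=False)
    spec.defaultPortConfig = port_config
    return spec

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    try:
        dvs = find_dvs(si.RetrieveContent(), "DVS")  # assumption: switch is named "DVS"
        specs = [portgroup_spec("DVS-Management", 100),
                 portgroup_spec("DVS-Frontend", 101),
                 portgroup_spec("DVS-Workload", 102)]
        task = dvs.AddDVPortgroup_Task(specs)  # one task creates all three portgroups
        print("Portgroup creation task submitted:", task.info.key)
    finally:
        Disconnect(si)
```

The same three portgroups can of course be created manually in the vSphere Client; the script only saves clicks when rebuilding a lab.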
Deploy HAProxy
Now that we have our networks available, and before we can get started with Workload Management in the vSphere Client, we need to deploy the HAProxy appliance. The HAProxy appliance is required if you want to use the vCenter Networking option. The OVA is available for download from the VMware HAProxy GitHub repository. Just follow the deployment wizard and select Frontend Network as the deployment configuration:
Also connect the portgroups to the appropriate Source Networks:
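For a repeatable deployment you can also push the OVA with ovftool instead of the wizard; the Python wrapper below is a hedged sketch of what that could look like. The deployment option ID, the source network names and the OVF property keys are assumptions based on a typical haproxy.ova build, so run `ovftool haproxy.ova` first to see the exact names your version exposes. The addresses outside the frontend segment are lab placeholders.

```python
# Hedged sketch: unattended HAProxy OVA deployment via ovftool, driven from
# Python. Deployment option ID, network names and property keys are
# ASSUMPTIONS -- inspect the OVA with "ovftool haproxy.ova" to confirm them.
import subprocess

OVA = "haproxy.ova"
TARGET = "vi://administrator%40vsphere.local@vcenter.lab.local/Datacenter/host/Cluster"

args = [
    "ovftool",
    "--acceptAllEulas",
    "--datastore=vsanDatastore",           # assumption: datastore name
    "--name=haproxy",
    "--deploymentOption=frontend",         # assumption: ID of the "Frontend Network" configuration
    # Map the OVA's source networks to the DVS portgroups created earlier.
    "--net:Management=DVS-Management",     # assumption: source network names
    "--net:Workload=DVS-Workload",
    "--net:Frontend=DVS-Frontend",
    # OVF properties (keys and formats are assumptions; only the FE segment
    # and the VIP range come from this article).
    "--prop:network.management_ip=172.16.100.10/24",
    "--prop:network.workload_ip=172.16.102.10/24",
    "--prop:network.frontend_ip=172.16.101.10/24",
    "--prop:loadbalance.service_ip_range=172.16.101.128/25",
    "--powerOn",
    OVA,
    TARGET,
]
subprocess.run(args, check=True)  # raises CalledProcessError if ovftool fails
```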
You have to specify an IP address for the HAProxy appliance in each of the three segments, and you also have to configure the IP range(s) the load balancer will use for Kubernetes Services and Control Planes.
Because I’m using an FE network, the load balancer IP ranges will be carved out of this segment (172.16.101.0/24). In this example I’ve configured 172.16.101.128/25 (172.16.101.129-172.16.101.254) so that it does not overlap with the HAProxy management IP and the gateway on the network. You can now deploy the HAProxy appliance and check if it’s up and running via https://<ha-proxy-management-ip>:5556/v2.
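The short Python sketch below double-checks that the chosen VIP range fits inside the frontend segment without touching the gateway or the appliance's own address in that segment, and then polls the Dataplane API on port 5556 to confirm the appliance is reachable. The gateway, the appliance IPs and the credentials are assumptions for this lab setup.

```python
# Sanity checks for the load balancer IP range and the running appliance.
# Gateway, appliance IPs and credentials are ASSUMPTIONS for this lab.
import ipaddress
import requests

frontend_segment = ipaddress.ip_network("172.16.101.0/24")   # FE segment from the article
lb_range = ipaddress.ip_network("172.16.101.128/25")         # VIP range from the article
gateway = ipaddress.ip_address("172.16.101.1")               # assumption
haproxy_frontend_ip = ipaddress.ip_address("172.16.101.10")  # assumption

assert lb_range.subnet_of(frontend_segment), "VIP range must live inside the FE segment"
assert gateway not in lb_range and haproxy_frontend_ip not in lb_range, "VIP range overlaps"

usable = list(lb_range.hosts())
print(f"Load balancer VIPs: {usable[0]}-{usable[-1]} ({len(usable)} addresses)")

# The Dataplane API listens on the management IP on port 5556 and uses the
# user/password set during OVA deployment. verify=False only because the
# appliance ships with a self-signed certificate in this lab.
resp = requests.get("https://172.16.100.10:5556/v2/",   # management IP: assumption
                    auth=("admin", "VMware1!"),          # credentials: assumption
                    verify=False, timeout=10)
print("Dataplane API answered with HTTP", resp.status_code)
```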
Now that we have HAProxy up and running, the next step is to enable Workload Management. This step is discussed in part 3 of this series of articles.