I was really looking forward to getting my hands on vSphere 7 with Kubernetes. Although a minimum of 3 hosts is required in a VCF setup, the vSphere Client told me a two node cluster should also suffice…and in the meantime I’ve learned that even a one node (lab) deployment is an option. In this article I will share my experiences deploying a two node (nested) setup.
Again, this is a lab setup. In a production environment, please follow the official guidelines and requirements.
My setup includes the following components:
- vCenter Server 7.0 (2 CPUs, 16 GB RAM, 290 GB of storage).
- 2 (nested) ESXi hosts (2 vCPUs, 32 GB of RAM).
- 1 NSX Manager (normally this would be 3), medium configuration (6 vCPUs, 24 GB of RAM, 200 GB of storage).
- 1 Edge Transport Node (normally you would need 2 Edge Transport Nodes), large deployment (8 vCPUs, 32 GB of RAM, 200 GB of storage).
- NFS as shared storage for the nested ESXi hosts, used to deploy the Supervisor Cluster. The Supervisor Cluster normally consists of 3 control plane VMs, but with a little tweak you can bring this down to two (see below). Mounting the NFS datastore on the nested hosts is shown right after this list.
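As a minimal sketch of the NFS part (the server name nfs01.lab.local, the export path and the datastore name are placeholders for this example), the share can be mounted on each nested ESXi host over SSH:

```
# Run on each nested ESXi host; NFS server, export and datastore name are placeholders.
esxcli storage nfs add --host=nfs01.lab.local --share=/export/k8s-datastore --volume-name=nfs-datastore01

# Confirm the datastore is mounted and accessible.
esxcli storage nfs list
```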
You will also need a VCF 4 license to enable the Workload Management option in the vSphere Client. Workload Management is the option you need to configure vSphere with Kubernetes. To get the configuration in place I used the following resources:
- The official documentation, available here. The “Configure vSphere with Kubernetes to Use NSX-T Data Center” section will walk you through the NSX-T prerequisites and configuration steps.
- William Lam already published several posts (here and here) on how to deploy vSphere with Kubernetes in a lab environment in an automated way. I studied his scripts to better understand the whole process.
The high level networking architecture of my setup is as follows:
Check out this page to fully understand the network architecture of a vSphere with Kubernetes environment. The Tier-0 router lives on the Edge VM and is responsible for the North/South traffic into and out of the Kubernetes environment. vSphere with Kubernetes will deploy additional Tier-1 routers and load balancers to run the environment. The 192.168.178.0/24 network is the Management Network for the VCSA, NSX-T Manager and Edge Transport Node, while the 172.16.200.0/24 network is configured as the Workload Network, with 172.16.200.32/27 and 172.16.200.64/27 being the ingress and egress CIDRs.
After the initial deployment of the NSX-T Manager you have to complete the following steps:
- Create a vSphere Distributed Switch.
- Connect your vCenter to your NSX-T manager.
- Create your transport zones: an overlay and a VLAN transport zone.
- Create a host uplink profile, an edge uplink profile and a transport node profile (or use the profiles that are available out of the box).
- Create a host IP pool for the host tunnel endpoint (TEP) IP addresses.
- Create a large Edge Transport Node and create an NSX Edge Cluster.
- Create the Tier-0 segment network, in my case the 172.16.200.0/24 network.
- Create the Tier-0 gateway, including routing options (static routes or BGP); an example API call for the static route follows this list.
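To illustrate that last step: the static route on the Tier-0 gateway can also be created through the NSX-T Policy API. This is only a sketch, the gateway ID (tier0-k8s), the NSX Manager FQDN, the credentials and the next-hop address are placeholders for my lab; the same configuration can of course be done in the NSX-T UI.

```
# Sketch: add a default static route to an existing Tier-0 gateway via the NSX-T Policy API.
# All names, credentials and IP addresses below are lab placeholders.
curl -k -u 'admin:VMware1!VMware1!' \
  -H 'Content-Type: application/json' \
  -X PATCH 'https://nsx01.lab.local/policy/api/v1/infra/tier-0s/tier0-k8s/static-routes/default-route' \
  -d '{
        "network": "0.0.0.0/0",
        "next_hops": [ { "ip_address": "192.168.178.1" } ]
      }'
```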
In my setup I’m running 2 vSphere clusters. Cluster01 includes the “physical” ESXi host that is running the VCSA, NSX Manager and NSX Transport Node/Edge VM. Cluster02 includes two virtual ESXi nodes. This is depicted as a “Topology with Separate Management and Edge Cluster and Workload Management Cluster” in this document.
By default three VMs are deployed to accommodate the Supervisor Cluster. In this article by William Lam at virtuallyghetto.com a small (unsupported) tweak is provided to bring the number of VMs in the Supervisor Cluster down to two. SSH to your VCSA and edit /etc/vmware/wcp/wcpsvc.yaml, setting both the minmasters and maxmasters values to 2:
minmasters: 2
maxmasters: 2
Now restart the WCP service: service-control --restart wcp.
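If you prefer scripting the change, a hedged alternative (same effect, assuming the key names shown above) is to run the following on the VCSA as root:

```
# Set minmasters/maxmasters to 2, whatever their current values are.
sed -i -e 's/minmasters:.*/minmasters: 2/' -e 's/maxmasters:.*/maxmasters: 2/' /etc/vmware/wcp/wcpsvc.yaml

# Restart the Workload Control Plane service and confirm it comes back up.
service-control --restart wcp
service-control --status wcp
```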
You’re now ready to enable vSphere with Kubernetes through the Workload Management option in the vSphere Client:
As you can see, both the single node and the two node cluster are showing up (apparently there’s no check on the number of ESXi hosts in a cluster; even the one node cluster is tagged as “compatible”). Cluster02 will host the container workloads, so I’ve selected this one.
A tiny Control Plane Size will do the trick in my lab environment.
Now it’s time to configure the networking. You have to set up both the Management Network and the Workload Network. The management network can be the same network where the VCSA and NSX components reside, although this is not mandatory; whichever segment you pick, the components placed on it must be able to reach the VCSA and NSX Manager. In this example I’m connecting all the new components to the 192.168.178.0/24 network:
Storage policies are used to select the storage for the Control Plane Nodes, Ephemeral Disks and Image Cache. For simplicity I’m using the NFS datastore(s) for all these components. The datastores are tagged with the “vSphere with Kubernetes” tag.
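If you want to handle the tagging from the command line instead of the vSphere Client, a rough sketch with govc could look like this (I applied the tag through the UI myself; the connection details, category name and datastore path below are placeholders):

```
# Connection details for govc; all values are lab placeholders.
export GOVC_URL='https://vcsa.lab.local' GOVC_USERNAME='administrator@vsphere.local' \
       GOVC_PASSWORD='VMware1!' GOVC_INSECURE=1

# Create a tag category and the tag referenced by the storage policy, then attach it to the NFS datastore.
govc tags.category.create -d 'Tags for vSphere with Kubernetes storage' k8s-storage
govc tags.create -c k8s-storage 'vSphere with Kubernetes'
govc tags.attach 'vSphere with Kubernetes' /Datacenter/datastore/nfs-datastore01
```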
Click Next and Finish to configure Workload Management, which in my experience takes between 20 and 60 minutes. If you’re presented with the following screen, you know you’re good to go and ready to configure your first namespace.
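Once your first namespace exists, a quick smoke test is to log in with the kubectl vSphere plugin (downloadable from the control plane’s landing page). The control plane address (an IP from the ingress CIDR) and the namespace name below are placeholders from my lab:

```
# Log in to the Supervisor Cluster with the kubectl vSphere plugin.
# Server address and namespace name are placeholders.
kubectl vsphere login --server=172.16.200.33 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify

# Switch to the namespace created in the vSphere Client and verify connectivity.
kubectl config use-context demo-namespace
kubectl get nodes
```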
2 Comments
kurthv71
Great article!
Can’t wait to deploy my first K8s cluster on vSphere 7.
One Question: Why would I need a large Edge Node?
(8 vCPUS and 32GB RAM is quite a lot for a HomeLab deployment)
Best regards,
Volker
viktorious
This is because load balancers are deployed to the Edge Node. Less than 32 GB will result in failed load balancer deployments (and a non-functioning Kubernetes environment). Also check out vSphere with Tanzu Basic (a new offering), which allows you to run vSphere with Kubernetes without NSX (using open source networking components).