A while ago I published several posts on vSphere with Tanzu (here, here and here). vSphere with Tanzu, also sometimes called Tanzu Kubernetes Grid Service (TKGs), is the Tanzu Kubernetes Grid version that is fully integrated into vSphere 7 (as opposed to TKGm, which runs more or less on top of vSphere and/or on other clouds). This article focuses on the following topics:
- vSphere with Tanzu networking options.
- The v1alpha2 API.
- The vSphere Namespace Service.
- The VM service.
vSphere with Tanzu networking options
If you want to get started with vSphere with Tanzu, there are a few prerequisites for the cluster you want to use: HA and DRS should be enabled, a storage policy is required for K8s control plane VM placement, and a content library is required. The content library is used to serve the VM image that is pulled to create TKG clusters.
In terms of networking there are three options:
- Use NSX-T;
- Use NSX Advanced Load Balancer (fka AVI) – available since vSphere 7.0 update 2;
- Use HAProxy.
The initial version of vSphere with Tanzu required NSX-T. Since 7.0 update 1 HAProxy is supported, and since 7.0 update 2 the NSX Advanced Load Balancer (ALB) is also an option.
In case of scenario 1, NSX should be configured according to the process described here. Scenarios 2 and 3 require the load balancer to be pre-deployed before you start the Workload Management configuration wizard. The NSX ALB configuration process is described here, the HAProxy configuration process here.
Scenario 1 leverages NSX segments for networking, while scenarios 2 and 3 use regular VLANs. Only the NSX networking option supports so-called vSphere Pods (containers running directly on the ESXi hypervisor). In all scenarios you can deploy TKG workload clusters using the TKG management cluster (called the supervisor cluster) that is deployed by the Workload Management wizard. With vSphere with Tanzu you also have the option to deploy VMs using Kubernetes objects; more on this later in this article.
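To interact with the supervisor cluster you log in using the kubectl vsphere plugin. A minimal sketch – the server address and username are placeholders for your environment:

kubectl vsphere login --server=<supervisor-cluster-ip> \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify

After logging in, kubectl config use-context <vSphere namespace> switches to the namespace you want to work in.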
The v1alpha2 API
With vSphere 7.0 update 3 or later you can use the v1alpha2 API to deploy workload clusters using the Cluster API – take a look at the full requirements before you use this newer version. Cormac Hogan wrote an excellent article on the differences between these APIs. The v1alpha2 API requires a slightly different YAML format. Some important improvements: you can specify a namespace in the metadata, Kubernetes nodepools are supported, and you can use the vmClass parameter to specify the Kubernetes node size.
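To give an impression of the new format, below is a minimal v1alpha2 cluster specification sketch. The cluster name, namespace, VM classes, storage class and Tanzu Kubernetes release are placeholders; check what is available in your environment (for example with kubectl get tanzukubernetesreleases).

apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01                     # placeholder cluster name
  namespace: wkld01                        # the vSphere namespace to deploy into
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-small           # node size via the vmClass parameter
      storageClass: vc01cl01-t0compute     # placeholder storage class
      tkr:
        reference:
          name: v1.21.6---vmware.1-tkg.1   # placeholder Tanzu Kubernetes release
    nodePools:                             # nodepools are new in v1alpha2
      - name: worker-pool-1
        replicas: 3
        vmClass: best-effort-medium
        storageClass: vc01cl01-t0compute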
This also means you need to associate the VM classes with your vSphere namespace before you can use them in the YAML specification:
On the command line you can use kubectl get vmclass and kubectl get storageclass to get an overview of the available VM and storage classes. kubectl get vmclassbinding -n <vSphere namespace> shows which VM classes are bound to a specific vSphere namespace.
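For example, with a vSphere namespace called wkld01 (placeholder):

kubectl get vmclass
kubectl get storageclass
kubectl get vmclassbinding -n wkld01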
vSphere Namespace Service
One of the newer features (not new in vSphere 7.0 U3d) is the vSphere Namespace Service. The Namespace Service allows the creation of vSphere namespaces through kubectl. The service is disabled by default, but can be enabled on the supervisor cluster: Configure –> Supervisor Cluster –> General.
After you enable the service, you have to configure a namespace template:
This template determines the size of the vSphere namespaces created through kubectl. In the next step you can grant permissions to the groups/users that are allowed to create namespaces. Log on to your supervisor cluster and simply enter:
kubectl create namespace wkld03
to get things started and create a vSphere Namespace.
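You can verify the result with standard kubectl commands:

kubectl get namespaces
kubectl describe namespace wkld03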
The VM service
Another interesting component in vSphere with Tanzu is the VM service (available since 7.0 U2a). The VM service is available under the services tab in the Workload Management section.
With the VM service you can deploy virtual machines using the kubectl apply command. With the VM service a virtual machine becomes a Kubernetes object that you deploy to a namespace on the supervisor cluster using a YAML definition.
To deploy a VM three things are important:
- You will need an image (the VM image) that is used to deploy the actual virtual machine;
- You will need to define a VM class; this determines the actual size of the VM in terms of CPU and memory;
- You will need to define a storage class; this determines where and how the files of the VM are deployed.
These three things (and some additional parameters) make up a VM specification.
Currently the VMware Marketplace offers an Ubuntu 20 and a CentOS 8 image; these images should be uploaded to a content library in your vSphere environment.
After the required images are uploaded, you have to link the content library to the (vSphere) namespace where you want to deploy your virtual machines.
After you’ve associated both the required VM class(es) and storage class(es) to a namespace in vSphere, you’re ready to deploy a VM using kubectl.
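To check which images are available after linking the content library, you can query the VirtualMachineImage objects exposed by the VM service – a quick sketch:

kubectl get virtualmachineimages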
You can use cloud-init for the initialization of your VM/cloud instance.
On instance boot, cloud-init will identify the cloud it is running on, read any provided metadata from the cloud, and initialize the system accordingly. This may involve setting up the network and storage devices, configuring SSH access keys, and setting up many other aspects of a system. Later, cloud-init will parse and process any optional user or vendor data that was passed to the instance.
The first step is to create a ConfigMap that contains the cloud-init information for the VM. The ConfigMap contains the information that you would normally find in the cloud-config file. My cloud-config file looks as follows:
#cloud-config
ssh_pwauth: yes
users:
  - default
  - name: viktor
    ssh_authorized_keys:
      - ssh-rsa <put your public ssh key here>
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo
    shell: /bin/bash
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: true
Now encode this information using base64 < cloud-config-file. The output of this command is used to create the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-centos
  namespace: wkld02
data:
  user-data: |
    <base64 encoded cloud-config goes here>
  hostname: centos01
kubectl apply -f will create a ConfigMap called configmap-centos in namespace wkld02.
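Assuming the ConfigMap definition is saved as configmap-centos.yaml (the filename is arbitrary), that looks like:

kubectl apply -f configmap-centos.yaml
kubectl get configmap configmap-centos -n wkld02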
Now create a YAML for the virtual machine:
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: centos01
  namespace: wkld02
spec:
  networkInterfaces:
    - networkName: "user-workload"
      networkType: vsphere-distributed
  className: linux-small
  imageName: centos-stream-8-vmservice-v1alpha1-1638306496810
  powerState: poweredOn
  storageClass: vc01cl01-t0compute
  vmMetadata:
    configMapName: configmap-centos
    transport: OvfEnv
kubectl apply -f will create the virtual machine for you, and it will show up under the wkld02 namespace:
As you can see, this VM is developer managed because it’s deployed through kubectl.
A simple kubectl get vm will show a list of available VMs in the namespace; with kubectl delete vm <vm-name> you can delete the VM.
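Once the VM is up, the VM service also reports its IP address in the object status – a sketch, assuming the status.vmIp field is populated in your environment:

kubectl get vm -n wkld02
kubectl describe vm centos01 -n wkld02
kubectl get vm centos01 -n wkld02 -o jsonpath='{.status.vmIp}'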
That’s it for now, feel free to leave a comment below.