In this article I will provide some tips on how to navigate around your management and workload clusters in TKG 2.1. It shows the basic commands to create the initial management cluster and how to work with it after creation, both from your bootstrap machine and from another workstation. We will also look at how to create a first workload cluster and navigate around that cluster as well.
I will not dive into too much detail about how to create these clusters; the main focus is on accessing the available management and workload clusters from different workstations.
Create a TKG Management Cluster
Creating a TKG management cluster is done by executing:
tanzu mc create --ui
This will open a browser that allows you to configure and deploy your TKG management cluster. If your bootstrap workstation doesn’t have a web browser (for example, if you’re using a Linux box without a GUI) you can also use:
tanzu mc create --ui -b <IP-ADDRESS-OF-YOUR-SYSTEM:PORT>
This makes it possible to access the Tanzu deployment wizard from an external system that has a web browser installed.
Another option is to use or create a config.yaml that defines your TKG management cluster configuration and deploy the management cluster from this file on another system. You can use the UI to create this config file, or use this example file and adapt the settings to your needs.
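To give an idea of what such a config file looks like, here is a heavily abbreviated sketch for a vSphere deployment. The variable names come from the TKG cluster configuration file reference; all values are placeholders you need to replace, and a real file contains many more settings:

```yaml
CLUSTER_NAME: mgmt-cluster
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: vsphere
VSPHERE_SERVER: vcenter.example.local           # placeholder vCenter address
VSPHERE_USERNAME: administrator@vsphere.local   # placeholder credentials
VSPHERE_PASSWORD: <YOUR-PASSWORD>
VSPHERE_DATACENTER: /Datacenter
VSPHERE_NETWORK: VM Network
VSPHERE_CONTROL_PLANE_ENDPOINT: 10.0.0.10       # placeholder VIP for the control plane
```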
Now deploy your TKG management cluster using:
tanzu mc create -f config.yaml
The deployment of the management cluster will take about 10-20 minutes.
Access your TKG Management Cluster from your bootstrap machine
After the TKG Management Cluster is installed, the contexts are automatically added to your bootstrap machine. You can explore the new TKG management cluster right away:
tanzu mc get
And also use kubectl if required, for example:
kubectl get nodes
Access your TKG management cluster from another workstation
Accessing your management cluster from a computer other than the bootstrap machine requires you to first export the (admin) context/cluster settings:
tanzu mc kubeconfig get --admin --export-file MC-ADMIN-KUBECONFIG
If you don’t need the admin kubeconfig, leave --admin out of the command. This leaves the admin context out of the file, requiring you to log on to the cluster through OIDC or LDAPS.
Copy this file to another computer and logon to the management cluster on this new computer:
tanzu login --kubeconfig MC-ADMIN-KUBECONFIG --context mgmt-cluster-admin@mgmt-cluster --name mgmt-cluster
The values of --context and --name depend on the configuration of your management cluster. You can find these values in the MC-ADMIN-KUBECONFIG file.
Check if the management cluster has been successfully added:
tanzu mc get
Now add the context of the management cluster to your kubeconfig (~/.kube/config).
tanzu mc kubeconfig get --admin
Now you can access the cluster with kubectl using:
kubectl config use-context <context of your management cluster>
Or use a tool like kubectx to navigate around your clusters.
Create & access TKG workload clusters
First create a new TKG workload cluster using:
tanzu cluster create -f workload-cluster.yaml
After creation your new workload cluster will show up in the list on your TKG management cluster:
tanzu cluster list
Now add the Kubernetes context to your ~/.kube/config:
tanzu cluster kubeconfig get <clustername> --admin
This command will add the admin context of the cluster (tkc01 in this example) to your kubectl configuration; replace <clustername> with the name you have configured for this cluster. If you don’t want the admin context, remove the --admin part, requiring you to log on to the cluster through OIDC or LDAPS.
Now you can switch to the context of this cluster using:
kubectl config use-context tkc01-admin@tkc01
If you want to access your workload cluster from another workstation that doesn’t have access to the Tanzu management cluster, you can use:
tanzu cluster kubeconfig get <clustername> --admin --export-file WL-CLUSTER-KUBECONFIG
Remove the --admin if you don’t need administrative access.
Now copy this file to another workstation and run a kubectl command like:
kubectl get nodes --kubeconfig WL-CLUSTER-KUBECONFIG
You can of course also add this file to your ~/.kube/config. The procedure for this is as follows:
- First make a backup copy of your existing config file, in case anything goes wrong:
cp ~/.kube/config ~/.kube/config.backup
- Now create an environment variable called KUBECONFIG that contains both the existing config file and the WL-CLUSTER-KUBECONFIG file, for example:
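For example, assuming you copied WL-CLUSTER-KUBECONFIG to your home directory (adjust the path to wherever you placed the file):

```shell
# Point KUBECONFIG at both files; kubectl merges all files in this colon-separated list
export KUBECONFIG="$HOME/.kube/config:$HOME/WL-CLUSTER-KUBECONFIG"
echo "$KUBECONFIG"
```

Note that this variable only lasts for the current shell session; the merge to a file in the next steps makes the combination permanent.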
- Now run:
kubectl config view --merge --flatten
and check that the output shows the different Kubernetes clusters/contexts.
- Now merge this configuration to a file:
kubectl config view --merge --flatten > ~/.kube/config.new
- Copy the file:
cp ~/.kube/config.new ~/.kube/config
This activates the new configuration. Check that everything is available as expected:
kubectl config get-contexts
kubectl config get-clusters
That’s it, I hope this was helpful.