Recently I spent some time managing the lifecycle of Elastic Kubernetes Service (EKS) clusters using Tanzu Mission Control (TMC). You can manage new and existing AWS EKS clusters and perform cluster lifecycle management operations including create, update, upgrade, and delete directly from Tanzu Mission Control…that’s cool, right?
The whole process to get you started is outlined in the documentation and is pretty straightforward. The most important steps are:
- Create a VPC with Subnets for EKS Cluster Lifecycle Management.
- Create an Account Credential for EKS Cluster Lifecycle Management (in TMC).
Some CloudFormation templates are provided, so it’s even easier to set up the whole integration. After you have completed these steps, you’re able to create a new EKS cluster from TMC, and it will show up in TMC.
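If you prefer to deploy the provided CloudFormation template from the CLI instead of the AWS console, a minimal sketch could look like this (the stack name and template file name are my own placeholders; since the template creates IAM roles you will likely need the capabilities flag):
# stack name and template file name are placeholders for your own values
aws cloudformation create-stack \
  --stack-name tmc-eks-lifecycle \
  --template-body file://tmc-eks-template.yaml \
  --capabilities CAPABILITY_NAMED_IAM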
Maybe you also want to access your cluster from the AWS interface, and/or you want to be able to access your cluster through kubectl. Initially you will see the following message in the AWS console when you’re logged on with your regular AWS account:
You will also not be able to use kubectl to access your cluster (using your AWS account). This is because TMC creates a service account that has full permissions; other accounts don’t have any permissions initially. Of course you can use the TMC integrated authentication & authorization capabilities to access your cluster, as long as your cluster is accessible through a private connection. By default, public access (through the internet) is not allowed at the Pinniped level; read more about this here.
In this article I am going to explain how to configure the following:
- Enable public/internet access to your EKS cluster.
- Set additional permissions on your AWS account (and/or other accounts) so you’re able to access the cluster using kubectl and/or the AWS console.
- Configure Pinniped on the cluster, so authentication/authorization through a public connection (the internet) is allowed.
Of course it’s questionable whether you’d want to configure this for a production environment, but for a lab setup it makes sense.
Enable public/internet access to your EKS cluster
Again, you first need to consider if you want to allow public access. For a demo environment this shouldn’t be too big of a problem. Configuring public access to your cluster can be done through TMC and/or directly on AWS. Because the cluster is managed by TMC, I prefer to use TMC for this.
The option is available under Actions –> Edit on your EKS cluster. You can choose public, private, or public and private. Don’t forget to enter the CIDR block(s) or IP address(es) that are allowed to connect.
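If you’d rather do this directly on AWS, a sketch using the AWS CLI could look like this (region, cluster name, and the example CIDR are placeholders for your own values):
# region, cluster name, and CIDR are placeholders
aws eks update-cluster-config \
  --region <region-name> \
  --name <cluster-name> \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"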
Assume the EKS Cluster Lifecycle Role of Tanzu Mission Control
As part of the EKS onboarding, TMC creates an EKS Cluster Lifecycle Role in your AWS environment. This role is called something like clusterlifecycle.<key>.eks.tmc.cloud.vmware.com. You have to add the ARN of your account to the trust policy of this role, so you’re allowed to assume it.
{ "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::xxxxxxxxxx:user/" }, "Action": "sts:AssumeRole" }
This will look something like:
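If you want to apply this change from the CLI instead of the IAM console, a sketch could look like this (assuming you saved the complete trust policy document, including the statement above, as trust-policy.json; the file name and role name are placeholders):
# role name and policy file name are placeholders for your own values
aws iam update-assume-role-policy \
  --role-name clusterlifecycle.<key>.eks.tmc.cloud.vmware.com \
  --policy-document file://trust-policy.json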
You’re now able to assume this role with your user account using the AWS CLI. First log on to AWS from the CLI.
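For example, you can configure your credentials and quickly confirm which identity you’re using (both are standard AWS CLI commands):
aws configure
aws sts get-caller-identity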
Now enter this command:
aws sts assume-role --role-arn "arn:aws:iam::xxxxxxxxx:role/clusterlifecycle.key.eks.tmc.cloud.vmware.com" --role-session-name AWSCLI-Session
Replace the account ID (and the key in the role name) in the role ARN with the values from your own environment.
Now export the AWS Access Key, Secret Access Key, and Session Token based on the output of the aws sts assume-role command. Export these values, preferably in a separate console window.
export AWS_ACCESS_KEY_ID=<ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<SECRET_ACCESS_KEY>
export AWS_SESSION_TOKEN=<SESSION_TOKEN>
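If you don’t want to copy and paste the values by hand, you could also extract them straight from the assume-role output; a sketch, assuming you have jq installed (the role ARN and session name are the same placeholders as above):
# extract the temporary credentials directly from the assume-role output
CREDS=$(aws sts assume-role --role-arn "arn:aws:iam::xxxxxxxxx:role/clusterlifecycle.key.eks.tmc.cloud.vmware.com" --role-session-name AWSCLI-Session --query 'Credentials' --output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.SessionToken')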
You’re now assuming the TMC cluster lifecycle management role and are able to access the EKS cluster and its configuration:
aws eks update-kubeconfig --region <region-name> --name <cluster-name> --kubeconfig <file-name>
You’re now able to use kubectl to access your EKS cluster.
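For example, a quick check using the kubeconfig file you just generated:
kubectl get nodes --kubeconfig <file-name>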
Configure user permissions
The next step is to update the AWS IAM identity mapping (the aws-auth ConfigMap) to add additional users and/or roles that can access your Kubernetes cluster.
eksctl create iamidentitymapping --cluster eks01 --arn <ARN of user and/or role> --username <think of a username> --group system:masters --no-duplicate-arns
The user account or accounts you’ve added with this command can now directly access your EKS cluster.
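To verify what you’ve just configured, you can list the identity mappings with eksctl, or look at the underlying aws-auth ConfigMap directly (same cluster name as used above):
eksctl get iamidentitymapping --cluster eks01
kubectl -n kube-system get configmap aws-auth -o yaml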
The details of the EKS cluster are now also showing up in the AWS interface:
Updating Pinniped configuration
Because you now have access to the cluster, you can also update the Pinniped configuration and allow TMC integrated access through a public internet connection. Follow this procedure to get things started.
Note that removing the line containing the AWS annotation (as detailed in the documentation) didn’t work for me; instead, just change:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
to
service.beta.kubernetes.io/aws-load-balancer-internal: "false"
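If you want to do this with a one-liner instead of editing the service manifest, a sketch could look like this (the service name and namespace here are hypothetical; check which namespace the Pinniped supervisor service runs in on your cluster):
# service name and namespace are hypothetical, adjust to your cluster
kubectl annotate svc pinniped-supervisor -n vmware-system-auth \
  service.beta.kubernetes.io/aws-load-balancer-internal="false" --overwrite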
After deleting/restarting the Pinniped services, you get a public load balancer that is used to access the cluster with TMC authentication/authorization over a public internet connection.
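You can check whether the new, public-facing load balancer has been provisioned by listing the services again; for example:
# the EXTERNAL-IP column should now show a public ELB hostname
kubectl get svc -A | grep -i pinniped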
That’s it, I hope this is helpful!