Harbor is a perfect option if you want to run a private registry. In this post we will look at the deployment of Harbor on Kubernetes, specifically on vSphere with Tanzu (TKGs). We will expose the Harbor instance to the outside world through ingress, and we will leverage signed certificates using Cert-Manager in combination with Let's Encrypt and AWS Route 53. For the installation of the different components we will use Helm, with Bitnami as the source for the required charts and images.
Note: Private registry in this case means that you 'own' the registry; it doesn't necessarily mean the registry is only available on the private network. In this example I'm using Let's Encrypt in combination with AWS Route 53 to request (public) signed certificates. This is configured through a so-called issuer within cert-manager. Of course you're free to use another certificate issuer: cert-manager integrates with AD, and with certificate authorities running on AWS or Google Cloud, for example.
Before we get started, make sure you have a Kubernetes cluster available (v1.10+) as well as Helm (v2.8.0+). In this example I’m using TKGs, for the deployment of a Kubernetes cluster you can use this YAML file. Don’t forget to define a default storage class as configured in this file (or define the storage class to be used in the Harbor configuration file).
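If you're not sure whether a default storage class is set, you can check and, if needed, mark one as default. A quick sketch (the storage class name vsan-default-storage-policy is just a placeholder for whatever class exists in your cluster):
kubectl get storageclass
kubectl patch storageclass vsan-default-storage-policy -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'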
Important: Only when running vSphere with Tanzu do you need to configure pod security policies; without them your installation will fail. Execute this command in the context of your Kubernetes cluster:
kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated
The steps we will be going through are:
- Install and configure Contour ingress.
- Install and configure Cert-Manager and Route 53.
- Install and configure Harbor registry.
For the installation of Contour, Cert-Manager and Harbor we will leverage the Bitnami images. On top of the 'default installation', Bitnami packages can include additional components that simplify the installation of a solution or add extra configuration options. For example, with Bitnami Harbor the required databases (PostgreSQL and Redis) are added as chart dependencies.
About Bitnami
Bitnami (part of VMware since 2020) makes it easy to get your favorite open source software up and running on any platform, including your laptop, Kubernetes and all the major clouds.
Bitnami closely tracks upstream source changes and promptly publishes new versions of its images using automated systems:
- With Bitnami images the latest bug fixes and features are available as soon as possible.
- Bitnami containers, virtual machines and cloud images use the same components and configuration approach – making it easy to switch between formats based on your project needs.
- All Bitnami images are based on minideb, a minimalist Debian-based container image, which gives you a small base image and the familiarity of a leading Linux distribution.
- All Bitnami images available in Docker Hub are signed with Docker Content Trust (DCT). You can use DOCKER_CONTENT_TRUST=1 to verify the integrity of the images.
Bitnami container images are released on a regular basis with the latest distribution packages available.
For Kubernetes-based deployments Helm is used. On the Bitnami GitHub page you can find an overview and additional details of the available charts. There's also a commercial/supported offering based on the Bitnami solution called VMware Application Catalog (VAC, formerly known as Tanzu Application Catalog). VAC offers a curated image catalog containing trusted, pre-packaged application components that are continuously maintained and verifiably tested for use in production environments.
Let’s now start with the installation process and deploy Contour ingress.
Install and configure Contour
For ingress we're going to use Project Contour, which can easily be installed using Helm. I'm going to pull the Contour chart and images from the Bitnami repository.
helm repo add bitnami https://charts.bitnami.com/bitnami
Now create a namespace for Contour and install the Helm chart.
kubectl create ns contour
And then:
helm install ingress bitnami/contour -n contour
You can use the following command to check if the pods are running:
kubectl get pods -n contour -w
With the following command you can check if and where your ingress is available:
kubectl get svc ingress-contour-envoy --namespace contour -w
And:
kubectl describe svc ingress-contour-envoy --namespace contour | grep Ingress | awk '{print $3}'
This IP address will be used to expose Harbor services to the world. Now is a good moment to create DNS records. You can choose to create a wildcard DNS domain name (for example *.viktoriouslab.nl) or create two records for the core (e.g. harbor.viktoriouslab.nl) and notary (e.g. notary.viktoriouslab.nl) services. The core service is the main access point for your Harbor registry.
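You can create these records in the Route 53 console, or from the AWS CLI. As a rough sketch, something like this would create the core record (the hosted zone ID and the Envoy load balancer IP are placeholders; repeat the same for the notary record):
aws route53 change-resource-record-sets \
  --hosted-zone-id <YOUR-HOSTED-ZONE-ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "harbor.viktoriouslab.nl",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "<ENVOY-LOADBALANCER-IP>" }]
      }
    }]
  }'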
After we've verified Contour is up and running, it's time to install and configure Cert-Manager.
Install and configure Cert-Manager and Route 53
Cert-Manager is a very powerful solution that helps with automating the whole process of requesting and renewing certificates.
Cert-manager adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates. It can issue certificates from a variety of supported sources, including Let’s Encrypt, HashiCorp Vault, and Venafi as well as private PKI.
We use Bitnami again, in this case to deploy Cert-Manager. Because we already added the Bitnami repository to Helm, we can proceed to installing Cert-Manager:
kubectl create ns cert-manager
And:
helm install cert-manager bitnami/cert-manager -n cert-manager --set installCRDs=true
Don’t forget to set the parameter installCRDs=true!
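Just like with Contour, you can verify that the cert-manager pods come up before continuing:
kubectl get pods -n cert-manager -w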
An important concept in Cert-Manager is the Issuer or ClusterIssuer. These are resources that represent certificate authorities (CAs) able to sign certificates in response to certificate signing requests. An Issuer is bound to a Kubernetes namespace, while a ClusterIssuer is available across all namespaces.
In this example I'm using AWS Route 53 to solve DNS01 ACME challenges. A DNS01 challenge asks you to prove that you control the DNS for your domain by putting a specific value in a TXT record under the domain name. For this to work, Cert-Manager / the (Cluster)Issuer should be able to connect to Route 53 to create these TXT records. You need to create an IAM policy and attach this policy to either a user or a role.
The process to configure this setup is outlined here. This is the policy that I'm using; it only provides access to my domain viktoriouslab.nl (which is hosted on Route 53). The blurred ID is the Hosted zone ID; you can find this ID in the Route 53 console.
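For reference, a policy along these lines (based on the cert-manager documentation) should be sufficient; replace the hosted zone ID placeholder with your own:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "route53:GetChange",
      "Resource": "arn:aws:route53:::change/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/<YOUR-HOSTED-ZONE-ID>"
    }
  ]
}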
Now create a new user and attach the policy to this user.
You will need the Access Key ID and Secret Access Key to provide access to Route 53 by cert-manager.
Now let's get back to our cert-manager installation to set up the ClusterIssuer. First we're going to generate a Kubernetes secret to store the Secret Access Key of our IAM user. Create a new file password.txt and store the Secret Access Key in this file. Now execute:
kubectl create secret generic route53-credentials-secret --from-file=password.txt -n cert-manager
You can now safely delete the file password.txt and verify the existence of the secret with:
kubectl get secrets -n cert-manager
Now it’s time to create the ClusterIssuer we’re going to use for Harbor. If preferred you can also create an Issuer of course. An example YAML file is available here.
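As a sketch, a ClusterIssuer for Let's Encrypt with the Route 53 DNS01 solver could look something like this (the e-mail address, AWS region, hosted zone ID and Access Key ID are placeholders; the secret name and key match the secret created above):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-route53
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-route53-account-key
    solvers:
      - selector:
          dnsZones:
            - viktoriouslab.nl
        dns01:
          route53:
            region: eu-west-1
            hostedZoneID: <YOUR-HOSTED-ZONE-ID>
            accessKeyID: <YOUR-ACCESS-KEY-ID>
            secretAccessKeySecretRef:
              name: route53-credentials-secret
              key: password.txt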
kubectl apply -f route53-clusterissuer.yaml
This will create the ClusterIssuer. With
kubectl logs cert-manager-controller-<ENTER-ID-HERE> -n cert-manager
you can check progress and status.
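You can also verify that the ClusterIssuer itself is ready:
kubectl get clusterissuer letsencrypt-route53
kubectl describe clusterissuer letsencrypt-route53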
Install and configure Harbor
We are now ready to install Harbor. There's an installation process described at goharbor.io that is based on the Helm chart provided by the Harbor team themselves. I prefer to use the Bitnami Harbor chart for the deployment; this chart includes the PostgreSQL and Redis charts as dependencies, plus initial configuration. For the installation we will use the Bitnami Helm repo again. This repo (https://charts.bitnami.com/bitnami) was configured in one of the previous steps.
Before we can install Harbor, we need to configure a few things for our installation. We need to set up a values.yaml file that contains the configuration options for Harbor, PostgreSQL and Redis. I would also recommend configuring a secret to log on to Docker Hub; without this secret you might hit the download rate limit that is set for anonymous users. Create the namespace you would like to install Harbor in:
kubectl create ns harbor
Now follow this process to configure a secret to access Docker Hub as a private registry. Create the secret in the namespace you want to install Harbor in, and keep the name of the secret set to regcred.
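In short, that comes down to creating a docker-registry secret with your Docker Hub credentials (the username, password/token and e-mail address are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<YOUR-DOCKERHUB-USERNAME> \
  --docker-password=<YOUR-DOCKERHUB-PASSWORD-OR-TOKEN> \
  --docker-email=<YOUR-EMAIL> \
  -n harbor
Next, save the default chart values to a file that you can edit: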
helm show values bitnami/harbor > bitnami-harbor-values.yaml
There are a lot of parameters that can be configured. An extensive explanation of all these parameters is available here. Some of the important settings are detailed below.
Set a default admin password and the URL where your Harbor instance will be available:
adminPassword: "YourAdminPassword"
externalURL: https://harbor.viktoriouslab.nl
Set the exposureType to ingress (we’re using Contour here).
exposureType: ingress
This setting should be the same as externalURL:
ingress.core.hostname: harbor.viktoriouslab.nl
Set a few annotations. Change the default "ssl-redirect" to "force-ssl-redirect"; this is the annotation for Contour to redirect HTTP to HTTPS. We're also adding an annotation for cert-manager, which makes it possible to automatically generate the required certificates using our previously defined ClusterIssuer.
ingress.core.annotations.ingress.kubernetes.io/force-ssl-redirect: 'true'
ingress.core.annotations.cert-manager.io/cluster-issuer: letsencrypt-route53
ingress.core.tls: true
Same setting for the notary service:
ingress.notary.hostname: notary.viktoriouslab.nl
ingress.notary.annotations.ingress.kubernetes.io/force-ssl-redirect: 'true'
ingress.notary.annotations.cert-manager.io/cluster-issuer: letsencrypt-route53
ingress.notary.tls: true
Also consider changing the default password for the PostgreSQL database:
postgresql.auth.postgresPassword: "YourHarborPassword"
It’s also recommended to set a password for Redis (otherwise a random password will be generated, this might lead to issues with Harbor):
redis.auth.enabled: true
redis.auth.password: "YourRedisPassword"
Note: I had some issues configuring a password for Redis, but I couldn't put my finger on what was going wrong. You can choose to leave redis.auth.enabled set to false and not set a password (for demo/test purposes).
You can find my bitnami-harbor-values.yaml on GitHub with all the settings above included.
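Roughly, the settings above translate to the following structure in the values file (a sketch based on the dotted parameter names; depending on the chart version the exact structure may differ slightly):
adminPassword: "YourAdminPassword"
externalURL: https://harbor.viktoriouslab.nl
exposureType: ingress
ingress:
  core:
    hostname: harbor.viktoriouslab.nl
    annotations:
      ingress.kubernetes.io/force-ssl-redirect: 'true'
      cert-manager.io/cluster-issuer: letsencrypt-route53
    tls: true
  notary:
    hostname: notary.viktoriouslab.nl
    annotations:
      ingress.kubernetes.io/force-ssl-redirect: 'true'
      cert-manager.io/cluster-issuer: letsencrypt-route53
    tls: true
postgresql:
  auth:
    postgresPassword: "YourHarborPassword"
redis:
  auth:
    enabled: true
    password: "YourRedisPassword"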
Now it's time to install Harbor:
helm install harbor bitnami/harbor -f bitnami-harbor-values.yaml -n harbor
Sit back and relax. After a couple of minutes Harbor is up and running and available to use!
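You can follow the deployment and check that the certificates are issued with, for example:
kubectl get pods -n harbor -w
kubectl get certificate -n harbor
kubectl get ingress -n harbor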