
Configure non-secure Harbor registry with Tanzu Kubernetes Grid (TKG)

05.09.2020 by William Lam // 5 Comments

In an earlier blog post, I shared the steps to configure Harbor with a properly signed SSL certificate so that it could serve as a private container registry for the Tanzu Kubernetes Grid (TKG) CLI running in an air-gapped environment.

Although Harbor can easily be configured to use a custom CA-signed certificate, a self-signed certificate or even plain HTTP, several additional steps and dependencies are required if you wish to use a non-secure container registry with the TKG CLI. This definitely involved a fair amount of trial and error, and hopefully it will become easier in the future to enable non-secure registry support with the TKG CLI out of the box for development and testing purposes.

I also want to give a huge thanks to Jun Wang from our Modern Application Platform Business Unit (MAPBU). He was instrumental in helping me out, and his tip on updating the containerd configuration was ultimately the last piece of the puzzle so that the deployed K8s nodes would pull container images from our insecure Harbor registry.

Step 1 - Install Photon OS into a VM that will be used to run your Harbor instance. Internet connectivity is required to initially download the required containers, but it is also possible to import them from another system that has internet access (you can search online for instructions on how to do that). Once Photon OS has been installed, run the following two commands to update the OS and install Perl, which will be used in a subsequent step.

tdnf -y update
tdnf -y install perl
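
As an aside, if the Harbor VM itself cannot reach the internet, one common way to bring container images across from a connected system is docker save / docker load (once Docker is running, see Step 2 below). A minimal sketch; the image name here is only a placeholder:

# On a machine that has internet access: pull and export the image(s) to a tarball
docker pull goharbor/harbor-core:v1.10.2        # example image only
docker save -o images.tar goharbor/harbor-core:v1.10.2

# Copy images.tar to the air-gapped Harbor VM (e.g. with scp), then import it
docker load -i images.tar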

Step 2 - Enable and start the docker client:

systemctl enable docker
systemctl start docker

Step 3 - To be able to use the Docker client to push containers into our insecure registry, we need to create the following configuration. In this example, I will be using the IP Address 192.168.2.10 and port 80.

cat > /etc/docker/daemon.json << EOF
{
"insecure-registries": ["http://192.168.2.10:80"]
}
EOF
systemctl restart docker
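
To confirm Docker picked up the setting after the restart, the registry should show up under "Insecure Registries" in the docker info output, for example:

docker info | grep -A 2 "Insecure Registries"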

Step 4 - We need to configure the following firewall rule to allow connectivity from the KinD (Kubernetes in Docker) cluster, which we will deploy later as part of the TKG bootstrap process.

iptables -A INPUT -i docker0 -j ACCEPT
iptables-save > /etc/systemd/scripts/ip4save
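
You can quickly confirm the rule is active and that it persisted to the ip4save file with:

iptables -L INPUT -v | grep docker0
grep docker0 /etc/systemd/scripts/ip4save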

Step 5 - Download and install KinD:

curl -L https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-linux-amd64 -o /usr/local/bin/kind

chmod +x /usr/local/bin/kind

Step 6 - Download and install Docker Compose which is required to run Harbor:

curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Step 7 - Download and install Kubectl which is required by TKG CLI:

curl -L https://storage.googleapis.com/kubernetes-release/release/v1.18.1/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
chmod +x /usr/local/bin/kubectl
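
Before moving on, a quick sanity check that all three binaries are on the PATH and executable:

kind version
docker-compose version
kubectl version --client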

Step 8 - Download and extract the Harbor Offline Installer:

curl -L https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz -o harbor-offline-installer-v1.10.2.tgz
tar xvzf harbor-offline-installer*.tgz
rm -f harbor-offline-installer-v1.10.2.tgz

Step 9 - Change into the harbor directory and edit the harbor.yml configuration file. First, comment out the entire https section, as we are just going to be using http. Next, update the following properties with the respective values for your environment, then save the changes and exit.

property                  value
hostname                  192.168.2.10
harbor_admin_password     VMware1!
password                  VMware1!
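
For reference, here is a rough sketch of what the relevant portions of harbor.yml might look like after these edits (the surrounding lines and exact defaults vary by Harbor version; only the values shown here matter):

hostname: 192.168.2.10

http:
  port: 80

# https section commented out since we are only using http
# https:
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path

harbor_admin_password: VMware1!

database:
  password: VMware1!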

Step 10 - Run the following command to start the Harbor installation:

./install.sh


Step 11 - Once the installation has successfully completed, verify that you can log in to the registry by running the following command, specifying the admin password you set during the Harbor installation:

docker login -u admin -p VMware1! 192.168.2.10:80/library

We can also log in to the Harbor UI by opening a browser to http://192.168.2.10


Step 12 - After we have verified that we can log in to our registry, go ahead and download the mirror_tkg_containers.sh shell script, which will automatically download, tag and push the containers required by TKG into our Harbor registry. In addition, the script will also update all the respective TKG YAML manifest files to replace the default VMware registry with our Harbor registry.

You will need to edit the script before running it: at the top, update the registry URL, which is simply the IP Address and port of the Harbor deployment (e.g. 192.168.2.10:80).

./mirror_tkg_containers.sh
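
If you are curious what the script does under the covers, the core of it boils down to a pull/tag/push loop against the default TKG registry. A simplified, illustrative sketch only (the real script also rewrites the TKG YAML manifests, and the image list below is just an example subset):

REGISTRY="192.168.2.10:80"

# Example subset only; the actual script enumerates every image TKG needs
IMAGES="registry.tkg.vmware.run/pause:3.1 registry.tkg.vmware.run/coredns:v1.6.5_vmware.4"

for image in ${IMAGES}; do
    # e.g. registry.tkg.vmware.run/pause:3.1 -> 192.168.2.10:80/library/pause:3.1
    target="${REGISTRY}/library/${image#registry.tkg.vmware.run/}"
    docker pull "${image}"
    docker tag "${image}" "${target}"
    docker push "${target}"
done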

Step 13 - Once the script has completed, all required containers will be available in Harbor. We can now disconnect or disable internet connectivity, and we are ready to deploy TKG without internet access using our non-secure Harbor registry.

Step 14 - Unlike the previous article, where Harbor used a valid SSL Certificate and the registry was automatically trusted, here we need to tell our K8s distribution about our insecure registry, which means we need to "inject" this information before the container images are pulled down. To do so, edit the following two TKG plans and append the containerd configuration, starting with the "files" section and everything below it, as a sibling of the existing preKubeadmCommands section (at the same indentation level). You will need to replace the IP Address and port with those of your Harbor deployment.

  • .tkg/providers/infrastructure-vsphere/v0.6.3/cluster-template-dev.yaml
  • .tkg/providers/infrastructure-vsphere/v0.6.3/cluster-template-prod.yaml
preKubeadmCommands:
    - hostname "{{ ds.meta_data.hostname }}"
    - echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
    - echo "127.0.0.1   localhost" >>/etc/hosts
    - echo "127.0.0.1   {{ ds.meta_data.hostname }}" >>/etc/hosts
    - echo "{{ ds.meta_data.hostname }}" >/etc/hostname
files:
    - path: /etc/containerd/config.toml
      content: |
        version = 2
        [plugins]
          [plugins."io.containerd.grpc.v1.cri"]
            sandbox_image = "registry.tkg.vmware.run/pause:3.1"
            [plugins."io.containerd.grpc.v1.cri".containerd]
              default_runtime_name = "runc"
              [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
                [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
                  runtime_type = "io.containerd.runc.v2"
                [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.test-handler]
                  runtime_type = "io.containerd.runc.v2"
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.2.10:80"]
                  endpoint = ["http://192.168.2.10:80"]
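
Once a cluster node has been deployed, you can spot-check that this made it onto the node by SSHing in (the TKG Photon nodes use the capv user) and confirming the mirror block is present in the rendered containerd configuration, for example:

grep -A 2 'registry.mirrors' /etc/containerd/config.toml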

Step 15 - In addition, we also need to tell the KinD cluster about our insecure registry, which means we need to stand it up manually, since we cannot use the default "tkg init" command as-is. Replace just the IP Address and port with those of your Harbor instance and then run the following command, which will create the kind-config.yaml file we will use in the next step.

cat > kind-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.2.10:80"]
    endpoint = ["http://192.168.2.10:80"]
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  imageRepository: registry.tkg.vmware.run
  etcd:
    local:
      imageRepository: registry.tkg.vmware.run
      imageTag: v3.4.3_vmware.4
  dns:
    type: CoreDNS
    imageRepository: registry.tkg.vmware.run
    imageTag: v1.6.5_vmware.4
EOF

Step 16 - Next, we will manually create the KinD cluster, specifying the KinD image we have already pulled down and referencing our configuration file:

kind create cluster --image registry.tkg.vmware.run/kind/node:v1.17.3_vmware.2 --config kind-config.yaml

The creation of the KinD cluster should be pretty fast since the image is already on our local system.
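
You can quickly confirm the bootstrap cluster is up and that its system pods were pulled from the expected repository, for example:

kind get clusters

# the images used by the kube-system pods should reference registry.tkg.vmware.run
kubectl --context kind-kind get pods -n kube-system -o jsonpath='{.items[*].spec.containers[*].image}'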


Step 17 - Lastly, we are now ready to set up TKG. We need to specify the -e flag to use an existing Kubernetes cluster (our KinD cluster) and reference the path to its K8s configuration as shown below.

tkg init --infrastructure=vsphere:v0.6.3 --name=vghetto-cluster -e --kubeconfig /root/.kube/config -v 6
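
Once tkg init completes, a quick sanity check that the management cluster was created might look like the following (exact commands depend on the TKG CLI version in use):

tkg get management-cluster

# then point kubectl at the new management cluster's context and check the nodes
kubectl get nodes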

As you can see from the steps above, this is definitely not straightforward, and we are also deviating from the default TKG workflow by manually standing up the KinD cluster, which is normally set up for you in the background. This may help some folks who are not able to obtain a properly signed SSL certificate, but I do not expect many to go down this path. For me it was less about setting up an insecure registry and more about what I learned while going down this rabbit hole and the various components that work together to make up the TKG offering.


Categories // Docker, Kubernetes, VMware Tanzu, vSphere Tags // Harbor, Kubernetes, Tanzu Kubernetes Grid, TKG, TKG CLI, VMware Tanzu

Comments

  1. vijay says

    06/13/2020 at 1:55 am

    Hi , We are trying the same setup and have few questions.

    The provider template you modified
    .tkg/providers/infrastructure-vsphere/v0.6.3/cluster-template-dev.yaml
    Where we will put the lines you told to put.
    1) when i am putting the line in top it says kind and apiversion is not defined
    2) when i am putting this lines withing cluster object yaml definition then it says both "files" and "prekubeadmcommands" fields are unknown. Can you please help us in this.

  2. renauddzielicki says

    06/16/2020 at 2:12 am

    Hello William, great post again ( as always!) I'm trying to go through documentation because i ddnt have yet practice myself with Tanzu, but as far as i understood you can use Tanzu without Kubernetes vSphere 7 integration ( with vcf and so on), is it possible ( and maybe supported) in this context to use other registries ( like Nexus); thanks again and hope to see you again in VMworld when this covid issue will be over!

  3. Rajiv Srivastava says

    07/03/2020 at 12:22 pm

    Can we use this same instructions for Ubuntu OS which is installed on VM where Harbor is installed

  4. Barry says

    07/17/2024 at 5:38 am

    Hi William,

    I've used lots of your work over the years so I'll take this opportunity to say thank you and that I appreciate what you do...

    Does TKGI support adding in custom proxies for pulling container images please?

    The main one I'm thinking of is docker here as they rate limit pulls on a per IP address basis.

    We currently run a docker proxy pull through mirror on a VM so would be good to use this to avoid potential rate limit errors.

    This is the config from our local docker daemons of the feature I'm talking about: "registry-mirrors": ["https://docker-proxy.{redacted}:{redacted}"].

    Thanks

    • William Lam says

      07/17/2024 at 9:59 am

      I've not touched TKGi for number of years, so I can't comment. Have you looked at the documentation or reach out to your account team? That would be quickest path

