I was recently helping my colleague Patrick Kremer, who was looking into an issue one of our users had filed about configuring the VMware Event Broker Appliance (VEBA) to take advantage of a custom container registry for deploying VEBA functions. If you attempt to specify a container image from a private container registry, especially one that uses a self-signed certificate, you will see the following error:
Unable to fetch image "harbor.primp-industries.local/library/veba/kn-py-echo:1.0": failed to resolve image to digest: Get "https://harbor.primp-industries.local/v2/": x509: certificate signed by unknown authority; Get "https://harbor.primp-industries.local:443/v2/": x509: certificate signed by unknown authority
I had assumed this would be a fairly trivial configuration change to make the underlying Kubernetes container runtime trust the desired container registry, and that there would be an easy-to-follow tutorial that Patrick could find. The latest release of VEBA has moved away from the Docker runtime to containerd, which should have helped narrow down the search results, or at least that was our assumption.
Not only are there plenty of resources online, but there seem to be multiple methods depending on the version of Kubernetes and containerd, which was pretty overwhelming. After several attempts using various blog articles, Patrick found that the trust error still had not gone away. I finally decided to take a closer look and discovered that there are actually two components that must be updated to properly support a private container registry: containerd and the Knative Serving controller. I eventually found this page in the Knative Serving documentation that provided a hint, but I was not able to fully grok the details until I came across this GitHub thread, which brought clarity on how to create the required secret containing the root CA certificate so that the Knative Serving controller can trust it.
Below are the instructions for the required changes, and I have also attempted to simplify the steps by providing automation snippets that make them easy for anyone to consume. In my setup, I am using a Harbor registry built from my Harbor Virtual Appliance, but the steps should apply to any other private container registry.
Step 1 - Copy the root CA certificate from /etc/docker/certs.d/harbor.primp-industries.local/ca.crt to your VEBA Appliance:
scp root@harbor.primp-industries.local:/etc/docker/certs.d/harbor.primp-industries.local/ca.crt .
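To confirm you copied the expected certificate, you can optionally inspect it with openssl (just a sanity check, not required):
openssl x509 -in ca.crt -noout -subject -issuer -dates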
Step 2 - Back up the original containerd configuration file and then define the following two variables, which will be used to update the containerd configuration file:
cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
PRIVATE_REGISTRY="harbor.primp-industries.local"
PRIVATE_REGISTRY_CERT_PATH="\/root\/ca.crt"
Note: Make sure to properly escape the full path to the root CA certificate from Step 1.
Run the following command which will update the containerd configuration file with the required entries to use your private registry:
sed -i "s/ \[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors\].*/ \[plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors\]\n \[plugins.\"io.containerd.grpc.v1.cri\".registry.configs.\"${PRIVATE_REGISTRY}\".tls\]\n ca_file = \"${PRIVATE_REGISTRY_CERT_PATH}\"/g" /etc/containerd/config.toml
Containerd needs to be restarted for the changes to go into effect; lastly, ensure the service is running before continuing:
systemctl restart containerd
systemctl status containerd
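If crictl is available on the appliance, you can optionally confirm that containerd now trusts the registry by pulling the image directly (depending on your crictl configuration, you may need to point it at the containerd socket with -r unix:///run/containerd/containerd.sock):
crictl pull harbor.primp-industries.local/library/veba/kn-py-echo:1.0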
Step 3 - Create a secret in the knative-serving namespace that contains the root CA certificate from Step 1 and then save the current Knative Serving controller deployment:
kubectl -n knative-serving create secret generic customca --from-file=ca.crt=/root/ca.crt
kubectl -n knative-serving get deploy/controller -o yaml > knative-serving-controller.yaml
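Optionally, confirm the secret was created and contains the ca.crt key:
kubectl -n knative-serving describe secret customca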
Run the following command to create the YTT overlay, which will be used to apply the required changes to the Knative Serving controller deployment:
cat > overlay.yaml <<EOF
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"Deployment", "metadata": {"name": "controller", "namespace": "knative-serving"}})
---
spec:
  template:
    spec:
      containers:
        #@overlay/match by=overlay.subset({"name": "controller"})
        - env:
            #@overlay/append
            - name: SSL_CERT_DIR
              value: /etc/customca
          #@overlay/match missing_ok=True
          volumeMounts:
            - name: customca
              mountPath: /etc/customca
      #@overlay/match missing_ok=True
      volumes:
        - name: customca
          secret:
            secretName: customca
EOF
Apply the YTT overlay to generate the new Knative Serving controller YAML (new-knative-serving-controller.yaml) and apply the change:
ytt -f overlay.yaml -f knative-serving-controller.yaml > new-knative-serving-controller.yaml
kubectl apply -f new-knative-serving-controller.yaml
Ensure the old Knative Serving controller pod has successfully terminated and that the new configuration has been applied by checking the deployment status:
kubectl -n knative-serving get deployment/controller
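You can also optionally confirm that the SSL_CERT_DIR environment variable and the customca volume were injected into the controller deployment:
kubectl -n knative-serving get deploy/controller -o jsonpath='{.spec.template.spec.containers[0].env}'
kubectl -n knative-serving get deploy/controller -o jsonpath='{.spec.template.spec.volumes}'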
Step 4 - Finally, you can now deploy a VEBA function that specifies an image from your own private registry, and it should deploy without any issues!
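As an illustrative sketch (not the complete VEBA function definition, which also includes a Trigger), a minimal Knative Service manifest referencing the private registry image might look like the following, assuming the vmware-functions namespace that VEBA uses for functions:
cat > kn-py-echo.yaml <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: kn-py-echo
  namespace: vmware-functions
spec:
  template:
    spec:
      containers:
        - image: harbor.primp-industries.local/library/veba/kn-py-echo:1.0
EOF
kubectl apply -f kn-py-echo.yaml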
JRSmile says
Hi, thank you again for writing this article, it helped a lot.
Unfortunately I can't get it to work with a registry that is not on the default port 443.
When using a GitLab registry or a docker/registry2, with and without authentication, on port 5050, the image can be correctly found by containerd but the download never happens (ImageBackOff error). I set up Wireshark on my GitLab server and could see traffic on port 5050 AND port 443. I think the image gets pulled from the default port, where there is no registry listening. Could it be the controller needs to be adjusted for a different port?
iwik says
Hi, does this need to be updated for VEBA 0.8?
There is change "Migrated function container images from Google (GCR) to Github (GHCR)" so I think /etc/containerd/config.toml sed need to be changed, right?