Last week I demonstrated how to take advantage of the new Kubernetes Cluster API Bring Your Own Host (BYOH) Provider with a VM running on ESXi-Arm and managed with Tanzu Community Edition (TCE). The Cluster API BYOH Provider is currently only tested and supported with an Ubuntu OS, but since the only requirements for a Linux host are kubeadm, kubelet and containerd, I figured it should also be possible with VMware's Photon OS, which also has an Arm edition.
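Since those three components are the whole contract, a quick way to sanity-check any candidate host is to confirm they are present and on the PATH. A minimal sketch (not from the BYOH docs, just a convenience check):

# Verify the three BYOH host requirements are installed and resolvable
for bin in kubeadm kubelet containerd; do
  command -v "$bin" >/dev/null 2>&1 && echo "$bin: OK" || echo "$bin: MISSING"
done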
With a TON of trial/error and reverting snapshots, I was finally able to get the Cluster API BYOH Provider to successfully run on Photon OS, as shared in a recent tweet.
🔥 Uber Hybrid TCE Workload Cluster 🔥
✅ ESXi-Arm
✅ ESXi-x86
✅ Ubuntu Arm
✅ Photon Arm
✅ Ubuntu x86
✅ Photon x86 (should work but I'm lazy now haha) pic.twitter.com/dkPXSl4vLB

— William Lam (@lamw) November 21, 2021
What actually made this possible was the work I had done on the VMware Event Broker Appliance (VEBA) project, which also involves Photon OS and Kubernetes. More specifically, I had recently worked on porting VEBA from the Docker runtime to Containerd with Kubernetes, and that prior experience was invaluable while figuring out how to do the same with Photon OS (Arm), which had its own challenges. The instructions below will help set up a Photon OS (Arm) VM that can then be used with the Cluster API BYOH Provider; the previous article will still need to be referenced for the complete setup.
Build Runc
An Arm version of runc must be built and copied to your Photon OS VM. To do so, you will first need to install Ubuntu (Arm); for my setup, I used the latest Ubuntu (21.10) Arm ISO and performed a standard OS installation in an ESXi-Arm VM.
Once the OS installation has completed, you will need to run the following commands to build the runc binary:
apt install gcc make pkg-config libseccomp-dev gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu
snap install go --classic
git clone https://github.com/opencontainers/runc.git
cd runc
CGO_ENABLED=1 GOARCH=arm64 CC=aarch64-linux-gnu-gcc make static
Once the build has completed, you will need to copy (SCP) the runc binary, which should be located in the working directory, to the /usr/local/bin directory of your Photon OS VM.
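Before copying, it is worth verifying the binary actually came out as a static arm64 build. A minimal sketch, where photon-arm-vm-1 is a placeholder for your own VM's hostname or IP:

# On the Ubuntu build VM: confirm the result is a statically linked arm64 binary
file runc   # expect: ELF 64-bit LSB executable, ARM aarch64, ..., statically linked

# Copy it over and make sure it is executable (photon-arm-vm-1 is a placeholder)
scp runc root@photon-arm-vm-1:/usr/local/bin/runc
ssh root@photon-arm-vm-1 "chmod +x /usr/local/bin/runc && runc --version"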
Prepare Photon OS (Arm) VM
Download the Photon OS (Arm) ISO and install it in an ESXi-Arm VM. In my setup, I deployed two Photon OS VMs, each configured with 2 vCPU, 4GB memory & 16GB storage, running on a single Raspberry Pi 4 (8GB). Since I was not sure about the required configuration for Photon OS, I decided to use Photon OS version 3 to rule out any issues specific to newer releases. The instructions below should also work with newer versions of Photon OS, but for validation purposes I ended up using photon-3.0-58f9c743-aarch64.iso
Once the OS installation has completed, you will need to run the following commands which will prepare the Photon OS VM:
sed -i 's/dl.bintray.com\/vmware/packages.vmware.com\/photon\/$releasever/g' /etc/yum.repos.d/*.repo
tdnf -y update photon-repos
tdnf -y install tar conntrack socat ethtool ebtables

iptables -A INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
iptables-save > /etc/systemd/scripts/ip4save

ARCH="arm64"
K8S_RELEASE="v1.22.0"
CRICTL_VERSION="v1.22.0"
KUBEPKG_VERSION="v0.4.0"
CONTAINERD_VERSION="1.5.2"
DOWNLOAD_DIR=/usr/local/bin

curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${K8S_RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
mv {kubeadm,kubelet,kubectl} ${DOWNLOAD_DIR}

curl -LO "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz"
tar -zxvf crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz -C ${DOWNLOAD_DIR}
rm crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz

curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${KUBEPKG_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | tee /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${KUBEPKG_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

curl -LO https://github.com/kind-ci/containerd-nightlies/releases/download/containerd-${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}.linux-arm64.tar.gz
tar -zxvf containerd-${CONTAINERD_VERSION}.linux-arm64.tar.gz -C /usr
rm -f containerd-${CONTAINERD_VERSION}.linux-arm64.tar.gz

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup.*/SystemdCgroup = true/g' /etc/containerd/config.toml

cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Restart=always
RestartSec=5
KillMode=process
Delegate=yes
OOMScoreAdjust=-999
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity

[Install]
WantedBy=multi-user.target
EOF

systemctl enable kubelet.service
systemctl enable containerd

cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

swapoff -a
reboot
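After the reboot, a quick sanity check helps confirm everything is wired up before handing the host over to the BYOH agent. A minimal sketch, assuming the script above completed cleanly:

# Both services should report "enabled" and come up on boot
systemctl is-enabled containerd kubelet

# The bridge/netfilter settings should all report 1 after the reboot
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables

# containerd should answer over its default socket, and runc should be in place
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
runc --version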
The instructions above replace Step 2 from the Hybrid (x86 and Arm) Kubernetes clusters using Tanzu Community Edition (TCE) and ESXi-Arm article; you should follow the remaining steps as outlined in that blog post.
One thing I discovered when using Photon OS instead of Ubuntu for the Linux Arm VM is that the Container Network Interface (CNI) needs to be installed before Step 7 from the Hybrid (x86 and Arm) Kubernetes clusters using Tanzu Community Edition (TCE) and ESXi-Arm article will succeed and show Running.
The additional step comes in when you observe the following message within the byoh-agent.log stating that the k8s node has successfully bootstrapped:
controller/byohost "msg"="k8s node successfully bootstrapped" "name"="photon-arm-vm-1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"
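For reference, you can watch for that message as it appears; this sketch assumes you redirected the agent output to byoh-agent.log when launching it, as in the previous article:

# On each Photon OS host, follow the agent log until the bootstrap message shows up
tail -f byoh-agent.log | grep --line-buffered "successfully bootstrapped"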
You will need to open a new SSH session to the Photon OS VM that is running the control plane and apply the CNI as described in Step 8 from the Hybrid (x86 and Arm) Kubernetes clusters using Tanzu Community Edition (TCE) and ESXi-Arm article.
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.4.0/antrea.yml
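To watch the CNI come up, something along these lines works from that same session (the app=antrea label is an assumption based on Antrea's standard manifests):

# Antrea agent/controller pods should transition to Running
kubectl -n kube-system get pods -l app=antrea -w

# Once the CNI is up, the node should flip to Ready
kubectl get nodes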
Once the CNI installation has completed and is successfully running, you should see that the kubectl get machine command finally completes and displays Running, as shown in the screenshot below.
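For completeness, that check is run against the TCE management cluster; -A is used here as a sketch since the exact namespace depends on how you deployed the workload cluster:

# From the management cluster context
kubectl get machines -A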
After figuring out the required process, I also had some fun constructing a hybrid TCE Workload Cluster made up of both ESXi-x86 and ESXi-Arm VMs running both Ubuntu and Photon OS 🙂
Definitely lots of interesting possibilities with the Cluster API BYOH Provider!