Cluster API BYOH Provider on Photon OS (Arm) with Tanzu Community Edition (TCE) and ESXi-Arm

11.22.2021 by William Lam

Last week I demonstrated how to take advantage of the new Kubernetes Cluster API Bring Your Own Host (BYOH) Provider with a VM running on ESXi-Arm and managed with Tanzu Community Edition (TCE). The Cluster API BYOH Provider is currently only tested and supported with an Ubuntu OS, but since the only requirements for a Linux host are simply kubeadm, kubelet, and containerd, I figured it should also be possible with VMware's Photon OS, which also has an Arm edition.

With a TON of trial and error and snapshot reverts, I was finally able to get the Cluster API BYOH Provider to successfully run on Photon OS, as shared in a recent tweet.

👊 🎀

🔥 Uber Hybrid TCE Workload Cluster 🔥

✅ ESXi-Arm
✅ ESXi-x86
✅ Ubuntu Arm
✅ Photon Arm
✅ Ubuntu x86
❔ Photon x86 (should work but I'm lazy now haha) pic.twitter.com/dkPXSl4vLB

— William Lam (@lamw) November 21, 2021

What actually made this possible was the work I had done on the VMware Event Broker Appliance (VEBA) project, which also involves Photon OS and Kubernetes. More specifically, I had recently worked on porting VEBA from the Docker runtime to Containerd with Kubernetes, and that prior experience was invaluable while figuring out how to do this with Photon OS (Arm), which had its own challenges. The instructions below will help set up a Photon OS (Arm) VM that can then be used with the Cluster API BYOH Provider; the previous article will still need to be referenced for the complete setup.

Build runc

An Arm version of runc must be built and copied to your Photon OS VM. To do so, you will first need to install Ubuntu (Arm); for my setup, I used the latest Ubuntu (21.10) Arm ISO and performed a standard OS installation into an ESXi-Arm VM.

Once the OS installation has completed, you will need to run the following commands to build the runc binary:

# git is added here since it is needed for the clone below and is not part of a default install
apt install gcc make pkg-config libseccomp-dev gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu git
snap install go --classic
git clone https://github.com/opencontainers/runc.git
cd runc
# produce a statically linked arm64 runc binary
CGO_ENABLED=1 GOARCH=arm64 CC=aarch64-linux-gnu-gcc make static
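
To sanity-check the result before copying it over, you can inspect the binary with the file utility; it should report a statically linked ARM aarch64 ELF executable:

file runc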

Once the build has completed, you will need to copy (SCP) the runc binary, which should be located in the working directory, to your Photon OS VM under the /usr/local/bin directory.
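
For example, a minimal sketch assuming root SSH access and a hypothetical Photon OS VM hostname of photon-arm-vm-1:

scp runc root@photon-arm-vm-1:/usr/local/bin/runc
ssh root@photon-arm-vm-1 'chmod +x /usr/local/bin/runc'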

Prepare Photon OS (Arm) VM

Download the Photon OS (Arm) ISO and install it into an ESXi-Arm VM. In my setup, I configured two Photon OS VMs, each with 2 vCPU, 4GB memory & 16GB storage, running on a single Raspberry Pi 4 (8GB). Since I was not sure about the required configuration for Photon OS, I decided to use Photon OS version 3 to rule out any issues specific to newer versions of Photon. The instructions below should also work with newer versions of Photon OS, but for validation purposes I ended up using photon-3.0-58f9c743-aarch64.iso

Once the OS installation has completed, you will need to run the following commands which will prepare the Photon OS VM:

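# Fix the Photon 3.0 package repo URLs (Bintray was retired), install kubeadm prerequisites and open the Kubernetes API server port (6443)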
sed -i 's/dl.bintray.com\/vmware/packages.vmware.com\/photon\/$releasever/g' /etc/yum.repos.d/*.repo
tdnf -y update photon-repos
tdnf -y install tar conntrack socat ethtool ebtables
iptables -A INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
iptables-save > /etc/systemd/scripts/ip4save

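# Component versions and install location for the Kubernetes binaries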
ARCH="arm64"
K8S_RELEASE="v1.22.0"
CRICTL_VERSION="v1.22.0"
KUBEPKG_VERSION="v0.4.0"
CONTAINERD_VERSION="1.5.2"
DOWNLOAD_DIR=/usr/local/bin

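# Download the kubeadm, kubelet and kubectl binaries for Arm and make them executable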
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${K8S_RELEASE}/bin/linux/${ARCH}/{kubeadm,kubelet,kubectl}
chmod +x {kubeadm,kubelet,kubectl}
mv {kubeadm,kubelet,kubectl} ${DOWNLOAD_DIR}

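# Install crictl, the CRI CLI that kubeadm uses to talk to containerd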
curl -LO "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz"
tar -zxvf crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz -C ${DOWNLOAD_DIR}
rm crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz

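# Install the kubelet systemd unit and kubeadm drop-in, rewriting the expected binary path from /usr/bin to /usr/local/bin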
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${KUBEPKG_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | tee /etc/systemd/system/kubelet.service
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${KUBEPKG_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:${DOWNLOAD_DIR}:g" | tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

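# Install containerd from the kind-ci nightly builds, which publish arm64 binaries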
curl -LO https://github.com/kind-ci/containerd-nightlies/releases/download/containerd-${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}.linux-arm64.tar.gz
tar -zxvf containerd-${CONTAINERD_VERSION}.linux-arm64.tar.gz -C /usr
rm -f containerd-${CONTAINERD_VERSION}.linux-arm64.tar.gz

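# Generate a default containerd config, switch it to the systemd cgroup driver and create a systemd unit for containerd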
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup.*/SystemdCgroup = true/g' /etc/containerd/config.toml
cat > /usr/lib/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
Restart=always
RestartSec=5
KillMode=process
Delegate=yes
OOMScoreAdjust=-999
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
[Install]
WantedBy=multi-user.target
EOF
systemctl enable kubelet.service
systemctl enable containerd

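# Load the kernel modules and sysctl settings required for Kubernetes pod networking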
cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

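# kubelet requires swap to be disabled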
swapoff -a

reboot

The instructions above replace Step 2 from the Hybrid (x86 and Arm) Kubernetes clusters using Tanzu Community Edition (TCE) and ESXi-Arm article; you should follow the remaining steps as outlined in that blog post.
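
Before continuing, it can be worth verifying that the services and binaries survived the reboot; a quick sketch:

systemctl is-enabled containerd kubelet
containerd --version
kubeadm version -o short
crictl --version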

One thing I discovered when using Photon OS instead of Ubuntu for the Linux Arm VM is that the Container Networking Interface (CNI) needs to be installed before Step 7 from the Hybrid (x86 and Arm) Kubernetes clusters using Tanzu Community Edition (TCE) and ESXi-Arm article will succeed and show Running.

The additional step is needed once you observe the following message within the byoh-agent.log stating that the k8s node has successfully bootstrapped:

controller/byohost "msg"="k8s node successfully bootstrapped" "name"="photon-arm-vm-1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="ByoHost"

You will need to open a new SSH session to the Photon OS VM that is running the control plane function and apply the CNI as described in Step 8 from the Hybrid (x86 and Arm) Kubernetes clusters using Tanzu Community Edition (TCE) and ESXi-Arm article.

mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.4.0/antrea.yml

Once the CNI has been installed and is running successfully, you should see that the kubectl get machine command finally completes and displays Running, as shown in the screenshot below.
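
To confirm the CNI itself came up before re-checking the machine objects, something along these lines can help (the app=antrea label selector is an assumption based on the labels Antrea applies to its kube-system pods):

# on the workload cluster (e.g. the control plane VM)
kubectl get pods -n kube-system -l app=antrea
# from the TCE management cluster context
kubectl get machine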

After figuring out the required process, I also had some fun constructing a hybrid TCE Workload Cluster comprising both ESXi-x86 and ESXi-Arm VMs running both Ubuntu and Photon OS 😀
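
To see the architecture mix in such a cluster at a glance, kubectl can surface the standard arch label as an extra column (-L adds the value of the given node label):

kubectl get nodes -o wide -L kubernetes.io/arch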

Definitely lots of interesting possibilities with the Cluster API BYOH Provider!


Categories // ESXi-Arm, Kubernetes, VMware Tanzu Tags // Arm, esxi, Photon, Raspberry Pi, Tanzu Community Edition, Tanzu Kubernetes Grid, TKG
