In response to a customer request to add Arm64 support to our VMware Event Router, I have been spending some more time playing with k3s (a lightweight Kubernetes distribution that runs nicely on Arm) on ESXi-Arm using a Raspberry Pi. Not only was this a good learning experience that exposed me to the broader Arm ecosystem, which is still maturing, but it also took me down several 🐰🕳️ and got me exploring new tools that I had never used before, such as Buildpacks and Docker buildx, to name a few.
This past weekend, I was finally successful in setting up our VMware Event Router for Arm using the Knative processor on a k3s cluster running on ESXi-Arm on a Raspberry Pi 4B 8GB model! At the time of writing, the following versions were used:
- Knative Serving v0.20.0
- Knative Net Contour v0.20.0
- Knative Eventing v0.20.1
- RabbitMQ Cluster Operator v0.5.0
Made some more progress w/@KnativeProject + @VMWEventBroker on k3s on @esxi_arm
✅ Knative Serving & Eventing
✅ @RabbitMQ Operator & Eventing
✅ @projectcontour
✅ @VMware Event Router
Just need to figure out @buildpacks_io for Arm64 - https://t.co/ChdkMLSXMp looks promising
— William Lam (@lamw) January 24, 2021
In addition, I was also able to convert the Knative Python echo function originally created by my colleague Michael Gasch and build an Arm64 version of it, which demonstrates the integration of VEBA using the Knative processor with a vCenter Server as my event source.
🥳 Successfully deployed & verified my arm64 python echo func w/@VMWEventBroker (Event Router) using the @KnativeProject processor!
Awesome for lightweight testing/development purposes on small VM w/k3s on @esxi_arm
Heck, don’t even need real vCenter, can run vcsim locally!
— William Lam (@lamw) January 24, 2021
For those interested in just the VMware Event Router Arm64 image, you can access it here, and we plan to make it an official image shortly. For those interested in setting up a fully functional Arm deployment of VEBA with the Knative processor, you can find the detailed instructions below.
Step 1 - Download and install Photon OS for Arm as a VM running on your ESXi-Arm host. In my setup, I have the 8GB Raspberry Pi and have configured the VM with 4 vCPUs, 4GB of memory and the default storage configuration.
Step 2 - After Photon OS has been installed, log in via SSH (you will need to allow root login by editing the SSH configuration and restarting the service; a quick sketch is included after the commands below) and then run the following two commands, which update the repository configuration to point to the new VMware package repository and install all required packages:
sed -i 's/dl.bintray.com\/vmware/packages.vmware.com\/photon\/$releasever/g' /etc/yum.repos.d/*.repo
tdnf -y install tar git open-vm-tools awk
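For reference, allowing root SSH login on Photon OS looks roughly like the following, run from the VM console before connecting over SSH (a minimal sketch; adjust to your own security preferences):
# permit root logins in the SSH daemon configuration and restart the service
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd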
Step 3 - Next, we are going to install k3s. We need to prevent the default service load balancer from deploying or we will have a conflict later on when deploying Contour. We also need to create a symlink from the k3s configuration file to ~/.kube/config, since the Knative CLI will expect it in this default location when we get to that step:
curl -sfL https://get.k3s.io | sh -s - --disable servicelb
mkdir -p /root/.kube
ln -s /etc/rancher/k3s/k3s.yaml /root/.kube/config
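Before moving on, an optional sanity check confirms the single-node k3s cluster is up and the node reports Ready:
kubectl get nodes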
Step 4 - Install the Knative CLI by running the following commands:
curl -LO https://github.com/knative/client/releases/download/v0.19.1/kn-linux-arm64
chmod +x kn-linux-arm64
mv kn-linux-arm64 /usr/local/bin/kn
Step 5 - Install Golang by running the following commands:
curl -LO https://golang.org/dl/go1.15.6.linux-arm64.tar.gz
tar -C /usr/local -xzf go1.15.6.linux-arm64.tar.gz
rm -f go1.15.6.linux-arm64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Step 6 - Install the ko utility by running the following commands:
go get github.com/google/ko
export PATH=$PATH:$(pwd)/go/bin
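To confirm that both Go and ko were installed and are on the search path, you can quickly check their versions:
go version
ko version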
Step 7 - To ensure that all utilities are added to our default search path on future logins, we will create a .bash_profile using the following command:
cat > /root/.bash_profile <<'EOF'
export PATH=$PATH:/usr/local/go/bin:/root/go/bin
alias k=kubectl
EOF
Step 8 - We are now ready to install Knative Serving and Contour for the networking layer by running the following commands:
kubectl apply -f https://github.com/knative/serving/releases/download/v0.20.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.20.0/serving-core.yaml
kubectl wait deployment --all --timeout=-1s --for=condition=Available -n knative-serving
kubectl apply --filename https://github.com/knative/net-contour/releases/download/v0.20.0/contour.yaml
kubectl apply --filename https://github.com/knative/net-contour/releases/download/v0.20.0/net-contour.yaml
kubectl patch configmap/config-network --namespace knative-serving --type merge --patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
kubectl wait deployment --all --timeout=-1s --for=condition=Available -n contour-internal
kubectl wait deployment --all --timeout=-1s --for=condition=Available -n contour-external
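If you would like to double-check the Serving and Contour components before continuing, list the pods in each namespace and confirm they are all Running:
kubectl get pods -n knative-serving
kubectl get pods -n contour-internal
kubectl get pods -n contour-external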
Step 9 - Now, we will move on to installing Knative Eventing by running the following commands:
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.20.1/eventing-crds.yaml
kubectl apply --filename https://github.com/knative/eventing/releases/download/v0.20.1/eventing-core.yaml
kubectl wait pod --timeout=-1s --for=condition=Ready -l '!job-name' -n knative-eventing
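Similarly, you can optionally confirm that the Knative Eventing pods are all Running:
kubectl get pods -n knative-eventing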
Step 10 - Next, we will install the RabbitMQ Cluster Operator by running the following commands:
curl -LO https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml
sed -i 's#rabbitmqoperator/cluster-operator:.*#lamw/rabbitmq-operator-arm64:latest#g' cluster-operator.yml
kubectl apply -f cluster-operator.yml
kubectl wait deployment --all --timeout=-1s --for=condition=Available -n rabbitmq-system
Note: Since the RabbitMQ Cluster Operator does not yet have support for Arm64, I have compiled my own image at lamw/rabbitmq-operator-arm64:latest. If you prefer to build it yourself, simply clone the repo and update the Dockerfile as instructed in this PR requesting Arm64 support.
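If you do go the build-it-yourself route, the cross-build would look roughly like the following using Docker buildx (a sketch only; <your-dockerhub-username> is a placeholder, and this assumes you have cloned the cluster-operator repo, applied the Dockerfile change from the PR and are running from the repo root):
# one-time setup of a buildx builder
docker buildx create --use
# build an arm64 image of the operator and push it to your registry
docker buildx build --platform linux/arm64 -t <your-dockerhub-username>/rabbitmq-operator-arm64:latest --push .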
Step 11 - Since we are using the RabbitMQ eventing components, we will also need to build and push the Arm64 images to our own Dockerhub account. To do so, specify the username of your Dockerhub account and log in with your credentials using the commands below:
export KO_DOCKER_REPO=lamw
docker login -u ${KO_DOCKER_REPO}
Step 12 - Once logged in, run the following commands, which will clone the RabbitMQ eventing components repo and build and deploy the required container images for Arm64:
git clone https://github.com/knative-sandbox/eventing-rabbitmq.git
cd eventing-rabbitmq/
ko --platform=linux/arm64 apply -f config/broker
kubectl wait deployment --all --timeout=-1s --for=condition=Available -n default
Step 13 - Set up RabbitMQ eventing by running the following kubectl commands:
kubectl apply -f - << EOF
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rokn
  namespace: default
spec:
  resources:
    requests:
      memory: 200Mi
      cpu: 100m
  replicas: 1
EOF

kubectl apply -f - << EOF
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: rabbit
  annotations:
    eventing.knative.dev/broker.class: RabbitMQBroker
spec:
  config:
    apiVersion: rabbitmq.com/v1beta1
    kind: RabbitmqCluster
    name: rokn
EOF

kubectl wait deployment --all --timeout=-1s --for=condition=Available -n default
Before proceeding further, let's verify that everything has been set up correctly thus far. Run the following command and ensure that both the RabbitMQ cluster pod and the Broker ingress pod are running:
kubectl get pods
Step 14 - To easily access our Knative Broker from outside the cluster, we will expose the Broker ingress pod via a NodePort service by running the following command:
kubectl expose pod $(kubectl get pods | grep "rabbit-broker-ingres" | awk '{print $1}') --type NodePort
Step 15 - We can confirm that we can successfully connect by running a simple cURL test. First, retrieve the Knative Broker port and set the node IP by running the following commands, replacing the IP address with the IP of your Photon OS VM running k3s:
KNATIVE_BROKER_PORT=$(kubectl get svc -n default | grep NodePort | awk '{print $5}' | sed 's/8080://;s/\/TCP//g')
KNATIVE_NODE_IP=192.168.30.177
curl -i ${KNATIVE_NODE_IP}:${KNATIVE_BROKER_PORT} -d '{"hello":"world"}' -H "content-type: application/cloudevents+json; charset=utf-8"
If Knative was set up correctly, the cURL command should return the following output:
Step 16 - Next, we will deploy the VMware Event Router, which is at the heart of the VMware Event Broker Appliance (VEBA). Define the following environment variables containing the IP address/hostname of your vCenter Server along with the username and password:
VCENTER_IP=192.168.30.3
VCENTER_USERNAME='administrator@vsphere.local'
VCENTER_PASSWORD='VMware1!'
Note: If you do not have a real (x86) vCenter Server, you can optionally use the vcsim simulator tool. To set up vcsim, run the following commands, replacing the listening address with the IP address of your k3s VM (in my example, 192.168.30.177), and update the VCENTER* variables to point to the vcsim instance.
go get github.com/vmware/govmomi/govc
go get github.com/vmware/govmomi/vcsim
git clone https://github.com/lamw/govc-recordings.git
vcsim -load govc-recordings/vcsim-vcsa.primp-industries.local/ -l 192.168.30.177:8989
VCENTER_IP=192.168.30.177:8989
VCENTER_USERNAME='administrator@vsphere.local'
VCENTER_PASSWORD='VMware1!'
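As an optional sanity check, you can point govc (built in the commands above) at the vcsim endpoint and list its inventory; this is just a quick verification using the variables defined in this step:
export GOVC_URL="https://${VCENTER_IP}/sdk"
export GOVC_USERNAME="${VCENTER_USERNAME}"
export GOVC_PASSWORD="${VCENTER_PASSWORD}"
export GOVC_INSECURE=1
govc ls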
Step 17 - Run the following commands to create the vmware namespace along with the VMware Event Router configuration, which will be stored as a Kubernetes Secret:
cat > event-router-config.yaml << EOF
apiVersion: event-router.vmware.com/v1alpha1
kind: RouterConfig
metadata:
  name: router-config-knative
eventProcessor:
  name: veba-knative
  type: knative
  knative:
    insecureSSL: false
    encoding: structured
    destination:
      uri:
        host: ${KNATIVE_NODE_IP}:${KNATIVE_BROKER_PORT}
        scheme: http
        path:
eventProvider:
  name: veba-vc-01
  type: vcenter
  vcenter:
    address: https://${VCENTER_IP}/sdk
    auth:
      basicAuth:
        password: "${VCENTER_PASSWORD}"
        username: "${VCENTER_USERNAME}"
      type: basic_auth
    insecureSSL: true
    checkpoint: false
metricsProvider:
  default:
    bindAddress: 0.0.0.0:8082
  name: veba-metrics
  type: default
EOF

kubectl create ns vmware
kubectl -n vmware create secret generic event-router-config --from-file=event-router-config.yaml
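You can quickly confirm that the Secret was created before moving on to the deployment:
kubectl -n vmware get secret event-router-config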
Step 18 - Run the following commands to deploy the VMware Event Router:
cat > event-router-k8s.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: vmware-event-router
  name: vmware-event-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vmware-event-router
  template:
    metadata:
      labels:
        app: vmware-event-router
    spec:
      containers:
        - image: embano1/router-c3dac096cdb7ad9fe1a56bb4ab9fc5d3
          imagePullPolicy: IfNotPresent
          args: ["-config", "/etc/vmware-event-router/event-router-config.yaml"]
          name: vmware-event-router
          resources:
            requests:
              cpu: 200m
              memory: 200Mi
          volumeMounts:
            - name: config
              mountPath: /etc/vmware-event-router/
              readOnly: true
      volumes:
        - name: config
          secret:
            secretName: event-router-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: vmware-event-router
  name: vmware-event-router
spec:
  ports:
    - port: 8082
      protocol: TCP
      targetPort: 8082
  selector:
    app: vmware-event-router
  sessionAffinity: None
EOF

kubectl -n vmware apply -f event-router-k8s.yaml
Step 19 - To verify the VMware Event Router deployment and confirm that it can successfully connect to our vCenter Server or vcsim instance, run the following command:
kubectl -n vmware logs deployment.apps/vmware-event-router
You should not see any connection errors, as shown in the screenshot below.
Lastly, we are now ready to deploy our sample Knative echo function, which will simply take a vCenter event and echo out the CloudEvent payload.
Step 20 - Run the following commands to create the Knative Service and Trigger:
cat > function.yaml <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: kn-echo
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: "1"
        autoscaling.knative.dev/minScale: "1"
    spec:
      containers:
        - image: lamw/kn-python-echo:latest
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: veba-echo-trigger
spec:
  broker: rabbit
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: kn-echo
EOF

kubectl apply -f function.yaml
Note: I have pre-built an Arm64 container image, lamw/kn-python-echo, for the kn-python-echo function. If you prefer to build it yourself, please see my repo at https://github.com/lamw/kn-python-echo
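Before generating any events, you can confirm that the Knative Service and Trigger are both ready:
kn service list
kubectl get trigger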
Step 21 - To see the VEBA + Knative processor integration in action, run the following command to tail the logs of our kn-echo function. Then perform any operation in vCenter Server, such as powering on a VM, and you should see the CloudEvent payload echoed out by our function, as shown in the screenshot below:
kubectl logs deploy/$(kubectl get deployment | grep kn-echo | awk '{print $1}') -c user-container -f