vSphere with Tanzu has received an exciting update with the release of vSphere 8.0 Update 1, which removes the requirement for NSX-based networking when deploying Supervisor Services. This is really cool because customers with only a VDS based Supervisor can now also get the benefits of the various Supervisor Services that vSphere with Tanzu supports!
For those not aware, Supervisor Services are deployed as vSphere Pods, each of which is a super tiny VM that boots a Photon OS kernel and is configured with just enough resources to run one or more Linux containers. In earlier releases of vSphere with Tanzu, vSphere Pods required an NSX based Supervisor, but with that restriction removed in vSphere 8.0 Update 1, it seems like deploying vSphere Pods should also be possible with just a VDS based Supervisor? 🤔
I attempted to deploy a container to my Supervisor Cluster in the hopes that it would deploy the workload as a vSphere Pod, but it immediately returned with the following error:
Error from server (workload management cluster uses vSphere Networking, which does not support action on kind Deployment): error when creating "deployment.yaml": admission webhook "default.validating.license.supervisor.vmware.com" denied the request: workload management cluster uses vSphere Networking, which does not support action on kind Deployment
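For reference, the deployment.yaml mentioned in the error was a standard Kubernetes Deployment manifest, roughly along these lines (a hypothetical minimal example; the name and image are placeholders, not taken from the original attempt):

```yaml
# Hypothetical minimal deployment.yaml (name/image are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```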
Based on the message, it looks like this is not a technical limitation of deploying vSphere Pods on a VDS based Supervisor; rather, a licensing admission webhook only permits the operation when using an NSX based Supervisor.
I was still curious about how this was being enforced. By default, when you log in to the Supervisor Cluster, you have limited privileges, so I decided to log in to the Supervisor Cluster via vCenter Server by going through the troubleshooting workflow, which places you on one of the Supervisor Control Plane VMs (a topic I have written numerous blogs about for various scenarios).
I typically prefer to use kubectl from my local desktop for ease of access, but also for the nice colored console output, so I figured why not just copy the .kube/config file from the Supervisor Control Plane VM to my desktop and inspect it that way. Initially, nothing stood out to me about how the requests were being intercepted, and I was about to call it a day. Then I thought: since I now have the admin context for the Supervisor Cluster, maybe deploying the container again would behave differently?
To my complete surprise, it worked and it had successfully deployed a vSphere Pod in the vSphere Namespace that I had created earlier! In fact, the screenshot from above is actually a vSphere Pod running with a VDS based Supervisor Cluster using the WordPress vSphere Pod Example from the VMware documentation. 😆
Disclaimer: This is not officially supported by VMware, use at your own risk.
From an education and exploration standpoint, I think this can be super useful, especially if you want to run a handful of containers without having to spin up a full Tanzu Kubernetes Grid (TKG) Workload Cluster! For example, I recently saw that we had launched a new vSAN Object Viewer Fling, which is provided as a Docker container. Great! We can easily take that container and deploy it to Kubernetes, running specifically as a vSphere Pod, which I have done by creating the basic YAML manifest example shared in the tweet below.
Just heard about new @vmwarevsan Object Viewer @vmwflings - Noticed its just Docker container ... so figured why not deploy it to k8s but instead of requiring full Guest Cluster, how about as a vSphere Pod! Well, it works & runs on vSAN!🙌 👀gist for YAMLhttps://t.co/txTJMmi3U5 pic.twitter.com/aAIbOUahEq
— William Lam (@lamw) May 17, 2023
If you are interested in exploring vSphere Pods and only have access to a VDS based Supervisor Cluster configured with say HAProxy for your network load balancer, then you can follow these steps below:
Step 0 - Enable vSphere with Tanzu using either HAProxy or NSX Advanced Load Balancer (NSX-ALB) on your desired vSphere Cluster.
Step 1 - Create a standard vSphere Namespace using the vSphere UI. Your vSphere Pods should live in a vSphere Namespace that you create via the vSphere UI/API, and doing so makes them easy to manage.
Step 2 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM address and credentials:
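One way to retrieve these values (assuming the decryptK8Pwd.py utility that ships with recent VCSA builds; path may vary by release) is:

```
# Run from the VCSA shell; path assumed from recent VCSA builds
/usr/lib/vmware-wcp/decryptK8Pwd.py
```

This prints the Supervisor Control Plane VM's IP address along with the root password to use in the next step.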
Step 3 - SSH to the IP address using the root username and the password provided by the previous command.
Step 4 - Copy the contents from .kube/config and store that in your own local .kube directory.
Note: If you already have an existing file, this will overwrite its contents; you may want to back up the original copy in case you wish to revert to your existing configuration.
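Backing up can be as simple as copying the file aside first. The sketch below demonstrates this against a temporary directory rather than your real ~/.kube, so the paths are placeholders:

```shell
# Sketch: back up an existing kubeconfig before replacing it.
# Demonstrated in a temp directory instead of the real ~/.kube.
DEMO_HOME="$(mktemp -d)"
mkdir -p "$DEMO_HOME/.kube"
echo "original-config" > "$DEMO_HOME/.kube/config"

CFG="$DEMO_HOME/.kube/config"
if [ -f "$CFG" ]; then
  cp "$CFG" "$CFG.bak"   # keep the original so you can revert later
fi
```

For your real setup, point CFG at $HOME/.kube/config instead.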
Step 5 - Next, we need to make a few edits to the .kube/config file before we can use it locally. First, delete the certificate-authority-data section and replace it with the insecure-skip-tls-verify flag. Second, replace the localhost address (127.0.0.1) with the IP address of your Supervisor Control Plane. The result should roughly look like the following snippet:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://10.10.0.65:6443
..... snip .....
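The two edits can also be scripted with sed. This is just a sketch: it runs against a small sample kubeconfig so the transformation is easy to see, and 10.10.0.65 is simply the example address from the snippet above:

```shell
# Sketch: rewrite a copied Supervisor kubeconfig for remote use.
SUPERVISOR_IP="10.10.0.65"   # replace with your Control Plane address
CFG="$(mktemp)"

# Sample kubeconfig fragment standing in for the copied .kube/config
cat > "$CFG" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://127.0.0.1:6443
  name: kubernetes
EOF

# Swap the CA data for the insecure-skip-tls-verify flag,
# then point the server entry at the Supervisor Control Plane IP
sed -i "s|certificate-authority-data:.*|insecure-skip-tls-verify: true|" "$CFG"
sed -i "s|127\.0\.0\.1|$SUPERVISOR_IP|" "$CFG"

cat "$CFG"
```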
Step 6 - Deploy your Kubernetes manifest and ensure that you are specifying the vSphere Namespace by using the -n parameter as shown in the example snippet below:
kubectl -n primp-industries apply -f [your-manifests].yaml
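Once applied, you can check that the workload came up with a standard kubectl query (the namespace here is just the example name used above):

```
kubectl -n primp-industries get pods
```

The vSphere Pod should also show up under the vSphere Namespace in the vSphere UI inventory.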