vSphere with Tanzu has received an exciting update with the release of vSphere 8.0 Update 1, which removes the restriction of requiring NSX-based networking to deploy Supervisor Services. This is really cool because customers with only a VDS based Supervisor can now also get the benefits of the various Supervisor Services that vSphere with Tanzu supports!
For those not aware, Supervisor Services are deployed as vSphere Pods, which are super tiny VMs that boot up a Photon OS kernel and are configured with just enough resources to run one or more Linux containers. In earlier releases of vSphere with Tanzu, vSphere Pods required an NSX based Supervisor, but with this restriction removed in vSphere 8.0 Update 1, it seems like deploying vSphere Pods should also be possible with just a VDS based Supervisor? 🤔
I attempted to deploy a container to my Supervisor Cluster in the hopes that it would deploy the workload as a vSphere Pod, but it immediately returned with the following error:
Error from server (workload management cluster uses vSphere Networking, which does not support action on kind Deployment): error when creating "deployment.yaml": admission webhook "default.validating.license.supervisor.vmware.com" denied the request: workload management cluster uses vSphere Networking, which does not support action on kind Deployment
Based on the message, it looks like this is not a technical limitation of deploying vSphere Pods when using a VDS based Supervisor, but rather a policy restriction: the capability is only supported when using an NSX based Supervisor.
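For reference, the request that trips the webhook is an ordinary Kubernetes Deployment. Even a minimal manifest along the following lines (the image and names here are just placeholders, not the exact manifest I used) is rejected on a VDS based Supervisor:

```yaml
# Minimal example Deployment; image and names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:latest
```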
I was still curious about how this was working. By default, when you log in to the Supervisor Cluster you have limited privileges, so I decided to log into the Supervisor Cluster via vCenter Server by going through the troubleshooting workflow, which will put you on one of the Supervisor Control Plane VMs (something I have written numerous blog posts about for various scenarios).
I typically prefer to use kubectl from my local desktop for ease of access, but also for the nice colored console output. I figured why not just copy the .kube/config file from the Supervisor Control Plane VM to my desktop and inspect it that way. Initially, nothing stood out on how the requests were being intercepted and I was about to call it a day. Then I thought: since I now have admin context to the Supervisor Cluster, maybe it might do something different if I tried to deploy the container again?
To my complete surprise, it worked and successfully deployed a vSphere Pod in the vSphere Namespace that I had created earlier! In fact, the screenshot above is actually a vSphere Pod running on a VDS based Supervisor Cluster, using the WordPress vSphere Pod example from the VMware documentation. 😆
Disclaimer: This is not officially supported by VMware, use at your own risk.
From an education and exploration standpoint, I think this can be super useful, especially if you want to run a handful of containers without having to spin up a full Tanzu Kubernetes Grid (TKG) Workload Cluster! For example, I recently saw that we had launched a new vSAN Object Viewer Fling, which is provided as a Docker container. Great! We can easily take that container and deploy it into Kubernetes, specifically running as a vSphere Pod, which I have done by creating a basic YAML manifest example, as shared in the tweet below.
Just heard about new @vmwarevsan Object Viewer @vmwflings - Noticed its just Docker container ... so figured why not deploy it to k8s but instead of requiring full Guest Cluster, how about as a vSphere Pod! Well, it works & runs on vSAN!🙌 👀gist for YAMLhttps://t.co/txTJMmi3U5 pic.twitter.com/aAIbOUahEq
— William Lam (@lamw.bsky.social) (@lamw) May 17, 2023
If you are interested in exploring vSphere Pods and only have access to a VDS based Supervisor Cluster configured with, say, HAProxy for your network load balancer, then you can follow the steps below:
Step 0 - Enable vSphere with Tanzu using either HAProxy or NSX Advanced Load Balancer (NSX-ALB) on your desired vSphere Cluster.
Step 1 - Create a standard vSphere Namespace using the vSphere UI. Your vSphere Pods should live in a vSphere Namespace that you create using the vSphere UI/API, which also makes them easy to manage.
Step 2 - SSH to the VCSA and then run the following script to retrieve the Supervisor Cluster Control Plane VM address and credentials:
/usr/lib/vmware-wcp/decryptK8Pwd.py
Step 3 - SSH to the IP Address using the root username and the password provided by the previous command.
Step 4 - Copy the contents from .kube/config and store that in your own local .kube directory.
Note: If you already have an existing file, this will overwrite its contents, so you may want to back up the original copy in case you wish to revert to your existing configuration.
Step 5 - Next, we need to make a few edits to the .kube/config file before we can use it locally. First, delete the certificate-authority-data section and replace it with the insecure-skip-tls-verify flag as shown in the snippet below. Second, replace the localhost address (127.0.0.1) with the IP Address of your Supervisor Control Plane. The result should roughly look like the following snippet:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://10.10.0.65:6443
..... snip .....
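The two edits in Step 5 can also be scripted with sed. The sketch below demonstrates them on a throwaway demo file at /tmp/kubeconfig-demo; to use it for real, point it at your copied ~/.kube/config instead, and substitute your actual Supervisor Control Plane address for the example 10.10.0.65:

```shell
# Create a demo kubeconfig with the two fields we need to change.
# (In practice, operate on the file you copied down from the
# Supervisor Control Plane VM instead of this demo file.)
cat > /tmp/kubeconfig-demo <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://127.0.0.1:6443
EOF

SUPERVISOR_IP="10.10.0.65"  # example address; use your own

# Edit 1: swap the certificate-authority-data line for insecure-skip-tls-verify.
# Edit 2: point the server entry at the Supervisor IP instead of localhost.
sed -i \
  -e 's/certificate-authority-data:.*/insecure-skip-tls-verify: true/' \
  -e "s#https://127\.0\.0\.1:6443#https://${SUPERVISOR_IP}:6443#" \
  /tmp/kubeconfig-demo

cat /tmp/kubeconfig-demo
```

Because sed only substitutes the matched portion of each line, the YAML indentation is preserved.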
Step 6 - Deploy your Kubernetes manifest and ensure that you are specifying the vSphere Namespace by using the -n parameter as shown in the example snippet below:
kubectl -n primp-industries apply -f [your-manifests].yaml
If you need a reference vSphere Pod workload to deploy, you can either use the WordPress or my vSAN Object Viewer example.
Saadallah Chebaro says
Thank you William. So is this a bug or by design? Is it going to be fixed and supported anytime soon?
William Lam says
It’s by design, as I’ve already mentioned in the post.
AA says
Hi William,
According to https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-with-tanzu-services-workloads/GUID-177C23C4-ED81-4ADD-89A2-61654C18201B.html
"Namespaces on a three-zone Supervisor configured with NSX, do not support vSphere Pods"
So when running a three-zone Supervisor configured with NSX, what *is* supported or what is the alternative to running vSphere Pods on the Supervisor? Is it just a TKG cluster, thus the name TKG2 on Supervisor?
Thanks for any insights.
sofianetech says
Hi William:
I have followed the steps on a Supervisor Cluster running on vSphere 8u2, but no change, same issue. Is there anything important that I have missed?
admission webhook "default.validating.license.supervisor.vmware.com" denied the request: workload management cluster uses vSphere Networking, which does not support action on kind Deployment
sofianetech says
Hi All
I changed the .kube/config file with the right parameters and could finally bypass the NSX restrictions, thanks! But I encountered a second problem when deploying a pod: "None of the nodes have ESX host annotations". My question is: do vSphere Pods need any special package installed on ESXi to allow Pods? I read from VMware that the spherelet package is installed on the ESXi hosts by NSX when configuring the Supervisor Cluster.
Nathan says
Hello,
I seem to still be having the same error. Does this still work with 8 Update 2?
After updating the config with the full string minus the cert, should you log in to the cluster again?
Steffen Kleven says
Hi William:
I enjoy your articles and finally managed to get Workload Management working in my lab environment. I used the fix above so I can create vSphere Pods, and everything works as expected apart from one quirk.
When I select one of my deployed containers under my namespace, I expect to see a summary page (maybe I am wrong?), but I get:
You have no privileges to view "wordpress-64f754ffd4-4ns9k" object.
Have you encountered this issue?
Best Regards,
Steffen Kleven
Steffen Kleven says
Must have been a bug. Now I get the inventory data.
Ashley McDonald says
Have you seen this bastard stealing your content? It looks like a rewrite with AI, or something even crappier like a double translate..
hxxps://www.c-sharpcorner.com/article/vsphere-pods-using-vds-based-supervisor-in-vsphere-with-tanzu/
hopefully the link doesnt render
Ashley McDonald says
primp-industries gives it totally away 😛
William Lam says
Thank you Ashley for reporting ... sadly this isn't the first time I've seen this. I've already sent a DMCA takedown notice to their provider. I really do appreciate this!
Kjell Computer says
Hi!
This guide was incredibly helpful for setting up Tanzu for vSphere 8u2 with just one supervisor in my lab. Since I have only one ESXi host with limited CPU resources, I followed the procedure to log in to the supervisor and modify all deployments to contain only one replica.
As a result, all pods are now configured correctly, and the integration with Aria Automation is also working seamlessly so that I can deploy TKG clusters from there. This was an excellent article—thank you!