Last year, when vSphere with Kubernetes (the original name of what is now vSphere with Tanzu) was first released, I shared a detailed write-up on deploying a minimal setup of vSphere with Tanzu on an Intel NUC with just 32GB of memory.
I am always looking for ways to simplify and ease the consumption of various VMware technologies in a homelab, and I was pretty happy with the tweaks I made to reduce the resources needed to run vSphere with Tanzu. Instead of deploying three Supervisor Control Plane VMs, a modification to the vSphere with Tanzu configuration allowed me to deploy just two. Unfortunately, deploying only a single Supervisor Control Plane VM was not possible at the time due to a known issue.
While deploying a pre-release of vSphere 7.0 Update 3 in one of my lab environments, I was going through the process of tweaking the vSphere with Tanzu configuration before enablement and figured why not try the one-node setting, in case it had been fixed 🤷 I honestly was not expecting it to work, since an internal bug had been filed a while back and I had not seen it closed. To my complete surprise, vSphere with Tanzu enabled successfully with just a single Supervisor Control Plane VM!
It turns out that someone from Engineering must have fixed the issue, and a single Supervisor Control Plane VM is now possible with the upcoming release of vSphere 7.0 Update 3! 🥳
UPDATE (07/02/24) - As of vSphere 8.0 Update 3, you no longer have the ability to configure a single Supervisor Control Plane VM using the minmasters and maxmasters parameters, which have been removed from /etc/vmware/wcp/wcpsvc.yaml in favor of allowing users to control this configuration programmatically as part of enabling vSphere IaaS (formerly known as vSphere with Tanzu). The updated vSphere IaaS API that allows users to specify the number of Supervisor Control Plane VMs will not be available until the next major vSphere release. While this regressed capability is unfortunate, it was also not an officially supported configuration, and users who wish to specify the number of Supervisor Control Plane VMs using the YAML method will need to use an earlier version of vSphere.
To change the settings, SSH to the VCSA, edit the configuration file /etc/vmware/wcp/wcpsvc.yaml, and search for the minmasters and maxmasters parameters, changing their values from 3 to 1:
minmasters: 1
maxmasters: 1
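If you prefer to script the change rather than edit the file by hand, a sed one-liner works too. The sketch below runs against a local sample copy so it is safe to try anywhere; on the VCSA you would point the same sed command at /etc/vmware/wcp/wcpsvc.yaml (the sample file name and its starting values here are just stand-ins for illustration).

```shell
# Create a local sample copy with the default values (stand-in for
# /etc/vmware/wcp/wcpsvc.yaml on the VCSA)
cat > wcpsvc-sample.yaml <<'EOF'
minmasters: 3
maxmasters: 3
EOF

# Change both parameters from 3 to 1 in place
sed -i -e 's/^minmasters: .*/minmasters: 1/' \
       -e 's/^maxmasters: .*/maxmasters: 1/' wcpsvc-sample.yaml

cat wcpsvc-sample.yaml
```

On the VCSA itself you would replace wcpsvc-sample.yaml with /etc/vmware/wcp/wcpsvc.yaml (and it never hurts to take a backup copy of the file first).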
For the changes to take effect, you will need to restart the vSphere with Tanzu service, which is listed as wcp, by running the following command:
service-control --restart wcp
In addition, for homelab purposes, you may also want to change the controlplane_vm_disk_provisioning parameter, which defaults the Supervisor Control Plane VMs to Thick provisioning rather than the Thin provisioning many folks use in their labs.
controlplane_vm_disk_provisioning: "thin"
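This edit can be scripted the same way. Again, the sketch below operates on a local sample copy (the file name and the "thick" starting value are assumptions for illustration); on the VCSA the target is /etc/vmware/wcp/wcpsvc.yaml.

```shell
# Local sample copy standing in for /etc/vmware/wcp/wcpsvc.yaml;
# the starting value shown is assumed for demonstration purposes
cat > wcpsvc-sample.yaml <<'EOF'
controlplane_vm_disk_provisioning: "thick"
EOF

# Switch the Supervisor Control Plane VM disks to thin provisioning
sed -i 's/^controlplane_vm_disk_provisioning: .*/controlplane_vm_disk_provisioning: "thin"/' \
    wcpsvc-sample.yaml

grep controlplane_vm_disk_provisioning wcpsvc-sample.yaml
```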
M. Buijs - Be-Virtual.net says
William thanks, this is great for Home Lab usage!
In my case the following setting was between quotes:
Your blog entry: controlplane_vm_disk_provisioning = thin
Config file at my VCSA: controlplane_vm_disk_provisioning = "thin"
William Lam says
Fixed
M. Buijs - Be-Virtual.net says
Thanks William!
Small update: I tested the line as follows:
VCSA: controlplane_vm_disk_provisioning = thin
&
VCSA: controlplane_vm_disk_provisioning = "thin"
In both cases, the Supervisor Control Plane VM was provisioned thick. I am afraid there is no difference. Maybe a small bug? I am running vCenter Server 7.0 Update 3.
Anybody else tested this already? 🙂
cy says
I successfully deployed with thin provisioning by using "thin" in the YAML.
M. Buijs - Be-Virtual.net says
Just checked it again with vCenter Server 7.0 Update 3a and I also can confirm it is working now!
cy says
I successfully enabled Workload Management using 1 control plane Supervisor VM, but the tkg-plugin-server pod and the vmware-system-applatform-operator-mgr StatefulSet always fail to deploy.
aretoojay says
A few pods won't come up due to the Pod's node affinity/selector or pod affinity/anti-affinity rules. At least one pod from the deployment will be up and running.
Toan says
Hi, my Workload Management initialization repeatedly installs and uninstalls the last Supervisor Control Plane VM. Do you know how to fix it?