When you deploy a Tanzu Kubernetes Grid (TKG) Cluster using the integrated TKG Service in vSphere with Tanzu, you can specify a Virtual Machine class type, which determines the amount of CPU and memory resources allocated to the Control Plane and Worker Node VMs of your TKG Cluster.
Here is a sample YAML specification that uses the best-effort-xsmall VM class type for both the Control Plane and Worker Nodes, but you can certainly override this and choose different classes based on your requirements.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: william-tkc-01
  namespace: primp-industries
spec:
  distribution:
    version: v1.17.8+vmware.1-tkg.1.5417466
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 193.0.2.0/16
      serviceDomain: managedcluster.local
      services:
        cidrBlocks:
        - 195.51.100.0/12
  topology:
    controlPlane:
      class: best-effort-xsmall
      count: 1
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-xsmall
      count: 3
      storageClass: vsan-default-storage-policy
Today, there are a total of 16 VM class types that you can select from; however, these are not customizable, which is a request that has been coming up more frequently. The vSphere with Tanzu team is aware of this request and is working on a solution that will not only make customizing CPU and memory easier but also support storage customization. As you can see from the table below, 16GB is the only supported storage configuration today.
In the meantime, if you need a supported path for customizing your TKG Guest Clusters, one option is to use the TKG Standalone / MultiCloud CLI, which can be used with a vSphere with Tanzu Cluster. You will need to deploy an additional TKG Management Cluster (basically a few VMs), but once you have that, you can override the CPU, memory, and storage of both the Control Plane and Worker Nodes using the following environment variables, as shown in the sketch after this list:
- VSPHERE_WORKER_NUM_CPUS
- VSPHERE_WORKER_MEM_MIB
- VSPHERE_WORKER_DISK_GIB
- VSPHERE_CONTROL_PLANE_NUM_CPUS
- VSPHERE_CONTROL_PLANE_MEM_MIB
- VSPHERE_CONTROL_PLANE_DISK_GIB
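As a minimal sketch (the sizing values, cluster name, and plan below are hypothetical; adjust them for your environment), the overrides would be exported before creating the cluster with the TKG CLI:

# Override Control Plane and Worker Node sizing (example values)
export VSPHERE_CONTROL_PLANE_NUM_CPUS=4
export VSPHERE_CONTROL_PLANE_MEM_MIB=8192
export VSPHERE_CONTROL_PLANE_DISK_GIB=40
export VSPHERE_WORKER_NUM_CPUS=8
export VSPHERE_WORKER_MEM_MIB=16384
export VSPHERE_WORKER_DISK_GIB=80

# Hypothetical cluster name and plan
tkg create cluster my-workload-cluster --plan dev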
If you are interested, the easiest way to get started is by using my TKG Demo Appliance Fling, which was recently updated to the latest TKG 1.2 release and includes support for K8s v1.19, which is currently not available on vSphere with Tanzu.
Now, you might ask, would it be possible to create your own custom VM class types using vSphere with Tanzu? Well .... keep reading to find out 🙂
Disclaimer: This is not officially supported by VMware; use at your own risk. These custom changes can potentially impact upgrades or automatically be reverted upon the next update or upgrade. You have been warned.
Step 1 - SSH to your vCenter Server and run the following command to retrieve the root password so we can log in to the Supervisor VM VIP:
/usr/lib/vmware-wcp/decryptK8Pwd.py
Step 2 - SSH to the IP address provided in the previous step using the password that was shown on screen.
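For example, using a placeholder for the VIP returned in Step 1:

ssh root@<Supervisor-VIP-from-Step-1>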
Step 3 - Create your custom VM class type YAML specification. Below are two examples, called guaranteed-william.yaml and best-effort-william.yaml, which show how you can specify both reserved and non-reserved class types. You can always use kubectl get vmclass <name-of-class> -o yaml to see the existing definitions.
guaranteed-william.yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: guaranteed-william
spec:
  hardware:
    cpus: 2
    memory: 6Gi
  policies:
    resources:
      requests:
        cpu: 2000m
        memory: 6Gi
best-effort-william.yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: best-effort-william
spec:
  hardware:
    cpus: 2
    memory: 6Gi
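The difference between the two mirrors the built-in classes: the guaranteed example sets policies.resources.requests equal to the hardware allocation, which fully reserves the CPU and memory for the VM, while the best-effort example omits the reservations, so resources are only allocated on a best-effort basis.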
Step 4 - To create our custom VM class types, we need to run the following commands:
kubectl apply -f guaranteed-william.yaml
kubectl apply -f best-effort-william.yaml
Step 5 - You can verify that they were successfully created by running the following command:
kubectl get vmclass
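To inspect one of the new classes in detail (using the hypothetical names from the examples above), you can also dump the definition back out as YAML:

kubectl get vmclass best-effort-william -o yaml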
At this point, you can log out of the Supervisor VM and vCenter Server; you are now ready to consume your new custom VM class.
Step 6 - If we take our TKG YAML example above and simply replace the class with our newly created VM class type called best-effort-william
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: william-tkc-01
  namespace: primp-industries
spec:
  distribution:
    version: v1.17.8+vmware.1-tkg.1.5417466
  settings:
    network:
      cni:
        name: antrea
      pods:
        cidrBlocks:
        - 193.0.2.0/16
      serviceDomain: managedcluster.local
      services:
        cidrBlocks:
        - 195.51.100.0/12
  topology:
    controlPlane:
      class: best-effort-william
      count: 1
      storageClass: vsan-default-storage-policy
    workers:
      class: best-effort-william
      count: 3
      storageClass: vsan-default-storage-policy
and then run the following command to deploy our TKG Cluster:
kubectl apply -f william.yaml
We should see the deployment start, and if we look at the CPU/Memory of the resulting VMs, we should see that they match our custom VM class type.
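As a quick sanity check (a sketch, assuming the namespace from the example above), you can list the VirtualMachine objects backing the cluster nodes from the Supervisor Cluster context; the spec.className field on each VM should reflect the custom class:

kubectl get virtualmachines -n primp-industries
kubectl get virtualmachine <vm-name> -n primp-industries -o yaml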