This post is part of a short series that builds on our minimal VMware Cloud Foundation (VCF) 9.0 deployment (2x Minisforum MS-A2) and showcases how to fully leverage the exciting new capabilities in the VCF 9 platform, all while maintaining a minimal resource footprint, which is ideal for lab and learning purposes.
In this blog post, we will explore one of the foundational vSphere Supervisor services called vSphere Kubernetes Service (VKS), enabling administrators to easily deploy, manage and lifecycle conformant Kubernetes Clusters at scale for their development and platform teams. VKS can be consumed through vCenter Server for single IT organizations, as well as through VCF Automation for organizations that require strong multi-tenancy, including cloud service providers.

Here are some additional VKS resources that might be of interest if you would like to learn more:
Requirements:
- VCF 9.0 environment deployed
- NSX VPC configured with Centralized Transit Gateway
- vSphere Supervisor configured with NSX VPC Networking
Historically, the primary method for deploying a VKS Cluster was the command line: after connecting to your vSphere Supervisor endpoint, you would use kubectl to apply a YAML manifest describing the VKS Cluster you wish to deploy. While the kubectl method continues to be supported with VCF 9 (see the documentation for more details), we will be leveraging another approach: a graphical interface that can be accessed directly from the vSphere UI or as a standalone interface, called the Local Consumption Interface (LCI).
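For reference, a kubectl-based deployment applies a Cluster API manifest along these lines. This is a minimal sketch, not a definitive manifest: the cluster name, namespace, VM class and storage class values are all hypothetical placeholders, and it assumes the built-in tanzukubernetescluster ClusterClass and a Kubernetes release version that exists in your environment (you can list the available versions with kubectl get tanzukubernetesreleases).

```yaml
# Minimal sketch of a kubectl-deployed VKS Cluster -- all names and
# values below are placeholders, adjust for your environment
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-vks-cluster        # hypothetical cluster name
  namespace: my-namespace     # your vSphere Namespace (see Step 3)
spec:
  topology:
    class: tanzukubernetescluster
    version: v1.32.0          # pick a version from: kubectl get tanzukubernetesreleases
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: node-pool
        name: node-pool-1
        replicas: 1
    variables:
    - name: vmClass
      value: best-effort-small       # a VM Class assigned to the namespace
    - name: storageClass
      value: vcf-vm-storage-policy   # a Storage Policy assigned to the namespace
```

This is exactly the kind of manifest the LCI wizard generates for you, which is why the UI route is such a convenient starting point.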
Step 1 - LCI also runs as a vSphere Supervisor service, but it is not installed by default. You can download the free LCI deployment manifest from the Broadcom Support Portal (BSP) by navigating to My Downloads->Free Downloads->vSphere Supervisor Services->Local Consumption Interface (direct URL link).
Step 2 - To install the LCI vSphere Supervisor service, navigate to Supervisor Management->Services->Add to upload the LCI deployment manifest file.

Once the manifest has been uploaded, go ahead and register the new LCI service.

You should now see a new LCI tile in the list of vSphere Supervisor services. Click on the Configure action to choose the vSphere Supervisor Cluster on which to enable the LCI service.

Step 3 - We now need to create a vSphere Namespace, which is required to deploy any vSphere Supervisor workload, whether it is a VKS Cluster or even traditional VMs using the VM Service. Navigate to Supervisor Management->Namespaces->New Namespace to begin.

After specifying the name of the vSphere Namespace (it must be DNS-compliant, so no spaces, upper-case or special characters are allowed), proceed with the defaults, including the vSphere Zone that was created as part of enabling vSphere Supervisor.

Step 4 - Next, we assign resource policies to our vSphere Namespace to control how much compute and storage resources are granted, along with instance size configurations.
Under Storage->Add Storage, you will assign the desired VM Storage Policy; you can either choose the VCF VM Storage Policy or create a custom one based on your preferences.

Under VM Service->Add VM Class, you will assign the VM (T-shirt) sizes; you can either pick from the available system defaults or define your own custom VM classes along with custom labels.
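While VM Classes are typically managed through the UI as shown above, under the covers each one is a VirtualMachineClass object in the Supervisor. As a rough sketch (the API version and field names below are based on the vm-operator project and may differ slightly between releases; the class name and sizing are examples), a custom medium-sized class could look like:

```yaml
# Sketch of a custom VM Class (T-shirt size) -- name and sizing are examples
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineClass
metadata:
  name: custom-medium
spec:
  hardware:
    cpus: 4          # 4 vCPUs
    memory: 8Gi      # 8 GB RAM
```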

At this point, you are now ready to deploy your first VKS Cluster using the LCI interface!
Step 5 - Navigate to Supervisor Management->Namespaces and select the vSphere Namespace you created in Step 3. You should see a Resources tab, which is how you access the LCI graphical interface.

In addition to deploying a VKS Cluster, you can use LCI to deploy other vSphere Supervisor services: Virtual Machine, Network (VPC, Load Balancers), Virtual Machine Images (think AWS AMI), Volume (PVC) and Database, to name just a few.
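For example, the Virtual Machine service that LCI exposes is backed by the VirtualMachine custom resource from the vm-operator API. The sketch below is illustrative only: the VM name, namespace, image, class and storage class references are all assumptions, so substitute values that actually exist in your vSphere Namespace.

```yaml
# Sketch of a VM Service virtual machine -- all referenced names are placeholders
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  name: demo-vm
  namespace: my-namespace
spec:
  className: best-effort-small      # a VM Class assigned to the namespace
  imageName: demo-vm-image          # a Virtual Machine Image available to the namespace
  storageClass: vcf-vm-storage-policy
  powerState: poweredOn
```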
You can also connect directly to the LCI interface as a standalone client by clicking on the link or simply opening a browser to the vSphere Supervisor API Server FQDN (from Step 6 in this configuration).

If you are new to creating VKS Clusters and want a quick and simple way to provision one, I highly recommend using the LCI interface. You literally click Next and Finish to deploy a basic VKS Cluster (single Control Plane and Worker Node) without even blinking!
Furthermore, and why I think the LCI interface is the ideal place to start, whether you are using the simple or advanced part of the wizard, a ready-to-use YAML deployment manifest is generated as you interact with the various vSphere Supervisor services.

You can see exactly how the YAML is generated and, best of all, you can download the YAML manifest, connect to your vSphere Supervisor using the new VCF CLI and apply the manifest to deploy your workload without requiring the LCI UI!
Step 6 - Once your VKS Cluster is up and running, we can download the kubeconfig file (expand the arrow), which will allow us to connect to our VKS Cluster using kubectl.

You can then use the --kubeconfig option with kubectl to explore your VKS Cluster and deploy important container applications like Doom 🙂, and request resources like a Kubernetes service load balancer or persistent volume claim (PVC), which will automatically be provisioned by vSphere Supervisor with the built-in resource management defined by your vSphere Namespace!
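As a quick illustration of the PVC case, applying a standard PersistentVolumeClaim like the one below against your VKS Cluster will cause vSphere Supervisor to dynamically provision a virtual disk backed by the namespace's VM Storage Policy. Note that the storage class name here is an assumption; run kubectl get storageclass inside your VKS Cluster to see the real name.

```yaml
# Standard Kubernetes PVC -- the storage class name is a placeholder
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: vcf-vm-storage-policy   # your VM Storage Policy from Step 4
  resources:
    requests:
      storage: 5Gi
```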

As of publishing this blog post, here is a working Kubedoom deployment manifest that is compatible with Kubernetes v1.32:
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubedoom
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubedoom
  namespace: kubedoom
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubedoom
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubedoom
  namespace: kubedoom
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubedoom
  namespace: kubedoom
  labels:
    app: kubedoom
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubedoom
  template:
    metadata:
      labels:
        app: kubedoom
    spec:
      serviceAccountName: kubedoom
      containers:
      - name: kubedoom
        image: ghcr.io/storax/kubedoom:latest
        ports:
        - containerPort: 5900
          name: vnc
        env:
        - name: NAMESPACE
          value: default
        securityContext:
          runAsNonRoot: false
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          seccompProfile:
            type: RuntimeDefault
---
apiVersion: v1
kind: Service
metadata:
  name: kubedoom-svc
  namespace: kubedoom
  labels:
    app: kubedoom
spec:
  type: LoadBalancer
  selector:
    app: kubedoom
  ports:
  - name: vnc
    port: 5900
    targetPort: 5900
    protocol: TCP
You can then run the following commands to deploy Kubedoom onto your VKS Cluster and request a service load balancer, which will be serviced by our NSX VPC and provide you with an externally accessible address to connect to (e.g. 31.31.0.10:5900). The password to Kubedoom is idbehold.
kubectl --kubeconfig ~/Desktop/kubernetes-cluster-bkha-kubeconfig.yaml apply -f kubedoom.yaml
kubectl --kubeconfig ~/Desktop/kubernetes-cluster-bkha-kubeconfig.yaml -n kubedoom get svc

