If you are running and deploying Kubernetes (K8s), which includes vSphere with Tanzu and Tanzu Kubernetes Grid (TKG), you may notice vSphere Container Volumes showing up in the vSphere UI under the Monitor tab for a given vSphere-based datastore. This is normal and expected: new Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are requested as part of deploying K8s-based applications that require storage.
Typically, when PVs and PVCs are no longer needed, they should be cleaned up within the K8s layer via kubectl, either automatically or manually depending on your provisioning process. When you delete a K8s cluster, however, these PVs/PVCs are not automatically cleaned up, and for good reason: you may want to reuse them. vSphere supports this by implementing them as First Class Disks (FCDs), which means their lifecycle is independent of any VM.
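For example, a claim that is no longer needed can be deleted directly with kubectl (the claim and namespace names below are hypothetical); the bound PV is then deleted automatically if its reclaim policy is set to Delete:

kubectl delete pvc my-app-data -n demo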
So what happens when the K8s cluster has been deleted and you actually want to clean up these stale FCDs? How do you go about doing that? This is a question I have seen come up more frequently, and there are a few options.
Option 1:
If you happen to be on vSphere 7.0 Update 2 (which was just released yesterday), the vSphere UI has been enhanced to allow users to delete vSphere Container Volumes (see screenshot above). Previously, you could only view the FCDs and reapply a storage policy.
Option 2:
Since vSphere Container Volumes are just FCDs, and we have FCD APIs, we can use the API to retrieve information about them as well as clean them up. The easiest way is to use PowerCLI's Get-CnsVolume and Remove-CnsVolume cmdlets.
Here is an example of deleting the 2GB volume:
Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") -Name "pvc-db6829ad-e1a9-46e8-ace3-7e7c18187a0d" | Remove-CnsVolume
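If you want to audit what CNS has provisioned before deleting anything, you can enumerate all container volumes on a datastore and then pipe only the stale ones to Remove-CnsVolume. A minimal sketch, assuming an existing Connect-VIServer session and that -Name accepts wildcards as most PowerCLI Get- cmdlets do:

# List every CNS volume on the datastore to identify stale PVCs
Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore")

# Remove all volumes matching the K8s PVC naming pattern (verify the list first!)
Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") -Name "pvc-*" | Remove-CnsVolume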
In the case of standalone FCDs, which could have been provisioned manually or by a backup solution, you can also clean them up using PowerCLI's Get-VDisk and Remove-VDisk cmdlets:
Get-VDisk -Name "fill-me-in" | Remove-VDisk
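Similarly, before removing standalone FCDs in bulk, it is worth enumerating them first. A rough sketch; the "stale-fcd-*" name pattern is hypothetical, so substitute the names of the disks you have confirmed are stale:

# List all First Class Disks on a given datastore
Get-VDisk -Datastore (Get-Datastore "sm-vsanDatastore")

# Remove the confirmed-stale disks without an interactive prompt
Get-VDisk -Name "stale-fcd-*" | Remove-VDisk -Confirm:$false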
GP says
Thanks for this; it's exactly what I need, if only I were running vSphere 7 🙁 Unfortunately I'm on 6.7 and looking for similar functionality. Right now I'm in the process of manually cleaning up a couple thousand zombie files, and these UUIDs have me seeing stars...
William Lam says
My recommendation is NOT to clean up manually; if you understand the manual process, you should consider automating it. I suspect most of what you need can be pulled via the vSphere API. This is a great learning opportunity 🙂
GP says
Thanks for the reply! This is indeed the plan, and yes, it will be a great learning opportunity.
kurthv71 says
Hi William,
I have an issue where the container volumes are located on an NFS datastore that is "inaccessible". To get rid of the datastore in vCenter, I need to delete these container volumes, but "Option 1" is not working in my vSphere 7U2 lab environment.
I would guess that "Option 2" isn't working either.
My question:
Is there some kind of "force" option to remove these orphaned container volumes?
Best regards,
Volker
Dennis says
Hey William,
your description of FCDs as "independent of the VM's lifecycle" seems to be broken, at least in 7.0.3.
If Storage vMotion is used on the VM the disk is currently attached to, the container volume gets moved with the VM into the VM's folder on the datastore and out of the FCD folder.
The same happens with this tool by VMware:
https://github.com/vmware-samples/cloud-native-storage-self-service-manager
I did not find any other information on this; did you experience this too?