WilliamLam.com

How to clean up stale vSphere Container Volumes & First Class Disks?

03.10.2021 by William Lam // 7 Comments

If you are running and deploying Kubernetes (K8s), which includes vSphere with Tanzu and Tanzu Kubernetes Grid (TKG), you might notice vSphere Container Volumes showing up in the vSphere UI under the Monitor tab for a given vSphere-based Datastore. This is normal and expected, as new Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are requested as part of deploying K8s-based applications that require storage.


Typically, when PVs and PVCs are no longer needed, they should be cleaned up within the K8s layer via kubectl, either automatically or manually depending on your provisioning process. When you delete a K8s Cluster, these PVs/PVCs are not automatically cleaned up, and for good reason: you may want to reuse them. vSphere supports this by implementing them as First Class Disks (FCDs), which means their lifecycle is independent of a VM.
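
As a rough sketch of that K8s-side cleanup (the namespace and claim names below are just placeholders), you could list the claims and then delete the ones that are no longer needed:

# List PVCs in the namespace, then delete the one that is no longer needed (example names)
kubectl get pvc -n demo-namespace
kubectl delete pvc demo-claim -n demo-namespace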

So what happens when the K8s Cluster has been deleted and you actually do want to clean up these stale FCDs? How do you go about doing that? This is a question I have seen come up more frequently, and there are a few options.

Option 1:

If you happen to be on vSphere 7.0 Update 2 (which was just released yesterday), the vSphere UI has been enhanced to allow users to delete vSphere Container Volumes (see screenshot above). Previously, you could only view the FCDs and reapply a storage policy.

Option 2:

Since vSphere Container Volumes are just FCDs and we have FCD APIs, we can use the API to retrieve information as well as clean them up. The easiest way is to use PowerCLI's Get-CnsVolume and Remove-CnsVolume cmdlets.

Here is an example of deleting the 2GB volume:

Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") -Name "pvc-db6829ad-e1a9-46e8-ace3-7e7c18187a0d" | Remove-CnsVolume
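
If you have more than a handful of stale volumes, a rough PowerCLI sketch along these lines can help; the datastore name and the pvc-* filter are just example values, so adjust them for your environment:

# Review all CNS volumes on the datastore before deleting anything
$ds = Get-Datastore "sm-vsanDatastore"
Get-CnsVolume -Datastore $ds

# Remove only the volumes matching an example PVC naming pattern
Get-CnsVolume -Datastore $ds | Where-Object { $_.Name -like "pvc-*" } | Remove-CnsVolume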

In the case of standalone FCDs, which could have been provisioned manually or through a backup solution, you can also clean them up using PowerCLI's Get-VDisk and Remove-VDisk cmdlets:

Get-VDisk -Name "fill-me-in" | Remove-VDisk
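
If you do not know the disk names off-hand, you can first list the standalone FCDs on a given datastore, for example (the datastore name is again just a placeholder):

# List all First Class Disks on the datastore to identify stale ones
Get-VDisk -Datastore (Get-Datastore "sm-vsanDatastore")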

Categories // Cloud Native, Kubernetes, VMware Tanzu, VSAN, vSphere 7.0 Tags // CNS, CSI, FCD, Kubernetes

ESXi 7.0 Update 2 Upgrade Issue – Failed to load crypto64.efi

03.10.2021 by William Lam // 34 Comments

I started to notice yesterday that a few folks in the community were running into the following error after upgrading their ESXi hosts to the latest 7.0 Update 2 release:

Failed to load crypto64.efi

Fatal error: 15 (Not Found)

Upgrading my #VMware #homelab to #vSphere7Update2 is not going so well. 🙁 #vExpert pic.twitter.com/pGOlCGJIOF

— Tim Carman (@tpcarman) March 10, 2021

UPDATE (04/29/2021) - VMware has just released ESXi 7.0 Update 2a, which resolves this issue and includes other fixes. Please make sure to read over the release notes, and do not forget to first upgrade your vCenter Server to the latest 7.0 Update 2a release, which came out earlier this week.

UPDATE (03/13/2021) - It looks like VMware has just pulled the ESXi online/offline depot and has updated KB 83063 to NOT recommend customers upgrade to ESXi 7.0 Update 2. A new patch is actively being developed and customers should hold off upgrading until that is made available.

UPDATE (03/10/2021) - VMware has just published KB 83063 which includes official guidance relating to the issue mentioned in this blog post.

Issue

It was not immediately clear to me how folks were reaching this state, so I reached out to a few people in the community to better understand their workflow. It turns out the upgrade was being initiated from vCenter Server using vSphere Update Manager (VUM), applying a custom ESXi 7.x Patch baseline to remediate. Upon reboot, the ESXi host would then hit the error shown above.


Interestingly, I personally have only used Patch baselines for applying ESXi patches (e.g. 6.7p03, 7.0p01) and never for major ESXi upgrades; I would normally import the ESXi ISO and create an Upgrade baseline. At least from the couple of folks I spoke with, using a Patch baseline is something they have done for some time, and it had never given them issues, whether for a patch or a major upgrade release.

Workaround

I also had some folks internally reach out to me regarding this issue and provide a workaround. At the time, I did not have a good grasp of what was going on. It turns out the community also figured out the same workaround, including how to recover an ESXi host that hits this error, as you cannot simply go through the normal recovery workflow.

For those hitting the error above, you just need to create a bootable USB key with the ESXi 7.0 Update 2 ISO using Rufus or UNetbootin. Boot the ESXi 7.0 Update 2 installer and select the upgrade option, which will fix the host.
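
Once the host is back up, a quick PowerCLI check (the hostname below is just a placeholder) can confirm it is running the expected 7.0 Update 2 build:

# Verify the ESXi version and build number after the recovery
Get-VMHost -Name "esxi-01.example.com" | Select-Object Name, Version, Build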

To prevent this from happening, instead of creating or using a Patch baseline, create an Upgrade baseline using the ESXi 7.0 Update 2 ISO. You will first need to go to the Lifecycle Manager management interface in vCenter Server and, under "Imported ISOs", import your image.


Then create an ESXi Upgrade baseline, select the desired ESXi ISO image, and use this baseline for your upgrade.


I am not 100% sure, but I believe the reason for this change in behavior is mentioned in the ESXi 7.0 Update 2 release notes under the "Patches contained in this Release" section, which someone pointed me to. In any case, for major upgrades I would certainly recommend using an Upgrade baseline, as that is what I have always used, even when I was a customer back in the day.

Categories // ESXi, vSphere 7.0 Tags // vSphere 7.0 Update 2

VCSA 7.0 Update 2 Upgrade Issue - Exception occurred in install precheck phase

03.09.2021 by William Lam // 34 Comments

Like most folks, I was excited about the release of vSphere 7.0 Update 2 and I was ready to upgrade my personal homelab, which was running on vSphere 7.0 Update 1c. However, after starting my VCSA upgrade in the VAMI UI, it quickly failed with the following error message: Exception occurred in install precheck phase

Joy … I just attempted to upgrade my VCSA (7.0u1c) in my personal homelab to #vSphere70Update2 and ran into “Exception occurred in install precheck phase” … pic.twitter.com/4mkvxHxdRl

— William Lam (@lamw) March 9, 2021

Given the release had just GA'ed less than an hour ago and everyone was probably hammering the site, I figured I would wait and then try again.

[Read more...]

Categories // VCSA, vSphere 7.0 Tags // VCSA, vSphere 7.0 Update 2


