WilliamLam.com

Aquantia/Marvell AQtion (Atlantic) driver now inbox in ESXi 7.0 Update 2

03.11.2021 by William Lam // 25 Comments

Last spring, VMware and Aquantia (now part of Marvell) collaborated to deliver the first ESXi Native Driver for their AQtion (Atlantic) based 10GbE network adapters. The driver was primarily focused on enabling network connectivity for ESXi when running on either an Apple 2018 Mac Mini (8,1) or an Apple 2019 Mac Pro (7,1) that included the 10GbE networking option. It also benefited the broader VMware community by enabling additional 10GbE connectivity through a number of Thunderbolt 3 to 10GbE network adapters that customers could now take advantage of in their VMware environments.

With all these benefits, VMware has decided to inbox the Aquantia/Marvell driver with the latest ESXi 7.0 Update 2 release, so customers no longer have to create a custom ESXi Image Profile that includes the driver, which was previously always required when installing ESXi on either the Apple Mac Mini or Mac Pro configured with the 10GbE networking option. For a complete list of supported Aquantia/Marvell AQtion based network adapters, please see the VMware HCL.
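
If you want to quickly confirm that the inbox driver was picked up after upgrading, you can check from PowerCLI. Below is a minimal sketch, assuming an active Connect-VIServer session; the hostname esxi-01.lab.local is a placeholder, and the "atlantic" name filter is my assumption about how the driver VIB is labeled, so adjust as needed.

# Get an esxcli (V2) handle for the host; the hostname is hypothetical
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi-01.lab.local") -V2

# Look for the Aquantia/Marvell driver among the installed VIBs (name filter is an assumption)
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -match "atlantic" }

# The 10GbE adapter should now show up with its associated driver
$esxcli.network.nic.list.Invoke()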

Here is a screenshot of an early build of ESXi 7.0 Update 2 running on the 2018 Mac Mini, which now automatically recognizes the 10GbE network adapter out of the box.

Categories // Apple, ESXi, vSphere 7.0 Tags // apple, Aquantia, ESXi, ESXi 7.0 Update 2, mac mini, mac pro, Marvell, vSphere 7.0 Update 2

How to clean up stale vSphere Container Volumes & First Class Disks?

03.10.2021 by William Lam // 7 Comments

If you are running and deploying Kubernetes (K8s), which includes vSphere with Tanzu and Tanzu Kubernetes Grid (TKG), you might notice vSphere Container Volumes showing up in the vSphere UI under the Monitor tab for a given vSphere-based Datastore. This is normal and expected, as new Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are requested as part of deploying K8s-based applications that require storage.


Typically, when PVs and PVCs are no longer needed, they should be cleaned up within the K8s layer via kubectl, either automatically or manually depending on your provisioning process. When you delete a K8s Cluster, however, these PVs/PVCs are not automatically cleaned up, and that is for good reason: you may want to reuse them. vSphere supports this by implementing them as First Class Disks (FCDs), which means their lifecycle is independent of any VM.
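
While the K8s Cluster still exists, the usual cleanup path via kubectl looks like the following hedged example; the claim name and namespace here are hypothetical.

# Deleting the PVC releases the underlying PV (and its FCD) when the StorageClass reclaim policy is Delete
kubectl delete pvc my-app-data -n my-app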

So what happens when the K8s Cluster has been deleted and you actually want to clean up these stale FCDs? How do you go about doing that? This is a question I have seen come up more frequently, and there are a few options.

Option 1:

If you happen to be on vSphere 7.0 Update 2 (which was just released yesterday), the vSphere UI has been enhanced to allow users to delete vSphere Container Volumes (see screenshot above). Previously, you could only view the FCDs and reapply a storage policy.

Option 2:

Since vSphere Container Volumes are just FCDs and we have FCD APIs, we can use the API to retrieve information as well as clean them up. The easiest way is to use PowerCLI's Get-CnsVolume and Remove-CnsVolume cmdlets.
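
To see what is currently out there before deleting anything, you can simply list the volumes for a given datastore. This is a small sketch, assuming an active Connect-VIServer session; the datastore name matches the example below, and the "pvc-*" filter is just illustrative.

# List all vSphere Container Volumes (CNS volumes) on the datastore
Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore")

# Or narrow it down to volumes created for PVCs (illustrative filter)
Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") | Where-Object { $_.Name -like "pvc-*" }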

Here is an example of deleting the 2GB volume:

Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") -Name "pvc-db6829ad-e1a9-46e8-ace3-7e7c18187a0d" | Remove-CnsVolume

In the case of standalone FCDs, which could have been provisioned manually or through a backup solution, you can clean them up using PowerCLI's Get-VDisk and Remove-VDisk cmdlets:

Get-VDisk -Name "fill-me-in" | Remove-VDisk
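
If you are not sure which standalone FCDs are still lingering, you can enumerate them per datastore first. A sketch, reusing the datastore name from the earlier example; my assumption is that Get-VDisk accepts a -Datastore parameter, and -Confirm:$false simply suppresses the interactive prompt.

# List the standalone First Class Disks on the datastore before deciding what to remove
Get-VDisk -Datastore (Get-Datastore "sm-vsanDatastore")

# Remove a specific disk without the confirmation prompt (destructive, use with care)
Get-VDisk -Name "fill-me-in" | Remove-VDisk -Confirm:$false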

Categories // Cloud Native, Kubernetes, VMware Tanzu, VSAN, vSphere 7.0 Tags // CNS, CSI, FCD, Kubernetes

ESXi 7.0 Update 2 Upgrade Issue – Failed to load crypto64.efi

03.10.2021 by William Lam // 34 Comments

I started to notice yesterday that a few folks in the community were running into the following error after upgrading their ESXi hosts to the latest 7.0 Update 2 release:

Failed to load crypto64.efi

Fatal error: 15 (Not Found)

Upgrading my #VMware #homelab to #vSphere7Update2 is not going so well. 🙁 #vExpert pic.twitter.com/pGOlCGJIOF

— Tim Carman (@tpcarman) March 10, 2021

UPDATE (04/29/2021) - VMware has just released ESXi 7.0 Update 2a which resolves this issue and includes other fixes. Please make sure to read over the release notes and do not forget to first upgrade your vCenter Server to the latest 7.0 Update 2a release which came out earlier this week.

UPDATE (03/13/2021) - It looks like VMware has just pulled the ESXi online/offline depot and has updated KB 83063 to recommend that customers NOT upgrade to ESXi 7.0 Update 2. A new patch is actively being developed, and customers should hold off upgrading until it is made available.

UPDATE (03/10/2021) - VMware has just published KB 83063 which includes official guidance relating to the issue mentioned in this blog post.

Issue

It was not immediately clear to me how folks were reaching this state, so I reached out to a few people in the community to better understand their workflow. It turns out the upgrade was being initiated from vCenter Server using vSphere Update Manager (VUM), applying a custom ESXi 7.x Patch baseline to remediate. Upon reboot, the ESXi host would then hit the error shown above.


Interestingly, I personally have only used Patch baselines for applying ESXi patches (e.g. 6.7p03, 7.0p01) and never for major ESXi upgrades; I would normally import the ESXi ISO and create an Upgrade baseline. At least for the couple of folks I spoke with, using a Patch baseline is something they have done for some time, and it had never given them issues, whether for a patch or a major upgrade release.

Workaround

I also had some folks internally reach out to me regarding this issue and provide a workaround. At the time, I did not have a good grasp of what was going on. It turns out the community had figured out the same workaround, including how to recover an ESXi host that hits this error, since you cannot simply go through the normal recovery workflow.

For those hitting the error above, you just need to create a bootable USB key with the ESXi 7.0 Update 2 ISO using a tool such as Rufus or UNetbootin. Boot the ESXi 7.0 Update 2 installer and select the upgrade option, which will fix the host.

To prevent this from happening in the first place, create an Upgrade baseline using the ESXi 7.0 Update 2 ISO instead of creating or using a Patch baseline. You will first need to go to the Lifecycle Manager Management Interface in vCenter Server and, under "Imported ISOs", import your image.


Then create an ESXi Upgrade baseline, select the desired ESXi ISO image, and use this baseline for your upgrade.


I am not 100% sure, but I believe the reason for this change in behavior is mentioned in the ESXi 7.0 Update 2 release notes under the "Patches contained in this Release" section, which someone pointed me to. In any case, for major upgrades I would certainly recommend using an Upgrade baseline, as that is what I have always used, even back when I was a customer.

Categories // ESXi, vSphere 7.0 Tags // vSphere 7.0 Update 2
