ESXi 7.0 Update 2 enhancement for USB NIC only installations

03.16.2021 by William Lam // 15 Comments

The USB Network Native Driver for ESXi Fling has been an extremely popular Fling that has allowed customers to easily add additional networking capabilities by using a supported USB-based network adapter, even though ESXi traffic over USB networking is not officially supported.

In most deployments, the USB network adapter is usually a supplement to the existing onboard network adapter of a system. However, there have been scenarios where the onboard network adapter is either not available or not functional, and customers would still like to be able to install ESXi and have it running over just the USB network adapter.

Although installing ESXi using just a USB network adapter is possible today, one downside is that an additional workflow is needed to fix the network binding after installing ESXi.

During the interactive ESXi installation, you will see the following error at 81%, which will cause the installer to get stuck:

Exception: No vmknic tagged for management was found.

At this point, the installer has actually completed; you just need to switch to the console (Alt+F1) and perform a reboot to finish the installation.


After ESXi boots up for the first time following the install, you will need to go into the DCUI and manually bind the vusb0 interface to the Management Network to restore connectivity. To persist this USB NIC binding across reboots, you will need to add a small snippet to /etc/rc.local.d/local.sh.
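If you prefer the ESXi Shell over the DCUI for that initial one-time binding, a rough equivalent (assuming the default vSwitch0 and the standard Management Network portgroup created by the installer) is simply to add vusb0 as an uplink:

esxcli network vswitch standard uplink add --uplink-name=vusb0 --vswitch-name=vSwitch0

The local.sh snippets below then take care of re-applying the binding on each reboot.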

Standard Virtual Switch (VSS):

# Wait up to ~200 seconds (20 attempts x 10s) for the vusb0 link to come up
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
count=0
while [[ $count -lt 20 && "${vusb0_status}" != "Up" ]]
do
    sleep 10
    count=$(( $count + 1 ))
    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print $NF}')
done

# Restore the Standard vSwitch configuration once the USB NIC is available
esxcfg-vswitch -R

Distributed Virtual Switch (VDS):

# Adjust these to match your VDS names and the uplink port IDs assigned to the USB NICs
VDS_0_NAME=vDS
VDS_0_PORT_ID=10
VDS_1_NAME=vDS-NSX
VDS_1_PORT_ID=2

# Wait up to ~200 seconds (40 attempts x 5s) for both vusb0 and vusb1 links to come up
vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print "v0:" $NF}') && vusb1_status=$(esxcli network nic get -n vusb1 | grep 'Link Status' | awk '{print "v1:" $NF}')
count=0
while [[ $count -lt 40 ]] && [[ "${vusb0_status}" != "v0:Up" || "${vusb1_status}" != "v1:Up" ]]
do
    sleep 5
    count=$(( $count + 1 ))
    vusb0_status=$(esxcli network nic get -n vusb0 | grep 'Link Status' | awk '{print "v0:" $NF}') && vusb1_status=$(esxcli network nic get -n vusb1 | grep 'Link Status' | awk '{print "v1:" $NF}')
done

# Re-attach each USB NIC to its VDS uplink port once its link is up
if [ "${vusb0_status}" = "v0:Up" ]; then
    esxcfg-vswitch -P vusb0 -V ${VDS_0_PORT_ID} ${VDS_0_NAME}
fi

if [ "${vusb1_status}" = "v1:Up" ]; then
    esxcfg-vswitch -P vusb1 -V ${VDS_1_PORT_ID} ${VDS_1_NAME}
fi

Note: The vusbX vmkernel interface may not show up in either the ESXi Embedded Host Client or the vSphere HTML5 UI; this does not mean there is an issue. ESXi was never designed to support USB-based NICs for the Management Network, and the UI may not properly detect these devices. It is recommended to use the ESXi Shell for any operations requiring configuration of vusbX devices.
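As an example of working entirely from the ESXi Shell, the following standard commands can be used to confirm the USB NIC and its bindings without relying on the UI:

esxcli network nic list                 # confirm vusb0 is present and its link is Up
esxcfg-vswitch -l                       # show which vSwitch/portgroup vusb0 is attached to
esxcli network ip interface ipv4 get    # confirm the management vmkernel interface has an IP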

Obviously, this was not an ideal user experience. I personally had to use this workaround on several occasions, especially on newer hardware platforms where the onboard network adapter may not be recognized by ESXi, and being able to use the USB Network Fling definitely came in handy.

With the release of ESXi 7.0 Update 2, we have improved the user experience for installing ESXi with just a single USB NIC. This enhancement was added by Songtao after I mentioned the undesirable behavior. A new driver parameter called usbBusFullScanOnBootEnabled has been introduced, which can be set after the initial installation and removes the need for the local.sh workaround mentioned above. The new parameter instructs ESXi to perform a full bus scan to claim all attached USB NICs, since claiming USB devices is slow compared to PCIe devices.
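As a sketch of how the parameter would be set from the ESXi Shell (the module name vmkusb_nic_fling is an assumption based on recent releases of the Fling; verify it with esxcli system module list), followed by a reboot for it to take effect:

esxcli system module parameters set -p "usbBusFullScanOnBootEnabled=1" -m vmkusb_nic_fling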


Categories // ESXi, Home Lab, vSphere 7.0 Tags // ESXi 7.0 Update 2, vSphere 7.0 Update 2

Aquantia/Marvell AQtion (Atlantic) driver now inbox in ESXi 7.0 Update 2

03.11.2021 by William Lam // 25 Comments

Last spring, VMware and Aquantia (now part of Marvell) collaborated and delivered their first ESXi Native Driver for their AQtion (Atlantic) based 10GbE network adapters. This new driver was primarily focused on enabling network connectivity for ESXi when running on either an Apple 2018 Mac Mini (8,1) or an Apple 2019 Mac Pro (7,1) that included the 10GbE networking option. In addition, this driver also benefited the broader VMware Community, as it enabled additional 10GbE networking through a number of Thunderbolt 3 to 10GbE network adapters that customers could now take advantage of in their VMware environments.

With all these benefits, VMware has decided to inbox the Aquantia/Marvell driver with the latest ESXi 7.0 Update 2 release, so customers no longer have to create a custom ESXi Image Profile that includes the driver, which was previously required when installing ESXi on either the Apple Mac Mini or Mac Pro configured with the 10GbE networking option. For a complete list of supported Aquantia/Marvell AQtion based network adapters, please see the VMware HCL.

Here is a screenshot of an earlier release of ESXi 7.0 Update 2 running on the 2018 Mac Mini which now automatically recognizes the 10GbE network adapter out of the box.
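If you prefer to confirm this from the ESXi Shell rather than the UI, a quick check could look like the following (the "atlantic"/"aqtion" driver name is an assumption on my part; adjust the filter to whatever name shows up on your build):

esxcli software vib list | grep -i -E "atlantic|aqtion"   # check for the inbox Aquantia/Marvell driver VIB
esxcli network nic list                                   # verify the 10GbE adapter has been claimed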

Categories // Apple, ESXi, vSphere 7.0 Tags // apple, Aquantia, ESXi, ESXi 7.0 Update 2, mac mini, mac pro, Marvell, vSphere 7.0 Update 2

How to clean up stale vSphere Container Volumes & First Class Disks?

03.10.2021 by William Lam // 7 Comments

If you are running and deploying Kubernetes (K8s), which includes vSphere with Tanzu and Tanzu Kubernetes Grid (TKG), you might notice vSphere Container Volumes showing up in the vSphere UI under the Monitor tab for a given vSphere-based Datastore. This is normal and expected, as new Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are being requested as part of deploying K8s-based applications that require storage.


Typically, when PVs and PVCs are no longer needed, they should be cleaned up within the K8s layer via kubectl, either automatically or manually depending on your provisioning process. When you delete a K8s Cluster, however, these PVs/PVCs are not automatically cleaned up, and it's for good reason: you may want to reuse them. The way vSphere supports this is by implementing them as First Class Disks (FCDs), which means their lifecycle is independent of a VM.
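For illustration (the PVC name and namespace below are placeholders, not from an actual deployment), the normal K8s-layer cleanup looks like this:

kubectl get pvc -A                         # list PVCs across all namespaces
kubectl delete pvc my-app-data -n my-app   # deleting the PVC removes or releases the backing PV per its reclaim policy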

So what happens when the K8s Cluster has been deleted and you actually want to clean up these stale FCDs? How do you go about doing that? This is a question I have seen come up more frequently, and there are a few options.

Option 1:

If you happen to be on vSphere 7.0 Update 2 (which was just released yesterday), the vSphere UI has been enhanced to allow users to delete vSphere Container Volumes (see screenshot above). Previously, you could only view the FCDs and reapply a storage policy.

Option 2:

Since vSphere Container Volumes are just FCDs and we have FCD APIs, we can use the API to retrieve information as well as clean them up. The easiest way is to use PowerCLI's Get-CnsVolume and Remove-CnsVolume cmdlets.

Here is an example of deleting the 2GB volume:

Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore") -Name "pvc-db6829ad-e1a9-46e8-ace3-7e7c18187a0d" | Remove-CnsVolume
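If you are not sure which volumes are stale, the same cmdlet without the -Name filter should enumerate all CNS volumes on the datastore (reusing the datastore name from the example above):

Get-CnsVolume -Datastore (Get-Datastore "sm-vsanDatastore")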

In the case of standalone FCDs, which could have been manually provisioned or created through a backup solution, you can also clean them up by using PowerCLI's Get-VDisk and Remove-VDisk cmdlets:

Get-VDisk -Name "fill-me-in" | Remove-VDisk

Categories // Cloud Native, Kubernetes, VMware Tanzu, VSAN, vSphere 7.0 Tags // CNS, CSI, FCD, Kubernetes

