WilliamLam.com


Search Results for: Intel NUC

Quick Tip - Adding a vTPM (Virtual Trusted Platform Module) to a Nested ESXi VM

05.13.2022 by William Lam // 3 Comments

I had an interesting question this morning asking whether it was possible to add a vTPM (Virtual Trusted Platform Module) to a Nested ESXi VM. The user was interested in testing a particular scenario with the new vSphere Trust Authority feature that was introduced in vSphere 7.0. I personally had not done much with vTPM, and I had assumed it would just work as long as the underlying hardware has a physical TPM chip and you have set up either a Standard or Native Key Provider within your vCenter Server.

The user observed that adding a vTPM to a Windows VM was possible using the vSphere UI, but when attempting the same operation on a Nested ESXi VM, the option to add a vTPM device was not available. After spending ~30 minutes asking around for hardware with a physical TPM, I remembered that my Quartz Canyon NUC (NUC 9 Pro) is a Xeon-based system with a TPM 2.0 chip. I was able to take a closer look and quickly found that the solution was pretty straightforward!
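
The full fix is behind the link below, but as a general illustration, adding a vTPM through the vSphere API looks roughly like the following pyVmomi sketch. This is my own minimal example, not necessarily the post's exact solution, and it assumes vCenter already has a Standard or Native Key Provider configured, the VM uses EFI firmware, and vm is a vim.VirtualMachine you have already looked up.

    # Hedged sketch: add a Virtual TPM device to an existing VM via pyVmomi.
    # Assumes a key provider is configured in vCenter and the VM uses EFI.
    from pyVmomi import vim

    def add_vtpm(vm):
        """Reconfigure a (powered-off) VM to add a vTPM device."""
        tpm = vim.vm.device.VirtualTPM()
        tpm.key = -1  # negative key lets vCenter assign the device key

        change = vim.vm.device.VirtualDeviceSpec()
        change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        change.device = tpm

        return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))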

[Read more...]

Categories // ESXi, Nested Virtualization, vSphere Tags // Nested ESXi, TPM, vTPM

USB Native Driver Fling for ESXi adds support for Multi-Gig (1G/2.5G/5G) Adapter

09.27.2019 by William Lam // 10 Comments

Today, we have an exciting update on our USB Network Native Driver for ESXi Fling, which has had two updates since its release earlier this year and has been extremely well received by the VMware community. As many of you know, I am always on the lookout for new and innovative tech that can help enable our customers, especially when it comes to building home labs to learn about the latest and greatest VMware software.

UPDATE (06/08/20) - QNAP has just published updated firmware for their QNA-UC5G1T USB NIC which resolves some of the performance issues observed with the initial release.

Several months back, I came to learn about a really cool USB-based multi-gigabit network adapter (QNA-UC5G1T) from QNAP which can negotiate speeds of 1Gbps, 2.5Gbps and 5Gbps. I was not familiar with the multi-gig specification, but it looks like it was standardized back in 2016 as IEEE 802.3bz. It initially evolved from advancements in wireless technology, but more recently it has started to make its way into Ethernet-based devices.
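
Once the Fling driver is installed, one way to confirm what speed the adapter actually negotiated is to query the host's physical NICs through the vSphere API. A rough pyVmomi sketch, where host is assumed to be a vim.HostSystem you have already retrieved:

    # Rough sketch: print each physical NIC and its negotiated link speed,
    # e.g. to confirm a USB multi-gig adapter came up at 2500 or 5000 Mbps.
    from pyVmomi import vim

    def print_pnic_speeds(host):
        for pnic in host.config.network.pnic:
            link = pnic.linkSpeed  # None when the link is down
            speed = f"{link.speedMb} Mbps" if link else "link down"
            print(f"{pnic.device}: {speed}")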

Although this particular device is from QNAP, the underlying chipset is actually from Aquantia, now part of Marvell. If the name sounds familiar, it should: Aquantia is also the vendor that supplies Apple with the 10GbE NICs in both the 2018 Mac mini and the iMac Pro. In fact, their chipsets are also used in a number of Thunderbolt 3 to 10GbE NICs which also work with ESXi. Access to 10GbE is certainly more common these days, but it is not for everyone, and not all platforms can be expanded to support it.


The QNA-UC5G1T device is not only small, but because it is USB-based, you are more likely to have a spare USB port on your system than a free PCIe slot or Thunderbolt 3 port. From a cost standpoint, this device is about half the price of the 10GbE Thunderbolt adapter, coming in at $79 USD, and can be ordered from Amazon. As far as I know, QNAP is the only vendor who has produced a multi-gig USB adapter, but perhaps in the future there will be others.

[Read more...]

Categories // ESXi, Home Lab, Not Supported, vSphere Tags // 2.5GbE, 5GbE, Aquantia, ESXi 6.5, ESXi 6.7, multi-gig, native device driver, QNAP, usb ethernet adapter, usb network adapter

vSphere 6.0 Update 2 hints at Nested ESXi support for Paravirtual SCSI (PVSCSI) in the future

03.14.2016 by William Lam // 6 Comments

Although Nested ESXi (running ESXi in a Virtual Machine) is not officially supported today, VMware Engineering continues to enhance this widely used feature by making it faster, more reliable and easier to consume for our customers. I still remember that it was not too long ago that if you wanted to run Nested ESXi, several non-trivial and manual tweaks to the VM's VMX file were required. This made the process of consuming Nested ESXi potentially very error prone and provided a less than ideal user experience.

Things have definitely been improved since the early days and here are just some of the visible improvements over the last few years:

  • Prior to vSphere 5.1, enabling Virtual Hardware Assisted Virtualization (VHV) required manual edits to the VMX file, and even earlier versions required several VMX entries. VHV can now be easily enabled using either the vSphere Web Client or the vSphere API (see the sketch after this list).
  • Prior to vSphere 5.1, only the e1000{e} networking driver was supported with Nested ESXi VMs, and although it was functional, it also limited the types of use cases you might have for Nested ESXi. A native driver for VMXNET3 was added in vSphere 5.1, which not only brought the performance of the optimized VMXNET3 driver but also enabled new use cases such as testing SMP-FT, as it was now possible to present a 10GbE interface to a Nested ESXi VM versus the traditional 1GbE with the e1000{e} driver.
  • Prior to vSphere 6.0, selection of an ESXi GuestOS was not available in the "Create VM" wizard, which meant you had to resort to re-editing the VM after initial creation or using the vSphere API. You can now select the specific ESXi GuestOS type directly in the vSphere Web/C# Client.
  • Prior to vSphere 6.0, the only way to cleanly shut down or power cycle a Nested ESXi VM was to perform the operation from within the guest, as there was no VMware Tools support. This changed with the development of a VMware Tools daemon specifically for Nested ESXi, which started out as a VMware Fling. With vSphere 6.0, the VMware Tools for Nested ESXi is pre-installed by default and automatically starts up when it detects that it is running as a VM. In addition to power operations, VMware Tools also enables the use of the Guest Operations API, which is quite popular from an automation standpoint.
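
For reference, here is a minimal sketch of the two API-visible settings mentioned above, enabling VHV and setting an ESXi GuestOS type, using pyVmomi. The guestId string and the vm variable are assumptions on my part:

    # Minimal sketch: enable nested hardware virtualization (VHV) and set
    # an ESXi guest OS type on an existing VM through the vSphere API.
    # 'vm' is assumed to be a vim.VirtualMachine you have already located.
    from pyVmomi import vim

    spec = vim.vm.ConfigSpec()
    spec.nestedHVEnabled = True      # expose hardware-assisted virtualization
    spec.guestId = "vmkernel6Guest"  # ESXi 6.x GuestOS type (assumed id)
    task = vm.ReconfigVM_Task(spec=spec)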

Yesterday, while working in my new vSphere 6.0 Update 2 home lab, I needed to create a new Nested ESXi VM and noticed something really interesting. I used the vSphere Web Client like I normally would, and when I went to select the GuestOS type, I discovered an interesting new option, which you can see in the screenshot below.

[Screenshot: nested-esxi-changes-in-vsphere60u2-3]
It is not uncommon to see VMware add experimental support for potentially new Guest Operating Systems in vSphere. Of course, there are no guarantees that these OSes will ever be supported, or even released for that matter.

What I found even more interesting was the default virtual hardware configuration that is recommended when selecting this new ESXi GuestOS type (vmkernel65). For the network adapter, the VMXNET3 driver is now recommended over the e1000e, and for the storage adapter, the VMware Paravirtual (PVSCSI) adapter is now recommended over the LSI Logic Parallel type. This is really interesting, as it is not possible today to get the optimized, low-overhead PVSCSI adapter working with Nested ESXi, and this seems to indicate that PVSCSI support might actually be possible in the future! 🙂
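
If you want to check these defaults programmatically, the recommended devices for a GuestOS type can be pulled from a host's environment browser. A hedged sketch, where the vmkernel65Guest id is my guess at the underlying identifier and host is an already-retrieved vim.HostSystem:

    # Sketch: ask the environment browser which devices are recommended
    # for a given guest OS id, e.g. the new ESXi 6.5 type observed above.
    from pyVmomi import vim

    def recommended_devices(host, guest_id="vmkernel65Guest"):
        browser = host.parent.environmentBrowser  # ComputeResource browser
        option = browser.QueryConfigOption(host=host)
        for desc in option.guestOSDescriptor:
            if desc.id == guest_id:
                print("Recommended NIC:", desc.recommendedEthernetCard)
                print("Recommended disk controller:", desc.recommendedDiskController)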

[Screenshot: nested-esxi-changes-in-vsphere60u2-1]
I, of course, tried to install the latest ESXi 6.0 Update 2 (not yet GA'ed) using this new ESXi GuestOS type and, to no surprise, the ESXi installer was not able to detect any storage devices. I guess for now, we will just have to wait and see ...

Categories // ESXi, Nested Virtualization, Not Supported, vSphere 6.0 Tags // ESXi, nested, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2


Author

William is a Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.


Copyright WilliamLam.com © 2025