Nested ESXi Enhancements in vSphere 6.5

10.19.2016 by William Lam // 15 Comments

As many of you have probably heard by now, vSphere 6.5 was just announced this week at VMworld Barcelona and it is packed with a ton of new features and capabilities, which you can read more about here. One area that is near and dear to me and has not yet been covered is the set of Nested ESXi enhancements that have been made in vSphere 6.5.

To be clear, VMware has NOT changed its support stance in vSphere 6.5. Both Nested ESXi and general Nested Virtualization are still NOT officially supported. Okay, so that's out of the way; let's see what is new:

  • Paravirtual SCSI (PVSCSI) support
  • GuestOS Customization
  • Pre-vSphere 6.5 enablement in vSphere 6.0 Update 2
  • Virtual NVMe support

Let's take a closer look at each of these enhancements.

Paravirtual SCSI (PVSCSI) support

When vSphere 6.0 Update 2 was released, I had hinted that PVSCSI support might be a possibility in the near future. I am happy to announce that this is now possible when running Nested ESXi with ESXi 6.5 as the guestOS. In vSphere 6.5, a new GuestOS type called vmkernel65 has been introduced, which is optimized for running ESXi 6.5 in a VM, as shown in the screenshot below.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-4]
As you can see from the VM Virtual Hardware configuration screen below, both the PVSCSI and VMXNET3 adapters are now the recommended defaults when creating a Nested ESXi VM for running ESXi 6.5.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-1]
Similar to the VMXNET3 driver, the PVSCSI driver is automatically bundled within the version of VMware Tools for Nested ESXi. That is to say, the drivers are included in the default ESXi image itself and are ONLY activated when ESXi detects that it is running inside of a VM. From a user standpoint, this means no additional configuration or installation is required. You simply select ESXi 6.5 as the GuestOS and install ESXi as you normally would, and this will automatically be enabled for you.

The only requirement to leverage this new capability is that BOTH the GuestOS type is ESXi 6.5 (vmkernel65) and the actual OS running is ESXi 6.5. The underlying physical ESXi host can be either ESXi 6.0 or ESXi 6.5. In addition to the new virtual hardware defaults, I have also found that the new ESXi 6.5 GuestOS type defaults to EFI firmware rather than the legacy BIOS used by the previous ESXi 6.x/5.x/4.x GuestOS types.
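
If you want to spin up one of these VMs programmatically rather than through the vSphere Web Client, the sketch below shows roughly what the new defaults look like when expressed with pyvmomi. To be clear, this is my own illustrative example and not something from the product documentation: the vCenter hostname, credentials, datastore, portgroup and sizing are all placeholders, and the vmkernel65Guest identifier is my understanding of how the new GuestOS type surfaces in the API.

# Illustrative pyvmomi sketch: create a Nested ESXi 6.5 VM using the new
# vmkernel65 GuestOS type together with the PVSCSI, VMXNET3 and EFI defaults
# discussed above. All names, credentials and sizes are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host='vcenter.lab.local',
                       user='administrator@vsphere.local',
                       pwd='VMware1!')
dc = si.RetrieveContent().rootFolder.childEntity[0]
cluster = dc.hostFolder.childEntity[0]

# PVSCSI storage controller, the new recommended default for this GuestOS type
scsi_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.ParaVirtualSCSIController(
        busNumber=0,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing))

# VMXNET3 network adapter, also the recommended default
nic_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualVmxnet3(
        backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            deviceName='VM Network')))

# A virtual disk attached to the PVSCSI controller is omitted for brevity.
config = vim.vm.ConfigSpec(
    name='Nested-ESXi-6.5',
    guestId='vmkernel65Guest',   # the new ESXi 6.5 GuestOS type
    firmware='efi',              # EFI rather than legacy BIOS
    numCPUs=2,
    memoryMB=6144,
    nestedHVEnabled=True,        # expose hardware-assisted virtualization
    files=vim.vm.FileInfo(vmPathName='[datastore1]'),
    deviceChange=[scsi_spec, nic_spec])

task = dc.vmFolder.CreateVM_Task(config=config, pool=cluster.resourcePool)
print('CreateVM task: %s' % task.info.key)
Disconnect(si)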

For customers who wish to push their storage I/O a bit further with Nested ESXi guests, this is a great addition, especially given the lower overhead of the PVSCSI adapter.

GuestOS Customization

One of the last capabilities that has been missing from Nested ESXi is the ability to perform a simple GuestOS customization when cloning or deploying a Nested ESXi VM from a template. Today, you can deploy my Nested ESXi Virtual Appliance, which provides the ability to customize your deployment, but would it not be great if this were native in the platform? Well, I am pleased to say this is now possible!

When you clone or deploy from template a VM that is running Nested ESXi, you will now have the option to select the Customize guest OS option. As you can see from the screenshot below, you can now create a new Customization Spec, which is based on the Linux customization spec. The customization only covers the networking configuration (IP Address, Netmask, Gateway and Hostname) and only applies it to the first VMkernel interface; all others will be ignored. The thinking here is that once you have your Nested ESXi VM on the network, you can then fully manage and configure it using the vSphere API rather than re-creating the same functionality just for cloning.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-2]
To use this new Nested ESXi GuestOS customization, there are two things you will need to do:

  • Perform two configuration changes within the Nested ESXi VM which will prepare it for cloning. You can find the configuration changes described in my blog post here
  • Ensure BOTH the GuestOS type is ESXi 6.5 (vmkernel65) and the actual OS is running ESXi 6.5. This means that your underlying physical vSphere infrastructure can be running either vSphere 6.0 Update 2 or vSphere 6.5

You can monitor the progress of the guest customization by going to the VM's Monitor->Tasks & Events view in the vSphere Web Client, or via the vSphere API if you are doing this programmatically. Below is a screenshot of a successful Nested ESXi guest customization. If there are any errors, you can take a look at /var/log/vmware-imc/toolsDeployPkg.log within the cloned Nested ESXi VM to determine what went wrong.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-3]
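
For those driving the clone through the vSphere API, the sketch below shows roughly how the customization piece could be wired up with pyvmomi. Since the spec is based on the Linux customization spec, I am assuming the hostname rides on the standard LinuxPrep identity and the IP settings on the first (and only) NIC mapping; the addresses, names and the template object are all placeholders of my own.

# Illustrative pyvmomi sketch: clone an existing Nested ESXi VM or template and
# hand it a Linux-based customization spec for the first VMkernel interface.
# All names and addresses are placeholders for your own environment.
from pyVmomi import vim

def clone_nested_esxi(template_vm, folder, pool, new_name):
    # Networking for the first VMkernel interface (all others are ignored)
    nic_map = vim.vm.customization.AdapterMapping(
        adapter=vim.vm.customization.IPSettings(
            ip=vim.vm.customization.FixedIp(ipAddress='192.168.1.50'),
            subnetMask='255.255.255.0',
            gateway=['192.168.1.1']))

    custom_spec = vim.vm.customization.Specification(
        identity=vim.vm.customization.LinuxPrep(
            hostName=vim.vm.customization.FixedName(name=new_name),
            domain='lab.local'),
        globalIPSettings=vim.vm.customization.GlobalIPSettings(),
        nicSettingMap=[nic_map])

    clone_spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(pool=pool),
        customization=custom_spec,
        powerOn=True)

    # Progress shows up under the VM's Monitor->Tasks & Events as noted above
    return template_vm.CloneVM_Task(folder=folder, name=new_name, spec=clone_spec)
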
I know this will be a very welcome capability for customers who extensively use the guest customization feature or if you just want to quickly clone an existing Nested ESXi VM that you have already configured.

Pre-vSphere 6.5 enablement in vSphere 6.0 Update 2

By now, you have probably figured out what this last enhancement is all about 🙂 It is exactly as it sounds: we are enabling customers to try out ESXi 6.5 by running it as a Nested ESXi VM on an existing vSphere 6.0 environment, specifically the Update 2 release (this includes both vCenter Server as well as ESXi). Although running newer releases of vSphere nested on older ones has always been possible, we are now pre-enabling ESXi 6.5-specific Nested ESXi capabilities in the latest release of vSphere 6.0 Update 2. This means that when vSphere 6.5 is generally available, you will be able to test drive some of the new Nested ESXi 6.5 capabilities I mentioned above on your existing vSphere infrastructure. This is pretty darn cool if you ask me!
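
If you want to quickly confirm whether your existing vSphere 6.0 Update 2 environment already exposes the pre-enabled GuestOS type, you can ask the vSphere API for the supported guest OS descriptors. Below is a small illustrative pyvmomi sketch of my own; the connection details and cluster selection are simplified placeholders, and the vmkernel65Guest identifier is my understanding of how the new type shows up in the API.

# Illustrative pyvmomi sketch: list the ESXi guest OS types a cluster supports
# and check whether the new ESXi 6.5 type is already available.
from pyVim.connect import SmartConnectNoSSL, Disconnect

si = SmartConnectNoSSL(host='vcenter.lab.local',
                       user='administrator@vsphere.local',
                       pwd='VMware1!')
dc = si.RetrieveContent().rootFolder.childEntity[0]
cluster = dc.hostFolder.childEntity[0]

config_option = cluster.environmentBrowser.QueryConfigOption()
guest_ids = [g.id for g in config_option.guestOSDescriptor]

print('ESXi guest types: %s' % [g for g in guest_ids if g.startswith('vmkernel')])
print('vmkernel65Guest available: %s' % ('vmkernel65Guest' in guest_ids))
Disconnect(si)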

Virtual NVMe support

I had a few folks ask on whether the upcoming Virtual NVMe capability in vSphere 6.5 would be possible with Nested ESXi and the answer is yes. Please have a look at this post here for more details.

For those of you who use Nested ESXi, hopefully you will enjoy these new updates! As always, if you have any feedback or requests, feel free to leave them on my blog and I will be sure to share them with the Engineering team. Who knows, they might show up in the future, just like some of these updates which customers have requested in the past 😀

Categories // ESXi, Nested Virtualization, vSphere 6.5 Tags // guest customization, nested, Nested ESXi, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2, vSphere 6.5

VMXNET3 driver now included in Mac OS X 10.11 (El Capitan)+

10.01.2016 by William Lam // 15 Comments

Yesterday I received a pretty interesting comment from one of my Twitter followers @NTmatter who wrote:

@lamw Just noticed that OSX has a VMXNET3 driver. Have to edit the vmx file to actually use it, but it's there! AppleVmxnet3Ethernet.kext

— Thomas Johnson (@NTmatter) September 30, 2016

This is a pretty neat find because today, the only network adapter that is functional with an Apple Mac OS X guest running on either VMware vSphere or Fusion is the e1000{e} driver. This update was definitely news to me, and after sharing it internally to see if I could find more details, it turns out this news also came as a surprise to the folks internally. Darius, one of the Engineers who I frequently reach out to on Apple-related topics, did some digging and found out that Apple started to bundle this VMXNET3 driver starting with the Mac OS X 10.11 (El Capitan) release. You can find the driver located in /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/AppleVmxnet3Ethernet.kext

Disclaimer: Given that this VMXNET3 Mac OS X driver was not developed by VMware nor has it been tested by VMware, it currently would not be officially supported by VMware.

If you wish to try out the VMXNET3 driver, you will need to install Mac OS X 10.11 or newer in a VM running on vSphere or Fusion. By default, the only available network adapter type is e1000{e}. To add a VMXNET3 network adapter, you can either manually tweak the .VMX file or add it using the vSphere Web/C# Client or the ESXi Embedded Host Client. Below are the instructions for configuring the VMXNET3 network adapter for your Mac OS X guests.

Step 1 - Remove the existing network adapter and then temporarily change the GuestOS type to "Other" (no need to save the setting, just update it in the VM reconfigure wizard) so that you will be allowed to add a VMXNET3 network adapter. Once you have added it in the VM reconfigure wizard, go ahead and toggle the GuestOS type back to Mac OS X 10.10 and then save the settings, as shown in the screenshots below.

[Screenshot: mac-os-x-el-capitan-10-11-vmxnet3-driver-0]
[Screenshot: mac-os-x-el-capitan-10-11-vmxnet3-driver-1]
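
If you would rather script Step 1 than toggle the GuestOS type back and forth in the UI, the vSphere API will generally let you push the device add directly. The sketch below is my own pyvmomi illustration; the portgroup name is a placeholder, and I have not verified whether the API applies the same adapter filtering as the Web Client, so treat it as an experiment.

# Illustrative pyvmomi sketch: add a VMXNET3 adapter to an existing Mac OS X VM
# via ReconfigVM_Task. 'vm' is assumed to be a powered-off vim.VirtualMachine
# object you have already looked up; 'VM Network' is a placeholder portgroup.
from pyVmomi import vim

def add_vmxnet3(vm, portgroup_name='VM Network'):
    nic = vim.vm.device.VirtualVmxnet3(
        backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            deviceName=portgroup_name),
        connectable=vim.vm.device.VirtualDevice.ConnectInfo(
            startConnected=True, connected=True))

    nic_change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=nic)

    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_change]))
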
Step 2 - Open a terminal inside of the Mac OS X guest and run the following command to load the VMXNET3 driver:

sudo kextload /System/Library/Extensions/IONetworkingFamily.kext/Contents/PlugIns/AppleVmxnet3Ethernet.kext

Step 3 - You can verify that the VMXNET3 driver was successfully loaded by running the following command:

kextstat | grep -i vmxnet3

[Screenshot: mac-os-x-el-capitan-10-11-vmxnet3-driver-2]
Once the driver has been loaded, you should now have network connectivity to your Mac OS X VM using the VMXNET3 network adapter. Below is a screenshot of the system info showing the VMXNET3 network adapter.

[Screenshot: mac-os-x-el-capitan-10-11-vmxnet3-driver-3]
In addition to optimized networking when using the VMXNET3 driver, the other benefit is being able to get a link speed of 10GbE, which is something customers have been inquiring about when virtualizing Mac OS X guests. Below is a screenshot of the media link shown in this Mac OS X 10.11 guest.

[Screenshot: mac-os-x-el-capitan-10-11-vmxnet3-driver-4]
Although this is a great development for Apple customers who use VMware vSphere and Fusion, it also raises an interesting question as to whether Apple will officially support this VMXNET3 driver going forward. If I receive any more details on this, I will update the article. Until then, you can play with this new capability if you are running Mac OS X 10.11 or greater on VMware. Big thanks to Thomas for this great find and for sharing it with the VMware Community!

Categories // Apple, vSphere 6.0 Tags // apple, el capitan, osx, vmware tools, vmxnet3

vSphere 6.0 Update 2 hints at Nested ESXi support for Paravirtual SCSI (PVSCSI) in the future

03.14.2016 by William Lam // 6 Comments

Although Nested ESXi (running ESXi in a Virtual Machine) is not officially supported today, VMware Engineering continues to enhance this widely used feature by making it faster, more reliable and easier to consume for our customers. I still remember that it was not too long ago that if you wanted to run Nested ESXi, several non-trivial and manual tweaks to the VM's VMX file were required. This made the process of consuming Nested ESXi potentially very error prone and provided a less than ideal user experience.

Things have definitely been improved since the early days and here are just some of the visible improvements over the last few years:

  • Prior to vSphere 5.1, enabling Virtual Hardware Assisted Virtualization (VHV) required manual edits to the VMX file, and even earlier versions required several VMX entries. VHV can now be easily enabled using either the vSphere Web Client or the vSphere API (see the short API sketch after this list).
  • Prior to vSphere 5.1, only the e1000{e} network driver was supported with Nested ESXi VMs, and although it was functional, it also limited the types of use cases you might have for Nested ESXi. A Native Driver for VMXNET3 was added in vSphere 5.1, which not only increased performance thanks to the optimized VMXNET3 driver but also enabled new use cases such as testing SMP-FT, as it was now possible to present a 10GbE interface to the Nested ESXi VM versus the traditional 1GbE with the e1000{e} driver.
  • Prior to vSphere 6.0, selection of an ESXi GuestOS type was not available in the "Create VM" wizard, which meant you had to resort to re-editing the VM after initial creation or using the vSphere API. You can now select the specific ESXi GuestOS type directly in the vSphere Web/C# Client.
  • Prior to vSphere 6.0, the only way to cleanly shutdown or power cycle a Nested ESXi VM was to perform the operation from within the system, as there was no VMware Tools support. This changed with the development of a VMware Tools daemon specifically for Nested ESXi, which started out as a VMware Fling. With vSphere 6.0, the VMware Tools for Nested ESXi is pre-installed by default and automatically starts up when it detects that it is running as a VM. In addition to the power operations provided by VMware Tools, it also enabled the use of the Guest Operations API, which is quite popular from an Automation standpoint.
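
As a quick illustration of the first bullet above, enabling VHV through the vSphere API is a one-property change these days. Here is a minimal pyvmomi sketch of my own; the VM lookup is left out.

# Illustrative pyvmomi sketch: enable Virtual Hardware Assisted Virtualization
# (VHV) on a VM through the vSphere API. Reconfigure while the VM is powered off.
from pyVmomi import vim

def enable_vhv(vm):
    # 'vm' is assumed to be a vim.VirtualMachine object you have already looked up
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(nestedHVEnabled=True))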

Yesterday while working in my new vSphere 6.0 Update 2 home lab, I needed to create a new Nested ESXi VM and noticed something really interesting. I used the vSphere Web Client like I normally would and when I went to select the GuestOS type, I discovered an interesting new option which you can see from the screenshot below.

[Screenshot: nested-esxi-changes-in-vsphere60u2-3]
It is not uncommon to see VMware add experimental support for potentially new Guest Operating Systems in vSphere. Of course, there are no guarantees that these OSes will ever be supported, or even released for that matter.

What I found even more interesting was what gets recommended as the default virtual hardware configuration when you select this new ESXi GuestOS type (vmkernel65). For the network adapter, it looks like the VMXNET3 driver is now recommended over the e1000e, and for the storage adapter the VMware Paravirtual (PVSCSI) adapter is now recommended over the LSI Logic Parallel type. This is really interesting, as it is currently not possible to get the optimized, low-overhead PVSCSI adapter working with Nested ESXi, and this seems to indicate that PVSCSI support might actually be possible in the future! 🙂

[Screenshot: nested-esxi-changes-in-vsphere60u2-1]
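
Out of curiosity, you can also pull those recommendations straight out of the vSphere API rather than eyeballing the wizard. The sketch below is my own and assumes the GuestOsDescriptor fields recommendedEthernetCard, recommendedSCSIController and recommendedFirmware report what the Web Client shows.

# Illustrative pyvmomi sketch: print the recommended virtual hardware for the
# new vmkernel65 GuestOS type. The descriptor field names are my assumption.
def show_vmkernel65_defaults(cluster):
    config_option = cluster.environmentBrowser.QueryConfigOption()
    for guest in config_option.guestOSDescriptor:
        if guest.id == 'vmkernel65Guest':
            print('Guest OS            : %s' % guest.fullName)
            print('Recommended NIC     : %s' % guest.recommendedEthernetCard)
            print('Recommended SCSI    : %s' % guest.recommendedSCSIController)
            print('Recommended firmware: %s' % guest.recommendedFirmware)
            break
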
I of course tried to install the latest ESXi 6.0 Update 2 (not yet GA'ed) using this new ESXi GuestOS type and to no surprise, the ESXi installer was not able to detect any storage devices. I guess for now, we will just have to wait and see ...

Categories // ESXi, Nested Virtualization, Not Supported, vSphere 6.0 Tags // ESXi, nested, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2

