Although Nested ESXi (running ESXi in a Virtual Machine) is not officially supported today, VMware Engineering continues to enhance this widely used feature by making it faster, more reliable and easier to consume for our customers. I still remember that not too long ago, running Nested ESXi required several non-trivial and manual tweaks to the VM's VMX file. This made the process of consuming Nested ESXi potentially very error prone and provided a less than ideal user experience.
Things have definitely improved since those early days, and here are just some of the visible improvements over the last few years:
- Prior to vSphere 5.1, enabling Virtual Hardware Assisted Virtualization (VHV) required manual edits to the VMX file, and even earlier versions required several VMX entries. VHV can now be easily enabled using either the vSphere Web Client or the vSphere API (see the first sketch after this list).
- Prior to vSphere 5.1, only the e1000{e} networking driver was supported with Nested ESXi VMs, and although it was functional, it also limited the types of use cases you might have for Nested ESXi. A Native Driver for VMXNET3 was added in vSphere 5.1, which not only delivered the performance of the optimized VMXNET3 driver but also enabled new use cases such as testing SMP-FT, as it was now possible to present a 10GbE interface to a Nested ESXi VM versus the traditional 1GbE with the e1000{e} driver.
- Prior to vSphere 6.0, selection of ESXi GuestOS was not available in the "Create VM" wizard which meant you had to resort to re-editing the VM after initial creation or using the vSphere API. You can now select the specific ESXi GuestOS type directly in the vSphere Web/C# Client.
- Prior to vSphere 6.0, the only way to cleanly shut down or power cycle a Nested ESXi VM was to perform the operation from within the system, as there was no VMware Tools support. This changed with the development of a VMware Tools daemon specifically for Nested ESXi, which started out as a VMware Fling. With vSphere 6.0, the VMware Tools for Nested ESXi is pre-installed by default and automatically starts up when it detects that it is running as a VM. In addition to power operations, VMware Tools also enables the use of the Guest Operations API, which is quite popular from an automation standpoint (see the second sketch after this list).
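To illustrate the first point, here is a minimal pyvmomi sketch that enables VHV (and sets the ESXi GuestOS type) on an existing VM through the vSphere API. The vCenter hostname, credentials and VM name are placeholders, and skipping SSL verification is only appropriate for a lab:

```python
#!/usr/bin/env python
# Minimal lab sketch: enable VHV and set the ESXi GuestOS type via the
# vSphere API. Hostname, credentials and VM name below are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnectNoSSL(host='vcenter.example.com',
                       user='administrator@vsphere.local', pwd='VMware1!')
content = si.RetrieveContent()

# Locate the Nested ESXi VM by name using a simple container view lookup
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'nested-esxi-01')
view.DestroyView()

# nestedHVEnabled exposes VHV to the guest (vSphere 5.1+); guestId selects
# the ESXi GuestOS type, available in the Create VM wizard since vSphere 6.0
spec = vim.vm.ConfigSpec(nestedHVEnabled=True, guestId='vmkernel6Guest')
WaitForTask(vm.ReconfigVM_Task(spec))

Disconnect(si)
```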
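And for the last point, here is a similar sketch that uses the Guest Operations API to start a program inside a Nested ESXi VM running the VMware Tools daemon. Again, the credentials, VM name and the vim-cmd invocation are just illustrative placeholders:

```python
#!/usr/bin/env python
# Lab sketch of the Guest Operations API against a Nested ESXi VM with
# VMware Tools running. Credentials, VM name and command are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host='vcenter.example.com',
                       user='administrator@vsphere.local', pwd='VMware1!')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'nested-esxi-01')
view.DestroyView()

# Authenticate as a user inside the nested ESXi guest itself
creds = vim.vm.guest.NamePasswordAuthentication(
    username='root', password='VMware1!')

# Run a command inside the guest; here, placing the nested host into
# maintenance mode as an example
spec = vim.vm.guest.ProcessManager.ProgramSpec(
    programPath='/bin/vim-cmd',
    arguments='hostsvc/maintenance_mode_enter')
pid = content.guestOperationsManager.processManager.StartProgramInGuest(
    vm=vm, auth=creds, spec=spec)
print('Started guest process with PID %d' % pid)

Disconnect(si)
```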
Yesterday while working in my new vSphere 6.0 Update 2 home lab, I needed to create a new Nested ESXi VM and noticed something really interesting. I used the vSphere Web Client like I normally would and when I went to select the GuestOS type, I discovered an interesting new option which you can see from the screenshot below.
It is not uncommon to see VMware add experimental support for potentially new Guest Operating Systems in vSphere. Of course, there are no guarantees that these OSes would ever be supported or even released for that matter.
What I found even more interesting was the recommended default virtual hardware configuration when selecting this new ESXi GuestOS type (vmkernel65). For the network adapter, it looks like the VMXNET3 driver is now recommended over the e1000e, and for the storage adapter the VMware Paravirtual (PVSCSI) adapter is now recommended over the LSI Logic Parallel type. This is really interesting, as it is not possible today to get the optimized, low-overhead PVSCSI adapter working with Nested ESXi, and this seems to indicate that PVSCSI might actually be possible in the future! 🙂
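Purely as an experiment, here is a rough pyvmomi sketch of what creating a VM with those recommended defaults might look like. Note that the 'vmkernel65Guest' guestId string is my assumption based on the naming of the existing vmkernel5Guest/vmkernel6Guest identifiers, and the datastore, network, memory and CPU values are placeholders:

```python
#!/usr/bin/env python
# Rough sketch: create a VM using the new ESXi GuestOS type with the
# recommended VMXNET3 NIC and PVSCSI adapter. The 'vmkernel65Guest' guestId
# is an assumption based on the existing vmkernel5Guest/vmkernel6Guest
# identifiers; datastore, network, memory and CPU values are placeholders.
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnectNoSSL(host='vcenter.example.com',
                       user='administrator@vsphere.local', pwd='VMware1!')
content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]              # first datacenter
pool = dc.hostFolder.childEntity[0].resourcePool    # first cluster's root pool

# PVSCSI controller instead of the traditional LSI Logic Parallel
scsi = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.ParaVirtualSCSIController(
        busNumber=0, sharedBus='noSharing'))

# VMXNET3 NIC instead of e1000e
nic = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualVmxnet3(
        backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
            deviceName='VM Network')))

spec = vim.vm.ConfigSpec(
    name='nested-esxi-65',
    guestId='vmkernel65Guest',   # assumed identifier for the new GuestOS type
    memoryMB=6144, numCPUs=2,
    nestedHVEnabled=True,
    files=vim.vm.FileInfo(vmPathName='[datastore1]'),
    deviceChange=[scsi, nic])

WaitForTask(dc.vmFolder.CreateVM_Task(config=spec, pool=pool))
Disconnect(si)
```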
I of course tried to install the latest ESXi 6.0 Update 2 (not yet GA'ed) using this new ESXi GuestOS type and to no surprise, the ESXi installer was not able to detect any storage devices. I guess for now, we will just have to wait and see ...
techvet says
We are currently on ESXi 5.1 u3, largely using the E1000 adapters. Are you saying that we should seriously consider moving to VMXNET3 after moving to vSphere 6.x?
William Lam says
Yes. It has been much improved since its initial introduction in vSphere 5.1, especially with the latest releases of ESXi 6.0.
Jame Dolans says
VMXNET3 was available back in vSphere 4.x
William Lam says
Not for Nested ESXi, it was first introduced in vSphere 5.1 afaik.
James Hess says
That's something very interesting to see.
Perhaps someday VMware would consider efficient full production support for Nested ESXi.
I think it would be useful for setting up VMware View pilot deployment without dedicated physical hardware, and still having a dedicated vCenter for View (Controlling nested ESXi instances).
Also, for standing up "Hosted virtually-dedicated cloud" environments for disaster recovery, where the customer wants to connect it directly to their vCenter and manage something.
Those are some ideas which come to mind immediately, beyond proof of concepts, development, and various kinds of test lab environments, where nested ESXi could be extremely useful if VMware were to make it production-supportable and eliminate the extra overhead.
AnonymousGuy says
I've been playing with the v1 ESXi Embedded Host Client Fling in my home lab. The beta version is not able to do much, for example powering VMs up/down.
I bet this U2 with the ESXi Embedded Host Client is able to do the full suite of what the C# client can do?
Also, does the new ESXi Embedded Host Client allow editing of virtual hardware higher than version 9, up to 11?