WilliamLam.com


How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5

07.12.2011 by William Lam // 66 Comments

With the release of vSphere 5, one of the most sought after features from VMware is the ability to run nested 64bit and Hyper-V guest virtual machines in a virtual ESXi instance. Prior to this, only 32bit virtual machines were supported because the VT-x/AMD-V hardware virtualization CPU instructions could not be virtualized and presented to the virtual ESX(i) guest. This feature is quite useful for home and lab setups when testing new features, studying for VMware certifications, or running multiple vESX(i) instances.

You will still be required to have a 64bit capable system and CPU, and you will need to be running ESXi 5.0; this will not work on ESX(i) 4.x or older.

 
The above diagram depicts the various levels of inception, where pESXi is your physical ESXi 5.0 host. We then create a vESXi 5.0 host which will contain the necessary hardware virtualization CPU instructions to support a 64bit nested guest OS, which I've created as another ESXi host called vvESXi.
Note: You will not be able to run a 4th level nested 64bit VM (I have tried by further passing the HV instructions into the nested guest); it will just boot up and spin your CPUs for hours.
This feature is disabled by default in ESXi 5.0. To enable virtualized HV (Hardware Virtualization), you will need to add the string vhv.allow = "TRUE" to /etc/vmware/config on your physical ESXi 5.0 host.
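For example, assuming SSH or the ESXi Shell is enabled on the physical host, you could append the entry as follows (any text editor such as vi works just as well):

~ # echo 'vhv.allow = "TRUE"' >> /etc/vmware/config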
Once the configuration change has been made, the feature goes into effect right away; a reboot of the system is not necessary. To verify, you should now be able to power on a 64bit guest OS and see that the HV instruction bits are being passed into the guestOS, which will then allow you to run a nested 64bit guestOS. You can also verify by looking in the vmware.log file of the virtual machine and grepping for the string "monitorControl.vhv"; if a message referencing that setting appears, then virtualized HV is not enabled.
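As a quick sketch (the datastore and VM directory names below are placeholders for your own environment), the check from the host's shell looks like this:

~ # grep -i monitorControl.vhv /vmfs/volumes/datastore1/vESXi/vmware.log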
In the past, to run a virtual ESX(i) instance, a few advanced .vmx configuration entries were needed, as documented here. With ESXi 5.0, if you are using virtual hardware version 8, then you do not need to make any additional changes. If you are using hardware version 4 or 7, then you will need to make a few additions to the VM's configuration file.
Creating vESXi 5.0 Instance using Hardware Version 8:
1. To create a virtual ESXi 5.0 instance, start off by just creating a standard RHEL5/6 64bit VM using the vSphere Client
2. Once the VM has been created, edit the settings of the VM and switch over to the "Options" tab, where you now have the ability to select a new guestOS type, VMware ESXi 5.x or VMware ESXi 4.x, under the "Other" section.

Note: I'm not sure why these two additional guestOS types are not available from the default creation menu, but they do become available after the initial VM shell is created.

3. You are now ready to install ESXi 5.0 in this new vESXi host, and then you can create and power on nested 64bit guestOSes within that vESXi instance as denoted in the picture below

Creating vESXi 5.0 Instance using Hardware Version 4/7:
1. To create a virtual ESXi 5.0 instance, start off by just creating a standard RHEL5/6 64bit VM using the vSphere Client

2. Now you will need to add the following advanced .vmx parameter: monitor.virtual_exec = "hardware", which can be done through the vSphere Client and/or by editing the .vmx file manually.

3. Next you will need to add some cpuid bits; depending on whether you are running an Intel or AMD CPU, the respective entries are required (a combined example follows after these steps):

Intel Hosts:

cpuid.1.ecx = "----:----:----:----:----:----:--h-:----"

AMD Hosts:

cpuid.80000001.ecx.amd = "----:----:----:----:----:----:----:-h--"
cpuid.8000000a.eax.amd = "hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
cpuid.8000000a.ebx.amd = "hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"
cpuid.8000000a.edx.amd = "hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh:hhhh"

4. You are now ready to install ESXi 5.0 in this new vESXi host, and then you can create and power on nested 64bit guestOSes within that vESXi instance
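As referenced in step 3, here is a combined sketch of the .vmx additions for a hardware version 4/7 vESXi VM on an Intel host (swap in the AMD entries above if you are on an AMD system):

monitor.virtual_exec = "hardware"
cpuid.1.ecx = "----:----:----:----:----:----:--h-:----"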

By using a VM that is hardware version 8, you can easily automate the creation of a vESXi 5.0 instance by changing the guestOS string in the .vmx file to "vmkernel"; the above configurations, other than the "vhv" string, needed for either an Intel or AMD system are then automatically applied.
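In other words, for a hardware version 8 VM the single .vmx entry below is enough, with the monitor and cpuid settings taken care of automatically:

guestOS = "vmkernel"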

For proper networking connectivity, also ensure that either your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally or on the specific portgroup or distributed portgroup your nested ESXi hosts are connected to.
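If you prefer the command line over the vSphere Client, something along these lines should work for a standard vSwitch on ESXi 5.0 (a sketch; vSwitch0 is a placeholder for your own switch name):

~ # esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true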

Creating a vHyper-V  Instance on physical ESXi 5.0:
1. To create a virtual Hyper-V instance, start off by creating a Windows Server 2008 R2 64bit VM using the vSphere Client

2. If you are using Hardware Version 7, you will need to follow the instructions in "Creating vESXi 5.0 Instance using Hardware Version 4/7" to add the additional parameters to the VM. If you are using Hardware Version 8, you just need to change the guestOS type to VMware ESXi 5.0

3. You need to add one additional .vmx parameter which tells the underlying guestOS (Hyper-V) that it is not running as a virtual guest, even though it really is. The parameter is hypervisor.cpuid.v0 = "FALSE" (a combined sketch follows after these steps)

4. You are now ready to install Hyper-V in a virtual machine and you can also spin up nested 64bit guestOSes in this virtual Hyper-V instance.
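As referenced in step 3, a combined sketch of the .vmx additions for a hardware version 7 Hyper-V VM on an Intel host looks like this; a hardware version 8 VM only needs the last entry along with the guestOS type change:

monitor.virtual_exec = "hardware"
cpuid.1.ecx = "----:----:----:----:----:----:--h-:----"
hypervisor.cpuid.v0 = "FALSE"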

As you can see, now you can even run Hyper-Crap, err I mean Hyper-V as a virtualized guest under ESXi 5.0. I did not get a chance to try out Xen, but I'm sure with the ability to virtualize the Hardware Virtualization instructions, you should be able to run other types of hypervisors for testing purposes.

This is a really awesome feature but note that this is not officially supported by VMware, use at your own risk.

Categories // ESXi, Home Lab, Nested Virtualization, Not Supported Tags // ESXi 5.0, hyper-v, nested, vesxi, vhv, vSphere 5.0

The Dreaded fault.RestrictedVersion.summary Error

06.28.2011 by William Lam // 9 Comments

The question about the fault.RestrictedVersion.summary error message comes up pretty often on the VMTN community forums, specifically from new users of VMware vSphere. This particular error message is usually generated when trying to use one of the VMware SDK toolkits such as the vCLI, PowerCLI, VI Java, etc. or when directly interfacing with the vSphere API for automation. The reason for this error message is the use of the free vSphere Hypervisor license (formerly known as free ESXi).

Here is an example of the error trying to use vCLI to put an ESXi host into maintenance mode:
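For instance, attempting to enter maintenance mode with vicfg-hostops against a free-licensed host (the hostname below is a placeholder) returns the fault.RestrictedVersion.summary fault:

vicfg-hostops --server esxi01.example.com --username root --operation enter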

 
Here is an example of the error trying to use PowerCLI to power on a virtual machine:

Though the message is not easy for new users to decipher, it is a VMware license restriction placed on the vSphere Hypervisor license edition. Users only have the ability to perform read-only operations against the vSphere API, which covers all the vSphere toolkits/management tools such as the vCLI, vMA, PowerCLI, VI Java, etc. Any operation that requires a change of state, such as powering on a virtual machine or putting an ESXi host into maintenance mode, is considered a write operation and is NOT allowed under the vSphere Hypervisor license type. When using the vSphere Client to manage an ESXi host, users have complete read and write access to these types of operations; they just can not be automated using the vSphere APIs or the various SDKs.

I personally think the error message should be updated to be a bit clearer and state that this particular license type (free ESXi) does not allow for any changes using the APIs. There is a small VMware KB article regarding the configuration of Jumbo Frames using the vCLI, but as you can see this error is not just applicable to the vCLI but applies across the vSphere API/toolkits.

There are three types of licenses an ESXi host can be configured with, and we'll walk through each of them. In the following examples, I will be using the vSphere SDK for Perl script licenseManagement.pl to demonstrate the various license types, which is allowed since it only queries the license information, a read operation against the APIs.

1. Evaluation License - When you first install vSphere Hypervisor (free ESXi) and before applying your free license key, the host is automatically configured to be in this mode. In the evaluation state (60 days), you are basically licensed with ALL features, just as if you had paid for the highest license type, which is Enterprise Plus. You have full read and write access to the vSphere API and all the various vSphere toolkits for 60 days as part of the eval; once the 60 days are up, you will be configured just like the free version of ESXi.

Here is an example of querying the license information for an evaluation ESXi host:

If we take a look at the License section using the vSphere Client, we'll see all the features included with this particular license type:

2. Free License - This is the free license that is provided from VMware once you have registered and downloaded vSphere Hypervisor (free ESXi)

Here is an example of querying the license information on a free ESXi host:

Notice the editionKey in the first column and the name of the license in the second column, which is "vSphere 4 Hypervisor" and represents the use of the free edition/license of ESXi. Both of these text strings can also be used in conjunction to programmatically determine that you are running the free edition of ESXi.

If we take a look at the vSphere Client we also notice the list of features is quite small compared to the evaluation version:

3. All other licenses - This covers all other vSphere edition licenses available, which includes Essentials, Essentials Plus, Standard, Advanced, Enterprise and Enterprise Plus. For more details about the various license editions, please take a look at the vSphere Edition Comparison chart.

There is not a specific license feature that allows for read and write access to the vSphere API and toolkits, but by using any of the paid editions, you automatically have complete read and write access to the vSphere APIs and the various vSphere toolkits.

Here is an example of querying the licensing information on a host with Enterprise Plus license type:

Here is an example of the features available via the vSphere Client:

So now you can easily check the vSphere edition you are running to determine whether or not you have the ability to use some of the awesome vSphere toolkits and management tools to automate your vSphere infrastructure, which I highly recommend you do.

Categories // Uncategorized Tags // api, restrictedversion, vSphere

How to Disable a vmnic in ESX(i)

06.24.2011 by William Lam // 8 Comments

There was another interesting thread today on the VMTN community forums about disabling an unused vmnic from an ESXi host due to false alarms being generated from HP SIM. Due to the specific hardware version, disabling the vmnic through the BIOS was not an option and there were no alternatives from the discussion.

I decided to dig a bit to see what I could find and stumbled upon a neat little utility called vmkchdev (VMkernel Change Device?), which from what I can tell provides a method for passing a particular device to be controlled by either the VMkernel or as a passthrough device to a virtual machine (think VMDirectPath).

Disclaimer: Please note, this is an undocumented utility. You should test this out in a development/lab environment before using it, and you may want to also contact VMware support to get their blessing.

~ # vmkchdev
Usage:
-s (scan device)
-v (give device to vmkernel)
-p (give device to passthru/VM)
-l (list device state)
-L (list device state with details)
[0x][seg:[bus[:slot[.func]]]]

What I found while testing the utility is that by passing the device over as a passthrough device, the vmnic is actually unpresented to the VMkernel and does not show up under network adapters or even in the unused/unlinked adapter list in the vSwitch configurations. It seems that it just masks the device away from the VMkernel, as you can still see the active configuration in esx.conf and you can see the device listed using vmkchdev.

Here is an example of an unused physical nic, vmnic1, that we would like to disable and unpresent from an ESXi host.

Here is the output from esxcfg-nics:
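For reference, the command that produces that listing of the host's physical NICs, including vmnic1, is:

~ # esxcfg-nics -l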

First we need to identify the vmnic's PCI slot, which we do by running the "-l" or list operation and searching for the particular vmnic device.

~ # vmkchdev -l | grep vmnic1
000:002:01.0 8086:100f 15ad:0750 vmkernel vmnic1

Next we will pass the device from the VMkernel to passthrough/VM using the "-p" flag and specifying the PCI slot, which in this case is 000:002:01.0. We will also need to refresh the network section using vim-cmd so the changes are reflected in the vSphere Client.

~ # vmkchdev -p 000:002:01.0
~ # vim-cmd hostsvc/net/refresh

If we list our vmnic again using vmkchdev, you will notice the device is now owned by passthru versus the VMkernel.

~ # vmkchdev -l | grep vmnic1
000:002:01.0 8086:100f 15ad:0750 passthru vmnic1

Now if we check the output of esxcfg-nics and the vSphere Client, you will notice that vmnic1 is nowhere to be found

If you would like to enable or re-present the disabled vmnic, you just need to pass the device back over to VMkernel by using the "-v" flag.

~ # vmkchdev -v 000:002:01.0
~ # vim-cmd hostsvc/net/refresh

Your vmnic should now re-appear on all your screens, and any existing NIC teams that may have existed are automatically restored. This trick actually works on both used and unused vmnics

If you are trying to do this on ESX, vmkchdev actually has an additional option called "console" for the Service Console.

[root@esx41-1 ~]# vmkchdev
Usage:
-s (scan device)
-c (give device to console)
-v (give device to vmkernel)
-p (give device to passthru/VM)
-l (list device state)
-L (list device state with details)
[0x][seg:[bus[:slot[.func]]]]

I found that you need to pass the vmnic from the VMkernel to the Console; passing it to passthru/VM will not work, and an error is thrown if you do. Again, you can easily re-enable it by passing it back to the VMkernel

vmkchdev -c 000:002:01.0

If you would like to persist this change across reboots, specifically for ESXi where such changes are not saved automatically, you will need to add the following lines to /etc/rc.local, which will execute the disabling of the vmnics after bootup.

/sbin/vmkchdev -p 000:002:01.0
vim-cmd hostsvc/net/refresh

You will also need to run /sbin/auto-backup.sh to ensure the changes to /etc/rc.local are saved and reloaded upon the next reboot. For classic ESX, you can place the commands in /etc/rc.local without having to do anything extra, as the changes persist across reboots.
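As a quick sketch of the whole sequence on ESXi (using the PCI address from this example):

~ # cat >> /etc/rc.local << 'EOF'
/sbin/vmkchdev -p 000:002:01.0
vim-cmd hostsvc/net/refresh
EOF
~ # /sbin/auto-backup.sh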

Now you can play hide and seek with your vmnics without resorting to a system reboot or touching the BIOS. Ideally though, if you do have unused devices, you should definitely disable them in the BIOS if you have the option. On a side note, this might be a fun trick to play on one of your co-workers by hiding all the vmnics 😉

Categories // ESXi, Not Supported Tags // ESX 4.0, ESXi 4.1, vmkchdev, vmkdevmgr, vSphere 4.1



Copyright WilliamLam.com © 2025