Having Difficulties Enabling Nested ESXi in vSphere 5.1?

09.29.2012 by William Lam // 21 Comments

I noticed there were a few folks having some difficulties enabling Nested ESXi (VHV, Virtual Hardware Virtualization) in the latest release of ESXi 5.1, and I thought I would share some additional info and tips on troubleshooting your setup in case you are running into similar problems.

*** DISCLAIMER *** This is not officially supported by VMware; do not bother asking if it is supported or calling into VMware support for details or help.

If you wish to run nested ESXi or other hypervisors on ESXi 5.1 and run 32-bit nested virtual machines, you must meet the following hardware requirement:

  • CPU supporting Intel VT-x or AMD-V

If you wish to run nested 64-bit virtual machines in your nested ESXi or other hypervisors, in addition to the requirement above, you must also meet the following hardware requirement:

  • CPU supporting Intel EPT or AMD RVI

If you only meet the first criteria, you CAN still install nested ESXi or other hypervisors on ESXi 5.1, BUT you will only be able to run 32-bit nested virtual machines. When you create your virtual machine shell using the new vSphere Web Client, in the expanded CPU view, the "Hardware Virtualization" box will be grayed out. This is expected as you do not have full support for VHV, but you can still continue with your installation of ESXi or other hypervisors.
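As a point of reference, when the host does report full VHV support, the Web Client checkbox maps to a per-VM setting; to the best of my knowledge the equivalent .vmx entry in vSphere 5.1 is vhv.enable, shown in the sketch below with a placeholder datastore path and VM name (add it with the VM powered off).

# Per-VM setting behind the "Hardware Virtualization" checkbox; only honored
# when the host has full VHV support. Datastore path and VM name below are
# placeholders.
echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/NestedESXi/NestedESXi.vmx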

In ESXi 5.0, you may have been able to run 64-bit nested virtual machines without EPT/RVI support but performance was extremely poor. With ESXi 5.1, VHV now requires EPT/RVI.

Note: During the installation of ESXi, you may see the message "No Hardware Virtualization Support"; you can safely ignore it.

If you are using sites such as Intel's ark.intel.com to check your CPU requirements, be aware that it is COMMON even for hardware vendors to publish incorrect information on their websites. However, there is a quick way to validate on your ESXi host whether you have full VHV support.

In vSphere 5.1, there is a new capability property called nestedHVSupported which specifies whether your physical ESXi 5.1 host has full VHV support. This property will only be true IF your CPU has both Intel VT-x and EPT, or both AMD-V and RVI. A quick and easy way to validate this is to use the vSphere MOB to retrieve the value.

To check the nestedHVSupported property, enter the following into a web browser (substituting the IP address/hostname of your ESXi host):

https://himalaya.primp-industries.com/mob/?moid=ha-host&doPath=capability

After you log in, search for the nestedHVSupported property on the page and you should see a value of either true or false. As mentioned earlier, if it is false, you might still be able to install nested ESXi or other hypervisors, but you will not be able to run nested 64-bit virtual machines. I would also recommend taking a look at your system BIOS to ensure features like Intel VT-x/EPT and AMD-V/RVI are enabled; sometimes the fix is as simple as a BIOS upgrade (you can always confirm with your hardware vendor if you have further questions).
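If you prefer to check from the ESXi Shell instead of a browser, the following is a quick, unsupported sketch; the vsish node path is an assumption based on ESXi 5.x builds, and an "HV Support" value of 3 generally indicates both VT-x/AMD-V and EPT/RVI are usable.

# Unsupported quick check from the ESXi Shell; the vsish path is an assumption
# for ESXi 5.x builds. A value of 3 indicates full VHV support.
vsish -e get /hardware/cpu/cpuList/0 | grep -i "HV Support"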

For proper network connectivity, also ensure that either your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally on the vSwitch or on the specific portgroup or distributed portgroup your nested ESXi hosts are connected to.
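For a standard vSwitch this can also be done from the command line; here is a minimal sketch, assuming a vSwitch named vSwitch0 (for a Distributed Virtual Switch, edit the security policy on the distributed portgroup in the vSphere Web Client instead).

# Show the current security policy, then allow promiscuous mode and forged
# transmits on the standard vSwitch (vSwitch0 is an example name).
esxcli network vswitch standard policy security get -v vSwitch0
esxcli network vswitch standard policy security set -v vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true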

Additional Resources: 

  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

Categories // Uncategorized Tags // ESXi 5.1, hyper-v, nested, vcd, vcloud director 5.1, vesxi, vhv, vsel, vSphere 5.1

Lossless OVF Export in vSphere 5.1 & vCloud Director 5.1

09.27.2012 by William Lam // 5 Comments

Did you know that in vSphere 5.1, the new vSphere Web Client provides a lossless OVF export feature for virtual machines? Prior to this, when you exported a virtual machine, only the basic configurations were captured to ensure the virtual machine could be re-imported into another environment that supports OVF. The virtual machine's "virtual hardware personality", such as the BIOS UUID, MAC addresses, VMware-specific advanced settings (Extra Configurations), boot order, PCI slot numbers, etc., was not captured as part of the export.

One of the main reasons for doing this is to ensure greater portability of the virtual machine across different environments. Having some of these environment-specific properties tied to the virtual machine could limit where it can be imported. However, if you wish to preserve some of these settings during an export, vSphere 5.1 now provides an advanced option to specify additional configurations to capture in the OVF export.

When you select the "Enable advanced options" box, you will be provided with a list of configurations that you can preserve and export with the virtual machine.

Note: As the warning message states, only enable the configurations that you absolutely need; otherwise, stick to the defaults to ensure the best portability for your virtual machine export.

There is also a similar feature in vCloud Director 5.1 for vAppTemplates. When you initiate a download of a vAppTemplate, you now have an additional check box to "Preserve identity information" of the vAppTemplate prior to export. In vCloud Director this is just a single check box; you cannot specify which individual configurations to preserve.

All these new OVF features can also be accessed with the latest ovftool 3.0.1 release, which includes several new enhancements as well as support for both vSphere 5.1 and vCloud Director 5.1. You can find more details in the ovftool 3.0.1 user guide, but here is a summary of what's new in this release.

What Is New in OVF Tool 3.0.1

  • Support for vSphere 5.1 and 5.0
  • Support for vCloud Director 5.1, 1.5, and partial support for vCloud Director 1.0
  • The following new ovftool options: --annotation, --exportFlags, --shaAlgorithm, --sourcePEM, --targetPEM, --vCloudTemplate, --sourceSSLThumbprint, --targetSSLThumbprint, --fencedMode, --noSSLVerify, --vService

Feature Highlights

  • Full support for vCloud Director 1.5 and 5.1
  • Includes full OVF 1.0 and 1.1 support and a backward-compatible mode for importing existing OVF 0.9 packages
  • Supports both import and generation of OVA packages (OVA is part of the OVF standard, and contains all the files of a virtual machine or vApp in a single file.)
  • Directly converts between any vSphere, vCloud Director, VMX, or OVF source format to any vSphere, vCloud Director, VMX, or OVF target format
  • Accesses OVF sources using HTTP, HTTPS, or FTP, or from a local file 
  • Deploys and exports vApp configurations on vSphere 4.0 targets and later and on vCloud Director 1.5 targets and later
  • Provides options to power on a VM or vApp after deployment, and to power off a virtual machine or vApp before exporting (caution advised)
  • Shows information about the content of any source in probe mode
  • Provides context-sensitive error messages for vSphere and vCloud Director sources and targets, showing possible completions for common errors, such as an incomplete vCenter inventory path or missing datastore and network mappings
  • Provides an optional output format to support scripting when another program calls OVFTool
  • Uses new optimized upload and download API (optimized for vSphere 4.0 and later)
  • Signs OVF packages and validates OVF package signatures
  • Validates XML Schema of OVF 1.0 and OVF 1.1 descriptors
  • Imports and exports OVF packages into and out of a vApprun 1.0 workspace. For more information about vApprun, see http://labs.vmware.com/flings/vapprun.
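To tie this together, here is a minimal ovftool export sketch; the vCenter hostname, inventory path, and VM name are placeholders, and you would consult the ovftool 3.0.1 user guide for the values accepted by the new options such as --exportFlags.

# Hypothetical export of a virtual machine from vCenter to a local OVF package
# using the vi:// source locator (hostname, datacenter, and VM name are
# placeholders).
ovftool "vi://administrator@vcenter.example.com/Datacenter/vm/MyVM" /tmp/MyVM.ovf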

Categories // Automation, OVFTool Tags // export, lossless, ova, ovf, ovftool, vSphere 5.1, vsphere web client

2gbsparse Disk Format No Longer Working On ESXi 5.1

09.26.2012 by William Lam // 4 Comments

I was recently made aware of an issue with my ghettoVCB script: after upgrading to ESXi 5.1, the ability to clone (or in this case, back up) using the 2gbsparse disk format with vmkfstools was no longer working. The error users were seeing was "The system cannot find the file specified." and I confirmed this behavior by manually creating a VMDK and then trying to clone it using the 2gbsparse format.
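For reference, below is the type of vmkfstools clone operation that now fails on a stock ESXi 5.1 install; the source and destination paths are just placeholders for this sketch.

# Clone a VMDK into the 2gbsparse format; on ESXi 5.1 this fails with
# "The system cannot find the file specified." until the multiextent
# VMkernel module is loaded. Paths are placeholders.
vmkfstools -i /vmfs/volumes/datastore1/VM1/VM1.vmdk /vmfs/volumes/datastore1/backup/VM1-backup.vmdk -d 2gbsparse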

To give you some background, the 2gbsparse disk format is not a VMFS virtual disk format; it comes from the hosted desktop products (VMware Fusion, Workstation, Server & Player). This disk format was created to work around cross-platform file system compatibility issues, as pointed out in this VMware KB article. The issue does not exist on VMFS, and hence this extra disk format is not necessary there.

After some investigation, I found that to use the 2gbsparse format in vmkfstools, you will need to load a specific VMkernel module called "multiextent". 2gbsparse was never officially supported on ESXi; you cannot run a virtual machine with a 2gbsparse disk on ESXi, which is why a conversion may be required when moving from a hosted product to ESXi. Disabling VMkernel modules that are not used therefore made sense to help reduce the amount of resources needed at boot. This is especially important with stateless deployments, where you want your ESXi host to boot as fast as possible.

Once you have loaded this VMkernel module, the 2gbsparse format will function again with vmkfstools. I also found that this change was mentioned in the vSphere 5.1 release notes (yes, you should read the release notes).

To load the multiextent VMkernel module, run the following ESXCLI command:

esxcli system module load -m multiextent

To check whether the multiextent VMkernel module has loaded, run the following ESXCLI command:

esxcli system module list | grep multiextent

If you wish to persist this configuration across a system reboot, I found that you need to add the following command to the start-up script /etc/rc.local.d/local.sh, as just setting the module's "enabled" flag is not sufficient for this particular VMkernel module.

localcli system module load -m multiextent

Note: We use localcli here because hostd may not be completely ready when local.sh runs; you can either add a sleep/timer before calling esxcli or just use localcli.
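Putting it together, the relevant portion of /etc/rc.local.d/local.sh would look something like the following sketch; placing the command before the existing exit 0 line is an assumption based on how the stock file is laid out.

# /etc/rc.local.d/local.sh (excerpt)
# Load the multiextent VMkernel module at boot so 2gbsparse clones with
# vmkfstools continue to work after a reboot.
localcli system module load -m multiextent

exit 0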

Categories // ESXi Tags // 2gbsparse, esxcli, ESXi 5.1, localcli, multiextent, vmkernel module, vmkfstools, vSphere 5.1
