Search Results for: nested esxi

How to quickly setup and test VMware VSAN (Virtual SAN) using Nested ESXi

09.02.2013 by William Lam // 48 Comments

Last week at VMworld 2013, VMware announced the release of vSphere 5.5, which includes a variety of exciting new features. One of the most anticipated features introduced in this release is VMware Virtual SAN (VSAN), which will initially be available as a public beta. One question that I heard repeatedly throughout the VMworld conference was whether it would be possible to test VSAN in a nested ESXi environment. The answer is absolutely! This is a great way to learn about VSAN and how it works from a functional perspective before procuring the necessary hardware.

Disclaimer: Running VSAN in a nested ESXi environment is not officially supported, nor is it a replacement for testing on actual physical hardware.

Before getting started, I would highly recommend you check out the following resources from my good friend Cormac Hogan, which include a detailed VSAN walkthrough as well as what looks to be an awesome series of articles on how VSAN works:

  • VSAN Walkthrough
  • VSAN Part 1 - A first look at VSAN
  • VSAN Part 2 - What do you need to get started

Requirements:

  • Environment running either vSphere 5.1 or 5.5 and access to the vSphere Web Client.

Configuration:

Nested ESXi VM configured with the minimal resources:

  • 2 vCPU
  • 5GB Memory (ESXi 5.5 now requires a minimum of 4GB, up from 2GB in previous releases, but VSAN requires a minimum of 5GB, with 6GB recommended)
  • 2GB Disk for ESXi 5.5 installation
  • 4GB Disk for an "Emulated" SSD
  • 8GB Disk for HDD

Easy Method:

Instead of having you go through the process of building a Nested ESXi VM with all the prerequisites, which includes steps from here and here, I have pre-built a VSAN Nested ESXi VM template (217Kb) that you can just download, import into your environment, and use to begin the installation process.

Download either:

  • Single VSAN Nested ESXi VM Template
  • 3-Node VSAN Nested ESXi VM Template
  • 32-Node VSAN Nested ESXi VM Template

and connect to your vCenter Server 5.1 or 5.5 using the vSphere Web Client and import the OVF into your environment (do not use the vSphere C# Client, as the import does not persist the VHV configuration). Once you have imported the VM, you can then mount the ESXi 5.5 ISO and begin the installation. All three VMDKs have been thin provisioned, and you can change the capacity during deployment.

Slightly Harder Method:

If you wish to build the Nested ESXi VM yourself, then you can follow these instructions:

Step 1 - Create a new VM, and when you get to the compatibility screen, select either "ESXi 5.1 or greater" or "ESXi 5.5 or greater" depending on the version of vSphere you are running.

Step 2 - For the Guest OS, select "Other" and then "Other (64-bit)".

Step 3 - We will need to customize the following virtual hardware configuration:

  • Change vCPU to 2
  • Click on CPU drop down and enable "Expose hardware assisted virtualization to the guest OS"
  • Change Memory to 5GB (the VSAN minimum, as noted above)
  • Change the initial VMDK to 2GB or whatever value you wish to use for ESXi installation
  • Add second VMDK with 4GB or whatever value you wish to use for "emulated" SSD
  • Add third VMDK with 8GB or whatever value you wish to use for the HDD
  • Click on the VM Options tab at the top and select the "Advanced" drop down box. We will need to add the entry scsi0:1.virtualSSD = 1 (see the sketch after this list). For more details, please refer to this article.
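
For reference, here is a rough sketch of the relevant .vmx entries once the VM has been configured (assuming the "emulated" SSD VMDK is attached at scsi0:1, as in the steps above):

vhv.enable = "TRUE"
scsi0:1.virtualSSD = 1

The vhv.enable entry is what the "Expose hardware assisted virtualization to the guest OS" checkbox sets, and scsi0:1.virtualSSD = 1 presents that VMDK to the guest OS as an SSD.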

Step 4 - Click OK to provision the VM. Once it has been deployed, you will need to re-configure the guestOS to "VMware ESXi 5.x" using the vSphere C# Client for vSphere 5.1 or the vSphere Web Client for vSphere 5.5. At this point, you will have the same VM image as in the Easy Method, and you are now ready to install ESXi 5.5.

When you install ESXi 5.5, you should see all three disks presented in the installer; ensure you install ESXi on the 2GB disk.
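
Once ESXi is up and running, you can also confirm from within the nested host that the 4GB disk is being detected as an SSD. A quick sketch using ESXCLI from the ESXi Shell (device names will differ in your environment):

esxcli storage core device list | grep -i "Is SSD"

One of the devices should report "Is SSD: true"; that is the emulated SSD which VSAN will claim when creating the disk group.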

Prior to enabling VSAN on the particular vSphere Cluster, make sure you enable the new VSAN traffic type on one of the VMkernel interfaces for each of your ESXi hosts, as this is required for VSAN communication.
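
If you prefer the command line over the vSphere Web Client, the same VSAN traffic tagging can be done with ESXCLI on each host. A minimal sketch, assuming vmk0 is the VMkernel interface you wish to use for VSAN traffic:

esxcli vsan network ipv4 add -i vmk0

You can verify the configuration afterwards with esxcli vsan network list.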

If all the prerequisites have been met, you can now easily enable VSAN by simply checking the VSAN box when editing the vSphere Cluster. In just a few minutes, you should see disk groups automatically created (assuming you selected Automatic mode), consuming both the emulated SSD and HDD, along with the creation of the vsanDatastore, which will be available on all ESXi hosts within that vSphere Cluster.
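
You can also verify from any of the nested ESXi hosts that the disk group was created and both disks were claimed. A quick sketch using ESXCLI (output will vary in your environment):

esxcli vsan cluster get
esxcli vsan storage list

The first command shows the host's VSAN cluster membership, and the second should list both the emulated SSD and the HDD claimed for the disk group.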

You can also use the same method of emulating an SSD in a Nested ESXi VM to functionally test the new VMware Flash Read Cache (vFRC) feature.

Categories // VSAN, vSphere 5.5 Tags // nested, ssd, vflash, vFRC, Virtual SAN, VSAN, vSphere 5.5

How To Enable Nested ESXi Using VXLAN In vSphere & vCloud Director

05.06.2013 by William Lam // 9 Comments

Recently I received several inquiries asking how to configure nested ESXi (Nested Virtualization) to function in a VXLAN environment. I have written several articles in the past on configuring nested ESXi in a regular vSphere and vCloud Director environment, but with a VXLAN-backed network, there are a few additional steps required. These steps include additional configuration of the vCloud Networking & Security Manager (previously known as vShield Manager), which ensures that both the required promiscuous mode and forged transmits are automatically enabled for the VXLAN virtual wires (vWires), as they are managed exclusively by the vCNS Manager.

In this article, I will walk you through the configurations that are required when using VXLAN in both a vSphere-only environment and a vCloud Director environment. If you would like to learn more about how VXLAN works, be sure to check out the multi-part VXLAN series (Part 1/Part 2) by Venky Deshpande.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Configurations for VXLAN in vSphere Environment

Step 1 - Deploy the vCNS Manager and configure it to point to your vCenter Server (do not enable or prepare VXLAN; this must be done after the configurations below).

Step 2 - You will need to identify the VDS MoRef ID in your vCenter Server, which will be used in the next step. Since the configuration is applied at the VDS level, you may want to consider having a separate VDS serving Nested Virtualization traffic, since both promiscuous mode & forged transmits will automatically be enabled for all vWires. To locate the VDS MoRef ID, login to the vSphere Web Client and select the summary view for the VDS.

The VDS MoRef ID will be towards the end of the URL link, and it should start with dvs-X, where X is some arbitrary number. Record this value for the next step.

Step 3 - Download the enablePromForVDS.sh shell script, which will be used to prepare the VDS within the vCNS Manager. The script basically performs a POST against the vCNS Manager's REST API using cURL, and it accepts three input parameters: vCNS Manager IP Address/Hostname, VDS MoRef ID, and VDS MTU. The username/password is hard-coded in the script to use the default, which is admin/default. If you have modified the default password like any good admin, you will want to change the password before running the script. If you take a look at the request body, you will notice that only promiscuous mode is set to true, but this will automatically enable forged transmits as well.

In my lab environment, the vCNS Manager IP is 172.30.0.196, the VDS MoRef ID is dvs-13, and the VDS MTU is 9000. So the syntax to run the script would be:

./enablePromForVDS.sh 172.30.0.196 dvs-13 9000

When you execute the script, you should see a response with a 200 status code, indicating successful execution.
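
If you are curious what the script is doing under the covers, the request is roughly equivalent to the following cURL sketch. I am basing the endpoint and XML body on my reading of the vCNS REST API, so treat both as assumptions and consult the vCNS API Programming Guide for the authoritative schema:

curl -k -u admin:default -X POST -H "Content-Type: application/xml" \
  -d '<vdsContext><switch><objectId>dvs-13</objectId></switch><mtu>9000</mtu><promiscuousMode>true</promiscuousMode></vdsContext>' \
  https://172.30.0.196/api/2.0/vdn/switches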

Step 4 - Now we will proceed with the VXLAN preparation. Start off by logging into the vCNS Manager and selecting the vSphere Datacenter on which you wish to enable VXLAN. On the right you should see a tab called "Network Virtualization"; go ahead and click on that, then click on the sub-tab called "Preparation". Click on edit, select the vSphere Cluster, and proceed through the wizard based on your environment configuration.

Step 5 - Once the VXLAN preparation has completed, click on "Segment ID" and configure it based on your environment.

Step 6 - Next, click on "Network Scopes"; here you will create a network scope and specify the set of vSphere Clusters the VXLAN network will span.

Step 7 - Lastly, click on "Networks"; this is where you will create your vWires, ensuring the proper network scope is selected.

Step 8 - To confirm that everything has been configured properly, log back into the vSphere Web Client and head over to the VDS settings page. You should now see a newly created vWire portgroup; if you take a look at its settings, you should see that both promiscuous mode and forged transmits are enabled.

You are now done with the VXLAN configurations in the vCNS Manager and can proceed to the regular instructions for enabling Nested ESXi for vSphere.

Note: If you have already prepared VXLAN in your environment, you can still apply the above configuration without having to un-prepare your VXLAN setup. You just need to login to the vCNS Manager via the REST API and perform a DELETE on the VDS switch (please refer to page 153 of the vCNS API Programming Guide), which will just delete the mapping from vCNS but will not destroy any of your VDS configuration. Once that is done, you will be able to use the script to configure the VDS with the proper settings.
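
As a rough sketch, the DELETE call might look like the following. I am assuming it targets the same switches endpoint used during preparation, so double-check the exact resource path against page 153 of the API guide before running it:

curl -k -u admin:default -X DELETE https://172.30.0.196/api/2.0/vdn/switches/dvs-13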

Configurations for VXLAN in vCloud Director Environment

A VXLAN network pool is automatically created for you when using vCloud Director 5.1, so the steps for preparing Nested Virtualization for vCloud Director are extremely simple compared to the vSphere-only environment.

Note: VXLAN is only supported in vCloud Director 5.1; for previous versions, you have the choice of using a VCD-NI or vSphere-backed network, and the configurations for that can be found here.

Step 1 - Follow steps 1-5 from the vSphere-only environment above, and then you are done. If you would like a more detailed walkthrough of configuring VXLAN for a vCloud Director environment, check out this article by Rawlinson Rivera, who takes you through the process step by step.

Step 2 - Proceed to the regular instructions for enabling Nested ESXi for vCloud Director.

Step 3 - Lastly, you will go through the vCloud Director setup: attach your vCenter Server & vCNS Manager, create a Provider VDC, create an Organization, assign resources to your Organization VDC, and ensure that the OrgVDC is consuming the VXLAN network pool that is automatically created for you when you create the Provider VDC. Once that is done, when you deploy your vApp, you will see a vWire automatically created for you. If you login to the vSphere Web Client and go to the VDS settings, you will see that the vWire has both promiscuous mode and forged transmits automatically enabled.

Additional Resources:

  • Nested Virtualization Resources

Categories // Automation, Nested Virtualization, NSX Tags // nested, vcloud director 5.1, vcloud networking and security, vcns, vhv, vSphere 5.1, VXLAN

Having Difficulties Enabling Nested ESXi in vSphere 5.1?

09.29.2012 by William Lam // 21 Comments

I noticed there were a few folks having some difficulties enabling Nested ESXi (VHV, or Virtual Hardware Virtualization) in the latest release of ESXi 5.1, and I thought I would share some additional info and tips on troubleshooting your setup in case you are running into similar problems.

*** DISCLAIMER *** This is not officially supported by VMware, do not bother asking if it is supported or calling into VMware support for details or help.

If you wish to run nested ESXi or other hypervisors on ESXi 5.1 and run 32-bit nested virtual machines, you must meet the following hardware requirement:

  • CPU supporting Intel VT-x or AMD-V

If you wish to run nested 64-bit virtual machines in your nested ESXi or other hypervisors, in addition to the requirement above, you must also meet the following hardware requirement:

  • CPU supporting Intel EPT or AMD RVI

If you only meet the first criteria, you CAN still install nested ESXi or other hypervisors on ESXi 5.1, BUT you will only be able to run 32-bit nested virtual machines. When you create your virtual machine shell using the new vSphere Web Client, in the expanded CPU view, the "Hardware Virtualization" box will be grayed out. This is expected as you do not have full support for VHV, but you can still continue with your installation of ESXi or other hypervisors.

In ESXi 5.0, you may have been able to run 64-bit nested virtual machines without EPT/RVI support but performance was extremely poor. With ESXi 5.1, VHV now requires EPT/RVI.

Note: During the installation of ESXi, you may see the following message: "No Hardware Virtualization Support". You can just ignore it.

If you are using sites such as Intel's ark.intel.com to check your CPU requirements, be aware that it is COMMON even for the hardware vendors to publish incorrect information on their websites. However, there is a quick way you can validate on your ESXi host whether you have full VHV support.

In vSphere 5.1, there is a new capability property called nestedHVSupported which specifies whether your physical ESXi 5.1 host has full VHV support. This property will only be true IF your CPU has both Intel-VT+EPT or AMD-V+RVI. A quick and easy way to validate this is using the vSphere MOB to retrieve the value.

To check nestedHVSupported property, please enter the following into a web browser (substitute the IP Address/hostname of your ESXi host):

https://himalaya.primp-industries.com/mob/?moid=ha-host&doPath=capability

After you login, search for the nestedHVSupported property on the page, and you should see a value of either true or false. As mentioned earlier, if it is false, you might still be able to install nested ESXi or other hypervisors, but you will not be able to run nested 64-bit virtual machines. I would also recommend taking a look at your system BIOS to ensure features like Intel VT/EPT and AMD-V/RVI are enabled; sometimes it might just be as simple as a BIOS upgrade (you can always confirm by contacting the hardware vendor if you have further questions).
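
If you would rather check this from a script than a web browser, you can pull the same MOB page with cURL and search for the property. A quick sketch, assuming the ESXi 5.1 MOB accepts HTTP basic authentication (esxi-host.example.com is a placeholder for your own host):

curl -sk -u root 'https://esxi-host.example.com/mob/?moid=ha-host&doPath=capability' | grep -i nestedHVSupported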

For proper network connectivity, also ensure that either your standard vSwitch or Distributed Virtual Switch has both promiscuous mode and forged transmits enabled, either globally or on the specific portgroup or distributed portgroup your nested ESXi hosts are connected to.
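
For a standard vSwitch, these security policy settings can also be applied from the ESXi Shell with ESXCLI. A minimal sketch, assuming vSwitch0 is the switch your nested ESXi hosts are connected to:

esxcli network vswitch standard policy security set -v vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true

Note that Distributed Virtual Switch portgroups must be configured through vCenter Server instead.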

Additional Resources: 

  • How to Enable Nested ESXi & Other Hypervisors in vSphere 5.1
  • How to Enable Nested ESXi & Other Hypervisors in vCloud Director 5.1

Categories // Uncategorized Tags // ESXi 5.1, hyper-v, nested, vcd, vcloud director 5.1, vesxi, vhv, vsel, vSphere 5.1
