WilliamLam.com

Search Results for: nested esxi

The Missing Piece In Creating Your Own Ghetto vSEL Cloud

10.31.2011 by William Lam // 21 Comments

A while back, I discovered an undocumented flag called "esxvm" in the SQL statements of the new vCloud Director 1.5 installer that suggested the possibility of deploying nested ESXi hosts in vCD. However, after further investigation, the flag only enables the automated configuration of an ESXi 5 parameter (vhv.allow), which is required to run nested ESXi 4.x/5.x hosts, as part of preparing a new ESXi 5 host in vCloud Director. There was still a missing piece to the puzzle to enable this functionality within the vCloud Director user interface.

The answer eventually came from attending a recent session at VMworld 2011 in Las Vegas, CIM1436 - Virtual SE Lab (vSEL) Building the VMware Hybrid Cloud, by Ford Donald of VMware. I will not go into detail about what vSEL is; if you would like more information, take a look at the blog post The Demo Cloud at VMworld Copenhagen or check out Ford's VMworld presentation online. In one of Ford's slides, he describes the steps necessary to enable nested ESXi support, called ESX_VM mode, in vCloud Director, which actually consists of two parts:

  • Enable nested virtualization and 64-bit VM support in vSphere 5
  • Enable special mode in vCloud Director called ESX_VM to allow for vSphere 4 and 5 hosts as valid guestOS types

There are also some additional steps that are required after enabling ESX_VM mode:

  • Preparing or re-preparing ESXi 5 hosts
  • Allowing for Promiscuous Mode in vCD-NI or VLAN-backed Network Pool

********************* DISCLAIMER *********************
This is not a configuration supported by VMware and it can disappear at any time. Use at your own risk.

********************* DISCLAIMER *********************

Note: I will assume the reader has a good understanding of how to install and configure vCloud Director and how it works. I will not be going into any details on configuring or installing vCD; you can find plenty of resources on the web, including here, here, here and here. I will also assume you understand how to configure vCD-NI and VLAN-backed network pools in vCloud Director and how they work.

The first part is to enable nested virtualization (nested ESXi) support within the ESXi 5 hosts when they are prepared by vCloud Director, by executing the following SQL statement as noted in my earlier blog post Cool Undocumented Features in vCloud Director 1.5:

UPDATE config SET value='true' WHERE name='extension.esxvm.enabled';

The second part is to update the vCloud Director database to add support for both vSphere 4 and 5 hosts as valid guestOS types:

INSERT INTO guest_osfamily (family,family_id) VALUES ('VMware ESX/ESXi',6);

INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal,'ESXi 4.x', 'vmkernelGuest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 40);

INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal, 'ESXi 5.x', 'vmkernel5Guest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 50);

To apply these SQL statements to your vCloud Director 1.5 database, you will need to log in to either your Oracle or SQL Server database and manually execute the statements using the account that you originally created.

Here is an example of executing the SQL statements on an Oracle Express 11g database (Oracle Express is not officially supported by VMware):
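
Something along these lines from the vCD cell's shell is the general idea; the "vcddb" account, password and XE SID shown here are placeholders, so substitute your own credentials and connection details:

# Connect to the vCD database as the account created during installation
# (placeholder credentials) and run the statements
sqlplus vcddb/vcdpassword@XE <<EOF
UPDATE config SET value='true' WHERE name='extension.esxvm.enabled';
-- ...followed by the two guest_osfamily / guest_os_type INSERT statements above
COMMIT;
EOF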

As you can see, we first create a new guest_osfamily type called "VMware ESX/ESXi", and we need to provide a unique family_id; from a default installation of vCloud Director 1.5, the next available value will be 6. Next, we create the two new guest_os_type entries, "ESXi 4.x" and "ESXi 5.x", and again we need to provide a unique guestos_id; from a default installation of vCloud Director 1.5, the next available values will be 81 and 82. If any errors are thrown about a constraint being violated, the ids may already be in use; you can always query the tables to see what the next value is or select a new id.
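
If you want to double check which ids are already taken before running the INSERT statements, a quick query along these lines (Oracle syntax shown, same placeholder credentials as above) should do it:

# Show the highest family_id and guestos_id currently in use
sqlplus -S vcddb/vcdpassword@XE <<EOF
SELECT MAX(family_id) FROM guest_osfamily;
SELECT MAX(guestos_id) FROM guest_os_type;
EOF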

Once you have executed the SQL statements, you will need to restart the vCloud Director Cell for the changes to take effect, and if you have already prepared ESXi 5 hosts, you will need to re-prepare them.
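
Restarting the cell is just a service restart on the vCD server:

# Restart the vCloud Director cell so the database changes take effect
service vmware-vcd restart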

If you prefer not to do this manually, take a look at my blog post Automating vCloud Director 1.5 & Oracle DB Installation, which has been updated to allow you to enable ESX_VM mode with your vCloud Director 1.5 installation. There is a new flag in the vcd.rsp file called ENABLE_NESTED_ESX which can be toggled to true/false; it will automatically run the SQL statements as part of the post-installation of vCloud Director 1.5 and restart the vCD Cell for you.
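
For reference, enabling it in the vcd.rsp response file is just a one-line change (sketched below; the rest of the response file stays as described in that post):

# vcd.rsp - enable ESX_VM (nested ESXi) mode during the automated installation
ENABLE_NESTED_ESX=true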

Here is a screenshot if you decide to enable this flag:

Finally, the last configuration tweak is to enable both promiscuous mode and forged transmits on either your vCD-NI or VLAN-backed Network Pool, which is a requirement for running nested ESXi hosts. Locate the name of your network pool so you can identify the corresponding distributed portgroup.

Next, you can either use the vCD API or log in to your vCenter Server and enable promiscuous mode for that specific distributed portgroup.

UPDATE: Thanks to @DasNing - You can also enable promiscuous mode by executing the following SQL query: UPDATE network_pool SET promiscuous_mode='1' WHERE name='<Name of Network Pool>';

We are finally done with all the configurations!

If you successfully completed the above, when you go to create a new virtual machine in vCloud Director you should now have a new Operating System Family called "VMware ESX/ESXi".

Within this new OS family, you can now provision a new ESXi 4.x or ESXi 5.x guestOS.

Here is an example of my own vGhettoPod, which includes a vMA 5 and a vESXi 5 host that I can use to perform various types of testing in my home lab.

Now you can create your own ghetto vSEL cloud using VMware vSphere 5, vCloud Director 1.5 and vShield 5!

Categories // Automation, ESXi, Nested Virtualization, Not Supported, Uncategorized Tags // ESXi 5.0, esxvm, nested, vcd, vcloud director, vsel, vSphere 5.0

How to Install VMware VSA with Running VMs

09.26.2011 by William Lam // 1 Comment

Those of you who want to quickly test out the new VMware VSA (vSphere Storage Appliance) will notice that you cannot just throw a few ESXi 5 hosts that already have virtual machines on them at it. If you try to proceed with the VSA installation, you will see an error message regarding the presence of virtual machines, whether they are running or not.

This can make it difficult to evaluate or test the new VSA if you do not have additional hosts that can easily be re-deployed as vanilla ESXi 5 installations. While working on the previous article How to Install VMware VSA in Nested ESXi 5 Host Using the GUI, I decided to test the behavior of a few other configuration variables found in the dev.properties file for the VSA Manager. It turns out that you can actually disable the host audit check, which includes the validation of running virtual machines, by changing the host.audit variable from "true" to "false" using the same trick documented here. You will need to restart the VSA Manager and then the vCenter Server service for the change to take effect.
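
The change itself is a one-line property flip in that dev.properties file (the exact path depends on your vCenter Server installation), followed by the service restarts mentioned above:

# dev.properties for the VSA Manager (location varies by vCenter installation)
# change host.audit from "true" to:
host.audit=false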

**** DISCLAIMER: This is not supported by VMware and there may be specific checks that are now bypassed by disabling the host.audit parameter. Please use at your own risk and test before deploying on actual systems ****

One interesting observation made while testing this in a nested ESXi configuration is that even though there is a message warning the user that any data found on the local VMFS volumes will be deleted, I did not see any process kicked off to actually do so. This does not mean that was not the original intention, but there was no reformatting of the local VMFS and no removal or powering off of the running virtual machines. While testing both a "supported" and a "ghetto" installation of the VSA, I found that several advanced settings were updated as part of the VSA installation; you should see the same if you look in the vmkernel.log of one of the ESXi 5 hosts:

2011-09-23T17:36:33.030Z cpu0:3475)Config: 346: "HostLocalSwapDirEnabled" = 0, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:00.971Z cpu0:3258)Config: 346: "HeartbeatPanicTimeout" = 60, Old Value: 900, (Status: 0x0)
2011-09-23T17:38:07.069Z cpu1:2851)Config: 346: "EnableSVAVMFS" = 1, Old Value: 0, (Status: 0x0)
2011-09-23T17:38:07.090Z cpu1:2851)Config: 346: "VmkStressEnable" = 0, Old Value: 1, (Status: 0x0)
2011-09-23T17:44:22.163Z cpu1:3477)Config: 346: "SIOControlFlag2" = 1, Old Value: 0, (Status: 0x0)

One setting that sparked my curiosity is EnableSVAVMFS, which is a hidden setting on the ESXi host that can be viewed using vsish. Per the limited documentation found in vsish, this parameter enables some sort of optimization for the local VMFS volume.
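
If you want to look at it yourself from the ESXi Shell, something along these lines should work; note that the exact vsish path shown here is my assumption and may differ between builds:

# View the hidden EnableSVAVMFS option via vsish (path is an assumption)
vsish -e get /config/VMFS3/intOpts/EnableSVAVMFS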

Thanks to @VMwareStorage (Cormac Hogan, VMware Technical Marketing for Storage) for the quick answer to my question on Twitter; it looks like this parameter does the following:

"Forces linear allocation of VMDKs on local VMFS for VSA. Improves mirroring performance across VSAs apparently" 

There was nothing in the vmkernel.log that would indicate the local VMFS was reformatted or that files had to be deleted to support the VSA installation. I can understand why VMware wanted a vanilla installation with no running VMs: it simplifies the installation process. Another reason I can think of is that any initial storage consumption offsets the amount of "available" storage that can be set up on the VSA cluster. The amount of available storage per host must be equal across the two or three node cluster to ensure there is sufficient space for replication. Just understand that with running virtual machines on one or more ESXi nodes, the node with the smallest amount of free physical storage determines how much storage the rest of the VSA nodes will be configured with.

You may also find yourself in a chicken-and-egg problem if the VSA installation fails and reverts its changes, which include putting the ESXi hosts into maintenance mode. That step will fail on the node that is running the vCenter Server and VSA Manager, which is another reason you would want to run the management system outside of the VSA cluster.

Without further ado, I recorded a quick six-minute video demonstrating the installation of the new VMware VSA on ESXi 5 hosts that have running virtual machines, including the vCenter Server and VSA Manager running on one of the nodes (the video is awesome when you bump up the audio):

Installing VMware VSA with Running VMs from lamw on Vimeo.

Not only is this not supported, it is also NOT a best practice to run the vCenter Server and VSA Manager within the VSA cluster, because you may potentially have issues with replication if vCenter and the VSA Manager go down. In my testing, I found that I could take down vCenter and the VSA Manager while the NFS volumes continued to function and the cluster continued to churn away. Any virtual machines running on the VSA volumes will automatically be restarted by vSphere HA. Once the VSA Manager has recovered, it will automatically ensure the volumes have all synchronized and re-protect the VSA cluster.

Note: It is important to understand that even though you can install the VMware VSA with running virtual machines using the hack above, the requirement of a vanilla ESXi 5 installation is still 100% mandatory. You MUST still have only a single vSwitch (vSwitch0) with only a single vmnic (vmnic0) connected to the vSwitch, and only the two default portgroups, "VM Network" and "Management Network", may exist; there is no workaround for this requirement. If you have a host on which you plan to run VMs prior to the VSA installation, make sure they are on the "VM Network" portgroup, as additional portgroups are not supported prior to installation of the VSA.

I am hoping that some of these requirements are relaxed in a future release of the VMware VSA, and possibly that a version will work with the vCVA (vCenter Virtual Appliance). For now, if you have limited hardware or would like to use existing ESXi 5 hosts with running virtual machines (they need to be configured like a vanilla installation of ESXi 5), you can run everything on either a two or three node cluster; just be aware of the caveats.

For more in-depth information and details about the new VSA, please check out the VMware Storage Blog - vSphere Storage Appliance Links and be sure to follow Cormac Hogan on twitter at @VMwareStorage

Categories // Uncategorized Tags // ESXi 5.0, evc, nested, vsa, vSphere 5.0

Cool Undocumented Features in vCloud Director 1.5

09.06.2011 by William Lam // 6 Comments

While working on the updated script in Automating vCloud Director 1.5 & Oracle DB Installation, I did some digging in my lab deployment and noticed a few interesting things about the new vCloud Director 1.5 installation.

The first thing I noticed is that after configuring a new Provider vDC, the vCloud Agent (stored in /opt/vmware/vcloud-director/agent) is pushed out to the ESXi 5 hosts and a new esxcli module for vCloud Director is added under /usr/lib/vmware/esxcli-vcloud.

There are 6 namespaces, ranging from simple configuration queries to network fence management, account management and also something called "esxvm", which I will go into a little bit later. I am not sure why this is not in the vCloud Director documentation; I was not able to find any reference to the new esxcli operations. You may also notice the use of the legacy "vslauser" (Virtual Software Lifecycle Automation) account throughout vCloud Director; even though vCD was re-written from the ground up, it looks like VMware decided to keep either the name or some of the code related to that service account.

Here is an example of running "esxcli vcloud about get" command:

Here is an example of running "esxcli vcloud fence getfenceinfo" command:

Lastly, here is an example of what "esxvm" namespace provides:

As you can see above, there are two operations: disable and enable support for 64-bit nested virtual machines. This is exactly the same configuration I blogged about in How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5, but driven through the esxcli interface by vCloud Director 1.5. Let's take a look at what happens when we run the "enable64bitnested" operation.

No surprise, we see that it automatically appends the required vhv.allow = "TRUE" flag, which enables support for running nested 64-bit virtual machines on a physical ESXi 5 host.
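
For reference, on ESXi 5.0 that flag lives in /etc/vmware/config on the host, so you can verify the result of the enable64bitnested operation, or make the equivalent manual change yourself, along these lines:

# Verify the flag after running "esxcli vcloud esxvm enable64bitnested"
grep vhv.allow /etc/vmware/config
# expected output: vhv.allow = "TRUE"

# Equivalent manual change (as described in my nested 64-bit VM post)
echo 'vhv.allow = "TRUE"' >> /etc/vmware/config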

You might be asking, why is this in vCloud Director? Well, if you attended VMworld 2011 or previous VMworlds and took part in the hands-on labs, you will know that VMware utilizes vPods, or nested ESXi, to deploy the labs. I suspect this functionality was added to vCloud Director so that VMware can easily leverage nested ESXi for hands-on labs or vSEL deployments, just like they did with Lab Manager previously.

While looking into this, I recalled a very interesting article by Jason Boche, Deploy ESX & ESXi With Hidden Lab Manager 4 Switch, in which Jason identifies a hidden flag in the Lab Manager database that enables a special feature for deploying nested ESX(i) VMs, including customization through the use of a special version of VMware Tools for ESX(i). I was curious to see if something similar existed in the new vCloud Director that provided the same kind of functionality.

Looking at the SQL install scripts located in /opt/vmware/vcloud-director/db/{oracle/mssql}, I noticed an interesting config called "extension.esxvm.enabled" in the NewInstall_Data.sql file.

As you can see from the insert statement, by default this value is set to "false", and we can confirm this after vCloud Director has been installed and configured by querying the database. Let's go ahead and update this value to "true" and see what happens.
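
If you want to check the current value and flip it yourself, something like the following against the vCD database will do it (Oracle syntax shown, placeholder credentials):

# Confirm the current value of the esxvm extension flag, then enable it
sqlplus -S vcddb/vcdpassword@XE <<EOF
SELECT name, value FROM config WHERE name='extension.esxvm.enabled';
UPDATE config SET value='true' WHERE name='extension.esxvm.enabled';
COMMIT;
EOF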

Once I verified the value had been successfully updated, I decided to use the same trick that Jason identified, the special "Uber Admin Screen", to load the changes. To my surprise, the trick still worked, but the page was not super Uber.... To enable the screen, you will need to click on the "About" page and then press CTRL+U (Ctrl + Shift + U), which will toggle the "Uber Admin Screen".

The available options are quite limited, as you can see, but there are some new hidden options such as new debug and console toggles. When you enable these options, you will see them at the bottom right of your screen, including a counter of the amount of memory being used by your vCloud Director deployment.

After toggling the hidden database feature, I was not able to see any additional pages relating to nested ESXi hosts, even after restarting vCloud Director. Through some testing, I found that "extension.esxvm.enabled" actually controls whether nested 64-bit VM support is enabled when the vCloud Agent is pushed out to the ESXi 5 hosts. Instead of manually adding vhv.allow = "TRUE" or using esxcli vcloud esxvm enable64bitnested, vCloud Director will automatically configure the ESXi hosts for you. I still suspect there is a hidden interface for managing vESXi hosts and leveraging a specialized version of VMware Tools to automate the deployment of nested ESXi, but I have not found it yet.

UPDATE: Take a look at this blog post for the full details on building your own vSEL - The Missing Piece In Creating Your Own Ghetto vSEL Cloud

Categories // Uncategorized Tags // esxcli, ESXi 5.0, vcd, vcloud director, vSphere 5.0
