The Missing Piece In Creating Your Own Ghetto vSEL Cloud

10.31.2011 by William Lam // 21 Comments

A while back I discovered an undocumented flag called "esxvm" in the SQL statements of the new vCloud Director 1.5 installer that suggested the possibility of deploying nested ESXi hosts in vCD. After further investigation, however, the flag only enables the automated configuration of an ESXi 5 parameter (vhv.allow), which is required to run nested ESXi 4.x/5.x hosts, as part of preparing a new ESXi 5 host in vCloud Director. There was still a missing piece to the puzzle to enable this functionality within the vCloud Director user interface.

The answer eventually came from attending a recent VMworld 2011 session in Las Vegas, CIM1436 - Virtual SE Lab (vSEL) Building the VMware Hybrid Cloud by Ford Donald of VMware. I will not go into detail about what vSEL is; if you would like more information, take a look at the blog post The Demo Cloud at VMworld Copenhagen or check out Ford's VMworld presentation online. In one of Ford's slides, he describes the steps necessary to enable nested ESXi, called ESX_VM mode, in vCloud Director, which actually consist of two parts:

  • Enable nested virtualization and 64-bit vVM support in vSphere 5
  • Enable a special mode in vCloud Director called ESX_VM that allows vSphere 4 and 5 hosts as valid guestOS types

There are also some additional steps that are required after enabling ESX_VM mode:

  • Preparing or re-preparing ESXi 5 hosts
  • Allowing for Promiscuous Mode in vCD-NI or VLAN-backed Network Pool

********************* DISCLAIMER *********************
This is not a configuration supported by VMware and it can disappear at any time; use at your own risk

********************* DISCLAIMER *********************

Note: I will assume the reader has a good understanding of how to install/configure vCloud Director and how it works. I will not be going into any details in configuring or installing vCD, you can find plenty of resources on the web including here, here, here and here. I will also assume you understand how to configure vCD-NI and VLAN-backed network pools in vCloud Director and how they work.

The first part is to enable nested virtualization (nested ESXi) support on the ESXi 5 hosts when they are being prepared by vCloud Director, by running the following SQL statement as noted in my earlier blog post Cool Undocumented Features in vCloud Director 1.5:

UPDATE config SET value='true' WHERE name='extension.esxvm.enabled';

The second part is to update the vCloud Director database to add support for both vSphere 4 and 5 hosts as valid guestOS types:

INSERT INTO guest_osfamily (family,family_id) VALUES ('VMware ESX/ESXi',6);

INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal,'ESXi 4.x', 'vmkernelGuest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 40);

INSERT INTO guest_os_type (guestos_id,display_name, internal_name, family_id, is_supported, is_64bit, min_disk_gb, min_memory_mb, min_hw_version, supports_cpu_hotadd, supports_mem_hotadd, diskadapter_id, max_cpu_supported, is_personalization_enabled, is_personalization_auto, is_sysprep_supported, is_sysprep_os_packaged, cim_id, cim_version) VALUES (seq_config.NextVal, 'ESXi 5.x', 'vmkernel5Guest', 6, 1, 1, 8, 3072, 7,1, 1, 4, 8, 0, 0, 0, 0, 107, 50);

To apply these SQL statements to your vCloud Director 1.5 database, you will need to log in to either your Oracle or SQL Server database and manually execute them using the account that you originally created.

Here is an example of executing the SQL statements on an Oracle Express 11g database (Oracle Express is not officially supported by VMware):
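
If you are using the Oracle Express database shown here, a session along the following lines would apply the statements (a sketch only; the vcloud account name, password, and XE connect identifier are assumptions based on a typical installation, so substitute your own values):

sqlplus vcloud/vcloudpassword@XE <<'EOF'
UPDATE config SET value='true' WHERE name='extension.esxvm.enabled';
INSERT INTO guest_osfamily (family,family_id) VALUES ('VMware ESX/ESXi',6);
-- followed by the two guest_os_type INSERT statements shown above
COMMIT;
EOF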

As you can see, we first create a new guest_osfamily type called "VMware ESX/ESXi" and provide a unique family_id; from a default installation of vCloud Director 1.5, the next available value will be 6. Next, we create the two new guest_os_type entries, "ESXi 4.x" and "ESXi 5.x", again providing unique guestos_id values; from a default installation of vCloud Director 1.5, the next available values will be 81 and 82. If any errors are thrown about a constraint being violated, the ids may already be in use; you can always query to see what the next value is or select a new id.
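
For example, a quick check along these lines will show which ids are already taken (again a sketch, using the same hypothetical Oracle Express connection as above):

sqlplus vcloud/vcloudpassword@XE <<'EOF'
SELECT MAX(family_id) FROM guest_osfamily;
SELECT MAX(guestos_id) FROM guest_os_type;
EOF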

Once you have executed the SQL statements, you will need to restart the vCloud Director Cell for the changes to take effect and if you already have prepared ESXi 5 hosts, you will need to re-prepare the hosts.

If you prefer not to do this manually, take a look at my blog post Automating vCloud Director 1.5 & Oracle DB Installation, which has been updated to allow you to enable ESX_VM mode with your vCloud Director 1.5 installation. There is a new flag in the vcd.rsp file called ENABLE_NESTED_ESX that can be toggled to true/false; when set to true, it will automatically execute the SQL statements as part of the vCloud Director 1.5 post-installation and restart the vCD Cell for you.
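
For reference, the relevant entry in the vcd.rsp response file would look something like the following (the flag name and its true/false values come from that post; everything else here is illustrative):

# Set to true to have the ESX_VM SQL statements executed automatically
# as part of the vCloud Director 1.5 post-installation
ENABLE_NESTED_ESX=true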

Here is a screenshot if you decide to enable this flag:

Finally, the last configuration tweak is to enable both promiscuous mode and forged transmits on either your vCD-NI or VLAN-backed Network Pool, which is a requirement for running nested ESXi hosts. Locate the name of your network pool so you can identify the corresponding distributed portgroup.

Next, you can either use the vCD API or log in to your vCenter Server and enable promiscuous mode for that specific distributed portgroup.

UPDATE: Thanks to @DasNing - You can also enable promiscuous mode by executing the following SQL query (substituting your network pool's name): UPDATE network_pool SET promiscuous_mode='1' WHERE name='<Name of your Network Pool>';

We are finally done with all the configurations!

If you have successfully completed the above, when you go to create a new virtual machine in vCloud Director, you should now have a new Operating System Family called "VMware ESX/ESXi".

Within this new OS family, you can now provision a new ESXi 4.x or ESXi 5.x guestOS.

Here is an example of my own vGhettoPod, which includes a vMA5 and a vESXi 5 host that I can use to perform various types of testing in my home lab.

Now you can create your own ghetto vSEL cloud using VMware vSphere 5, vCloud Director 1.5 and vShield 5!

Categories // Automation, ESXi, Nested Virtualization, Not Supported, Uncategorized Tags // ESXi 5.0, esxvm, nested, vcd, vcloud director, vsel, vSphere 5.0

How to Decrease iSCSI Login Timeout on ESXi 5?

10.14.2011 by William Lam // 6 Comments

VMware recently identified an issue where the iSCSI boot process may take longer than expected on ESXi 5. This can occur when the iSCSI targets are unavailable while the ESXi host is booting up, and the additional retry code that was added in ESXi 5 can delay the host startup. It has been noted that the number of retry iterations is hard-coded in the iSCSI stack to nine and cannot be modified.

UPDATE1: VMware just released the patch for iSCSI delay in ESXi 5 Express Patch 01 - kb.vmware.com/kb/2007108

Here is what you would see in /var/log/syslog.log after the ESXi host boots up:

Being the curious person that I am, I decided to see whether this hard-coded value can actually be modified or tweaked. I believe I have found a way to decrease the delay significantly, but this has only been tested in a limited lab environment. If you are potentially impacted by this iSCSI boot delay and would like to test this unsupported configuration change, I would be interested to see if it in fact reduces the delay.

*** DISCLAIMER ***

This is not supported by VMware; please test this in a staging/development environment before pushing it out to any critical systems
*** DISCLAIMER ***

My initial thought was to check out the iscsid.conf configuration file, but I noticed that VMware does not make use of this default configuration file (it is not automatically backed up by ESXi); instead, it uses a small sqlite database file located at /etc/vmware/vmkiscsid/vmkiscsid.db.

Since vmkiscsid.db is actively backed up by ESXi, any changes will persist through a system reboot and do not require any special tricks.

UPDATE: Thanks to Andy Banta's comment below, you can view the iSCSI sqlite database by using the unsupported --dump-db and -x flags in vmkiscsid.

To dump the entire database, you can use the following command:

vmkiscsid --dump-db

To execute a specific SQL query such as viewing a particular database table, you can use the following command:

vmkiscsid -x "select * from discovery"

Again, these flags are unsupported by VMware, so use them at your own risk, but they do save you from copying the iSCSI database to another host for edits.

To view the contents of the sqlite file, I scp'ed the file off to a remote host that has a sqlite3 client, such as the VMware vMA appliance. After a bit of digging and some trial and error, I found that the parameter discovery.ltime in the discovery table alters the time it takes for the iSCSI boot-up process to complete when the iSCSI targets are unavailable during boot. Before you make any changes, make a backup of your vmkiscsid.db file in case you need the original.
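
Something along the following lines covers both the backup and the copy (a sketch; the vMA hostname and destination path are placeholders):

# On the ESXi 5 host, keep a copy of the original iSCSI database
cp /etc/vmware/vmkiscsid/vmkiscsid.db /etc/vmware/vmkiscsid/vmkiscsid.db.orig

# Copy the database off to a host with a sqlite3 client, e.g. the vMA appliance
scp /etc/vmware/vmkiscsid/vmkiscsid.db vi-admin@vma.example.com:/tmp/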

To view the sqlite file, use the following command:

sqlite3 vmkiscsid.db
.mode line *
select * from discovery;

The default value is 27, and what I have found is that as long as it is not this particular value, the retry loop is not executed, or it exits almost immediately after retrying for only a few seconds.

To change the value, I used the following SQL command:

update discovery set 'discovery.ltime'=1; 

To verify the value, you can use the following SQL command:

.mode line *
select * from discovery;

Once you have confirmed the change, type .quit to exit and then upload the modified vmkiscsid.db file back to your ESXi 5 host.

Next, to ensure the changes are saved immediately to the backup bootbank, run /sbin/auto-backup.sh, which will force an ESXi configuration backup.
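
Putting those two steps together, something like the following would do it (the ESXi hostname and the source path on the remote host are placeholders):

# Copy the modified database back into place on the ESXi 5 host
scp /tmp/vmkiscsid.db root@esxi5.example.com:/etc/vmware/vmkiscsid/vmkiscsid.db

# Then, on the ESXi host, force a configuration backup so the change is
# immediately persisted to the backup bootbank
/sbin/auto-backup.sh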

At this point, to test, you can disconnect your iSCSI target from the network and reboot your ESXi 5 host; it should hopefully decrease the amount of time it takes to go through the iSCSI boot process.

As you can see from this screenshot of the ESXi syslog.log, it took only 15 seconds to retry and then continue through the boot-up process.

In my test environment, I set up a vESXi host with a software iSCSI initiator bound to three VMkernel interfaces and connected to ten iSCSI targets on a Nexenta VSA. I disconnected the network adapter on the iSCSI target, modified discovery.ltime on the ESXi host, and checked the logs to see how long it took to get past the retry code.

Here is a table of the results:

discovery.ltime    iSCSI Bootup Delay
1                  15 sec
3                  15 sec
6                  15 sec
12                 15 sec
24                 15 sec
26                 15 sec
27                 7 min
28                 15 sec
81                 15 sec

As you can see, only the value of 27 causes the extremely long delay (7 minutes in my environment); all other values behave pretty much the same (roughly 15 seconds). VMware did mention hard-coding the number of iterations to 9, and when you divide 27 by 9 you get 3. I tried using values that were multiples of three but was not able to find any correlation to the delay other than it not taking as long as the value 27. I also initially tested with 5 iSCSI targets and doubled it to 10, but it did not seem to be a factor in the overall delay.

I did experiment with other configuration parameters, such as node.session.initial_login_retry_max, but it did not change the amount of time or the number of iterations of the iSCSI retry code. Ultimately, I believe that due to the hard-coded retry iterations, modifying discovery.ltime bypasses the retry code or reduces the number of retries altogether. I am not an iscsid expert, so there is a possibility that other parameter changes could decrease the wait time.

UPDATE: Please take a look at Andy Banta's comment below regarding the significance of the value 27 and the official definitions of the discovery.ltime and node.session.initial_login_retry_max parameters. Even though the behavior from my testing seems to reduce the iSCSI boot delay, there is an official fix coming from VMware.

I would be interested to see if this hack holds true for environments with multiple network portals or a greater number of iSCSI targets. If you are interested or would like to test this theory, please leave a comment describing your environment.

Categories // Uncategorized Tags // ESXi 5.0, iscsi, vSphere 5.0

How to Create a vCenter Alarm to Monitor for root Logins

10.12.2011 by William Lam // 7 Comments

Another interesting question on the VMTN forums this week: a user was looking for a way to trigger a vCenter alarm whenever someone logs in to an ESX(i) host using the root account. By default there are several dozen pre-defined vCenter alarms that you can adjust or modify to your needs, but they do not cover every single condition/event that can be triggered via an alarm. This is where the power of the vSphere API comes in. If you browse through the available event types, you will find one that corresponds to sessions called sessionEvent, and within that category of events, you will see a UserLoginSessionEvent.

Now that we have identified the particular event we are interested in, we simply create a new custom alarm that monitors for this event and ensure the "userName" property matches "root", the user we want to alarm on. I wrote a vSphere SDK for Perl script called monitorUserLoginAlarm.pl that can be used to create an alarm on any particular user login.

The script requires only two parameters: alarmname (the name of the vCenter alarm to create) and user (the username to alarm on). Here is a sample of monitoring root user logins on an ESX(i) host:
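
The invocation itself would look something like this (a sketch; --server, --username and --password are the standard vSphere SDK for Perl connection options, and the values shown are placeholders):

./monitorUserLoginAlarm.pl --server vcenter.example.com --username administrator --password 'MyPassword' --alarmname "Root Login Alarm" --user root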

The alarm will be created at the vCenter Server level and you should see the new alarm after executing the script.

Note: The alarm action is currently to alert within vCenter, if you would like it to perform other operations such as sending an email or an SNMP trap, you can edit the alarm after it has been created by the script.

Next it is time to test out the new alarm. If you click on the "Alarms" tab under "Triggered Alarms" and log in to one of the managed ESX(i) hosts using a vSphere Client with the root account, you should see the new alarm trigger immediately.

If we view the "Tasks/Events" tab for more details, we can confirm the login event and that it was from someone using the root account.

As you can see, even though this particular event is not available as a default selection, you can still use the vSphere API to create a custom alarm that monitors for it.

I do not know what the original intent behind monitoring root logins was, but if there is a fear of the root account being used, the easiest way to prevent this is to enable Lockdown Mode for your ESXi host via vCenter.
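
If you do go the Lockdown Mode route, it can be toggled from the vSphere Client while the host is managed by vCenter; ESXi also exposes it through vim-cmd along the lines below, though I am quoting these from memory, so verify them on your build before relying on them:

# Check whether Lockdown Mode is currently enabled
vim-cmd vimsvc/auth/lockdown_is_enabled

# Enable Lockdown Mode
vim-cmd vimsvc/auth/lockdown_mode_enter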

Categories // Uncategorized Tags // alarm, api, root, vsphere sdk for perl
