1200+ undocumented .vmx parameters

10.31.2010 by William Lam // 5 Comments

Recently, while performing some skunkworks testing in my personal lab, I came across a slew of documented and undocumented virtual machine .vmx configuration parameters. Using one of my favorite UNIX/Linux utilities, strings, I was able to uncover some interesting things in the /usr/lib/vmware/bin/vmware-vmx binary, which is used to load a virtual machine's configuration file.
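If you want to poke around yourself, the basic approach is nothing more than running strings against the binary and filtering for whatever you are curious about (the grep pattern below is just an example, and I am assuming strings is available wherever you copy the binary to):

~ # strings /usr/lib/vmware/bin/vmware-vmx | grep -i darwin
~ # strings /usr/lib/vmware/bin/vmware-vmx | sort -u > vmx-strings.txt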

Here are some of the interesting observations I have made:


vSphere is hypervisor aware?

%s: %s detected by CPUID
%s: VMware detected
Microsoft Hyper-V
%s: Xen detected by hypercall
Xen detected but hypervisor unrecognized (Xen variant?)

I noticed the above strings around detecting certain guest hypervisors. Is this a hint that VMware is going to support running other hypervisors as guests, specifically Microsoft Hyper-V and Xen?

vSphere to support Mac OS X?

Linux Host
Windows Host
Mac OS Host

There was also some text listing the various host types, including a Mac OS host.

Make sure that you have installed all available Mac OS X software updates.
@&!*@*@(msg.cdrom.darwindisconnect)Your Mac OS guest is using this CD-ROM device. The safest way to disconnect this virtual CD-ROM is by pressing %s, then ejecting the media from inside the guest%s. To continue anyway, press %s.%s
@&!*@*@(msg.Backdoor.OsNotMacOSXServer)The guest operating system is not Mac OS X Server.
@&!*@*@(msg.cpuid.darwinWithBTHV)Mac OS X is not supported with software virtualization. Change the execution mode to automatic.
@&!*@*@(msg.cpuid.darwinWithBT)Mac OS X is not supported with software virtualization. To run Mac OS X you need a host on which %s supports hardware virtualization.
isolation.bios.IsGOS.Darwin

There was also some text listing various messages regarding Mac OS X.

sbios
vbios
bios440
efi32
efi64
nvram
lsibios
nbios
nxbios
nx3bios
e1000bios
vmibios
vmmmods
sas1068bios
pvscsibios

As you can see, there is mention of EFI support, which is required to boot Mac OS X. Does this mean a future version of vSphere will support virtualizing Mac OS X?
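For what it is worth, if EFI firmware for virtual machines does become a supported option, I would expect it to be selected with a single .vmx entry along these lines (this key is purely an assumption on my part and is not documented for vSphere 4.1):

firmware = "efi"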

New guestOS types?

darwin10
darwin10-64
darwin-64
mandrake-64
opensuse
opensuse-64
winServer2008Cluster-32
winServer2008Cluster-64
winServer2008Datacenter-32
winServer2008Datacenter-64
winServer2008DatacenterCore-32
winServer2008DatacenterCore-64
winServer2008Enterprise-32
winServer2008Enterprise-64
winServer2008EnterpriseCore-32
winServer2008EnterpriseCore-64
winServer2008SmallBusiness-32
winServer2008SmallBusiness-64
winServer2008SmallBusinessPremium-32
winServer2008SmallBusinessPremium-64
winServer2008Standard-32
winServer2008Standard-64
winServer2008StandardCore-32
winServer2008StandardCore-64
winServer2008Web-32
winServer2008Web-64
XenVMMXenVMM

There was also a section that listed all supported guestOS types; here you can see a few more were added between vSphere 4.0 and 4.1. One interesting thing that I am not sure a lot of people have noticed is the VirtualMachineGuestOsIdentifier in the vSphere API, which basically provides the guestOS identifiers supported in each release of VI/vSphere. Interestingly enough, darwin guestOS support has been documented since vSphere 4.0.
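For reference, the guestOS identifier is just another entry in the .vmx file; an illustrative (and not necessarily supported) example would be:

guestOS = "darwin10-64"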

Though we all know we cannot run Mac OS X on ESX(i) ... at least not just yet, from what the above is hinting at.

These were just a few of the interesting things I found while parsing through the strings output of the ESX 4.1 vmware-vmx binary.

Here is a collection of more than 1,200 documented and undocumented .vmx configuration parameters.

**** These are not documented by VMware, use at your own risk! ****

https://s3.amazonaws.com/virtuallyghetto-download/hidden_vmx_params.html

**** These are not documented by VMware, use at your own risk! ****

Some of these hidden .vmx entries have already been shared by VMware and the community; here are just a few, with a small illustrative snippet after the list:

  • http://www.virtuallyghetto.com/2010/10/how-to-control-maximum-number-of-vmware.html
    •  snapshot.maxSnapshots = Control the maximum number of VMware snapshots
  • http://www.vcritical.com/2009/05/vmware-esx-4-can-even-virtualize-itself/
    • monitor_control.restrict_backdoor = Run virtual ESX(i) hosts on top of ESX or ESXi
  • http://vinf.net/2009/06/07/vsphere-cannot-enable-ft-for-a-nested-vm/ 
    •  replay.allowBTOnly = Allow FT to be enabled on vVM running on vESX(i)
    •  replay.allowBT = Allow FT to be enabled on vVM running on vESX(i)
  • http://kb.vmware.com/kb/1010184
    • cpuid.coresPerSocket = Specify the number of cores per physical socket
  • http://www.sanbarrow.com/vmx/vmx-advanced.html 
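For illustration only, here is how a few of these entries would look inside a .vmx file (the values are examples, not recommendations):

snapshot.maxSnapshots = "10"
monitor_control.restrict_backdoor = "TRUE"
cpuid.coresPerSocket = "2"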

Categories // Uncategorized Tags // vmx, vSphere 4.1

resxtop bug in vCLI 4.1 not vMA 4.1

10.17.2010 by William Lam // Leave a Comment

I recently noticed a thread in the vMA forums regarding an issue using resxtop on vMA 4.1 to view VM disk statistics on an ESXi 4.0 host. The thread was started on July 26th, 2010 and, as far as I could tell, no resolution was ever provided. A recent comment left on October 14th, 2010 by another user experiencing the same behavior got my attention while browsing the VMTN forums. I decided to perform a small test to see if this was in fact an issue, and it turns out it may be a bug in resxtop running on vMA 4.1.

The test environment consists of the following:

  • 1 x vESXi 4.0u2 running 1 VM
  • 1 x vESXi 4.1 running 1 VM
  • 1 x vMA 4.0
  • 1 x vMA 4.1

Here is a screenshot of running esxtop locally within Tech Support Mode (Busybox console) on the ESXi 4.0 host; as you can see, the VM disk statistics are visible and present:

Here is a screenshot of running esxtop locally within Tech Support Mode (Busybox console) on the ESXi 4.1 host; as you can see, the VM disk statistics are visible and present:

Now here is a screenshot of running both vMA 4.1 (on top) and vMA 4.0 (on bottom) connecting to an ESXi 4.0 host running a single virtual machine called VM2. I use resxtop to connect to the ESXi 4.0 host and select the "v" option for VM disk statistics, and as you can see from the screenshot, no statistics are displayed when using vMA 4.1:
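For reference, the resxtop invocation from vMA is nothing special (the hostname below is just a placeholder); once connected, you press "v" to switch to the VM disk statistics view:

vi-admin@vma41:~> resxtop --server esxi40.mylab.local --username root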

I perform the exact same test, but now connecting to an ESXi 4.1 host using both vMA 4.1 (on top) and vMA 4.0 (on bottom), and what is actually surprising is that the VM disk statistics show up for both vMA 4.0 and vMA 4.1:

It looks like something changed in the resxtop binary between vMA 4.0 and vMA 4.1 that causes the VM disk statistics on ESX(i) 4.0 hosts not to be visible. I have not found any VMware KB articles documenting this issue, nor anything in vMA's release notes stating that this configuration is not supported. This looks like a bug to me and I will try to follow up with vMA's product manager to get an official word.

Note: I used ESXi since it was quicker to deploy for this test, but the issue affects both ESX and ESXi 4.0 when using resxtop from vMA 4.1

UPDATE: After further investigation, I found out the issue is in fact with the vCLI 4.1 installation and not with vMA 4.1 itself. To confirm, I spun up a CentOS VM, installed and individually tested vCLI 4.0u2 and vCLI 4.1, and experienced the same behavior as in vMA. I have already reported the issue to the vMA product manager and hopefully we can get this resolved in either a patch or an updated release.

Categories // Uncategorized Tags // resxtop, vma, vSphere 4.1

Does SIOC actually require Enterprise Plus & vCenter Server?

10.10.2010 by William Lam // 1 Comment

After reading a recent blog post by Duncan Epping, SIOC, tying up some loose ends, I decided to explore whether or not VMware's Storage I/O Control (SIOC) feature actually requires an Enterprise Plus license and vCenter Server. To be completely honest, Duncan's article got me thinking, but it was also my recent experience with VMware's vsish and the blog post I wrote, What is VMware vsish?, that made me think this might be a possibility. vsish is only available on ESXi 4.1 within Tech Support Mode, but if you have access to the debugging RPM from VMware, you can also obtain vsish for classic ESX.

Within vsish there is a storage section, and within that a devices sub-section, which provides information about your storage devices, including paths, partitions, IO statistics, queue depth and the new SIOC state information.

Here is an example of the various devices that I can view on an ESXi 4.1 host:

~ # vsish
/> ls /storage/scsifw/devices/
t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/
mpx.vmhba1:C0:T0:L0/
mpx.vmhba32:C0:T0:L0/

Here is an example of various properties accessible to a given storage device:

/> ls /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/
worlds/
handles/
filters/
paths/
partitions/
uids/
iormInfo
iormState
maxQueueDepth
injectError
statson
stats
inquiryVPD/
inquirySTD
info

In particular, we are interested in iormState; you can see its value by just using the cat command:

/> cat /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1596

This value may not mean a whole lot; from my limited set of tests, I have seen 1596 as the default value when SIOC is disabled, as well as 2000. Now, since we can access this particular SIOC parameter, I wanted to see how the value is affected when SIOC is enabled and disabled. To test this, I used VMware KB1022091 to enable additional SIOC logging, which goes directly to /var/log/messages with the logger tag "storageRM"; this allows you to easily filter out the SIOC logs with a simple grep.

For testing purposes, you can just set the logging to level 2, which is more than sufficient to get the necessary output. Run the following command from Tech Support Mode to change the default SIOC logging level from 0 to 2:

~ # esxcfg-advcfg -s 2 /Misc/SIOControlLogLevel
Value of SIOControlLoglevel is 2
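If you want to double-check the setting afterwards, the same command with the -g flag reads the value back:

~ # esxcfg-advcfg -g /Misc/SIOControlLogLevel
Value of SIOControlLoglevel is 2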

Now you will want to open a separate SSH session to your ESXi host and tail /var/log/messages to monitor the SIOC logs:

~ # tail -f /var/log/messages | grep storageRM
Oct 10 18:39:05 storageRM: Number of devices on host = 3
Oct 10 18:39:05 storageRM: Checked device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 iormEnabled= 0 LatThreshold =30
Oct 10 18:39:05 storageRM: Checked device mpx.vmhba1:C0:T0:L0 iormEnabled= 0 LatThreshold =232
Oct 10 18:39:05 storageRM: Checked device mpx.vmhba32:C0:T0:L0 iormEnabled= 0 LatThreshold =30
Oct 10 18:39:05 storageRM: rateControl: Current log level: 2, new: 2
Oct 10 18:39:05 storageRM: rateControl: Alas - No device with IORM enabled!

You should see something similar to the above. In my lab it is seeing an iSCSI volume, local storage and a CD-ROM; you will notice there is an iormEnabled flag, all three devices have SIOC disabled, and the latency threshold is specified by LatThreshold, which is 30ms by default.

Now that we know what these values are when SIOC is disabled, let's see what happens when we enable SIOC from vCenter on this ESXi 4.1 host. I am using an evaluation license for the host, which supports Storage I/O Control from a licensing perspective. After enabling SIOC with the default 30ms on my iSCSI volume, I took a look at the SIOC logs and saw some changes:

~ # tail -f /var/log/messages | grep storageRM
Oct 10 18:48:56 storageRM: Number of devices on host = 3
Oct 10 18:48:56 storageRM: Checked device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 iormEnabled= 1 LatThreshold =30
Oct 10 18:48:56 storageRM: Found device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 with datastore openfiler-iSCSI-1
Oct 10 18:48:56 storageRM: Adding device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 with datastore openfiler-iSCSI-1

As you can see, SIOC is now enabled and the iormEnabled flag has changed from 0 to 1. This should not be a surprise. Now let's take a look at the vsish storage property and see if that has changed as well:

~ # vsish -e get /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1597

If you recall from the previous command above, the default value was 1596, and after enabling SIOC the value has incremented by one. I found this to be an interesting observation, and after trying a few other configurations, including enabling SIOC on local storage, I found that this value was always incremented by 1 when SIOC was enabled and decremented (or left the same) when SIOC was disabled.

As you may or may not know, SIOC does not use vCenter Server at runtime; vCenter is only required when enabling the feature, and from this simple test that looks to be the case. It is also important to note, as pointed out by Duncan in his blog post, that the latency statistics are stored in an .iormstats.sf file within each VMFS datastore that has SIOC enabled. Putting all this together, I hypothesize that Storage I/O Control could actually be enabled without an Enterprise Plus license and without vCenter Server.

The test utilized the following configuration:

  • 2 x virtual ESXi 4.1 hosts licensed as free ESXi (vSphere 4.1 Hypervisor)
  • 1 x 50GB iSCSI volume exported from Openfiler VM
  • 2 x CentOS VM installed with iozone to generate some IO

Here is a screenshot of the two free-licensed ESXi 4.1 hosts displaying the licensed version, each running a CentOS VM residing on the shared VMFS iSCSI volume:

I configured vm1, which resides on esxi4-1, with disk shares set to Low (default value of 500), and vm2, which resides on esxi4-4, with disk shares set to High (default value of 2000):

I then ran iozone, a filesystem benchmark tool, on both CentOS VMs to generate some amount of IO on the single VMFS volume shared by both ESXi 4.1 hosts:
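The exact iozone flags are not that important; an invocation along these lines (file path and sizes are just examples) is enough to generate sustained sequential write and read load:

# iozone -i 0 -i 1 -r 64k -s 1g -f /root/iozone.tmp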

I then viewed the SIOC logs on both ESXi 4.1 hosts in /var/log/vmkernel and tailed the IO statistics for the VMFS iSCSI datastore using vsish:

Note: The gray bar (displayed by GNU Screen) tells you which host the data is being shown for. The first two screens display the current status of SIOC, which is currently disabled, and the second two screens display the current device queue depth, which in my lab environment defaulted to 128 (by default it should be 32, as I recall).

Now I enable SIOC on both ESXi 4.1 hosts using vsish by performing the following command:

~ # vsish -e set /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState 1597

Note: Remember to run a "get" operation first to check the default value; you just need to increment it by one to enable SIOC. From my testing, the default value will be either 1596 or 2000, and you will change it to 1597 or 2001 respectively.
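Here is a small sketch of doing the get-and-increment in one go (assuming the BusyBox shell in Tech Support Mode, which supports command substitution and arithmetic expansion; the device ID is the one from my lab):

~ # DEV=t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4
~ # CUR=$(vsish -e get /storage/scsifw/devices/${DEV}/iormState)
~ # vsish -e set /storage/scsifw/devices/${DEV}/iormState $((CUR + 1))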

You can now validate that SIOC is enabled by going back to your SSH session and verifying the logs:

As you can see, the iormEnabled flag has now changed from 0 to 1, which means SIOC is now enabled.

If you have virtual machines running on the VMFS volume and SIOC is enabled, you should now see an .iormstats.sf latency file stored in the VMFS volume:
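A quick way to check for the file from Tech Support Mode (the datastore name here is the one from my lab):

~ # ls -la /vmfs/volumes/openfiler-iSCSI-1/.iormstats.sf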

After a while, you can view the IO statistics via vsish to see what the device queue is currently configured to and slowly watch it throttle based on latency. For this particular snapshot, vm1 was configured with the "high" disk shares and vm2 with the "low" disk shares, and you can see a larger queue depth for the very bottom ESXi host versus the other host, which has a smaller queue depth.

Note: During my test I did notice that the queue depth dramatically decreased, from 128 (and even from 32) down to single digits. I am pretty sure it was due to the limited amount of resources in my lab that some of these numbers looked a little odd.

To summarize, it seems that you can actually enable Storage I/O Control without the use of vCenter Server and an Enterprise Plus license; however, this requires the use of vsish, which is only found on ESXi 4.1 and not on classic ESX 4.1. I also found that if you enable SIOC via this method and join your host to vCenter Server, vCenter is not aware of these changes and marks SIOC as disabled even though the host actually has SIOC enabled. If you want vCenter to see the update, you will need to enable SIOC via vCenter.

I would also like to thank Raphael Schitz for helping me validate some of my initial findings.

Update: I also found a hidden Storage I/O Control API method called ConfigureDatastoreIORMOnHost, which allows you to enable SIOC directly on the host. This validates the claim above that this can be done directly on an ESX or ESXi host.

Categories // Uncategorized Tags // ESXi 4.1, sioc, vSphere 4.1
