Does SIOC actually require Enterprise Plus & vCenter Server?

10.10.2010 by William Lam // 1 Comment

After reading a recent blog post by Duncan Epping, SIOC, tying up some loose ends, I decided to explore whether VMware's Storage I/O Control (SIOC) feature actually requires an Enterprise Plus license and vCenter Server. Duncan's article got me thinking, but it was also my recent experience with VMware's vsish, described in my post What is VMware vsish?, that made me suspect this might be possible. vsish is only available on ESXi 4.1 within Tech Support Mode, but if you have access to the debugging rpm from VMware, you can also obtain vsish for classic ESX.

Within vsish, there is a storage section, and within that section a devices sub-section that provides information about your storage devices, including paths, partitions, IO statistics, queue depth and the new SIOC state information.

Here is an example of the various devices that I can view on an ESXi 4.1 host:

~ # vsish
/> ls /storage/scsifw/devices/
t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/
mpx.vmhba1:C0:T0:L0/
mpx.vmhba32:C0:T0:L0/

Here is an example of various properties accessible to a given storage device:

/> ls /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/
worlds/
handles/
filters/
paths/
partitions/
uids/
iormInfo
iormState
maxQueueDepth
injectError
statson
stats
inquiryVPD/
inquirySTD
info

In particular, we are interested in iormState, and you can view its value with the cat command:

/> cat /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1596

This value may not mean a whole lot on its own; in my limited testing I have seen 1596 as the default value when SIOC is disabled, as well as 2000. Since we can access this particular SIOC parameter, I wanted to see how the value changes when SIOC is enabled and disabled. To test this, I followed VMware KB1022091 to enable additional SIOC logging, which goes directly to /var/log/messages with the logger tag "storageRM"; this allows you to easily filter the SIOC logs with a simple grep.

For testing purposes, you can just set the logging to level 2, which is more than sufficient to get the necessary output. Run the following command in Tech Support Mode to change the default SIOC logging level from 0 to 2:

~ # esxcfg-advcfg -s 2 /Misc/SIOControlLogLevel
Value of SIOControlLoglevel is 2
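
If you want to double-check the setting before or after changing it, the same advanced option can be read back with the -g flag:

~ # esxcfg-advcfg -g /Misc/SIOControlLogLevel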

Now you will want to open a separate SSH session to your ESXi host and tail /var/log/messages to monitor the SIOC logs:

~ # tail -f /var/log/messages | grep storageRM
Oct 10 18:39:05 storageRM: Number of devices on host = 3
Oct 10 18:39:05 storageRM: Checked device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 iormEnabled= 0 LatThreshold =30
Oct 10 18:39:05 storageRM: Checked device mpx.vmhba1:C0:T0:L0 iormEnabled= 0 LatThreshold =232
Oct 10 18:39:05 storageRM: Checked device mpx.vmhba32:C0:T0:L0 iormEnabled= 0 LatThreshold =30
Oct 10 18:39:05 storageRM: rateControl: Current log level: 2, new: 2
Oct 10 18:39:05 storageRM: rateControl: Alas - No device with IORM enabled!

You should see something similar to the above. In my lab, the host sees an iSCSI volume, local storage and a CD-ROM. Notice the iormEnabled flag: all three devices have SIOC disabled, and the latency threshold is specified by LatThreshold, which is 30ms by default.

Now that we know what these values are when SIOC is disabled, let's see what happens when we enable SIOC from vCenter on this ESXi 4.1 host. I am using an evaluation license on the host, which supports Storage I/O Control from a licensing perspective. After enabling SIOC with the default 30ms threshold on my iSCSI volume, I took a look at the SIOC logs and noticed some changes:

~ # tail -f /var/log/messages | grep storageRM
Oct 10 18:48:56 storageRM: Number of devices on host = 3
Oct 10 18:48:56 storageRM: Checked device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 iormEnabled= 1 LatThreshold =30
Oct 10 18:48:56 storageRM: Found device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 with datastore openfiler-iSCSI-1
Oct 10 18:48:56 storageRM: Adding device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 with datastore openfiler-iSCSI-1

As you can see, SIOC is now enabled and the iormEnabled flag has changed from 0 to 1. This should not be a surprise. Now let's take a look at the vsish storage property and see if it has changed:

~ # vsish -e get /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1597

If you recall from the earlier command, the default value was 1596, and after enabling SIOC the value has incremented by one. I found this to be an interesting observation, and after trying a few other configurations, including enabling SIOC on local storage, I found that this value was always incremented by 1 when SIOC was enabled and decremented, or left the same, when SIOC was disabled.

As you may or may not know, SIOC does not rely on vCenter at runtime; vCenter is only required when enabling the feature, and this simple test appears to confirm that. It is also important to note, as pointed out by Duncan in his blog post, that the latency statistics are stored in a .iormstats.sf file that lives on each VMFS datastore with SIOC enabled. Putting all this together, I hypothesized that Storage I/O Control could actually be enabled without an Enterprise Plus license and without vCenter Server.

The test utilized the following configuration:

  • 2 x virtual ESXi 4.1 hosts licensed as free ESXi (vSphere 4.1 Hypervisor)
  • 1 x 50GB iSCSI volume exported from Openfiler VM
  • 2 x CentOS VM installed with iozone to generate some IO

Here is a screenshot of the two ESXi 4.1 free licensed hosts displaying the licensed version, each running a CentOS VM residing on this shared VMFS iSCSI volume:

I configured vm1, which resides on esxi4-1, with its disk shares set to Low (default value of 500), and vm2, which resides on esxi4-4, with its disk shares set to High (default value of 2000):

I then ran iozone, a filesystem benchmark tool, on both CentOS VMs to generate IO on the single VMFS volume shared by both ESXi 4.1 hosts:
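
The exact iozone invocation is not captured above; run inside each CentOS VM, something along these lines is enough to generate sustained sequential write and read IO (the file size, record size and temp file path are arbitrary choices):

# iozone -i 0 -i 1 -r 64k -s 512m -f /tmp/iozone.tmp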

I then viewed the SIOC logs on both ESXi 4.1 hosts in /var/log/vmkernel and tailed the IO statistics for the VMFS iSCSI datastore using vsish:

Note: The gray bar tells you which host the data is being displayed for; the sessions are arranged using GNU Screen. The first two screens display the current status of SIOC, which is currently disabled, and the second two screens display the current device queue depth, which in my lab environment defaulted to 128; as I recall, it should be 32 by default.
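
If you want to approximate that view without GNU Screen, you can poll the same vsish nodes in a loop from Tech Support Mode; a rough sketch, assuming the queue depth of interest is the maxQueueDepth property listed earlier:

~ # while true; do vsish -e get /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/maxQueueDepth; sleep 5; done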

Now I enable SIOC on both ESXi 4.1 hosts using vsish with the following command:

~ # vsish -e set /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState 1597

Note: Remember to run a "get" operation first to check the default value; you then just need to increment it by one to enable SIOC. From my testing, the default value will either be 1596 or 2000, and you will change it to 1597 or 2001 respectively.
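
Putting the note into practice, the full sequence on my lab device looks like this (your device ID and starting value will differ):

~ # vsish -e get /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1596
~ # vsish -e set /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState 1597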

You can now validate that SIOC is enabled by going back to your SSH session and checking the logs:

As you can see, the iormEnabled flag has now changed from 0 to 1, which means SIOC is now enabled.

If you have running virtual machines on the VMFS volume and SIOC is enabled, you should now see an .iormstats.sf latency file stored on the VMFS volume:

After a while, you can view the IO statistics via vsish to see what the device queue is currently configured to and slowly watch the throttling based on latency. For this particular snapshot, you can see that vm1 was configured with the "high" disk share and vm2 with the "low" disk share, and the ESXi host at the very bottom has a large queue depth versus the other host, which has a much smaller queue depth.

Note: During my test I noticed that the queue depth dramatically decreased from 128, and even from 32, down to single digits. I am pretty sure the limited resources in my lab are the reason some of these numbers look a little odd.

To summarize, it seems that you can actually enable Storage I/O Control without vCenter Server and an Enterprise Plus license; however, this requires the use of vsish, which is only found on ESXi 4.1 and not on classic ESX 4.1. I also found that if you enable SIOC via this method and then join your host to vCenter Server, vCenter is not aware of the change and marks SIOC as disabled even though the host actually has SIOC enabled. If you want vCenter to see the update, you will need to enable SIOC via vCenter.

I would also like to thank Raphael Schitz for helping me validate some of my initial findings.

Update: I also found a hidden Storage I/O Control API method called ConfigureDatastoreIORMOnHost which allows you to enable SIOC directly on the host, which validates the claim above that this can be done directly on an ESX or ESXi host.

Categories // Uncategorized Tags // ESXi 4.1, sioc, vSphere 4.1

How to run a VM on Dropbox Storage

10.10.2010 by William Lam // 2 Comments

In my previous blog post How to backup VMs in ESX onto Dropbox, I showed you how to back up the virtual machines on your ESX host to your Dropbox account, but who said it should stop there? After playing with Dropbox on ESX, I had a crazy idea: try to run a virtual machine stored in my Dropbox account. Since I am using the free Dropbox account, I have a maximum of 2GB of online storage, so I spun up a tiny Linux virtual machine and uploaded it to my Dropbox account. Not surprisingly, I was able to register the virtual machine and fire it up, and it worked like a charm.

Now, this got me thinking ... if I can run a virtual machine from Dropbox via ESX, could I get another ESX host to see this same Dropbox account? The answer is YES! One feature of Dropbox is the ability to access your files across multiple devices: Windows, Linux, Mac OS X desktop, iPhone, iPad, Android and, of course, ESX. For demonstration purposes, I created two virtual ESX 4.1 hosts called west and east and authorized both hosts to access my Dropbox account.

Here is a screenshot of the two ESX hosts via "My Computers" tab on Dropbox account:

Here is a screenshot of the small 1GB Linux Debian virtual machine I created and uploaded to my Dropbox account:

Here is a screenshot of both ESX hosts (east and west) accessing the directory of the small Linux virtual machine:

I registered the virtual machine on the west coast ESX host and here is a side by side screenshot of both vSphere Clients:

As you can see, the virtual machine is powered on and running. If we take a look at where the virtual machine configuration file and disks are stored, you will see they are currently on the Dropbox account accessed by the "west" coast ESX host:

Here is a screenshot showing the registration of the small Linux VM on west and there is currently nothing registered on east:

Let's say we had a failure on the west coast ESX host and we need to access this virtual machine via the east coast ESX host. I registered the virtual machine there and powered it on, and now the system is back up and running again:
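
For reference, registering and powering on the VM on classic ESX can be done straight from the service console; a minimal sketch, assuming the VM's files sit in a debian-vm folder under the Dropbox directory on the local VMFS datastore (both names are illustrative):

~ # vmware-cmd -s register /vmfs/volumes/datastore1/Dropbox/debian-vm/debian-vm.vmx
~ # vmware-cmd /vmfs/volumes/datastore1/Dropbox/debian-vm/debian-vm.vmx start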

Again, we can see the configuration file and virtual disk are now being accessed from the Dropbox account by the east coast ESX host:

We also confirm that the virtual machine is now registered on the east coast ESX host and there is nothing running on the west coast ESX host:

As you can see from this simple demonstration, you can easily run a virtual machine and have it accessible via multiple ESX hosts. I would not recommend you go out and start putting your production VMs on Dropbox, but it is an interesting idea for cloud storage 😉

Note: While testing, I found the synchronization between the two ESX hosts to be slightly off; the files were not always up to date when a change was made. I had to restart the Dropbox daemon several times after I created the virtual machine before the other ESX host could see the updated files. I am pretty sure it is an issue with the daemon rather than with Dropbox itself, since I could see the files updated in the web browser immediately. I would be very careful to ensure that only one host is accessing the files at a time, else you could see some discrepancies.

Categories // Uncategorized Tags // dropbox, ESX 4.0, vSphere 4.1

How to backup VMs in ESX onto Dropbox

10.10.2010 by William Lam // 5 Comments

I previously wrote about backing up virtual machines directly from ESX and ESXi onto both Amazon S3 and Google Storage, but there was actually a third online file hosting company that I wanted to get working: Dropbox. Dropbox is a relatively new file hosting company that launched back in 2008 and has gained popularity for its ease of use and ability to access and share files across multiple devices. While researching Dropbox initially, I discovered there was a Python-based CLI which I was hoping would install and function on ESX(i). This unfortunately did not pan out due to various Python library dependencies, including a newer version of Python.

While scouring the web, I recently found out that Dropbox has actually released a Linux client binary, and I thought I'd kick the tires and see if I could get it running on ESX(i). After a few minutes of testing, I found that it was possible to get it running on classic ESX, but there are still certain Python dependencies that prevent the Dropbox client from running on ESXi.

Before you begin, you will need to sign up for a free Dropbox account. With the free account you automatically get 2GB of online storage; if you want more, you can pay for up to 100GB. The following has been validated on ESX 4.1; I have not tested this on any other ESX version and your results may vary. ESXi is not supported, as mentioned earlier.

1. Download the latest Dropbox Linux Client here.

2. You will need to upload the Dropbox tar ball to your ESX host; you can use scp on UNIX/Linux or WinSCP if you are using a Windows system.
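
For example, from a UNIX/Linux workstation the upload is a one-liner (the ESX hostname is illustrative; omitting a remote path drops the file into root's home directory):

$ scp dropbox-lnx.x86_64-0.7.110.tar.gz root@esx-host: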

3. You should now have the tar ball file sitting in the root directory of your ESX host:

4. You will now need to extract the contents of the tar ball by running the following:

tar -zxvf dropbox-lnx.x86_64-0.7.110.tar.gz

5. You should now have a hidden directory called .dropbox-dist in your current working directory:

6. If you are starting the Dropbox client for the first time, it will default to using the home directory of the current user for your Dropbox share. In our case, that would be /root/Dropbox, which is probably not what you want. We will instead point our home directory at a VMFS volume path by setting the HOME environment variable:
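
The command itself is just an export of HOME; a minimal sketch, assuming datastore1 is the VMFS volume that will hold the Dropbox folder (the datastore name is illustrative):

~ # export HOME=/vmfs/volumes/datastore1
~ # echo $HOME
/vmfs/volumes/datastore1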

As you can see, our new home directory is now set to a VMFS datastore. Once the Dropbox client starts, it will create a base64-encoded string of the path, which is stored in a configuration file once you have authorized the addition of this system to your Dropbox account.

Note: Once the default Dropbox path is set, you need to ensure that the HOME environment variable is always set to the directory you specified above, else when you start Dropbox it will think it is a new setup.

7. We will now start the Dropbox daemon and ensure that it runs in the background:
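
A minimal sketch of starting the daemon, assuming the tar ball was extracted under /root as in step 4 (dropboxd is the binary shipped inside .dropbox-dist):

~ # /root/.dropbox-dist/dropboxd &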

If you have successfully started the Dropbox daemon, a unique URL will be generated for your system, which is used to authorize it to access your Dropbox account. Take the URL and paste it into a browser; it should ask you to log in to authorize the system.

8. If this is the first time you are using Dropbox, once you have signed in you will be brought to the Files tab, which lists all the folders and files that are currently accessible to you. By default, you will have a Public and a Photos folder, both of which are empty:

9. If you go back to your ESX host, you will now notice some output regarding Nautilus; you can ignore this error as the packages are not required for functionality:

10. Now, if you remember, we set the home directory to trick Dropbox into putting its folders under a VMFS datastore; you will now see a new directory called Dropbox which contains the Public and Photos folders you saw in your web browser:

11. For demo purposes, I created a Backup directory in the Dropbox root folder on the ESX host, which will be visible from your web browser:

I then created a dummy 1MB VMDK in the same folder, as if you were copying a VM to your Dropbox account:
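
A quick sketch of those two steps, assuming HOME still points at the VMFS datastore from step 6 (the dummy file name is illustrative; vmkfstools -c creates a new virtual disk of the given size):

~ # mkdir -p $HOME/Dropbox/Backup
~ # vmkfstools -c 1m $HOME/Dropbox/Backup/dummy.vmdk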

You can now go back to your web browser and see the VMDK file that was just created in the Backup directory:

There you have it, you can now transfer files or backup your virtual machines from your ESX host to your Dropbox account.

Earlier I mentioned that there are configuration files that tell the Dropbox client this system is authorized to connect to your Dropbox account; they store both the system ID and the Dropbox path. If you do a long listing in the VMFS volume that was used to store your Dropbox folder, you will notice a hidden directory called .dropbox:

There are two database files: dropbox.db, which contains the files and folder structures as they are synced down to the host, and host.db, which contains the system's ID and the Dropbox path encoded as a base64 string. You can decode and verify the path by using this website: http://webnet77.com/cgi-bin/helpers/base-64.pl
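
If you prefer to decode it locally, the service console's Python can do the same thing; a quick sketch, where the encoded string is an illustrative example that decodes to /vmfs/volumes/datastore1:

~ # python -c "import base64; print base64.b64decode('L3ZtZnMvdm9sdW1lcy9kYXRhc3RvcmUx')"
/vmfs/volumes/datastore1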

Here is what the host.db file looks like:

The second line contains the Dropbox path; you should not try to edit this file manually as it may cause synchronization issues. If you look at the Dropbox documentation, there is a Python script that allows you to change the path, but it requires sqlite3, which is not available by default on ESX.

Categories // Uncategorized Tags // dropbox, ESX 4.0, vSphere 4.1

