Configuring per-VMDK IOPS reservations in vSphere 6.0

05.20.2015 by William Lam // 1 Comment

One of the new features in vSphere 6.0 that was quickly mentioned at the end of Duncan Epping's What's New Storage DRS blog post is the ability to configure an IOPS reservation on a per-VMDK basis, which is now integrated with both Storage IO Control (SIOC) and Storage DRS. As Duncan mentioned at the end of his article, this feature is only consumable through the vSphere API, and given that, it may not be widely known or used. The topic recently surfaced in an internal thread on how to set IOPS reservations, and below are the details if you wish to leverage this new vSphere 6.0 storage platform capability.

To be able to use this new feature, there are two requirements:

  1. You need to set the IOPS reservation value on an individual VMDK via its StorageIOAllocationInfo property, which contains a field called, not surprisingly, reservation.
  2. All ESXi hosts mounting the datastore must be running vSphere 6.0.

To be clear, this reservation property has been around since vSphere 5.5, but it only supported local datastores. In vSphere 6.0, shared datastores are now supported, and both SIOC and Storage DRS are aware of the reservation.

To exercise this vSphere API, I have created a simple PowerCLI script called configurePerVMDKIOPS.ps1 (works with both vSphere 5.x and 6.0), which you will need to edit to include your vCenter Server, the name of the VM on which you wish to set the IOPS reservation, the VMDK label, and the IOPS value.
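
Before running the full script, here is a minimal PowerCLI sketch of the underlying API call. This is illustrative only (the actual configurePerVMDKIOPS.ps1 may differ); the vCenter hostname is a placeholder, and the VM name, disk label, and IOPS value are taken from the example below:

# Illustrative sketch, not the exact configurePerVMDKIOPS.ps1
Connect-VIServer -Server "vcenter.example.com"   # placeholder vCenter

$vm   = Get-VM -Name "Photon"
$disk = Get-HardDisk -VM $vm -Name "Hard disk 1"

# Build a reconfig spec that edits the disk's StorageIOAllocationInfo
$spec   = New-Object VMware.Vim.VirtualMachineConfigSpec
$change = New-Object VMware.Vim.VirtualDeviceConfigSpec
$change.Operation = "edit"
$change.Device    = $disk.ExtensionData
if (-not $change.Device.StorageIOAllocation) {
    $change.Device.StorageIOAllocation = New-Object VMware.Vim.StorageIOAllocationInfo
}
$change.Device.StorageIOAllocation.Reservation = 2000   # IOPS reservation
$spec.DeviceChange = @($change)

# Push the change through the vSphere API
$vm.ExtensionData.ReconfigVM_Task($spec)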

Here is an example of the output when configuring a VM named Photon with an IOPS reservation of 2000 on Hard Disk 1:

[Screenshot: per-VMDK IOPS reservation script output]

I have been told that the plan is to eventually make this setting available in the vSphere Web Client. Though honestly, why would anyone want to perform this change across multiple VMs by hand when you can quickly and efficiently automate it across your environment with a simple script? 😉

Categories // Automation, vSphere 6.0 Tags // iops reservation, PowerCLI, sioc, storage drs, storage io control, StorageIOAllocationInfo, vSphere 6.0, vSphere API

Does SIOC actually require Enterprise Plus & vCenter Server?

10.10.2010 by William Lam // 1 Comment

After reading a recent blog post by Duncan Epping, "SIOC, tying up some loose ends", I decided to explore whether or not VMware's Storage I/O Control feature actually requires an Enterprise Plus license and vCenter Server. To be completely honest, Duncan's article got me thinking, but it was also my recent experience with VMware's vsish, and the blog post I wrote, "What is VMware vsish?", that made me think this might be a possibility. vsish is only available on ESXi 4.1 within Tech Support Mode, but if you have access to the debugging RPM from VMware, you can also obtain vsish for classic ESX.

Within vsish there is a storage section, and within that section a devices sub-section, which provides information regarding your storage devices including paths, partitions, IO statistics, queue depth, and the new SIOC state information.

Here is an example of the various devices that I can view on an ESXi 4.1 host:

~ # vsish
/> ls /storage/scsifw/devices/
t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/
mpx.vmhba1:C0:T0:L0/
mpx.vmhba32:C0:T0:L0/

Here is an example of various properties accessible to a given storage device:

/> ls /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/
worlds/
handles/
filters/
paths/
partitions/
uids/
iormInfo
iormState
maxQueueDepth
injectError
statson
stats
inquiryVPD/
inquirySTD
info

In particular, we are interested in iormState, and you can view its value by just using the cat command:

/> cat /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1596

This value may not mean a whole lot on its own; in my limited set of tests, I have seen both this value and 2000 as the default when SIOC is disabled. Now, since we can access this particular SIOC parameter, I wanted to see how the value is affected when SIOC is enabled and disabled. To test this, I used VMware KB1022091 to enable additional SIOC logging, which goes directly to /var/log/messages with the logger tag "storageRM"; this allows you to easily filter out the SIOC logs via a simple grep.

For testing purposes, you can just enable logging at level 2, which is more than sufficient to get the necessary output. Run the following command in Tech Support Mode to change the default SIOC logging level from 0 to 2:

~ # esxcfg-advcfg -s 2 /Misc/SIOControlLogLevel
Value of SIOControlLoglevel is 2

Now you will want to open a separate SSH session to your ESXi host and tail /var/log/messages to monitor the SIOC logs:

~ # tail -f /var/log/messages | grep storageRM
Oct 10 18:39:05 storageRM: Number of devices on host = 3
Oct 10 18:39:05 storageRM: Checked device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 iormEnabled= 0 LatThreshold =30
Oct 10 18:39:05 storageRM: Checked device mpx.vmhba1:C0:T0:L0 iormEnabled= 0 LatThreshold =232
Oct 10 18:39:05 storageRM: Checked device mpx.vmhba32:C0:T0:L0 iormEnabled= 0 LatThreshold =30
Oct 10 18:39:05 storageRM: rateControl: Current log level: 2, new: 2
Oct 10 18:39:05 storageRM: rateControl: Alas - No device with IORM enabled!

You should see something similar to the above. In my lab, the host sees an iSCSI volume, local storage, and a CD-ROM. Notice the iormEnabled flag: all three have SIOC disabled, along with the latency threshold specified by LatThreshold, which is 30ms by default.

Now that we know what these values are when SIOC is disabled, let's see what happens when we enable SIOC from vCenter on this ESXi 4.1 host. I am using an evaluation license for the host, which supports Storage I/O Control from a licensing perspective. After enabling SIOC with the default 30ms on my iSCSI volume, I took another look at the SIOC logs and saw some changes:

~ # tail -f /var/log/messages | grep storageRM
Oct 10 18:48:56 storageRM: Number of devices on host = 3
Oct 10 18:48:56 storageRM: Checked device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 iormEnabled= 1 LatThreshold =30
Oct 10 18:48:56 storageRM: Found device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 with datastore openfiler-iSCSI-1
Oct 10 18:48:56 storageRM: Adding device t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4 with datastore openfiler-iSCSI-1

As you can see, SIOC is now enabled and the iormEnabled flag has changed from 0 to 1. This should not be a surprise. Now let's take a look at the vsish storage property and see whether it has changed:

~ # vsish -e get /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1597

If you recall from the previous command above, the default value was 1596, and after enabling SIOC the value has incremented by one. I found this to be an interesting observation, and I tried a few other configurations, including enabling SIOC on local storage, and found that this value was always incremented by 1 when SIOC was enabled, and decremented or kept the same when SIOC was disabled.

As you may or may not know, SIOC does not use vCenter at runtime; vCenter is only required when enabling the feature, and from this simple test, that looks to be the case. It is also important to note, as pointed out by Duncan in his blog post, that the latency statistics are stored in an .iormstats.sf file within each VMFS datastore that has SIOC enabled. Putting all this together, I hypothesized that Storage I/O Control could actually be enabled without an Enterprise Plus license and without vCenter Server.

The test utilized the following configuration:

  • 2 x virtual ESXi 4.1 hosts licensed as free ESXi (vSphere 4.1 Hypervisor)
  • 1 x 50GB iSCSI volume exported from Openfiler VM
  • 2 x CentOS VM installed with iozone to generate some IO

Here is a screenshot of the two free-licensed ESXi 4.1 hosts displaying the licensed version, each running a CentOS VM residing on the shared VMFS iSCSI volume:

I configured vm1, which resides on esxi4-1, with its disk shares set to Low (default value of 500), and vm2, which resides on esxi4-4, with its disk shares set to High (default value of 2000):

I then ran iozone, a filesystem benchmark tool, on both CentOS VMs to generate IO on the single VMFS volume shared by both ESXi 4.1 hosts:

I then viewed the SIOC logs on both ESXi 4.1 hosts in /var/log/vmkernel and tailed the IO statistics for the VMFS iSCSI datastore using vsish:

Note: The gray bar tells you which host you are viewing the data for, which is displayed using GNU Screen. The first two screens display the current status of SIOC, which is currently disabled, and the second two screens display the current device queue depth, which in my lab environment defaulted to 128 (as I recall, it should be 32 by default).

Now I enable SIOC on both ESXi 4.1 hosts using vsish with the following command:

~ # vsish -e set /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState 1597

Note: Remember to run a "get" operation to check the default value; you just need to increment it by one to enable SIOC. From my testing, the default value will be either 1596 or 2000, which you would change to 1597 or 2001 respectively.
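
To put the note into practice, the get-then-set sequence for the device in my lab looks like this (substitute your own device ID):

~ # vsish -e get /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState
1596
~ # vsish -e set /storage/scsifw/devices/t10.F405E46494C45400A50555567414D2D443E6A7D276D6F6E4/iormState 1597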

You can now validate that SIOC is enabled by going back to your SSH session and verifying the logs:

As you can see, the iormEnabled flag has now changed from 0 to 1, which means SIOC is now enabled.

If you have running virtual machines on the VMFS volume and SIOC is enabled, you should now see an .iormstats.sf latency file stored on the VMFS volume:

After a while, you can view the IO statistics via vsish to see what the device queue is currently configured to and slowly watch the throttling based on latency. For this particular snapshot, vm1 was configured with "high" disk shares and vm2 with "low" disk shares; there is a large queue depth on the very bottom ESXi host versus a smaller queue depth on the other host.

Note: During my test I did notice the queue depth dramatically decreased from 128, and even from 32, down to single digits. I am pretty sure the limited resources in my lab are why some of these numbers look a little odd.

To summarize, it seems you can actually enable Storage I/O Control without the use of vCenter Server and an Enterprise Plus license; however, this requires vsish, which is only found on ESXi 4.1 and not on classic ESX 4.1. I also found that if you enable SIOC via this method and then join your host to vCenter Server, vCenter is not aware of these changes and marks SIOC as disabled even though the host actually has SIOC enabled. If you want vCenter to see the update, you will need to enable SIOC via vCenter.

I would also like to thank Raphael Schitz for helping me validate some of my initial findings.

Update: I also found a hidden Storage I/O Control API method called ConfigureDatastoreIORMOnHost, which allows you to enable SIOC directly on the host. This validates the claim above that this can be done directly on an ESX or ESXi host.

Categories // Uncategorized Tags // esxi4.1, sioc, vSphere 4.1

What is VMware vsish?

08.22.2010 by William Lam // 18 Comments

Recently, while testing automated ESXi kickstart installations, I needed to extract some information as part of the build process, but the utilities I had been using no longer existed in ESXi's Busybox console. Looking around, I found another way to extract the information I needed: VMware's undocumented vsish utility, also known as the VMkernel Sys Info Shell. There is not much information around the web regarding vsish (probably for good reason), but a few have described it as a representation of the classic Service Console /proc nodes, and it allows you to extract system reliability information similar to mcelog in Linux.

If I recall correctly, the vsish utility used to be bundled with classic ESX, but at some point it was removed. In ESXi, however, the utility is included, and that is also true for the latest release, ESXi 4.1. When you generate a vm-support log, a dump of the VSI nodes is generally included, which provides VMware support with the state of your system. To read the VSI node dump, you need a version of the vsish utility matching the version of ESX or ESXi you are running. Currently, for classic ESX, the vsish utility must be provided by VMware support via a debugging package that needs to be installed.

Note: A word of caution before using this utility: you should not make any changes that you are unfamiliar with. Always consult with VMware support before making changes, as they can severely impact your host and virtual machines. Okay, now on to the fun stuff 🙂

On the ESXi Busybox console, you can launch the vsish utility by just typing "vsish":

You can perform various operations such as listing the various nodes and getting and setting parameters. To see the available options, just type "help":

There is a huge amount of information that can be retrieved from vsish. One interesting node within vsish is called "config", which actually maps to the Advanced Settings found on an ESX(i) host:

As you can see, the majority of the sub-nodes within "config" are exposed in the Advanced Settings, but some are hidden. In fact, with ESX(i) 4.1, there are a total of 771 configurable options, 250 of which are hidden and can only be seen using vsish (more on the configuration options later)!

There are two ways to interact with vsish: you can interactively log in to the VSI shell and perform ls, get, or set operations, or you can perform the same operations in non-interactive mode.

Here is an example of an interactive session listing the configs under the "COW" node and getting and setting the value of "COWDiskSizeIncrement", which is one of the 250 hidden configuration options:
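
A minimal sketch of such a session, showing the commands only (the exact output format varies by build), would be:

~ # vsish
/> ls /config/COW/intOpts/
/> get /config/COW/intOpts/COWDiskSizeIncrement
/> set /config/COW/intOpts/COWDiskSizeIncrement 32768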

Here is an example of a non-interactive session performing the same operation as the one listed above:
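
Again as a commands-only sketch, the same operations with the -e flag would be:

~ # vsish -e ls /config/COW/intOpts/
~ # vsish -e get /config/COW/intOpts/COWDiskSizeIncrement
~ # vsish -e set /config/COW/intOpts/COWDiskSizeIncrement 32768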

With the release of vSphere 4.1, there have been a few new additions to the VSI nodes. Here is an excerpt slide from the "vSphere 4.1 to 4.0 differences" presentation by Iwan Rahabok, Senior Systems Consultant at VMware, listing some of the new Storage I/O Control features:

Here is a screenshot of some of these values in case you cannot make them out in the slide:

For more details on the differences between vSphere 4.0 and vSphere 4.1, check out the detailed two-part deep-dive PowerPoint presentation here.

vsish provides an enormous amount of information and I have only begun to scratch the surface. That said, I did manage to capture all the advanced host settings, including both public and hidden options. Using a few for loops and some shell scripting, I generated the following two lists:
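
For the curious, a hypothetical sketch of the kind of loop involved (not the original script) might look like the following; it assumes vsish -e ls prints sub-nodes with trailing slashes, as seen in the earlier listings:

# Hypothetical sketch: walk every option under /config and dump its value
for node in $(vsish -e ls /config/); do
  for opttype in $(vsish -e ls /config/${node}); do
    for opt in $(vsish -e ls /config/${node}${opttype}); do
      echo "/config/${node}${opttype}${opt}"
      vsish -e get "/config/${node}${opttype}${opt}"
    done
  done
done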

Complete vSphere ESXi 4.1 vsish configurations including hidden options - 771 Total:

For the complete list, take a look at https://s3.amazonaws.com/virtuallyghetto-download/complete_vsish_config.html

Hidden vSphere ESXi 4.1 vsish configurations only - 250 Total:

For the hidden list only, take a look at https://s3.amazonaws.com/virtuallyghetto-download/hidden_vsish_config.html

There are definitely some interesting options that can be configured, and I can see why VMware would want to hide these from the general public. What is nice about the compiled output is that it clearly states the path to each configuration item along with its current, default, min, and max values, whether it is hidden, and a description of the parameter. Again, use at your own risk. Hopefully these two documents will be useful for curious users exploring vsish advanced configs.

Using the lists above, you can actually query and modify these values using the standard esxcfg-advcfg utility that exists on both ESX and ESXi. The following example shows how to translate a vsish node path into the format required by the local esxcfg-advcfg utility.

In this example, we will use the vsish path "/config/COW/intOpts/COWDiskSizeIncrement".

Using a local copy of esxcfg-advcfg on ESX or ESXi, you will need to convert the above to the following:

~ # esxcfg-advcfg -g /COW/COWDiskSizeIncrement
Value of COWDiskSizeIncrement is 32768

Note: You just need to extract the root node ("COW") and the individual config leaf node ("COWDiskSizeIncrement") from "/config/COW/intOpts/COWDiskSizeIncrement", dropping the "config" and "intOpts" components.

Some other interesting nodes that I found that might be useful are under /system:
/system/bootOption
/system/bootCmdLine
/system/systemUuid
/system/bootFsUUID

One other interesting tidbit of information I found is under /system/version, which actually shows the build date and time of vSphere ESXi 4.1:
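
You can query it non-interactively like any other node (output omitted here, as it varies by build):

~ # vsish -e get /system/version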

Now that we have a better understanding of the vsish utility, how does this help with my original inquiry? I found that you can extract networking information for your VMkernel interfaces by looking at the /net/tcpip/* nodes:

As you can see from the output above, the format is hexadecimal, but when converted you will get the IP address, netmask, and gateway for a given VMkernel interface. To accomplish this, I used a modified Python script to convert these entries to their human-readable addresses:
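
The original script is not reproduced here, but a minimal sketch of the conversion logic might look like this. It assumes each address is reported as a 32-bit little-endian hex word (e.g. 0x0100a8c0 for 192.168.0.1), so verify the byte order on your own host:

#!/usr/bin/env python
# Sketch, not the original script: convert hex-encoded IPv4 addresses
# from the /net/tcpip/* vsish nodes into dotted-quad notation.
# Assumes 32-bit little-endian encoding, e.g. 0x0100a8c0 -> 192.168.0.1
import sys

def hex_to_ip(hex_str):
    value = int(hex_str, 16)
    # Pull out the four octets, least-significant byte first
    octets = [(value >> shift) & 0xff for shift in (0, 8, 16, 24)]
    return ".".join(str(o) for o in octets)

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        print("%s -> %s" % (arg, hex_to_ip(arg)))

For example, passing 0x0100a8c0 on the command line prints 192.168.0.1.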

Here are some additional links referencing vsish that may be of interest:

  • http://www.ntpro.nl/blog/archives/1388-Lets-create-some-Kernel-Panic-using-vsish.html

Categories // Uncategorized Tags // sioc, vsish, vSphere 4.1
