New Performance Metrics In vSphere 5

08.03.2011 by William Lam // 2 Comments

I recently had to look at some performance metrics in my vSphere 5 lab and was curious whether VMware had documented all the new performance metrics. I headed over to the vSphere 5 API reference guide and, to my surprise, the metrics listed were exactly the same as in the vSphere 4 API reference guide. Yet looking at the vSphere Client, it was obvious there were new performance metrics for features such as Storage DRS that did not exist in vSphere 4.

Using a method similar to the one in a previous post about Power performance metrics, I extracted all the new metrics in vSphere 5 and created the table below, which includes the metric name (rollup, units and internal name), the collection level and the description of each metric. There are a total of 129 new performance metrics, covering features such as Storage DRS and HBR (Host Based Replication).
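
For reference, here is a minimal sketch of how this counter metadata can be dumped through the vSphere API. It uses pyVmomi with placeholder vCenter credentials, which is an assumption on my part and not the tooling used for the original extraction; diffing the output from a vSphere 4 and a vSphere 5 environment yields the delta summarized in the table below.

#!/usr/bin/env python
# Sketch: dump every performance counter (group, rollup.units.internal name,
# stat level and description) exposed by vCenter/ESXi. pyVmomi and the
# connection details below are assumptions, not what the original post used.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com",            # placeholder
                  user="administrator@vsphere.local",    # placeholder
                  pwd="VMware1!",                         # placeholder
                  sslContext=ssl._create_unverified_context())
try:
    perf_mgr = si.RetrieveContent().perfManager
    for c in perf_mgr.perfCounter:
        # Roughly the naming convention used in the table: rollup.units.internal name
        name = "%s.%s.%s" % (c.rollupType, c.unitInfo.key, c.nameInfo.key)
        print("%-16s %-50s %d  %s" % (c.groupInfo.key, name, c.level,
                                      c.nameInfo.summary))
finally:
    Disconnect(si)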

Hopefully this will be fixed in the API documentation when vSphere 5 goes GA, as I recall providing the same feedback during the beta program.

Metric Stat Level Description
cpu
average.MHz.capacity.provisioned 3 Capacity in MHz of the physical CPU cores
average.MHz.capacity.entitlement 1 CPU resources devoted by the ESX scheduler to virtual machines and resource pools
average.MHz.capacity.usage 3 CPU usage in MHz during the interval
average.MHz.capacity.demand 2 The amount of CPU resources VMs on this host would use if there were no CPU contention or CPU limit
average.percent.capacity.contention 2 Percent of time the VMs on this host are unable to run because they are contending for access to the physical CPU(s)
average.number.corecount.provisioned 2 The number of physical cores provisioned to the entity
average.number.corecount.usage 2 The number of virtual processors running on the host
average.percent.corecount.contention 1 Time the VM is ready to run, but is unable to run due to co-scheduling constraints
average.MHz.capacity.demand 2 The amount of CPU resources VMs on this host would use if there were no CPU contention or CPU limit
average.percent.latency 2 Percent of time the VM is unable to run because it is contending for access to the physical CPU(s)
latest.MHz.entitlement 2 CPU resources devoted by the ESX scheduler
average.MHz.demand 2 The amount of CPU resources a VM would use if there were no CPU contention or CPU limit
summation.millisecond.costop 2 Time the VM is ready to run, but is unable to due to co-scheduling constraints
summation.millisecond.maxlimited 2 Time the VM is ready to run, but is not run due to maxing out its CPU limit setting
summation.millisecond.overlap 3 Time the VM was interrupted to perform system services on behalf of that VM or other VMs
summation.millisecond.run 2 Time the VM is scheduled to run
datastore
latest.millisecond.maxTotalLatency 3 Highest latency value across all datastores used by the host
average.KBps.throughput.usage 2 usage
average.millisecond.throughput.contention 2 contention
summation.number.busResets 2 busResets
summation.number.commandsAborted 2 commandsAborted
summation.number.commandsAborted 2 commandsAborted
summation.number.busResets 2 busResets
latest.number.datastoreReadBytes 2 Storage DRS datastore bytes read
latest.number.datastoreWriteBytes 2 Storage DRS datastore bytes written
latest.number.datastoreReadIops 1 Storage DRS datastore read I/O rate
latest.number.datastoreWriteIops 1 Storage DRS datastore write I/O rate
latest.number.datastoreReadOIO 1 Storage DRS datastore outstanding read requests
latest.number.datastoreWriteOIO 1 Storage DRS datastore outstanding write requests
latest.number.datastoreNormalReadLatency 2 Storage DRS datastore normalized read latency
latest.number.datastoreNormalWriteLatency 2 Storage DRS datastore normalized write latency
latest.number.datastoreReadLoadMetric 4 Storage DRS datastore metric for read workload model
latest.number.datastoreWriteLoadMetric 4 Storage DRS datastore metric for write workload model
latest.number.datastoreMaxQueueDepth 1 Storage I/O Control datastore maximum queue depth
disk
average.KBps.throughput.usage 3 Aggregated disk I/O rate
average.millisecond.throughput.contention 3 Average amount of time for an I/O operation to complete
summation.number.scsiReservationConflicts 2 Number of SCSI reservation conflicts for the LUN during the collection interval
average.percent.scsiReservationCnflctsPct 2 Number of SCSI reservation conflicts for the LUN as a percent of total commands during the collection interval
average.kiloBytes.capacity.provisioned 3 provisioned
average.kiloBytes.capacity.usage 2 usage
average.percent.capacity.contention 1 contention
hbr
average.number.hbrNumVms 4 Current Number of Replicated VMs
average.KBps.hbrNetRx 4 Average amount of data received per second
average.KBps.hbrNetTx 4 Average amount of data transmitted per second
managementAgent
average.MHz.cpuUsage 3 Amount of Service Console CPU usage
mem
average.kiloBytes.capacity.provisioned 3 Total amount of memory configured for the VM
average.kiloBytes.capacity.entitlement 1 Amount of host physical memory the VM is entitled to, as determined by the ESX scheduler
average.kiloBytes.capacity.usable 2 Amount of physical memory available for use by virtual machines on this host
average.kiloBytes.capacity.usage 1 Amount of physical memory actively used
average.percent.capacity.contention 2 Percentage of time the VM is waiting to access swapped, compressed, or ballooned memory
average.kiloBytes.capacity.usage.vm 2 vm
average.kiloBytes.capacity.usage.vmOvrhd 2 vmOvrhd
average.kiloBytes.capacity.usage.vmkOvrhd 2 vmkOvrhd
average.kiloBytes.capacity.usage.userworld 2 userworld
average.kiloBytes.reservedCapacity.vm 2 vm
average.kiloBytes.reservedCapacity.vmOvhd 2 vmOvhd
average.kiloBytes.reservedCapacity.vmkOvrhd 2 vmkOvrhd
average.kiloBytes.reservedCapacity.userworld 2 userworld
average.percent.reservedCapacityPct 3 Percent of memory that has been reserved either through VMkernel use, by userworlds, or due to VM memory reservations
average.kiloBytes.consumed.vms 2 Amount of physical memory consumed by VMs on this host
average.kiloBytes.consumed.userworlds 2 Amount of physical memory consumed by userworlds on this host
average.percent.latency 2 Percentage of time the VM is waiting to access swapped or compressed memory
average.kiloBytes.entitlement 2 Amount of host physical memory the VM is entitled to, as determined by the ESX scheduler
average.kiloBytes.lowfreethreshold 2 Threshold of free host physical memory below which ESX will begin reclaiming memory from VMs through ballooning and swapping
none.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
average.KBps.llSwapInRate 2 Rate at which memory is being swapped from host cache into active memory
average.KBps.llSwapOutRate 2 Rate at which memory is being swapped from active memory to host cache
average.kiloBytes.overheadTouched 4 Actively touched overhead memory (KB) reserved for use as the virtualization overhead for the VM
average.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
maximum.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
minimum.kiloBytes.llSwapUsed 4 Space used for caching swapped pages in the host cache
none.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
average.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
maximum.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
minimum.kiloBytes.llSwapIn 4 Amount of memory swapped-in from host cache
none.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
average.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
maximum.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
minimum.kiloBytes.llSwapOut 4 Amount of memory swapped-out to host cache
net
average.KBps.throughput.provisioned 2 Provisioned pNic I/O Throughput
average.KBps.throughput.usable 2 Usable pNic I/O Throughput
average.KBps.throughput.usage 3 Average vNic I/O rate
summation.number.throughput.contention 2 Count of vNic packet drops
average.number.throughput.packetsPerSec 2 Average rate of packets received and transmitted per second
average.KBps.throughput.usage.vm 3 Average pNic I/O rate for VMs
average.KBps.throughput.usage.nfs 3 Average pNic I/O rate for NFS
average.KBps.throughput.usage.vmotion 3 Average pNic I/O rate for vMotion
average.KBps.throughput.usage.ft 3 Average pNic I/O rate for FT
average.KBps.throughput.usage.iscsi 3 Average pNic I/O rate for iSCSI
average.KBps.throughput.usage.hbr 3 Average pNic I/O rate for HBR
average.KBps.bytesRx 2 Average amount of data received per second
average.KBps.bytesTx 2 Average amount of data transmitted per second
summation.number.broadcastRx 2 Number of broadcast packets received during the sampling interval
summation.number.broadcastTx 2 Number of broadcast packets transmitted during the sampling interval
summation.number.multicastRx 2 Number of multicast packets received during the sampling interval
summation.number.multicastTx 2 Number of multicast packets transmitted during the sampling interval
summation.number.errorsRx 2 Number of packets with errors received during the sampling interval
summation.number.errorsTx 2 Number of packets with errors transmitted during the sampling interval
summation.number.unknownProtos 2 Number of frames with unknown protocol received during the sampling interval
power
summation.joule.energy 3 Total energy used since last stats reset
average.percent.capacity.usagePct 3 Current power usage as a percentage of maximum allowed power
average.watt.capacity.usable 2 Current maximum allowed power usage
average.watt.capacity.usage 2 Current power usage
storageAdapter
latest.millisecond.maxTotalLatency 3 Highest latency value across all storage adapters used by the host
average.millisecond.throughput.cont 2 Average amount of time for an I/O operation to complete
average.percent.OIOsPct 3 The percent of I/Os that have been issued but have not yet completed
average.number.outstandingIOs 2 The number of I/Os that have been issued but have not yet completed
average.number.queued 2 The current number of I/Os that are waiting to be issued
average.number.queueDepth 2 The maximum number of I/Os that can be outstanding at a given time
average.millisecond.queueLatency 2 Average amount of time spent in the VMkernel queue, per SCSI command, during the collection interval
average.KBps.throughput.usage 4 The storage adapter's I/O rate
storagePath
average.millisecond.throughput.cont 2 Average amount of time for an I/O operation to complete
latest.millisecond.maxTotalLatency 3 Highest latency value across all storage paths used by the host
summation.number.busResets 2 Number of SCSI-bus reset commands issued during the collection interval
summation.number.commandsAborted 2 Number of SCSI commands aborted during the collection interval
average.KBps.throughput.usage 2 Storage path I/O rate
sys
latest.second.osUptime 4 Total time elapsed, in seconds, since last operating system boot-up
vcResources
average.kiloBytes.buffersz 4 buffersz
average.kiloBytes.cachesz 4 cachesz
average.number.diskreadsectorrate 4 diskreadsectorrate
average.number.diskwritesectorrate 4 diskwritesectorrate
virtualDisk
average.millisecond.throughput.cont 2 Average amount of time for an I/O operation to complete
average.KBps.throughput.usage 2 Virtual disk I/O rate
summation.number.commandsAborted 2 commandsAborted
summation.number.busResets 2 busResets
latest.number.readOIO 2 Average number of outstanding read requests to the virtual disk during the collection interval
latest.number.writeOIO 2 Average number of outstanding write requests to the virtual disk during the collection interval
latest.number.readLoadMetric 2 Storage DRS virtual disk metric for the read workload model
latest.number.writeLoadMetric 2 Storage DRS virtual disk metric for the write workload model
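
To show how the internal names above map onto an actual query, here is a hedged sketch that reads one of the new counters (cpu group, internal name capacity.demand, average rollup) in real time for a single host. pyVmomi, the vCenter credentials and the host name are placeholders of mine, and this is simply one way to consume the counter rather than anything from the original post.

# Sketch: query the new cpu.capacity.demand.average counter at the
# 20-second real-time interval for a single ESXi host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    perf_mgr = content.perfManager

    # Map "group.internalname.rollup" -> numeric counter key
    counters = {"%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
                for c in perf_mgr.perfCounter}

    host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.com",  # placeholder
                                             vmSearch=False)
    metric = vim.PerformanceManager.MetricId(
        counterId=counters["cpu.capacity.demand.average"], instance="")
    spec = vim.PerformanceManager.QuerySpec(entity=host, metricId=[metric],
                                            intervalId=20, maxSample=3)
    for result in perf_mgr.QueryPerf(querySpec=[spec]):
        for series in result.value:
            print(series.value)   # a few MHz samples from the real-time stream
finally:
    Disconnect(si)

The Stat Level column above determines which counters vCenter rolls up into its historical intervals; real-time queries like the one in this sketch go straight to the host and are not affected by the vCenter statistics level.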

Categories // Uncategorized Tags // api, ESXi 5.0, performance, vSphere 5.0

How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5

07.31.2011 by William Lam // 5 Comments

The ability to enable virtual Fault Tolerance on nested virtual machines running in vESX(i) is not new in vSphere 5; vFT has been an unsupported capability since vSphere 4 and was initially identified by Simon Gallagher. The process is exactly the same in vSphere 5: three configuration options need to be added to the virtual machine that will be enabled with FT, not to the vESXi VM.

replay.supported = "true"
replay.allowFT = "true"
replay.allowBTOnly = "true"
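
If you prefer not to hand-edit the .vmx file, the same three options can also be pushed to the powered-off VM as extraConfig entries through the vSphere API. The sketch below is an assumption on my part (pyVmomi, placeholder vCenter credentials and a placeholder VM name of "ft-test-vm"), not the method used above.

# Sketch: add the three replay options to the FT candidate VM via the API
# instead of editing its .vmx by hand. pyVmomi, the credentials and the VM
# name "ft-test-vm" are placeholders/assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "ft-test-vm")   # placeholder
    view.DestroyView()

    # These go on the VM to be protected by FT, not on the vESXi host VM
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="replay.supported", value="true"),
        vim.option.OptionValue(key="replay.allowFT", value="true"),
        vim.option.OptionValue(key="replay.allowBTOnly", value="true"),
    ])
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
finally:
    Disconnect(si)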

During the beta of vSphere 5, I did enable vFT, but only on an offline virtual machine to conserve compute resources. Today there was a question on the beta community about configuring vFT for vSphere 5, and I wanted to quickly validate that these configurations still hold true. I ran into an interesting error when trying to enable vFT: the power-on process for the secondary virtual machine failed.

This was not an error I had seen before in vSphere 4. Looking at the vmkernel log and vmware.log files, I noticed the following:

2011-07-31T17:31:39.314Z| vcpu-0| [vob.vmotion.stream.keepalive.read.fail] vMotion migration [ac1e0050:1312133702562144] failed to read stream keepalive: Connection closed by remote host, possibly due to timeout
2011-07-31T17:31:39.314Z| vcpu-0| [msg.checkpoint.precopyfailure] Migration to host <> failed with error Connection closed by remote host, possibly due to timeout (0xbad003f).
2011-07-31T17:31:39.324Z| vcpu-0| Migrate: secondary failure during migration: error Connection closed by remote host, possibly due to timeout.

I tried changing the advanced option on the vESX(i) host to increase the vMotion timeout but continued to hit the same error. I then decided to dig into the first error message, "failed to read stream keepalive", and found an advanced ESX(i) setting called /Migrate/VMotionStreamDisable, which has been available since ESX(i) 4.x.

I decided to disable the vMotion stream and, to my surprise, FT was then able to power on the secondary virtual machine without running into that error.

Note: You may or may not run into this error message and the configuration may not be necessary. If you enable vFT on an offline VM, you should not have any issues as long as you meet the minimum Fault Tolerance requirements.

You can configure this advanced ESXi setting (a value of 1 disables the vMotion stream) using either esxcli or the legacy esxcfg-advcfg command:

esxcli system settings advanced set -o /Migrate/VMotionStreamDisable -i 1
esxcfg-advcfg -s 1 /Migrate/VMotionStreamDisable

It is important to understand that even though you can set up vESX(i) hosts and test and play with advanced functionality such as vMotion and FT, the actual behavior is unpredictable because these configurations are unsupported by VMware. This is of course a great capability for home labs and for studying toward VMware certifications such as the VCP and VCAP-DCA, but that should be the extent of leveraging these unsupported configurations.

Categories // ESXi, Nested Virtualization, Not Supported Tags // ESXi 5.0, fault tolerance, nested ft, vft, vSphere 5.0

vSphere 5 Summary on virtuallyGhetto

07.29.2011 by William Lam // Leave a Comment

Here is a collection of all my blog posts relating to vSphere 5 that I have worked on over the last 6 months.

General Topics
1. How to Enable Support for Nested 64bit & Hyper-V VMs in vSphere 5
2. Automating ESXi 5.x Kickstart Tips & Tricks
3. Major Enhancements in esxcli for vSphere 5
4. What's New in VMware Vsish for ESXi 5
5. SSH Keys & Lockdown Mode Caveat in ESXi 5
6. How to Create Custom Firewall Rules in ESXi 5.0
7. How to Format and Create VMFS Volume using the CLI in ESXi 5
8. HBR (Host Based Replication) CLI for SRM 5
9. New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 1
10. New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 2
11. New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 3
12. New vSphere 5 HA, DRS and SDRS Advanced/Hidden Options
13. How to Trick ESXi 5 in seeing an SSD Datastore
14. Free Linux & Windows Syslog Alternatives to deprecated vi-logger in vMA 5
15. Host Profiles Free in ESXi 5?
16. vi-fastpass esxcli and resxtop bug resolved in vMA 5
17. Tips and Tricks for vMA 5
18. How to Enable Nested vFT (virtual Fault Tolerance) in vSphere 5
20. When Can I Run Apple OSX on vSphere 5?
21. How Fast is the New vSphere 5 HA/DRS on 64 Node Cluster? FAST!
22. New Hidden CBRC (Content-Based Read Cache) Feature in vSphere 5 & for VMware View 5?

API + SCRIPTS
1. There's a new mob in town, FDM MOB for ESXi 5
2. New SRM 5 APIs
3. Automating the New Integrated VIX/Guest Operations API in vSphere 5 
4. Automating Storage DRS & Datastore Cluster Management in vSphere 5
5. How to Automate Host Cache Configuration in ESXi 5
6. 2 Hidden Virtual Machine Gems in the vSphere 5 API
7. New vSphere Health Check 5.0 & ghettoVCB Script
8. New Performance Metrics In vSphere 5
9. How to Persist Configuration Changes in ESXi 4.x/5.x Part 1
10. How to Persist Configuration Changes in ESXi 4.x/5.x Part 2
11. How to Automate the Upgrade of Classic ESX 4.x to ESXi 5
12. New Application Awareness API in vSphere 5

If you have found these and other resources useful on this site and would like to support us, you can donate here. Thanks!

Categories // Uncategorized Tags // ESXi 5.0, vSphere 5.0
