WilliamLam.com

New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 1

07.21.2011 by William Lam // 2 Comments

With the new release of vSphere 5, there are a lot of changes, including some new CLI utilities that have not made it into the official documentation for whatever reason. Here are some handy tools that may be useful for troubleshooting or for quickly gathering information about your vSphere 5 environment, or that are just straight up dangerous and should be used with extreme care 🙂

1. If you remember the old esxcfg-info command, which provides extensive information about an ESX(i) host and was normally run via the Service Console of classic ESX or the ESXi Shell (formerly TSM) on ESXi, you can now retrieve the same information by just pointing your browser at your ESXi 5.0 host using the following:

https://[hostname]/cgi-bin/esxcfg-info.cgi

This is just a CGI script that collects the information and displays it in the browser for convenience.

You can also get the same output in XML by using the following:

https://[hostname]/cgi-bin/esxcfg-info.cgi?xml

2. Another neat trick is to generate a vm-support log bundle using your browser instead of having to log in to the ESXi Shell and run the vm-support command. To do so, point your browser to the following:

https://[hostname]/cgi-bin/vm-support.cgi

Once the vm-support bundle is complete, you will be prompted to download it to your local system.
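If you want to script these two browser tricks, a minimal Python sketch might look like the following. The hostname and credentials are placeholders, and the unverified-TLS context is only there because ESXi ships with a self-signed certificate by default; adjust the authentication handling as needed for your environment.

```python
# Sketch: retrieving esxcfg-info/vm-support CGI output from an ESXi 5.0
# host over HTTPS with basic auth. Hostname and credentials below are
# placeholders, not real values.
import ssl
import urllib.request

def cgi_url(host, script, xml=False):
    """Build the URL for one of the ESXi CGI helper scripts."""
    url = "https://%s/cgi-bin/%s.cgi" % (host, script)
    return url + "?xml" if xml else url

def fetch_cgi(host, script, user, password, xml=False):
    """Fetch the CGI output; returns the raw response bytes."""
    ctx = ssl._create_unverified_context()  # ESXi self-signed cert
    url = cgi_url(host, script, xml)
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr),
        urllib.request.HTTPSHandler(context=ctx))
    return opener.open(url).read()

# Example (requires a reachable host; host/credentials are hypothetical):
# data = fetch_cgi("esx01.example.com", "esxcfg-info", "root", "secret", xml=True)
```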

3. localcli is a new utility found in the ESXi Shell that works just like esxcli, except it does not go through hostd, so changes made with it are not reflected in the UI or in hostd's internal state. The main use case for localcli is an emergency where hostd has crashed and you need to make a change to recover; VMware built it as a backup tool so you would not be stuck. Using this utility can potentially put your system in an inconsistent state, so it should only be used as a last resort.

4. Another neat trick that works with both localcli and esxcli is the use of the --formatter and --format-param options, which allow you to customize the formatted output. If you would like to suppress the headers so you do not need to do extra parsing, you can specify the following:

localcli --format-param=show-header=false

You can also show only the specific fields you care about by using the csv formatter and specifying the fields of interest with --format-param:

~ # esxcli --formatter=csv --format-param=fields="Name,Virtual Switch" network vswitch standard portgroup list
Name,VirtualSwitch,
ESXSecretAgentNetwork,vSwitch0,
Management Network,vSwitch0,
VM Network,vSwitch0,
VMkernel,vSwitch1,
vmk1,vSwitch0,
vmk2,vSwitch0,
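To post-process that CSV output in a script, something like the following sketch works. The sample data is the port group listing above; the only wrinkle is the trailing comma esxcli emits on each row, which the csv module reads as an empty final field.

```python
# Sketch: parsing esxcli --formatter=csv output into dicts.
# Each row ends with a trailing comma (an empty final field), so we
# strip empty cells. Sample output is taken from the listing above.
import csv
import io

sample = """Name,VirtualSwitch,
ESXSecretAgentNetwork,vSwitch0,
Management Network,vSwitch0,
VM Network,vSwitch0,
VMkernel,vSwitch1,
vmk1,vSwitch0,
vmk2,vSwitch0,
"""

def parse_esxcli_csv(text):
    rows = [[cell for cell in row if cell != ""]
            for row in csv.reader(io.StringIO(text))]
    rows = [r for r in rows if r]  # drop blank lines
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]

portgroups = parse_esxcli_csv(sample)
print(portgroups[3])  # {'Name': 'VMkernel', 'VirtualSwitch': 'vSwitch1'}
```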

5. It looks like a memscrub utility is now included in the ESXi Shell under /usr/lib/vmware/memscrub/bin.

/usr/lib/vmware # /usr/lib/vmware/memscrub/bin/memscrub -h
Usage: /usr/lib/vmware/memscrub/bin/memscrub [-h] [-d[pidFile]] [-s[waitTime]] [-f firstMPN] [-l lastMPN]
-h --help:      Prints this message.
-d --daemonize: Daemonizes the memory scrubber.
-s --spin:      Scrub, wait 900 seconds, repeat. To change the default value, pass in a parameter.
-f --firstMPN:  Specify first MPN to scan.
-l --lastMPN:   Specify last MPN to scan.

6. Another way to list all syslog loggers on an ESXi host is to use the following:

~ # /usr/lib/vmware/vmsyslog/bin/esxcfg-syslog --list=loggers
id description size rotate dest
-- ----------- ---- ------ ----
syslog Default syslog catch-all 1024 20 syslog.log
vobd Vobd logs 1024 20 vobd.log
vprobed Vprobed logs 1024 20 vprobed.log
esxupdate esxupdate logs 1024 20 esxupdate.log
hostprofiletrace Host Profile trace logs 1024 20 hostprofiletrace.log
auth Authentication logs 1024 20 auth.log
shell ESX shell logs 1024 20 shell.log
storageRM Storage I/O Control log 1024 20 storagerm.log
usb USB related logs 1024 20 usb.log
vmkeventd vmkeventd related logs 1024 20 vmkeventd.log
vmauthd VMware Authorization daemon logs 1024 20 vmauthd.log
dhclient DHCP client logs 1024 20 dhclient.log
vmksummary Log heartbeats (vmksummary) 1024 20 vmksummary.log
vmkwarning vmkernel warnings and sysalerts (vmkwarning) 1024 20 vmkwarning.log
vmkernel vmkernel logs 2048 20 vmkernel.log
hostd Hostd logs 2048 20 hostd.log
fdm Fdm logs 1024 20 fdm.log
vpxa Vpxa logs 1024 20 vpxa.log
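If you want to consume that listing programmatically, a small sketch (using a few sample rows from the output above) can parse each line from both ends, since only the description column contains spaces. The column meanings follow the header (id, description, size, rotate, dest).

```python
# Sketch: turning the esxcfg-syslog logger listing into structured
# records. The description can contain spaces, but id (first token)
# and size/rotate/dest (last three tokens) are single tokens, so we
# split from both ends. Sample rows are taken from the output above.
listing = """syslog Default syslog catch-all 1024 20 syslog.log
vmkernel vmkernel logs 2048 20 vmkernel.log
hostd Hostd logs 2048 20 hostd.log"""

def parse_loggers(text):
    loggers = {}
    for line in text.splitlines():
        tokens = line.split()
        loggers[tokens[0]] = {
            "description": " ".join(tokens[1:-3]),
            "size": int(tokens[-3]),
            "rotate": int(tokens[-2]),
            "dest": tokens[-1],
        }
    return loggers

loggers = parse_loggers(listing)
print(loggers["vmkernel"]["size"])  # 2048
```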

7. There are a few new options in vmkfstools, such as the -N (--avoidnativeclone) option, which allows you to leverage a NAS disklib plugin (SvaNasPlugin) to offload operations such as snapshot cloning to a supported NAS array, analogous to VAAI for NFS. By default native cloning will be performed, but if you would like the array to perform the clone operation, you will need to specify the -N option. A few other options that I have not had a chance to dig into are -M (--migratevirtualdisk), -I (--snapshotdisk) and -e (--chainConsistent).

Categories // Uncategorized Tags // ESXi 5.0, vSphere 5.0

New vSphere 5 HA, DRS and SDRS Advanced/Hidden Options

07.21.2011 by William Lam // 7 Comments

While testing the new HA (FDM) in vSphere 5 during the beta, I noticed a new warning message on one of the ESXi 5.0 hosts: "The number of heartbeat datastores for host is 1, which is less than required: 2"

I wondered if this was something that could be disabled as long as the user was aware of it. Looking at the new availability guide, I found that two new advanced HA options have been introduced relating to datastore heartbeating, which is a secondary means of determining whether a host has been partitioned, isolated or has failed.

das.ignoreinsufficienthbdatastore - Disables the configuration issue created if the host does not have sufficient heartbeat datastores for vSphere HA. Default value is false.
das.heartbeatdsperhost - Changes the number of heartbeat datastores required. Valid values range from 2-5 and the default is 2.

To disable the message, you will need to add das.ignoreinsufficienthbdatastore under the "vSphere HA" Advanced Options section and set the value to true.

You then need to perform a reconfiguration of vSphere HA for this to take effect. One method is to simply disable and re-enable vSphere HA, after which the message is gone. If you know you will have fewer than the minimum of 2 heartbeat datastores, you can configure this option when you first enable vSphere HA.

I was curious (obviously) to see if there were other advanced options, and searching through the vpxd binary, I located some old and new advanced options that may be applicable to vSphere DRS, DPM and SDRS.

Disclaimer: Based on my research/digging, these options may or may not be properly documented, and they are most likely not supported by VMware. Please take caution if you decide to play with these advanced settings.

Setting Description
AvgStatPeriod Statistical sampling period in minutes
CapRpReservationAtDemand Caps the RP entitled reservation at demand during reservation divvying
CompressDrmdumpFiles Set to 1 to compress drmdump files & to 0 to not compress them
CostBenefit Enable/disable the use of cost benefit metric for filtering moves
CpuActivePctThresh Active percentage threshold above which the VM's CPU entitlement cap is increased to cluster maximum Mhz. Set it to 125 to disable this feature
DefaultDownTime Down time (millisecs) to use for VMs w/o history (-1 -> unspecified)
DefaultMigrationTime Migration time (secs) to use for VMs w/o history (-1 -> unspecified)
DefaultSioCapacityInIOPS Default peak IOPS to be used for datastore with zero slope
DefaultSioDeviceIntercept Default intercept parameter in device model for SDRS in x1000
DemandCapacityRatioTarget unknown
DemandCapacityRatioToleranceHost DPM/DRS: Consider recent demand history over this period for DPM power performance & DRS cost performance decisions
DumpSpace Disk space limit in megabytes for dumping module and domain state, set to 0 to disable dumping, set to -1 for unlimited space
EnableMinimalDumping Enable or Disable minimal dumping in release builds
EnableVmActiveAdjust Enable Adjustment of VM Cpu Active
EwmaWeight Weight for newer samples in exponential weighted moving average in 1/100's
FairnessCacheInvalSec Maximum age of the fairness cache
GoodnessMetric Goodness metric for evaluating migration decisions
GoodnessPerStar Maximum goodness in 1/1000 required for a 1-star recommendation
IdleTax Idle tax percentage
IgnoreAffinityRulesForMaintenance Ignore affinity rules for datastore maintenance mode
IgnoreDownTimeLessThan Ignore down time less than this value in seconds
IoLoadBalancingAlwaysUseCurrent Always use current stats for IO load balancing
IoLoadBalancingMaxMovesPerHost Maximum number of moves from or to a datastore per round
IoLoadBalancingMinHistSecs Minimum number of seconds that should have passed before using current stats
IoLoadBalancingPercentile IO Load balancing default percentile to use
LogVerbose Turn on more verbose logging
MinGoodness Minimum goodness in 1/1000 required for any balance recommendation; if <=0, min set to abs value; if >0, min set to lesser of option & value set proportionate to running VMs, hosts, & rebal resources
MinImbalance Minimum cluster imbalance in 1/1000 required for any recommendations
MinStarsForMandMoves Minimum star rating for mandatory recommendations
NumUnreservedSlots Number of unreserved capacity slots to maintain
PowerOnFakeActiveCpuPct Fake active CPU percentage to use for initial share allocation
PowerOnFakeActiveMemPct Fake active memory percentage to use for initial share allocation
PowerPerformanceHistorySecs unknown
PowerPerformancePercentileMultiplier DPM: Set percentile for stable time for power performance
PowerPerformanceRatio DPM: Set Power Performance ratio
PowerPerformanceVmDemandHistoryNumStdDev DPM: Compute demand for history period as mean plus this many standard deviations, capped at maximum demand observed
RawCapDiffPercent Percent by which RawCapacity values need to differ to be significant
RelocateThresh Threshold in stars for relocation
RequireMinCapOnStrictHaAdmit Make Vm power on depend on minimum capacity becoming powered on and on any recommendations triggered by spare Vms
ResourceChangeThresh Minimum percent of resource setting change for a recommendation
SecondaryMetricWeight Weight for secondary metric in overall metric
SecondaryMetricWeightMult Weight multiplier for secondary metric in overall metric
SetBaseGoodnessForSpaceViolation -1*Goodness value added for a move exceeding space threshold on destination
SetSpaceLoadToDatastoreUsedMB If 0, set space load to sum of vmdk entitlements [default]; if 1, set space load to datastore used MB if higher
SpaceGrowthSecs The length of time to consider in the space growth risk analysis. Should be an order of magnitude longer than the typical storage vmotion time.
UseDownTime Enable/disable the use of downtime in cost benefit metric
UseIoSharesForEntitlement Use vmdk IO shares for entitlement computation
UsePeakIOPSCapacity Use peak IOPS as the capacity of a datastore
VmDemandHistorySecsHostOn unknown
VmDemandHistorySecsSoftRules Consider recent demand history over this period in making decisions to drop soft rules
VmMaxDownTime Reject the moves if the predicted downTime will exceed the max (in secs) for non-FT VM
VmMaxDownTimeFT Reject the moves if the predicted downTime will exceed the max (in Secs) for FT VM
VmRelocationSecs Amount of time it takes to relocate a VM

As you can see, the advanced/hidden options in the above table can potentially apply to DRS, DPM and SDRS, and I have not personally tested all of the settings. There may be some interesting and possibly useful ones; one such setting is the SDRS option IgnoreAffinityRulesForMaintenance, which ignores affinity rules for datastore maintenance mode. To configure SDRS Advanced Options, you will need to navigate to the "Datastore" view, edit a Storage Pod under "SDRS Automation" and select "Advanced Options".

Categories // Uncategorized Tags // ESXi 5.0, fdm, ha, SDRS, vSphere 5.0

HBR (Host Based Replication) CLI for SRM 5

07.20.2011 by William Lam // 20 Comments

Host based replication (HBR) is a new feature in the upcoming SRM 5.0 which gives users the ability to replicate VMs between dissimilar storage. Traditionally, SRM relied mainly on array-based replication to back up and recover virtual machines residing on a set of LUNs, which required all protected virtual machines to reside on a common set of protected LUNs. With HBR, you now have the ability to target specific VMs and their respective VMDKs and replicate them to a different storage type at the destination, such as local storage, an iSCSI/FC LUN or an NFS datastore.

Another key difference is that HBR does not leverage array replication technology but something analogous to CBT (Changed Block Tracking), in which the initial copy is a full copy and all subsequent copies are differentials. The frequency of these differential copies is based solely on the RPO configured by the user.

Now that we have some background on what HBR is and how it relates to the new Site Recovery Manager, let's talk about some of the "limited" automation options. As it stands today, there is no publicly exposed SDK from VMware that can be consumed from the various toolkits such as the vSphere SDK for Perl, PowerCLI, VI Java, etc. To configure a VM to be protected using the new HBR functionality, you will still need to manually go through the vSphere Client wizard by simply right-clicking on a VM and selecting the "Site Recovery Manager HBR Replication" option.

Once you have the initial configuration set for a given virtual machine, there is some limited functionality that has been exposed through the vimsh interface using vim-cmd. A new "hbrsvc" namespace has been added which provides some limited options for making configuration and state changes for a given VM under HBR management.

~ # vim-cmd hbrsvc
Commands available under hbrsvc/:
vmreplica.abort vmreplica.pause
vmreplica.create vmreplica.queryReplicationState
vmreplica.disable vmreplica.reconfig
vmreplica.diskDisable vmreplica.resume
vmreplica.diskEnable vmreplica.startOfflineInstance
vmreplica.enable vmreplica.stopOfflineInstance
vmreplica.getConfig vmreplica.sync
vmreplica.getState

Note: This is probably not officially supported by VMware, please test this in a development or lab environment before using.

If you have used the vim-cmd interface before, then you should be pretty familiar with how the options work, and since these commands apply to a virtual machine, you will need to know the virtual machine's VmId for all of them.
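For repeated use, the hbrsvc commands can be wrapped in a thin script. The sketch below would run in the ESXi Shell (where vim-cmd lives); the command names come from the listing above, but passing the VmId as the first argument is an assumption based on how other vim-cmd namespaces behave, so verify against getConfig output before relying on it.

```python
# Sketch: a thin wrapper for the vim-cmd hbrsvc namespace. Argument
# layout (VmId first) is an assumption, not documented behavior.
import subprocess

def hbrsvc_cmd(command, *args):
    """Build the argv list for a vim-cmd hbrsvc invocation."""
    return ["vim-cmd", "hbrsvc/%s" % command] + [str(a) for a in args]

def hbrsvc(command, *args):
    """Run the command on an ESXi host and return its stdout."""
    return subprocess.check_output(hbrsvc_cmd(command, *args)).decode()

# Examples (hypothetical VmId 10; only runnable in the ESXi Shell):
# config = hbrsvc("vmreplica.getConfig", 10)
# state  = hbrsvc("vmreplica.getState", 10)
# hbrsvc("vmreplica.pause", 10)
```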

To retrieve the HBR configuration for a particular VM, you will use the vmreplica.getConfig option:

Here you can see all the configurations that were made through the GUI, such as the RPO, quiescing of the guestOS and the VMDK(s) configured for replication. You also get some additional information such as the HBR server and its configured port, as well as some important identifiers: the "VM Replication ID" and "Replication ID". These two identifiers will be very important later on if you want to make use of the other commands.

To retrieve the state of a given VM, you will use the vmreplica.getState option:

This will provide you the current state of replication and, if a replication is still in progress, not only the progress but also the amount of data transferred to the destination site.

To retrieve the current replication state of a VM, you will use the vmreplica.queryReplicationState option:

This should be a pretty straightforward command that only provides details regarding the replication state and the progress, both as a percentage and as the amount of data transferred to the destination site.

To pause replication just like you can using the vSphere Client, you will use the vmreplica.pause option:

To resume replication just like you can using the vSphere Client, you will use the vmreplica.resume option:

To disable replication for a VM, you will use the vmreplica.disable option:

Note: Before attempting to disable replication for a VM, it is extremely important to note down the two identifiers mentioned earlier: "VM Replication ID" and "Replication ID". The reason is that when you re-enable replication, you will actually need to specify these IDs, else your VM will be in a bad state and the only way to recover is to use the vSphere Client to re-enable replication.

To re-enable replication for a VM that was disabled, you will use the vmreplica.enable option:

You will need to specify a few parameters such as the VmId, RPO, destination HBR server + port, enable quiesce for guestOS, enable opportunistic updates, VM Replication ID and Disk Replication ID, all of which can be found by running getConfig prior to disabling replication for a given VM.

To manually force a replication sync, you will use the vmreplica.sync option:

You also have the ability to change some of the configurations for a VM for replication using the vmreplica.reconfig option:

Currently this is limited to only the RPO, destination HBR server + port, and enabling quiesce of the guestOS and opportunistic updates. In the example above, you can see the RPO window has been updated to 10 minutes, and we can confirm this from the vSphere Client. You will notice that the sync happens roughly every 10 minutes, but the updated RPO is not reflected in the SRM interface; this may be a UI bug, or the modification may not be pushed up to the HBR servers.

Note: Per the vSphere Client and SRM/HBR documentation, the smallest RPO window is 15 minutes, but I have found that you can actually go smaller. Again, use this at your own risk.

I was also interested to see if I could shrink the RPO window even further, to say 1 minute; there were no errors, and the ESXi tasks actually confirmed the change.

Though after making the change and monitoring the subsequent syncs, I noticed they did not actually run every minute but anywhere from 6-11 minutes, which seems to be the smallest effective RPO window.

You can also disable replication for a particular VMDK by using the vmreplica.diskDisable option:

To re-enable replication for a particular VMDK, you will use the vmreplica.diskEnable option:

As mentioned earlier, there are no official SDKs from VMware for SRM, but the options provided by hbrsvc come from a hidden HBR API found on the ESXi 5.0 host; you can see the new "ha-hbr-manager" using the vSphere MOB. Though you cannot fully automate the configuration of HBR for a given VM, you do have the ability to automate the reconfiguration or state changes for a given VM if you need to.

Note: I had never played with SRM prior to vSphere 5, but I also found WSDL files for what looks to be the SRM API under the following URLs: http://[SRM-HOST]:8096/sdk/srm and http://[SRM-HOST]:8096/sdk/drService. One could create SDK bindings using the WSDL files, but I will leave that as a task for the reader.

There is also one additional HBR utility that can be found in the ESXi Shell of ESXi 5.0, hbrfilterctl, which provides some information about the disks being replicated by HBR.

~ # hbrfilterctl
Ioctl to device is working.
Usage: hbrfilterctl [command args]

Commands:
ba : Print the active replication bitmap for the specified disk.
bt : Print the inactive replication bitmap for the specified disk.
pr : Print the disk length, bitmap length and extent for the specified disk.
ts []: Extract and transfer a light-weight delta for the specified disk.
li : Returns the File ID, Number of entries, copy index and size of the demand log for the specified disk.
si : Returns information about the full-sync process
de : Detaches a filter attach for offline replication
fs : Force a full sync of the specified disk.
stats : Returns stats for all (but at most ) groups.

The first two options are pretty verbose, as they print the bitmaps of the specified disk; if you are interested, you can run them to see the output.

Here is an example of running the "pr" option:

Here is an example of running the "li" option:

Here is an example of running the "si" option:

The last option, "stats", is probably the only really useful command for most users; it provides the status of replication, and by specifying a number you can limit the output. Here is an example:

Categories // Uncategorized Tags // ESXi 5.0, hbr, hbrsvc, srm5, vim-cmd, vSphere 5.0


Copyright WilliamLam.com © 2025