While testing the new HA (FDM) in the vSphere 5 beta, I noticed a new warning message on one of the ESXi 5.0 hosts: "The number of heartbeat datastores for host is 1, which is less than required: 2"
I wondered if this was something that could be disabled as long as the user was aware of it. Looking at the new availability guide, I found that two new advanced HA options have been introduced relating to datastore heartbeating, which is a secondary means of determining whether a host has been partitioned, isolated, or has failed.
- das.ignoreinsufficienthbdatastore - Disables configuration issues created if the host does not have sufficient heartbeat datastores for vSphere HA. Default value is false.
- das.heartbeatdsperhost - Changes the number of heartbeat datastores required. Valid values range from 2 to 5; the default is 2.
To disable the message, you will need to add the das.ignoreinsufficienthbdatastore setting under the "vSphere HA" Advanced Options section and set its value to true.
You then need to reconfigure vSphere HA for this to take effect. One method is to simply disable and re-enable vSphere HA, after which the message is gone. If you know you will have fewer than the minimum of 2 heartbeat datastores, you can configure this option when you first enable vSphere HA.
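Under the hood these are just key/value pairs in the cluster's HA configuration. Here is a minimal plain-Python sketch of how you might validate and assemble the two options before pushing them through your automation tooling of choice; the helper function is my own invention (the option names and value constraints come from the availability guide), not a VMware API.

```python
# Hypothetical helper: validate the two vSphere HA heartbeat-datastore
# advanced options and return them as the string key/value pairs that
# vCenter advanced settings expect. Option names and the 2-5 range are
# from the vSphere Availability Guide; the function itself is a sketch.

def build_ha_heartbeat_options(ignore_insufficient=False, ds_per_host=2):
    """Return HA heartbeat-datastore advanced options as a dict of strings."""
    if not 2 <= ds_per_host <= 5:
        raise ValueError("das.heartbeatdsperhost must be between 2 and 5")
    return {
        "das.ignoreinsufficienthbdatastore": str(ignore_insufficient).lower(),
        "das.heartbeatdsperhost": str(ds_per_host),
    }

# Example: suppress the warning on a lab cluster with a single shared datastore.
opts = build_ha_heartbeat_options(ignore_insufficient=True)
print(opts)  # {'das.ignoreinsufficienthbdatastore': 'true', 'das.heartbeatdsperhost': '2'}
```

Remember that after applying the options you still need to reconfigure vSphere HA on the cluster for them to take effect.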
I was curious (obviously) to see if there were other advanced options, and searching through the vpxd binary, I located some old and new advanced options that may be applicable to vSphere DRS, DPM, and SDRS.
Disclaimer: These options may or may not be properly documented based on my research/digging, and they are most likely not supported by VMware. Please use caution if you decide to play with these advanced settings.
|Option|Description|
|---|---|
|AvgStatPeriod|Statistical sampling period in minutes|
|CapRpReservationAtDemand|Caps the RP entitled reservation at demand during reservation divvying|
|CompressDrmdumpFiles|Set to 1 to compress drmdump files, 0 to not compress them|
|CostBenefit|Enable/disable the use of the cost-benefit metric for filtering moves|
|CpuActivePctThresh|Active percentage threshold above which the VM's CPU entitlement cap is increased to the cluster maximum MHz. Set to 125 to disable this feature|
|DefaultDownTime|Down time (millisecs) to use for VMs w/o history (-1 -> unspecified)|
|DefaultMigrationTime|Migration time (secs) to use for VMs w/o history (-1 -> unspecified)|
|DefaultSioCapacityInIOPS|Default peak IOPS to be used for a datastore with zero slope|
|DefaultSioDeviceIntercept|Default intercept parameter in the device model for SDRS, in x1000|
|DemandCapacityRatioToleranceHost|DPM/DRS: Consider recent demand history over this period for DPM power performance and DRS cost performance decisions|
|DumpSpace|Disk space limit in megabytes for dumping module and domain state; set to 0 to disable dumping, -1 for unlimited space|
|EnableMinimalDumping|Enable or disable minimal dumping in release builds|
|EnableVmActiveAdjust|Enable adjustment of VM CPU active|
|EwmaWeight|Weight for newer samples in the exponential weighted moving average, in 1/100s|
|FairnessCacheInvalSec|Maximum age of the fairness cache|
|GoodnessMetric|Goodness metric for evaluating migration decisions|
|GoodnessPerStar|Maximum goodness in 1/1000 required for a 1-star recommendation|
|IdleTax|Idle tax percentage|
|IgnoreAffinityRulesForMaintenance|Ignore affinity rules for datastore maintenance mode|
|IgnoreDownTimeLessThan|Ignore down time less than this value in seconds|
|IoLoadBalancingAlwaysUseCurrent|Always use current stats for IO load balancing|
|IoLoadBalancingMaxMovesPerHost|Maximum number of moves from or to a datastore per round|
|IoLoadBalancingMinHistSecs|Minimum number of seconds that should have passed before using current stats|
|IoLoadBalancingPercentile|IO load balancing default percentile to use|
|LogVerbose|Turn on more verbose logging|
|MinGoodness|Minimum goodness in 1/1000 required for any balance recommendation; if <=0, min set to absolute value; if >0, min set to the lesser of the option and a value set proportionate to running VMs, hosts, and rebalancing resources|
|MinImbalance|Minimum cluster imbalance in 1/1000 required for any recommendations|
|MinStarsForMandMoves|Minimum star rating for mandatory recommendations|
|NumUnreservedSlots|Number of unreserved capacity slots to maintain|
|PowerOnFakeActiveCpuPct|Fake active CPU percentage to use for initial share allocation|
|PowerOnFakeActiveMemPct|Fake active memory percentage to use for initial share allocation|
|PowerPerformancePercentileMultiplier|DPM: Set percentile for stable time for power performance|
|PowerPerformanceRatio|DPM: Set power performance ratio|
|PowerPerformanceVmDemandHistoryNumStdDev|DPM: Compute demand for the history period as the mean plus this many standard deviations, capped at the maximum demand observed|
|RawCapDiffPercent|Percent by which RawCapacity values need to differ to be significant|
|RelocateThresh|Threshold in stars for relocation|
|RequireMinCapOnStrictHaAdmit|Make VM power-on depend on minimum capacity becoming powered on and on any recommendations triggered by spare VMs|
|ResourceChangeThresh|Minimum percent of resource setting change for a recommendation|
|SecondaryMetricWeight|Weight for the secondary metric in the overall metric|
|SecondaryMetricWeightMult|Weight multiplier for the secondary metric in the overall metric|
|SetBaseGoodnessForSpaceViolation|-1 * goodness value added for a move exceeding the space threshold on the destination|
|SetSpaceLoadToDatastoreUsedMB|If 0, set space load to the sum of vmdk entitlements (default); if 1, set space load to datastore used MB if higher|
|SpaceGrowthSecs|The length of time to consider in the space growth risk analysis; should be an order of magnitude longer than the typical Storage vMotion time|
|UseDownTime|Enable/disable the use of down time in the cost-benefit metric|
|UseIoSharesForEntitlement|Use vmdk IO shares for entitlement computation|
|UsePeakIOPSCapacity|Use peak IOPS as the capacity of a datastore|
|VmDemandHistorySecsSoftRules|Consider recent demand history over this period when making decisions to drop soft rules|
|VmMaxDownTime|Reject moves if the predicted down time will exceed the max (in secs) for a non-FT VM|
|VmMaxDownTimeFT|Reject moves if the predicted down time will exceed the max (in secs) for an FT VM|
|VmRelocationSecs|Amount of time it takes to relocate a VM|
As you can see, the advanced/hidden options in the above table are potentially applicable to DRS, DPM, and SDRS; I have not personally tested all of them. There are some interesting and possibly useful settings, one being the SDRS option IgnoreAffinityRulesForMaintenance, which ignores affinity rules for datastore maintenance mode. To configure SDRS advanced options, navigate to the "Datastore" view, edit a Storage Pod, and under "SDRS Automation" select "Advanced Options".
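Mechanically, these SDRS advanced options are also just key/value pairs attached to the Storage Pod configuration. The sketch below illustrates that mechanic in plain Python; the StoragePodConfig class and apply_advanced_option helper are hypothetical stand-ins for the real vSphere API objects, and only the option key itself comes from the table above.

```python
# Hypothetical sketch of setting the SDRS advanced option
# IgnoreAffinityRulesForMaintenance on a Storage Pod. The classes here
# are illustrative stand-ins, not the actual vSphere SDK types; values
# are stored as strings, matching how advanced options appear in the UI.

class StoragePodConfig:
    """Stand-in for a Storage Pod's SDRS configuration."""
    def __init__(self):
        self.advanced_options = {}

def apply_advanced_option(pod, key, value):
    """Record an advanced option as a string key/value pair."""
    pod.advanced_options[key] = str(value)
    return pod

pod = StoragePodConfig()
apply_advanced_option(pod, "IgnoreAffinityRulesForMaintenance", 1)
print(pod.advanced_options)  # {'IgnoreAffinityRulesForMaintenance': '1'}
```

As noted in the disclaimer above, treat any change like this as unsupported experimentation unless VMware documents or requests it.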
Good post, but I would like to stress that these advanced SDRS/DRS/DPM settings should not be modified unless VMware Support specifically requests it!
Agreed! Hence the disclaimer before the goodies 🙂 Per the beta release, IgnoreAffinityRulesForMaintenance is actually set to 0 by default ... curious whether that will be the case in GA.
Thanks it worked...
Great Post William. Clearing up warnings in my lab today and the Datastore Heartbeat setting was just what I needed.
I built a vSphere 5 test environment, but I can't see the HA Advanced Options in vCenter.
This is a similar discussion.
IgnoreAffinityRulesForMaintenance should apply to hosts as well. It super sucks that, for example, 3 hosts with 3 VMs under an affinity rule to run on separate hosts will fail to enter maintenance mode.