
How to Trick ESXi 5 in seeing an SSD Datastore

07.22.2011 by William Lam // 38 Comments

In vSphere 5, there is a new feature called Host Cache which allows a user to offload the virtual machine's swap onto a dedicated SSD device for better performance. This is done by creating a VMFS volume on an SSD device, which is detected via SATP (Storage Array Type Plugin) and then allows the user to add and configure a VMFS datastore for host caching.

During the vSphere 5 beta, I was testing out various new features, including Host Caching, but did not have access to a system with an SSD device while updating and creating a few new scripts. After some research I found that if a default SATP rule is not available to identify a particular SSD device, a new rule can be created containing a special metadata field specifying that it is an SSD device.

In the following example, I will take a local virtual disk (mpx.vmhba1:C0:T2:L0) in a vESXi 5.0 host and trick ESXi into thinking that it is an SSD device.

We will need to use esxcli, whether that is directly in the ESXi Shell or remotely using vMA and/or PowerCLI's esxcli interface.

Note: The following assumes there is already a VMFS volume created on the device you want to present as an SSD device. If you have not done so, please create a VMFS volume before continuing.

First, you will need to create a new SATP rule specifying your device and passing the "enable_ssd" string as part of the --option parameter:

~ # esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T2:L0 -o enable_ssd

You can verify that your rule was created properly by performing a list operation on the SATP rules:

~ #  esxcli storage nmp satp rule list | grep enable_ssd
VMW_SATP_LOCAL       mpx.vmhba1:C0:T2:L0                                                enable_ssd                  user

Next you will need to reclaim your device so that the new rule is applied:

~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0

You can now verify from the command line that your new device is being seen as an SSD device by displaying the details for this particular device:

~ # esxcli storage core device list -d mpx.vmhba1:C0:T2:L0
mpx.vmhba1:C0:T2:L0
Display Name: Local VMware Disk (mpx.vmhba1:C0:T2:L0)
Has Settable Display Name: false
Size: 5120
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T2:L0
Vendor: VMware
Model: Virtual disk
Revision: 1.0
SCSI Level: 2
Is Pseudo: false
Status: on
Is RDM Capable: false
Is Local: true
Is Removable: false
Is SSD: true
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: unknown
Attached Filters:
VAAI Status: unsupported
Other UIDs: vml.0000000000766d686261313a323a30
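Since my reason for doing this in the first place was scripting, here is a minimal Python sketch (my own illustration, not part of the original workflow) that parses the key/value output above to confirm the device is now reporting as an SSD. The parse_device_info helper and the abbreviated sample string are assumptions for the example:

```python
def parse_device_info(output):
    """Parse the 'Key: Value' lines printed by
    'esxcli storage core device list -d <device>' into a dict."""
    info = {}
    for line in output.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            info[key.strip()] = value.strip()
    return info

# Abbreviated copy of the device output shown above.
sample = """mpx.vmhba1:C0:T2:L0
Display Name: Local VMware Disk (mpx.vmhba1:C0:T2:L0)
Is Local: true
Is SSD: true
"""

device = parse_device_info(sample)
print(device["Is SSD"])  # -> true
```

In a real script you would feed it the stdout of the esxcli command instead of the hard-coded sample.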

As you can see, the "Is SSD" field is now being populated as true, whereas if you had run this command before, it would have displayed false.

Now you can refresh the Storage view in the vSphere Client, or you can do so from the command line by running the following command:

~ # vim-cmd hostsvc/storage/refresh

Now if you go back to the vSphere Client under "Host Cache Configuration," you should see the new fake SSD device available for selection; you just need to configure it, and Host Cache will be enabled for this device.

This, of course, is probably not officially supported unless directed by VMware, nor is there a really good reason to do it. I personally had to go down this route for scripting purposes, but if you want to see how Host Cache works, this is a neat trick that allows you to do so.

Categories // Uncategorized Tags // ESXi 5.0, host cache, ssd, vSphere 5.0

New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 1

07.21.2011 by William Lam // 2 Comments

With the new release of vSphere 5, there are a lot of changes, including some new CLI utilities that have not made it into the official documentation for whatever reason. Here are some handy tools that may be useful for troubleshooting or quickly gathering some information about your vSphere 5 environment, or that are just straight up dangerous and should be used with extreme care 🙂

1. If you remember the old esxcfg-info command, which provides extensive information about an ESX(i) host and was normally run via the Service Console of classic ESX or the ESXi Shell (formerly TSM) on ESXi, you can now retrieve the same information by just pointing your browser to your ESXi 5.0 host using the following:

https://[hostname]/cgi-bin/esxcfg-info.cgi

This is just a CGI script that runs to collect this information and displays it in the browser for convenience.

You can also get the same output in XML by using the following:

https://[hostname]/cgi-bin/esxcfg-info.cgi?xml
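If you want to pull this programmatically rather than through a browser, a tiny Python helper can build the endpoint URLs. This is my own convenience sketch (the function name is an assumption); actually fetching the page would additionally require host credentials and, on a typical lab host, ignoring the self-signed certificate:

```python
def esxcfg_info_url(hostname, as_xml=False):
    """Build the URL for the hidden esxcfg-info CGI endpoint on an ESXi 5.0 host.

    as_xml=True appends the ?xml query string to request XML output.
    """
    url = "https://%s/cgi-bin/esxcfg-info.cgi" % hostname
    return url + "?xml" if as_xml else url

print(esxcfg_info_url("esxi01.example.com"))
print(esxcfg_info_url("esxi01.example.com", as_xml=True))
```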

2. Another neat trick is to generate a vm-support log bundle using your browser versus having to log in to the ESXi Shell and run the vm-support command. To do so, point your browser over to the following:

https://[hostname]/cgi-bin/vm-support.cgi

Once the vm-support bundle is complete, you will be prompted to download it to your local system.

3. localcli is a new utility found in the ESXi Shell that works just like esxcli, except it does not go through hostd, so changes made with it are not reflected in the UI or in hostd's internal state. The main use case for localcli is an emergency where hostd has crashed and you need to make a change to recover; VMware built it as a backup tool so you would not be stuck. The use of this utility can potentially put your system in an inconsistent state and should only be used as a last resort.

4. Another neat trick that works with both localcli and esxcli is the use of the --formatter and --format-param options, which allow you to customize the formatted output. If you would like to mask the headers so you don't need to do extra parsing, you can specify the following:

localcli --format-param=show-header=false

You can also show specific fields you care about by using the csv formatter and specifying the fields of interest with --format-param:

~ # esxcli --formatter=csv --format-param=fields="Name,Virtual Switch" network vswitch standard portgroup list
Name,VirtualSwitch,
ESXSecretAgentNetwork,vSwitch0,
Management Network,vSwitch0,
VM Network,vSwitch0,
VMkernel,vSwitch1,
vmk1,vSwitch0,
vmk2,vSwitch0,
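Because the csv formatter's output is machine-friendly, it is easy to consume from a script. Here is a small Python sketch (my own example; the sample string is an abbreviated copy of the output above, including the trailing comma esxcli emits on each row):

```python
import csv
import io

# Abbreviated output of:
#   esxcli --formatter=csv --format-param=fields="Name,Virtual Switch" \
#       network vswitch standard portgroup list
sample = """Name,VirtualSwitch,
Management Network,vSwitch0,
VM Network,vSwitch0,
VMkernel,vSwitch1,
"""

# DictReader maps each row by header name; the trailing comma just becomes
# an empty final column, which we simply never look at.
reader = csv.DictReader(io.StringIO(sample))
portgroups = {row["Name"]: row["VirtualSwitch"] for row in reader}
print(portgroups["VMkernel"])  # -> vSwitch1
```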

5. It looks like a memscrub utility is now included in the ESXi Shell under /usr/lib/vmware/memscrub/bin:

/usr/lib/vmware # /usr/lib/vmware/memscrub/bin/memscrub -h
Usage: /usr/lib/vmware/memscrub/bin/memscrub [-h] [-d[pidFile]] [-s[waitTime]] [-f firstMPN] [-l lastMPN]
-h --help:      Prints this message.
-d --daemonize: Daemonizes the memory scrubber.
-s --spin:      Scrub, wait 900 seconds, repeat. To change the default value, pass in a parameter.
-f --firstMPN:  Specify first MPN to scan.
-l --lastMPN:   Specify last MPN to scan.

6. Another way to list all syslog loggers on an ESXi host is by using the following:

~ # /usr/lib/vmware/vmsyslog/bin/esxcfg-syslog --list=loggers
id description size rotate dest
-- ----------- ---- ------ ----
syslog Default syslog catch-all 1024 20 syslog.log
vobd Vobd logs 1024 20 vobd.log
vprobed Vprobed logs 1024 20 vprobed.log
esxupdate esxupdate logs 1024 20 esxupdate.log
hostprofiletrace Host Profile trace logs 1024 20 hostprofiletrace.log
auth Authentication logs 1024 20 auth.log
shell ESX shell logs 1024 20 shell.log
storageRM Storage I/O Control log 1024 20 storagerm.log
usb USB related logs 1024 20 usb.log
vmkeventd vmkeventd related logs 1024 20 vmkeventd.log
vmauthd VMware Authorization daemon logs 1024 20 vmauthd.log
dhclient DHCP client logs 1024 20 dhclient.log
vmksummary Log heartbeats (vmksummary) 1024 20 vmksummary.log
vmkwarning vmkernel warnings and sysalerts (vmkwarning) 1024 20 vmkwarning.log
vmkernel vmkernel logs 2048 20 vmkernel.log
hostd Hostd logs 2048 20 hostd.log
fdm Fdm logs 1024 20 fdm.log
vpxa Vpxa logs 1024 20 vpxa.log

7. There are a few new options in vmkfstools, such as the -N (--avoidnativeclone) option, which allows you to leverage a NAS disklib plugin (SvaNasPlugin) for operations such as snapshot cloning if you have a supported NAS array, analogous to VAAI for NFS. By default, native cloning is performed, but if you would like the array to perform the clone operation, you will need to specify the -N option. A few other options I have not had a chance to dig into are -M (--migratevirtualdisk), -I (--snapshotdisk) and -e (--chainConsistent).

Categories // Uncategorized Tags // ESXi 5.0, vSphere 5.0

New vSphere 5 HA, DRS and SDRS Advanced/Hidden Options

07.21.2011 by William Lam // 7 Comments

While testing the new HA (FDM) in vSphere 5 during the beta, I noticed a new warning message on one of the ESXi 5.0 hosts: "The number of heartbeat datastores for host is 1, which is less than required: 2"

I wondered if this was something that could be disabled as long as the user was aware of it. Looking at the new availability guide, I found that two new advanced HA options have been introduced relating to datastore heartbeating, which is a secondary means of determining whether a host has been partitioned, isolated, or has failed.

das.ignoreinsufficienthbdatastore - Disables configuration issues created if the host does not have sufficient heartbeat datastores for vSphere HA. Default value is false.
das.heartbeatdsperhost - Changes the number of heartbeat datastores required. Valid values can range from 2-5 and the default is 2.

To disable the message, you will need to add the das.ignoreinsufficienthbdatastore advanced setting under the "vSphere HA" Advanced Options section and set the value to true.

You then need to perform a reconfiguration of vSphere HA for this to take effect. One method is to just disable and re-enable vSphere HA, and the message is now gone. If you know you will have fewer than the minimum of 2 datastores for heartbeating, you can configure this option when you first enable vSphere HA.
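For reference, this is how the two entries might look in the vSphere HA Advanced Options pane (option name on the left, value on the right; the values shown here are purely illustrative):

das.ignoreinsufficienthbdatastore    true
das.heartbeatdsperhost               2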

I was curious (obviously) to see if there were other advanced options, and searching through the vpxd binary, I located some old and new advanced options that may be applicable to vSphere DRS, DPM and SDRS.

Disclaimer: These options may or may not have been properly documented based on my research/digging, and they are most likely not supported by VMware. Please take caution if you decide to play with these advanced settings.

Setting Description
AvgStatPeriod Statistical sampling period in minutes
CapRpReservationAtDemand Caps the RP entitled reservation at demand during reservation divvying
CompressDrmdumpFiles Set to 1 to compress drmdump files & to 0 to not compress them
CostBenefit Enable/disable the use of cost benefit metric for filtering moves
CpuActivePctThresh Active percentage threshold above which the VM's CPU entitlement cap is increased to cluster maximum Mhz. Set it to 125 to disable this feature
DefaultDownTime Down time (millisecs) to use for VMs w/o history (-1 -> unspecified)
DefaultMigrationTime Migration time (secs) to use for VMs w/o history (-1 -> unspecified)
DefaultSioCapacityInIOPS Default peak IOPS to be used for datastore with zero slope
DefaultSioDeviceIntercept Default intercept parameter in device model for SDRS in x1000
DemandCapacityRatioTarget unknown
DemandCapacityRatioToleranceHost DPM/DRS: Consider recent demand history over this period for DPM power performance & DRS cost performance decisions
DumpSpace Disk space limit in megabytes for dumping module and domain state, set to 0 to disable dumping, set to -1 for unlimited space
EnableMinimalDumping Enable or Disable minimal dumping in release builds
EnableVmActiveAdjust Enable Adjustment of VM Cpu Active
EwmaWeight Weight for newer samples in exponential weighted moving average in 1/100's
FairnessCacheInvalSec Maximum age of the fairness cache
GoodnessMetric Goodness metric for evaluating migration decisions
GoodnessPerStar Maximum goodness in 1/1000 required for a 1-star recommendation
IdleTax Idle tax percentage
IgnoreAffinityRulesForMaintenance Ignore affinity rules for datastore maintenance mode
IgnoreDownTimeLessThan Ignore down time less than this value in seconds
IoLoadBalancingAlwaysUseCurrent Always use current stats for IO load balancing
IoLoadBalancingMaxMovesPerHost Maximum number of moves from or to a datastore per round
IoLoadBalancingMinHistSecs Minimum number of seconds that should have passed before using current stats
IoLoadBalancingPercentile IO Load balancing default percentile to use
LogVerbose Turn on more verbose logging
MinGoodness Minimum goodness in 1/1000 required for any balance recommendation; if <=0, min set to abs value; if >0, min set to lesser of option & value set proportionate to running VMs, hosts, & rebal resources
MinImbalance Minimum cluster imbalance in 1/1000 required for any recommendations
MinStarsForMandMoves Minimum star rating for mandatory recommendations
NumUnreservedSlots Number of unreserved capacity slots to maintain
PowerOnFakeActiveCpuPct Fake active CPU percentage to use for initial share allocation
PowerOnFakeActiveMemPct Fake active memory percentage to use for initial share allocation
PowerPerformanceHistorySecs unknown
PowerPerformancePercentileMultiplier DPM: Set percentile for stable time for power performance
PowerPerformanceRatio DPM: Set Power Performance ratio
PowerPerformanceVmDemandHistoryNumStdDev DPM: Compute demand for history period as mean plus this many standard deviations, capped at maximum demand observed
RawCapDiffPercent Percent by which RawCapacity values need to differ to be significant
RelocateThresh Threshold in stars for relocation
RequireMinCapOnStrictHaAdmit Make Vm power on depend on minimum capacity becoming powered on and on any recommendations triggered by spare Vms
ResourceChangeThresh Minimum percent of resource setting change for a recommendation
SecondaryMetricWeight Weight for secondary metric in overall metric
SecondaryMetricWeightMult Weight multiplier for secondary metric in overall metric
SetBaseGoodnessForSpaceViolation -1*Goodness value added for a move exceeding space threshold on destination
SetSpaceLoadToDatastoreUsedMB If 0, set space load to sum of vmdk entitlements [default]; if 1, set space load to datastore used MB if higher
SpaceGrowthSecs The length of time to consider in the space growth risk analysis. Should be an order of magnitude longer than the typical storage vmotion time.
UseDownTime Enable/disable the use of downtime in cost benefit metric
UseIoSharesForEntitlement Use vmdk IO shares for entitlement computation
UsePeakIOPSCapacity Use peak IOPS as the capacity of a datastore
VmDemandHistorySecsHostOn unknown
VmDemandHistorySecsSoftRules Consider recent demand history over this period in making decisions to drop soft rules
VmMaxDownTime Reject the moves if the predicted downTime will exceed the max (in secs) for non-FT VM
VmMaxDownTimeFT Reject the moves if the predicted downTime will exceed the max (in Secs) for FT VM
VmRelocationSecs Amount of time it takes to relocate a VM

As you can see, the advanced/hidden options in the above table are potentially applicable to DRS, DPM and SDRS, and I have not personally tested all of the settings. There might be some interesting and possibly useful settings; one such setting is the SDRS option IgnoreAffinityRulesForMaintenance, which ignores the affinity rules for datastore maintenance mode. To configure SDRS Advanced Options, you will need to navigate over to the "Datastore" view, edit a Storage Pod, and under "SDRS Automation" select "Advanced Options".

Categories // Uncategorized Tags // ESXi 5.0, fdm, ha, SDRS, vSphere 5.0

