VM Provisioning on Datastore Clusters in vSphere 5

03.25.2012 by William Lam // 24 Comments

Last year I wrote an article called Automating Storage DRS & Datastore Cluster Management in vSphere 5, in which I provided a fairly comprehensive vSphere SDK for Perl script to help administrators automate Storage DRS configurations. These past few months I have noticed an increase in interest on the VMTN developer forums relating to Storage DRS. The majority of the questions have been about which vSphere API methods to use and how to use them to clone VMs to datastore clusters.

If you have cloned a VM before, the underlying vSphere API method being used is CloneVM_Task(), but when cloning a VM to a datastore cluster, a different set of APIs must be used.

In vSphere 5, VMware introduced several new API methods in the StorageResourceManager, and the two specific ones relating to provisioning VMs are RecommendDatastores() and ApplyStorageDrsRecommendation_Task(). Cloning a VM to a datastore cluster is a two-step process:

  1. Call RecommendDatastores(), which accepts input very similar to CloneVM_Task(). In addition, you need to specify the datastore cluster (known as a Storage Pod in the vSphere API) as well as the "type" of operation (create, clone, relocate or reconfigure); in this example we will be performing a clone. This method then generates a recommendation on where to place the VM, based on the SDRS placement engine. No provisioning occurs at this point, just a placement recommendation.
  2. To perform the actual provisioning of the VM, you will need to call ApplyStorageDrsRecommendation_Task(), which accepts the recommendation key that was generated in the first step. Once the recommendation is applied, the provisioning of the VM starts just like it would have if you had called CloneVM_Task().

Note: RecommendDatastores() can return multiple recommendations; the best one is the first entry in the array. This is the same algorithm used when performing the operation in the vSphere Client, which also selects the first recommendation.
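To make the two-step flow concrete, below is a minimal vSphere SDK for Perl sketch of the clone workflow (this is not the full datastoreClusterVMProvisioning.pl script; the entity names MyVM, MyFolder, MyPod and MyClonedVM are placeholders, and error handling is omitted):

use VMware::VIRuntime;

Opts::parse();
Opts::validate();
Util::connect();

# Look up the source VM, destination folder and datastore cluster (Storage Pod)
my $vm     = Vim::find_entity_view(view_type => 'VirtualMachine', filter => { name => 'MyVM' });
my $folder = Vim::find_entity_view(view_type => 'Folder', filter => { name => 'MyFolder' });
my $pod    = Vim::find_entity_view(view_type => 'StoragePod', filter => { name => 'MyPod' });

my $storageMgr = Vim::get_view(mo_ref => Vim::get_service_content()->storageResourceManager);

# Step 1: ask SDRS for a placement recommendation for a "clone" operation
my $storageSpec = StoragePlacementSpec->new(
    type             => 'clone',
    cloneName        => 'MyClonedVM',
    vm               => $vm->{mo_ref},
    folder           => $folder->{mo_ref},
    podSelectionSpec => StorageDrsPodSelectionSpec->new(storagePod => $pod->{mo_ref}),
    cloneSpec        => VirtualMachineCloneSpec->new(
        powerOn  => 0,
        template => 0,
        location => VirtualMachineRelocateSpec->new(),
    ),
);
my $result = $storageMgr->RecommendDatastores(storageSpec => $storageSpec);

# Step 2: apply the first (best) recommendation, which kicks off the clone
my $key  = $result->recommendations->[0]->key;
my $task = $storageMgr->ApplyStorageDrsRecommendation_Task(key => [$key]);

Util::disconnect();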

Now that we understand how the APIs work, let's take a look at how we can leverage them in a script for some automation! Here is a simple vSphere SDK for Perl script called datastoreClusterVMProvisioning.pl which allows you to clone an existing VM onto a datastore cluster. You will need a system that has vCLI 5.0 installed or you can use VMware vMA 5 to run the script. You will also need to connect to a vCenter Server 5 for all SDRS operations.

The script requires 4 input parameters (a sample invocation is shown after the list):

  • vmname - Name of the VM you wish to clone from
  • clonename - Name of the cloned VM
  • vmfolder - Name of a vCenter folder
  • datastorecluster - Name of a datastore cluster
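Here is a hypothetical invocation (the flag names mirror the parameters listed above; substitute values from your own environment):

./datastoreClusterVMProvisioning.pl --server vcenter50-1 --username root --vmname MyVM --clonename MyClonedVM --vmfolder MyFolder --datastorecluster MyDatastoreCluster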

Here is a screenshot of cloning an existing VM onto a datastore cluster:

The script is pretty straightforward and can easily be adapted to include other configurations as required in your own environment.

Hopefully this gives you a better idea of how to leverage the new provisioning APIs for Storage DRS, so you can start automating your VM deployments onto datastore clusters and get the benefits of Storage DRS in your vSphere environment.

Categories // Automation, vSphere Tags // api, clone, datastore cluster, SDRS, storagePod, vSphere 5.0, vsphere sdk for perl

Automating Storage DRS & Datastore Cluster Management in vSphere 5

07.27.2011 by William Lam // 1 Comment

Storage DRS is probably one of the coolest, if not the coolest, features in vSphere 5. Storage DRS allows you to cluster your datastores into what is known as a datastore cluster (storage pod) and automatically balances both your storage I/O and capacity, just like DRS does with your compute. The user interface is extremely easy to use, but as always, if you need to click through several screens to get to the outcome, some automation can never hurt 🙂

I decided to create a vSphere SDK for Perl script called datastoreClusterManagement.pl which allows you to automate all aspects of creating and managing your storage pod/cluster. You will need a system that has the vCLI installed or you can use VMware vMA 5 to run the script. You will also need to connect to a vCenter Server 5 for all SDRS operations.

The script supports 8 different operations, which are described below (the names match the values passed to --operation):

  • list - List all available datastore clusters
  • query - Query details for a specific datastore cluster
  • create - Create a datastore cluster
  • delete - Delete a datastore cluster (datastores are left intact)
  • add_datastore - Add datastore(s) to an existing datastore cluster
  • remove_datastore - Remove datastore(s) from an existing datastore cluster
  • ent_maint - Put a datastore into maintenance mode
  • ext_maint - Take a datastore out of maintenance mode

Here is an example of performing the "list" operation: 

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation list

Here is an example of performing the "query" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation query --pod homer-NFS-pod

Here is an example of performing the "create" operation w/single datastore:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore himalaya-NFS-moe-primp-1

Here is an example of performing the "create" operation w/datastore file:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore_file dsfile
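Note: The exact format of the datastore file is defined by the script; my assumption is that it is a plain text file listing one datastore name per line, for example:

himalaya-NFS-moe-primp-1
himalaya-NFS-moe-primp-2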

Here is an example of performing the "delete" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation delete --pod moe-NFS-pod

Here is an example of performing the "add_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation add_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-2

Here is an example of performing the "remove_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation remove_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-1

Note: Both "add_datastore" and "remove_datastore" operation support single datastore and/or datastore file

Here is an example of performing the "ent_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation ent_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

Here is an example of performing the "ext_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation ext_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

There are also complete perl docs for this script, which can be viewed using the following command:

perldoc datastoreClusterManagement.pl

Categories // Automation, vSphere Tags // ESXi 5.0, SDRS, storage drs, storagePod, vSphere 5.0

New vSphere 5 HA, DRS and SDRS Advanced/Hidden Options

07.21.2011 by William Lam // 7 Comments

While testing the new HA (FDM) in vSphere 5 during the beta, I noticed a new warning message on one of the ESXi 5.0 hosts: "The number of heartbeat datastores for host is 1, which is less than required: 2".

I wondered if this was something that could be disabled as long as the user was aware of the implications. Looking at the new availability guide, I found that two new advanced HA options have been introduced relating to datastore heartbeating, which is a secondary means of determining whether a host has been partitioned, isolated or has failed.

das.ignoreinsufficienthbdatastore - Disables configuration issues created if the host does not have sufficient heartbeat datastores for vSphere HA. Default value is false.
das.heartbeatdsperhost - Changes the number of heartbeat datastores required. Valid values can range from 2-5 and the default is 2.

To disable the message, you will need to add the das.ignoreinsufficienthbdatastore setting under the "vSphere HA" Advanced Options section and set its value to true.

You then need to perform a reconfiguration of vSphere HA for the change to take effect; one method is to simply disable and re-enable vSphere HA, after which the message is gone. If you know you will have fewer than the minimum of 2 heartbeat datastores, you can configure this option when you first enable vSphere HA.
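If you want to automate this step as well, here is a minimal vSphere SDK for Perl sketch that sets the option on a cluster (the cluster name MyCluster is a placeholder, and wrapping the value in PrimType is my assumption about how the SDK expects anyType option values to be typed):

use VMware::VIRuntime;

Opts::parse();
Opts::validate();
Util::connect();

my $cluster = Vim::find_entity_view(view_type => 'ClusterComputeResource', filter => { name => 'MyCluster' });

# Add the HA (das) advanced option to the cluster's dasConfig
my $option = OptionValue->new(
    key   => 'das.ignoreinsufficienthbdatastore',
    value => PrimType->new('true', 'string'),
);
my $spec = ClusterConfigSpecEx->new(
    dasConfig => ClusterDasConfigInfo->new(option => [$option]),
);

# modify => 1 merges the change into the existing cluster configuration
my $task = $cluster->ReconfigureComputeResource_Task(spec => $spec, modify => 1);

Util::disconnect();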

I was curious (obviously) to see if there were other advanced options, and searching through the vpxd binary, I located some old and new advanced options that may be applicable to vSphere DRS, DPM and SDRS.

Disclaimer: These options may or may not have been properly documented based on my research/digging, and they are most likely not supported by VMware. Please take caution if you decide to play with these advanced settings.

Setting Description
AvgStatPeriod Statistical sampling period in minutes
CapRpReservationAtDemand Caps the RP entitled reservation at demand during reservation divvying
CompressDrmdumpFiles Set to 1 to compress drmdump files & to 0 to not compress them
CostBenefit Enable/disable the use of cost benefit metric for filtering moves
CpuActivePctThresh Active percentage threshold above which the VM's CPU entitlement cap is increased to cluster maximum MHz. Set it to 125 to disable this feature
DefaultDownTime Down time (millisecs) to use for VMs w/o history (-1 -> unspecified)
DefaultMigrationTime Migration time (secs) to use for VMs w/o history (-1 -> unspecified)
DefaultSioCapacityInIOPS Default peak IOPS to be used for datastore with zero slope
DefaultSioDeviceIntercept Default intercept parameter in device model for SDRS in x1000
DemandCapacityRatioTarget unknown
DemandCapacityRatioToleranceHost DPM/DRS: Consider recent demand history over this period for DPM power performance & DRS cost performance decisions
DumpSpace Disk space limit in megabytes for dumping module and domain state, set to 0 to disable dumping, set to -1 for unlimited space
EnableMinimalDumping Enable or Disable minimal dumping in release builds
EnableVmActiveAdjust Enable Adjustment of VM Cpu Active
EwmaWeight Weight for newer samples in exponential weighted moving average, in 1/100s
FairnessCacheInvalSec Maximum age of the fairness cache
GoodnessMetric Goodness metric for evaluating migration decisions
GoodnessPerStar Maximum goodness in 1/1000 required for a 1-star recommendation
IdleTax Idle tax percentage
IgnoreAffinityRulesForMaintenance Ignore affinity rules for datastore maintenance mode
IgnoreDownTimeLessThan Ignore down time less than this value in seconds
IoLoadBalancingAlwaysUseCurrent Always use current stats for IO load balancing
IoLoadBalancingMaxMovesPerHost Maximum number of moves from or to a datastore per round
IoLoadBalancingMinHistSecs Minimum number of seconds that should have passed before using current stats
IoLoadBalancingPercentile IO Load balancing default percentile to use
LogVerbose Turn on more verbose logging
MinGoodness Minimum goodness in 1/1000 required for any balance recommendation; if <=0, min set to abs value; if >0, min set to lesser of option & value set proportionate to running VMs, hosts, & rebal resources
MinImbalance Minimum cluster imbalance in 1/1000 required for any recommendations
MinStarsForMandMoves Minimum star rating for mandatory recommendations
NumUnreservedSlots Number of unreserved capacity slots to maintain
PowerOnFakeActiveCpuPct Fake active CPU percentage to use for initial share allocation
PowerOnFakeActiveMemPct Fake active memory percentage to use for initial share allocation
PowerPerformanceHistorySecs unknown
PowerPerformancePercentileMultiplier DPM: Set percentile for stable time for power performance
PowerPerformanceRatio DPM: Set Power Performance ratio
PowerPerformanceVmDemandHistoryNumStdDev DPM: Compute demand for history period as mean plus this many standard deviations, capped at maximum demand observed
RawCapDiffPercent Percent by which RawCapacity values need to differ to be significant
RelocateThresh Threshold in stars for relocation
RequireMinCapOnStrictHaAdmit Make Vm power on depend on minimum capacity becoming powered on and on any recommendations triggered by spare Vms
ResourceChangeThresh Minimum percent of resource setting change for a recommendation
SecondaryMetricWeight Weight for secondary metric in overall metric
SecondaryMetricWeightMult Weight multiplier for secondary metric in overall metric
SetBaseGoodnessForSpaceViolation -1*Goodness value added for a move exceeding space threshold on destination
SetSpaceLoadToDatastoreUsedMB If 0, set space load to sum of vmdk entitlements [default]; if 1, set space load to datastore used MB if higher
SpaceGrowthSecs The length of time to consider in the space growth risk analysis. Should be an order of magnitude longer than the typical storage vmotion time.
UseDownTime Enable/disable the use of downtime in cost benefit metric
UseIoSharesForEntitlement Use vmdk IO shares for entitlement computation
UsePeakIOPSCapacity Use peak IOPS as the capacity of a datastore
VmDemandHistorySecsHostOn unknown
VmDemandHistorySecsSoftRules Consider recent demand history over this period in making decisions to drop soft rules
VmMaxDownTime Reject the moves if the predicted downTime will exceed the max (in secs) for non-FT VM
VmMaxDownTimeFT Reject the moves if the predicted downTime will exceed the max (in Secs) for FT VM
VmRelocationSecs Amount of time it takes to relocate a VM

As you can see, the advanced/hidden options in the above table can potentially be applicable to DRS, DPM and SDRS; I have not personally tested all of the settings. There might be some interesting and possibly useful settings in there; one such setting is the SDRS option IgnoreAffinityRulesForMaintenance, which ignores affinity rules for datastore maintenance mode. To configure SDRS Advanced Options, navigate over to the "Datastore" view, edit a Storage Pod under "SDRS Automation" and select "Advanced Options".
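These pod-level SDRS options can also be set programmatically through ConfigureStorageDrsForPod_Task() on the StorageResourceManager. Here is a minimal sketch, assuming a datastore cluster named homer-NFS-pod (the value "1" and its string typing are my assumptions about how this particular option is expressed):

my $pod = Vim::find_entity_view(view_type => 'StoragePod', filter => { name => 'homer-NFS-pod' });
my $storageMgr = Vim::get_view(mo_ref => Vim::get_service_content()->storageResourceManager);

# Wrap the advanced option in a pod-level SDRS configuration spec
my $option = OptionValue->new(
    key   => 'IgnoreAffinityRulesForMaintenance',
    value => PrimType->new('1', 'string'),
);
my $spec = StorageDrsConfigSpec->new(
    podConfigSpec => StorageDrsPodConfigSpec->new(option => [$option]),
);

# modify => 1 merges the option into the pod's existing SDRS configuration
my $task = $storageMgr->ConfigureStorageDrsForPod_Task(
    pod    => $pod->{mo_ref},
    spec   => $spec,
    modify => 1,
);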

Categories // Uncategorized Tags // ESXi 5.0, fdm, ha, SDRS, vSphere 5.0
