New Application Awareness API in vSphere 5

08.25.2011 by William Lam // 12 Comments

Application Awareness HA is not a new feature in vSphere 5; it has actually been around since vSphere 4.1. With this feature, vSphere HA can monitor heartbeats generated by an application monitor running within the guestOS and reboot the virtual machine if those heartbeats stop.

What is actually new in vSphere 5 is the availability of the Application Awareness API for anyone to consume and integrate into their own application and/or script. Prior to this, the API was only exposed to ISVs and 3rd-party vendors with solutions such as Symantec's ApplicationHA and Neverfail's vAppHA.

The Application Awareness API (shortened to AAA going forward) is supported on both Linux and Windows (32/64-bit) and can be accessed by installing a package within the guestOS. This package includes the necessary AAA libraries to create your own programs/scripts in C, C++, Java and Perl. In addition, the package also includes a pre-compiled binary (vmware-appmonitor) that implements all the AAA methods and can easily be called from within a script or program. AAA uses VMware Tools as the communication channel to the ESX(i) host, so you will need to ensure VMware Tools is installed and running. Since the communication is between VMware Tools and the ESX(i) host, there is no reliance on a TCP/IP network for this channel.

UPDATE: You can download GuestAppMonitor SDK here.

There are currently 6 AAA methods:

  • VMGuestAppMonitor_Enable()
    • Enables Monitoring
  • VMGuestAppMonitor_MarkActive()
    • Call every 30 seconds to mark application as active
  • VMGuestAppMonitor_Disable()
    • Disable Monitoring
  • VMGuestAppMonitor_IsEnabled()
    • Returns status of Monitoring
  • VMGuestAppMonitor_GetAppStatus()
    • Returns the current application status recorded for the application; there are three possible values:
      • green = Virtual machine infrastructure acknowledges that the application is being monitored.
      • red = Virtual machine infrastructure does not think the application is being monitored. The HA monitoring agent will initiate an asynchronous reset of the virtual machine if the status is red.
      • gray = Application should send VMGuestAppMonitor_Enable again, followed by VMGuestAppMonitor_MarkActive, because either application monitoring failed or the virtual machine was vMotioned to a different location.
  • VMGuestAppMonitor_Free()
    • Frees the result of the *_GetAppStatus() call (only required when writing your own program)

Here is the basic workflow for using AAA within your application:

Check if application monitoring is enabled
If not enabled, enable it
Monitor the application
If the application is healthy, send a heartbeat
Wait 15 seconds
Loop
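
Since the pre-compiled vmware-appmonitor binary implements all of the AAA methods and can be driven from a script, here is a minimal Perl sketch of the workflow above. The binary path, the stubbed health check and the parsing of the isEnabled output are illustrative assumptions, so adjust them for your environment.

#!/usr/bin/perl
# Minimal sketch of the AAA workflow, driving the pre-compiled
# vmware-appmonitor binary from Perl (adjust the path for your system)
use strict;
use warnings;

my $appmon = "/path/to/VMware-GuestAppMonitorSDK/bin/bin64/vmware-appmonitor";

# Check whether monitoring is enabled and enable it if necessary
# (verify the exact isEnabled output format on your system)
chomp(my $enabled = `$appmon isEnabled`);
system($appmon, "enable") if $enabled !~ /true/i;

while (1) {
    # Replace this stub with a real health check of your application
    my $healthy = 1;

    # HA expects a heartbeat at least every 30 seconds, so send one
    # every 15 seconds while the application is healthy
    system($appmon, "markActive") if $healthy;

    sleep 15;
}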

To start using AAA functionality, you will first need a vSphere HA-enabled cluster with "VM and Application Monitoring" enabled under VM Monitoring.

You have the ability to configure the sensitivity of AAA as Low, Medium or High, which correlates to the heartbeat interval and the frequency of virtual machine reboots. You also have the option of configuring your own custom policy.

Lastly, you can choose which virtual machines will be included in VM Monitoring and/or Application Monitoring.

Note: It is important to note that HA expects an application heartbeat to be generated every 30 seconds. If HA fails to receive a heartbeat within 30 seconds, it transitions the appHeartbeatStatus state from green to red. Depending on the configured sensitivity policy, once the heartbeat interval has been violated, HA will then restart the virtual machine. For example, if the sensitivity is set to medium and a heartbeat is not received within 30 seconds, the status changes to red. If HA still has not received a heartbeat within 60 seconds of that point, it will reboot the virtual machine.

Here is an example of installing AAA on a Linux system and compiling the C sample program:

Step 1 - Copy the AAA package to the Linux system and extract the contents

Step 2 - Change into VMware-GuestAppMonitorSDK/docs/samples/C/ and ensure you have the gcc compiler. You may have to change the makefile if you are on a 64-bit platform, as it points to the 32-bit library by default. When you are ready, just type "make" and you should get a compiled binary called "sample", which is the sample C application.

Before you run the application, you need to ensure that your shared library path variable LD_LIBRARY_PATH includes the directory containing libappmonitorlib.so. To update the variable, run the following command:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/full/path/to/VMware-GuestAppMonitorSDK/lib64

Step 3 - You can now run the "sample" application, which runs in a continuous loop, automatically enables AAA within the virtual machine and sends heartbeats to the ESX(i) host. You can press Control+C, which brings up three options: stop (s), disable (d), continue (c). The last two options should be pretty straightforward, but if you decide to stop the heartbeating and do not resume, you will see HA restart the virtual machine based on your cluster configuration.

As you can see, once heartbeats have not been received within the specified interval, HA takes action and reboots the virtual machine as expected. Here is a detailed view of the events as seen by vCenter and the HA cluster:

Here is an example of installing AAA on a Windows system and using the pre-compiled vmware-appmonitor binary:

Step 1 - Copy the AAA package to the Windows system and extract the contents

Step 2 - Launch a cmd prompt and change into the C:\Users\Administrator\Desktop\VMware-GuestAppMonitorSDK-457765\VMware-GuestAppMonitorSDK\bin\win64 directory. Depending on whether you are on a 32- or 64-bit OS, you will need to adjust the win{32,64} portion of the path accordingly.

Step 3 - Run vmware-appmonitor.exe, which will then provide you with the following options: enable, disable, markActive, isEnabled, getAppStatus

Note: The options in vmware-appmonitor for both Linux and Windows are exactly the same, which is very nice for consistency and scripting purposes. Just like with direct use of the API, you first need to run the enable command to turn on application monitoring and then run the markActive command, which sends the heartbeats. You can always check the current heartbeat status by running getAppStatus, or check whether AAA is enabled by running the isEnabled command.

As a reference, here are the paths to the vmware-appmonitor for both Linux and Windows:

  • VMware-GuestAppMonitorSDK/bin/bin{32,64}/vmware-appmonitor
  • VMware-GuestAppMonitorSDK-457765\VMware-GuestAppMonitorSDK\bin\win{32,64}\vmware-appmonitor.exe

For those of you who are not into programming languages such as C, C++ and Java, here is an example using Perl. In this example, the script simulates the monitoring of an application by checking whether or not a file exists. The script starts off by creating a file that will be monitored and then loops for 5 minutes, checking for the existence of the file. Once the 5 minutes are up, the script disables application monitoring and exits.

Note: You will need to set the two variables at the top which define the path to the shared library and the vmware-appmonitor binary.
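
The script itself is not reproduced here, but based on the description above, a minimal sketch could look like the following. The file location, the heartbeat interval and the fact that it shells out to the vmware-appmonitor binary are assumptions made purely for illustration.

#!/usr/bin/perl
# Sketch of the described example: monitor a file's existence for 5 minutes,
# heartbeating while it exists, then disable application monitoring and exit
use strict;
use warnings;

# Adjust these two variables for your environment
my $libpath = "/path/to/VMware-GuestAppMonitorSDK/lib64";
my $appmon  = "/path/to/VMware-GuestAppMonitorSDK/bin/bin64/vmware-appmonitor";
$ENV{LD_LIBRARY_PATH} = $libpath;

# Create the file that will be monitored
my $monfile = "/tmp/app-monitor-test";
open(my $fh, '>', $monfile) or die "Unable to create $monfile: $!";
close($fh);

# Enable application monitoring
system($appmon, "enable");

# Loop for 5 minutes, heartbeating only while the file exists
my $end = time() + 300;
while (time() < $end) {
    system($appmon, "markActive") if -e $monfile;
    sleep 15;
}

# Disable application monitoring and exit
system($appmon, "disable");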

So far we have demonstrated how to set up AAA within the guestOS and covered a variety of programming/scripting interfaces such as C, C++, Java and Perl for integrating it with your own application/script. Now what if we wanted to extract the heartbeat status for all virtual machines that have AAA implemented, going through vCenter? You can easily do so by using the vSphere API and querying the appHeartbeatStatus property of your virtual machine.

I wrote a very simple vSphere SDK for Perl script, getVMAppStatus.pl, that allows you to query a virtual machine, connecting either to vCenter or directly to an ESX(i) host, to extract the heartbeat status.

Download the getVMAppStatus.pl script here.

The script can return three different statuses: gray, green or red; the definition for each is given above.
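
For illustration, here is a minimal sketch of that query using the vSphere SDK for Perl. It assumes the status is exposed through the virtual machine's guest info as appHeartbeatStatus; the getVMAppStatus.pl script above is the complete, working version.

#!/usr/bin/perl
# Minimal sketch: query a VM's application heartbeat status via the vSphere API
use strict;
use warnings;
use VMware::VIRuntime;

my %opts = (
    vmname => { type => "=s", help => "Name of the virtual machine", required => 1 },
);
Opts::add_options(%opts);
Opts::parse();
Opts::validate();

# Connects to vCenter or an ESX(i) host using --server/--username/--password
Util::connect();

my $vm = Vim::find_entity_view(
    view_type  => 'VirtualMachine',
    filter     => { name => Opts::get_option('vmname') },
    properties => [ 'name', 'guest.appHeartbeatStatus' ],
);
die "Unable to find the virtual machine\n" unless $vm;

# Possible values: green, red or gray (see the definitions above)
my $status = $vm->get_property('guest.appHeartbeatStatus') || "unknown";
print $vm->name . " appHeartbeatStatus: " . $status . "\n";

Util::disconnect();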

Now before you jump right in and start leveraging this awesome API in either a custom application or script, you need to understand your application, the various ways of detecting that it has failed, and when you would like vSphere HA to reboot the virtual machine. Simply checking whether the process is running may or may not be enough.

To get more details on some of the best practices around using the Application Awareness API, I would highly recommend you check out Tom Stephens' upcoming VMworld 2011 presentation, TEX1928 Implementing Application Awareness in the Web Client, as well as The Uptime Blog for more details about implementing AAA. For now, if you would like to learn more about the Application Awareness API, check out last year's VMworld presentation.

Categories // Uncategorized Tags // api, ha, vmha, vSphere 4.1, vSphere 5.0

How Fast is the New vSphere 5 HA/DRS on 64 Node Cluster? FAST!

08.05.2011 by William Lam // 2 Comments

**** Disclaimer: 32 nodes is still the maximum supported configuration for vSphere 5 from VMware, this has not changed. This is purely a demonstration, use at your own risk ****

Recently, while catching up on several episodes of the weekly VMTN Community Podcast, an interesting comment was made by Tom Stephens (Sr. Technical Marketing for vSphere HA) in episode #150 regarding the size of a vSphere cluster. Tom mentioned that there was no "technical" reason a vSphere cluster could not scale beyond 32 nodes. I decided to find out for myself, as this was something I had tried with vSphere 4.x, and though the configuration of the cluster completed, only 32 hosts were properly configured.

Here is a quick video of enabling the new HA (FDM) and DRS on a vSphere 5 cluster with 64 vESXi hosts. You should watch the entire video, as it only took an astonishing 2 minutes and 37 seconds to complete! Hats off to the VMware HA/DRS engineering teams; you can really see the difference in the speed and performance of the new vSphere HA/DRS architecture in vSphere 5.

vSphere 5 - 64 Node Cluster from lamw on Vimeo.

BTW - If someone from VMware is watching this, what does CSI stand for? I believe this was the codename for what is now known as FDM.

Categories // Uncategorized Tags // cluster, drs, ESXi 5.0, fdm, ha, vSphere 5.0

New vSphere 5 HA, DRS and SDRS Advanced/Hidden Options

07.21.2011 by William Lam // 7 Comments

While testing the new HA (FDM) in vSphere 5 during the beta, I noticed a new warning message on one of the ESXi 5.0 hosts: "The number of heartbeat datastores for host is 1, which is less than required: 2"

I wondered if this was something that could be disabled as long as the user was aware of it. Looking at the new availability guide, I found that two new advanced HA options have been introduced relating to datastore heartbeating, which is a secondary means of determining whether a host has been partitioned, isolated or has failed.

  • das.ignoreinsufficienthbdatastore - Disables configuration issues created if the host does not have sufficient heartbeat datastores for vSphere HA. Default value is false.
  • das.heartbeatdsperhost - Changes the number of heartbeat datastores required. Valid values can range from 2-5 and the default is 2.

To disable the message, you will need to add the das.ignoreinsufficienthbdatastore setting under the "vSphere HA" Advanced Options section and set the value to true.

You then need to perform a reconfiguration of vSphere HA for this to take effect. One method is to simply disable and re-enable vSphere HA, after which the message is gone. If you know you will have fewer than the minimum 2 datastores for heartbeating, you can configure this option when you first enable vSphere HA.

I was curious (obviously) to see if there were other advanced options, and searching through the vpxd binary, I located some old and new advanced options that may be applicable to vSphere DRS, DPM and SDRS.

Disclaimer: These options may or may not be properly documented based on my research/digging, and they are most likely not supported by VMware. Please take caution if you decide to play with these advanced settings.

Setting Description
AvgStatPeriod Statistical sampling period in minutes
CapRpReservationAtDemand Caps the RP entitled reservation at demand during reservation divvying
CompressDrmdumpFiles Set to 1 to compress drmdump files & to 0 to not compress them
CostBenefit Enable/disable the use of cost benefit metric for filtering moves
CpuActivePctThresh Active percentage threshold above which the VM's CPU entitlement cap is increased to cluster maximum Mhz. Set it to 125 to disable this feature
DefaultDownTime Down time (millisecs) to use for VMs w/o history (-1 -> unspecified)
DefaultMigrationTime Migration time (secs) to use for VMs w/o history (-1 -> unspecified)
DefaultSioCapacityInIOPS Default peak IOPS to be used for datastore with zero slope
DefaultSioDeviceIntercept Default intercept parameter in device model for SDRS in x1000
DemandCapacityRatioTarget unknown
DemandCapacityRatioToleranceHost DPM/DRS: Consider recent demand history over this period for DPM power performance & DRS cost performance decisions
DumpSpace Disk space limit in megabytes for dumping module and domain state, set to 0 to disable dumping, set to -1 for unlimited space
EnableMinimalDumping Enable or Disable minimal dumping in release builds
EnableVmActiveAdjust Enable Adjustment of VM Cpu Active
EwmaWeight Weight for newer samples in exponential weighted moving average in 1/100's
FairnessCacheInvalSec Maximum age of the fairness cache
GoodnessMetric Goodness metric for evaluating migration decisions
GoodnessPerStar Maximum goodness in 1/1000 required for a 1-star recommendation
IdleTax Idle tax percentage
IgnoreAffinityRulesForMaintenance Ignore affinity rules for datastore maintenance mode
IgnoreDownTimeLessThan Ignore down time less than this value in seconds
IoLoadBalancingAlwaysUseCurrent Always use current stats for IO load balancing
IoLoadBalancingMaxMovesPerHost Maximum number of moves from or to a datastore per round
IoLoadBalancingMinHistSecs Minimum number of seconds that should have passed before using current stats
IoLoadBalancingPercentile IO Load balancing default percentile to use
LogVerbose Turn on more verbose logging
MinGoodness Minimum goodness in 1/1000 required for any balance recommendation; if <=0, min set to abs value; if >0, min set to lesser of option & value set proportionate to running VMs, hosts, & rebal resources
MinImbalance Minimum cluster imbalance in 1/1000 required for any recommendations
MinStarsForMandMoves Minimum star rating for mandatory recommendations
NumUnreservedSlots Number of unreserved capacity slots to maintain
PowerOnFakeActiveCpuPct Fake active CPU percentage to use for initial share allocation
PowerOnFakeActiveMemPct Fake active memory percentage to use for initial share allocation
PowerPerformanceHistorySecs unknown
PowerPerformancePercentileMultiplier DPM: Set percentile for stable time for power performance
PowerPerformanceRatio DPM: Set Power Performance ratio
PowerPerformanceVmDemandHistoryNumStdDev DPM: Compute demand for history period as mean plus this many standard deviations, capped at maximum demand observed
RawCapDiffPercent Percent by which RawCapacity values need to differ to be significant
RelocateThresh Threshold in stars for relocation
RequireMinCapOnStrictHaAdmit Make Vm power on depend on minimum capacity becoming powered on and on any recommendations triggered by spare Vms
ResourceChangeThresh Minimum percent of resource setting change for a recommendation
SecondaryMetricWeight Weight for secondary metric in overall metric
SecondaryMetricWeightMult Weight multiplier for secondary metric in overall metric
SetBaseGoodnessForSpaceViolation -1*Goodness value added for a move exceeding space threshold on destination
SetSpaceLoadToDatastoreUsedMB If 0, set space load to sum of vmdk entitlements [default]; if 1, set space load to datastore used MB if higher
SpaceGrowthSecs The length of time to consider in the space growth risk analysis. Should be an order of magnitude longer than the typical storage vmotion time.
UseDownTime Enable/disable the use of downtime in cost benefit metric
UseIoSharesForEntitlement Use vmdk IO shares for entitlement computation
UsePeakIOPSCapacity Use peak IOPS as the capacity of a datastore
VmDemandHistorySecsHostOn unknown
VmDemandHistorySecsSoftRules Consider recent demand history over this period in making decisions to drop soft rules
VmMaxDownTime Reject the moves if the predicted downTime will exceed the max (in secs) for non-FT VM
VmMaxDownTimeFT Reject the moves if the predicted downTime will exceed the max (in Secs) for FT VM
VmRelocationSecs Amount of time it takes to relocate a VM

As you can see, the advanced/hidden options in the above table can potentially be applicable to DRS, DPM and SDRS, and I have not personally tested all of the settings. There might be some interesting and possibly useful settings; one such setting is the SDRS option IgnoreAffinityRulesForMaintenance, which ignores affinity rules for datastore maintenance mode. To configure SDRS Advanced Options, you will need to navigate over to the "Datastore" view, edit a Storage Pod, and under "SDRS Automation" select "Advanced Options".

Categories // Uncategorized Tags // ESXi 5.0, fdm, ha, SDRS, vSphere 5.0
