How to run a script from a vCenter Alarm action in the VCSA?

06.07.2016 by William Lam // 9 Comments

As more and more customers transition from vCenter Server for Windows to the vCenter Server Appliance (VCSA), a question that I get from time to time is how to run a script from a vCenter Alarm on the VCSA. Traditionally, customers would create and run Windows-specific scripts such as a simple batch script or a more powerful PowerShell/PowerCLI script. However, since the VCSA is a Linux-based appliance, these Windows-specific scripts would no longer function, at least not out of the box.

There are a couple of options:

  1. If you have quite a few scripts and you prefer not to re-write them to run on the VCSA, then you can instead have them executed on a remote system.
    • Instead of running the Windows scripts on the VCSA itself, a simplified script could be created to remotely call into another system which would then run the actual scripts. An example of this could be creating a Python script and using WMI to remotely run a script on a Windows system.
    • Similar to the previous option, instead of running a local script you could send an SNMP trap to a remote system which would then run the actual Windows scripts. Here is an example of using vCenter Orchestrator, which has a PowerCLI plugin that can be triggered by receiving an SNMP trap.
  2. Re-write or create new scripts that will run on the VCSA. You can leverage pyvmomi (the vSphere SDK for Python), which is already included in the VCSA. This will allow you to access the vSphere API just as you would with PowerCLI if you had it installed with a vCenter Server for Windows.

Given that there are already multiple ways of achieving #1 (and I am sure a quick Google search will turn up other options), I am going to focus on #2 and demonstrate how to create a pyvmomi script that can be triggered by a vCenter Alarm action running on the VCSA.
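To give a rough idea of what such a script involves, here is a minimal pyvmomi sketch (not the actual script covered below) that connects to vCenter from the VCSA, locates a vSphere Cluster by name and requests a support bundle for each of its ESXi hosts via DiagnosticManager.GenerateLogBundles_Task. The hostname, credentials and cluster name are placeholders you would need to adjust:

#!/usr/bin/env python
# Minimal sketch only - connect to vCenter (from the VCSA itself) with pyvmomi,
# find a cluster by name and request a log bundle for each of its hosts.
# Hostname, credentials and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='localhost', user='administrator@vghetto.local',
                  pwd='MySecretPassword', sslContext=ctx)
content = si.RetrieveContent()

# Locate the cluster by name using a container view
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'Non-VSAN-Cluster')

# Ask vCenter to generate a support bundle for every host in the cluster
for host in cluster.host:
    task = content.diagnosticManager.GenerateLogBundles_Task(
        includeDefault=False, host=[host])
    WaitForTask(task)
    for bundle in task.info.result:
        print('Bundle for %s available at %s' % (host.name, bundle.url))

Disconnect(si)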

Disclaimer: It is generally recommended that you do not install or run additional tools or scripts on the vCenter Server (Windows or VCSA) itself, as they may potentially impact the system. If possible, a remote system that uses the programmatic APIs to provide centralized management and configuration is preferred; it also limits the number of users who have direct access to the vCenter Server.

The following example is based on a real use case from one of our customers. The requirement was to automatically generate and collect an ESXi performance bundle (part of the support bundle) for all hosts in a given vSphere Cluster when a specific vCenter Alarm is triggered. In this example, we will create a simple vCenter Alarm on our vSphere Cluster that triggers our script when a particular VM is modified, as this is probably one of the easiest ways to test the alarm itself.

UPDATE (06/14/16) - For generating an ESXi performance support bundle, please have a look at this article, which can then be combined with the approach here for running the script on the VCSA.

When a vCenter Alarm is triggered, a bunch of metadata is passed from the alarm itself and stored in a set of VMware-specific environment variables, which you can find more details about in the documentation here. These environment variables can then be accessed and used directly from within the script itself. Here is an example of what the environment variables look like for my vCenter Alarm, which triggers off of the VM reconfigured event:

VMWARE_ALARM_EVENT_DVS=
VMWARE_ALARM_TRIGGERINGSUMMARY=Event: VM reconfigured (224045)
VMWARE_ALARM_OLDSTATUS=Gray
VMWARE_ALARM_ID=alarm-302
VMWARE_ALARM_EVENT_DATACENTER=NUC-Datacenter
VMWARE_ALARM_EVENT_NETWORK=
VMWARE_ALARM_DECLARINGSUMMARY=([Event alarm expression: VM reconfigured; Status = Red])
VMWARE_ALARM_EVENT_DATASTORE=
VMWARE_ALARM_TARGET_ID=vm-606
VMWARE_ALARM_NAME=00 - Collect all ESXi Host Logs
VMWARE_ALARM_EVENT_USERNAME=VGHETTO.LOCAL\Administrator
VMWARE_ALARM_EVENTDESCRIPTION=Reconfigured DummyVM on 192.168.1.190 in NUC-Datacenter
VMWARE_ALARM_TARGET_NAME=DummyVM
VMWARE_ALARM_EVENT_COMPUTERESOURCE=Non-VSAN-Cluster
VMWARE_ALARM_EVENT_HOST=192.168.1.190
VMWARE_ALARM_NEWSTATUS=Red
VMWARE_ALARM_EVENT_VM=DummyVM
VMWARE_ALARM_ALARMVALUE=Event details

The variable that we care about in this particular example is VMWARE_ALARM_EVENT_COMPUTERESOURCE, as this provides us with the name of the vSphere Cluster whose ESXi hosts we will need to iterate over to generate the ESXi support bundles. I have created the following pyvmomi script called generate_esxi_support_bundle_from_vsphere_cluster.py which you will need to download and upload to your VCSA. The script has been modified so that the credentials are stored within the script (lines 44 and 50) instead of being prompted for on the command-line (you can easily tweak this for testing purposes). By default, the script will store the logs under /storage/log, but if you wish to place them elsewhere, you can modify line 55.

Note: If you wish to manually test this script before implementing the vCenter Alarm, you will probably want to comment out line 105 and manually specify the vSphere Cluster on line 106.
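As a point of reference, picking up the cluster name that vCenter hands to the script is just a standard environment variable lookup. Here is a tiny sketch (the variable name comes from the listing above; the fallback value is only a placeholder for manual testing):

import os

# Set by vCenter when the alarm fires; fall back to a hard-coded cluster
# name (placeholder) when running the script by hand for testing.
cluster_name = os.environ.get('VMWARE_ALARM_EVENT_COMPUTERESOURCE', 'Non-VSAN-Cluster')
print('Cluster passed from VC Alarm: %s' % cluster_name)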

Below are the instructions for configuring the vCenter Alarm and the Python script it will run:

Step 1 - Upload the generate_esxi_support_bundle_from_vsphere_cluster.py script to your VCSA (I stored it in /root) and make it executable by running the following command:

chmod +x generate_esxi_support_bundle_from_vsphere_cluster.py

Step 2 - Create a new vCenter Alarm under a vSphere Cluster which has some ESXi hosts attached. The trigger event will be VM reconfigured and the alarm action will be the path to the script such as /root/generate_esxi_support_bundle_from_vsphere_cluster.py as shown in the screenshot below.

[Screenshot: run-script-from-vcenter-alarm-vcsa-0]
Step 3 - Go ahead and modify the VM to trigger the alarm. I found the easiest way with the fewest clicks is to just update the VM's annotation 😉 If everything was configured successfully, you should see the vCenter Alarm activate as well as a task to start the log bundle collection. Since the script is called non-interactively, if you want to see its progress you can log in to the VCSA; all details from the script are logged to its own log file at /var/log/vcenter_alarms.log.

Here is a snippet of what you would see:

2016-06-05 17:37:54;INFO;Cluster passed from VC Alarm: Non-VSAN-Cluster
2016-06-05 17:37:54;INFO;Generating support bundle
2016-06-05 17:40:55;INFO;Downloading https://192.168.1.190:443/downloads/vmsupport-527c584f-f4cf-cd6d-fc3e-963a7a375bc9.tgz to /storage/log/esxi-support-logs/vmsupport-192.168.1.190.tgz
2016-06-05 17:40:55;INFO;Downloading https://192.168.1.191:443/downloads/vmsupport-521774a8-4fba-7ce4-42a5-e2f0a244f237.tgz to /storage/log/esxi-support-logs/vmsupport-192.168.1.191.tgz
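If you want your own script to write entries in this same semicolon-delimited style, a basic logging configuration along these lines would do it (the file path and format are simply inferred from the output above):

import logging

# Log to the same file the alarm script uses, in "timestamp;LEVEL;message" form
logging.basicConfig(filename='/var/log/vcenter_alarms.log',
                    level=logging.INFO,
                    format='%(asctime)s;%(levelname)s;%(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S')
logging.info('Cluster passed from VC Alarm: %s', 'Non-VSAN-Cluster')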

Hopefully this gives you an idea of how you might run custom scripts triggered from a vCenter Alarm from within the VCSA. With a bit of vSphere API knowledge and the fact that pyvmomi is installed on the VCSA by default, you can create some pretty powerful workflows triggered from various events using a vCenter Alarm in the VCSA!

Categories // Automation, PowerCLI, VCSA Tags // alarm, python, pyVmomi, vcenter server appliance, VCSA, vSphere API

Instant Clone PowerCLI cmdlets Best Practices & Troubleshooting

08.06.2015 by William Lam // Leave a Comment

I was fortunate to have been given early access to the VMFork (Instant Clone) PowerCLI cmdlets to help provide early feedback and usability improvements before they were released to customers. Having spent some time with the Fling, I have learned a thing or two about how Instant Cloning works and some of the caveats or gotchas when creating the customization scripts that are used as part of the Instant Clone workflows. I wanted to put together a quick reference of some of my findings as well as other recommendations from Engineering, who have worked closely with the Instant Clone feature.

The idea is to have this as a living document which I will update as new tips and tricks are identified.

Best Practices

  • Ensure VMware Tools is installed inside the guestOS; this is also a good time to make sure you are running the latest version
  • Both Pre/Post Customization scripts are uploaded to /var/tmp by the Enable-InstantCloneVM cmdlet
  • Do not delete Child VMs directly on ESXi; manage them through vCenter Server. There is currently a known issue in which deleting Child VMs will also delete the Parent VM's disk
  • Additional custom variables can be passed to the post-customization script by adding to the -ConfigParams array of variables.
    • An example could be passing in two custom properties called "foo" and "bar" which would look like:

    @{foo = "val1";bar ="val2"}

    • To retrieve the variable "foo" and "bar" from within the post-customization script, you would issue the following commands:

    vmtoolsd --cmd "info-get guestinfo.fork.foo"
    vmtoolsd --cmd "info-get guestinfo.fork.bar"

  • A Forked Child VM will also have a duplicate MAC Address which needs to be updated as it is not automatically picked up.
      • You can manually set it by retrieving guestinfo.fork.ethernet0.address within the post-customization script.
      • An easier way would be to reload the network driver based on the guestOS type. On a Linux system, you can use the modprobe command like the following (Submitted by George Hicken):

    modprobe -r vmxnet3;modprobe vmxnet3

  • A Forked Child VM may also have identical kernel entropy pools, which means semi-predictable RNG, possibly including TCP sequence numbers (Submitted by George Hicken)
  • A Forked Child VM's system clock may also be out of date (until you call hwclock --hctosys or similar), which can cause problems with the ordering of file timestamps (Submitted by George Hicken)
  • Shared host keys (if you are using a PKI system) or identical asset identifiers (in the case of Windows and any sort of AD infrastructure) would also need to be either removed beforehand or updated after a Child VM is created (Submitted by George Hicken)
  • Instant Cloning Nested ESXi has been a bit tricky due to a known issue with the VMware Tools for Nested ESXi. I have found that manually preparing the guest prior to Instant Cloning has yielded better results. For more information on how to Instant Clone Nested ESXi, check out the blog post here
  • Powering off the Parent VM means that the VM is no longer quiesced, which also means that new Child VMs cannot be instantiated until all existing Forked Child VMs have been powered off and the Parent VM has been re-quiesced
  • If you plan on downloading or installing additional software packages on the Parent VM, it is recommended that you perform that operation directly in the VM and not within the pre-customization script. I have noticed that if pre-customization takes too long, the quiesce operation eventually fails even though the operations within the pre-customization script executed successfully.
  • To ensure Forked Child VMs do not contain duplicate disk IDs from the Parent VM, such as when setting up a VSAN environment using Instant Clone Nested ESXi, add the disks after the Forked Child VMs have been created.
  • For additional OS Customization Scripts, be sure to check out the Instant Clone community customization script repository and consider contributing back scripts that you have developed.
  • When you hard reset or power off a Child VM, it will respawn from the Parent; soft resets will not respawn (Submitted by Alan Renouf)

Troubleshooting

  • Instant Clone guestOS logs are stored in /var/tmp/quiesce.log

    [Screenshot: vmfork-logs]

  • Consider enabling tracing within your customization scripts. An example of this for a shell script is using:

    set -x

  • Add additional echo or print statements marking the start and end of certain sections, such as the Pre/Post customization scripts, which can aid in reviewing the Instant Clone logs as seen in the screenshot above
  • For Instant Cloning Nested ESXi guestOSes, I recommend taking a snapshot after you have prepared the guest and removed any system-specific information. This allows you to quickly revert back to a known state for ease of debugging. I found it very useful to be able to start back at a known clean state while developing the customization scripts for Instant Cloning Nested ESXi
  • A known issue mentioned in the documentation of the Instant Clone cmdlets is that after enabling a Parent VM for Instant Cloning, it is no longer available for migration to another ESXi host. The reason for this is that after powering off the VM, the "parentEnabled" boolean flag is still set to "true", which prevents the migration. Currently there is no workaround, but hopefully this will be resolved in a future update of the cmdlets. You can see this by running the following PowerCLI snippet:

    (Get-VM "MyParentVM").ExtensionData.Config.ForkConfigInfo

 

Categories // Automation, PowerCLI, vSphere 6.0 Tags // Fling, instant clone, vmfork, vmtoolsd, vSphere 6.0

Using PowerCLI to invoke Guest Operations API to a Nested ESXi VM

07.14.2015 by William Lam // 1 Comment

In my opinion, the Guest Operations API in vSphere is still one of the most powerful Virtual Machine capabilities available to vSphere Administrators and anyone else who integrates with the vSphere Platform. The Guest Operations API allows users to perform guest operations (running commands, transferring files, etc.) directly within the guestOS as if they were logged in. Valid guest credentials are still required, and once authenticated, the operations are proxied through VMware Tools. Networking is not even required, which makes this a handy feature for troubleshooting, and it can even extend into application-level provisioning through a single API.

Obviously, I am a huge fan of this capability and have used it myself on more than one occasion. However, one limitation that I discovered a while back when using the Guest Operations API with Nested ESXi VMs is that it threw some very strange memory-related errors. It was only recently that I found out there is a known issue with the VMware Tools for Nested ESXi, both the installable VIB and the pre-installed version in ESXi 6.0, in how the guest operations are executed. The good news is that for now, there is a simple workaround that can be applied when using the Guest Operations API.

You will need to add the following option, which runs the command under a specific resource group in the ESXi Shell:

'++group=host/vim/tmp'

Here is an example if I were to run the 'echo' command:

/bin/echo '++group=host/vim/tmp' "hello world"

A more interesting example would be to issue ESXCLI commands using the Guest Operations API, perhaps configuring the welcome message?

/bin/python '++group=host/vim/tmp' '/bin/esxcli.py system welcomemsg set -m "vGhetto Was Here"'

Notice that we need to pass the resource group option to the "python" binary rather than the ESXCLI binary. The reason for this is that /bin/esxcli is really just a symlink to /bin/esxcli.py, which is a Python wrapper. The actual command being launched is the Python interpreter, and the arguments to the command are /bin/esxcli.py plus the ESXCLI arguments themselves.
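Although the script discussed below is written in PowerCLI, the same Guest Operations call can be illustrated with pyvmomi. Here is a rough sketch (purely illustrative; the vCenter hostname, VM name and guest credentials are placeholders) of launching the Python interpreter inside a Nested ESXi VM with the resource group workaround:

# Rough pyvmomi sketch - run an ESXCLI command inside a Nested ESXi VM via the
# Guest Operations API. vCenter hostname, VM name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vghetto.local',
                  pwd='MySecretPassword', sslContext=ctx)
content = si.RetrieveContent()

# Find the Nested ESXi VM by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'Nested-ESXi-VM')

# Guest credentials for the ESXi Shell inside the Nested ESXi VM
creds = vim.vm.guest.NamePasswordAuthentication(username='root', password='MySecretPassword')
pm = content.guestOperationsManager.processManager

# Launch the python interpreter with the '++group=host/vim/tmp' workaround,
# which in turn runs /bin/esxcli.py with the actual ESXCLI arguments
esxcli_cmd = '/bin/esxcli.py system welcomemsg set -m "vGhetto Was Here"'
spec = vim.vm.guest.ProcessManager.ProgramSpec(
    programPath='/bin/python',
    arguments="'++group=host/vim/tmp' '%s'" % esxcli_cmd)
pid = pm.StartProgramInGuest(vm=vm, auth=creds, spec=spec)
print('Started guest process with PID %d' % pid)

Disconnect(si)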

For those who prefer to consume the Guest Operations API without having to use the vSphere API directly, you can use PowerCLI and its Invoke-VMScript cmdlet. The problem is that, due to the way the cmdlet has been abstracted, the necessary underlying API details cannot be accessed because of certain assumed defaults which cannot be overridden or extended. This is a general problem that I have seen on more than one occasion, where the abstraction is meant to make consuming the API simpler but in certain cases can also inhibit the use of the underlying API feature.

In this case, we will actually need to call into the vSphere API, and using PowerCLI as an example, I have created a script called runGuestOpsInNestedESXiVM.ps1 which implements the specific Guest Operations APIs to issue commands to a Nested ESXi VM.

Here is an example of running the script and configuring the welcome message using ESXCLI:

[Screenshot: guest_operations_api_nested_esxi]

Categories // Automation, ESXi, PowerCLI, vSphere, vSphere 6.0 Tags // guest operations, nested, nested virtualization, vix, vix api, vmware tools

