Using the vSphere API to remotely generate ESXi performance support bundles

06.14.2016 by William Lam // 2 Comments

This is a follow-up to my previous article on how to run a script from a vCenter Alarm action in the vCenter Server Appliance (VCSA). In that post, I demonstrated a pyvmomi (vSphere SDK for Python) script that is triggered automatically to generate a VMware Support Bundle for all ESXi hosts in a given vSphere Cluster. The one caveat I mentioned was that the solution differed slightly from the original request, which was to create an ESXi performance support bundle. The vSphere API only supports creating the generic VMware Support Bundle, which may not be as useful if you are only interested in collecting more granular performance stats for troubleshooting purposes.

After publishing the article, I thought about the problem a bit more and realized there is still a way to solve the original request. Before going into the solution, I want to quickly cover how you can generate an ESXi Performance Support Bundle. It can be done directly in the ESXi Shell using something like the following:

vm-support -p -d 60 -i 5 -w /vmfs/volumes/datastore1

or you can use a neat little trick which I blogged about back in 2011, where you simply open a web browser and request the following URL:

https://esxi-1.primp-industries.com/cgi-bin/vm-support.cgi?performance=true&interval=5&duration=60

Obviously, the first option is not ideal, as you would need to SSH (generally disabled as a good security practice) to each and every ESXi host, manually run the command, and copy the support bundle off of each system. The second option still requires going to each and every ESXi host; however, it does not require ESXi Shell or SSH access. This is still not ideal from an automation standpoint, especially if these ESXi hosts are already being managed by vCenter Server.

However, the second option is what gave me the lightbulb idea! I recalled that a couple of years back I had blogged about a way to efficiently transfer files to a vSphere Datastore using the vSphere API. That solution leveraged a neat little vSphere API method called AcquireGenericServiceTicket(), which is part of the vCenter Server sessionManager. Using this method, we can request a ticket for a specific file with a one-time HTTP request to connect directly to an ESXi host. This means I can connect to vCenter Server using the vSphere API, retrieve all ESXi hosts from a given vSphere Cluster, request a one-time ticket per host to remotely generate an ESXi performance support bundle, and then download it locally to the VCSA (or any other place where you can run the pyvmomi sample).
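Below is a minimal sketch of what this ticket workflow looks like. It assumes an existing pyvmomi connection (si) to vCenter Server and the requests library; the hostname, paths and parameter values are illustrative only, and this is not the exact code from the downloadable script.

import requests
from pyVmomi import vim

def fetch_perf_bundle(si, esxi_host, dest, duration=60, interval=5):
    # Build the vm-support.cgi URL for a performance bundle on the target host
    url = ("https://%s/cgi-bin/vm-support.cgi?performance=true"
           "&interval=%d&duration=%d" % (esxi_host, interval, duration))
    # Ask vCenter for a one-time ticket scoped to this exact request
    spec = vim.SessionManager.HttpServiceRequestSpec(method='httpGet', url=url)
    ticket = si.content.sessionManager.AcquireGenericServiceTicket(spec)
    # Present the ticket id as a cookie; the ESXi host generates the bundle
    # and streams it back without requiring separate host credentials
    resp = requests.get(url, headers={'Cookie': 'vmware_cgi_ticket=%s' % ticket.id},
                        verify=False, stream=True)
    with open(dest, 'wb') as f:
        for chunk in resp.iter_content(chunk_size=65536):
            f.write(chunk)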

Download the pyvmomi script: generate_esxi_perf_bundle_from_vsphere_cluster.py

Here is the sample log output when triggering this script from a vCenter Alarm in the VCSA:

2016-06-08 19:29:42;INFO;Cluster passed from VC Alarm: Non-VSAN-Cluster
2016-06-08 19:29:42;INFO;Creating directory /storage/log/esxi-support-logs to store support bundle
2016-06-08 19:29:42;INFO;Requesting Session Ticket for 192.168.1.190
2016-06-08 19:29:42;INFO;Waiting for Performance support bundle to be generated on 192.168.1.190 to /storage/log/esxi-support-logs/vmsupport-192.168.1.190.tgz
2016-06-08 19:33:19;INFO;Requesting Session Ticket for 192.168.1.191
2016-06-08 19:33:19;INFO;Waiting for Performance support bundle to be generated on 192.168.1.191 to /storage/log/esxi-support-logs/vmsupport-192.168.1.191.tgz

I have also created a couple more scripts exercising some additional use cases that I think customers may also find useful. Stay tuned for those additional articles later this week.

UPDATE (06/16/16) - There was another question internally asking whether other types of ESXi Support Bundles could also be generated using this method, and the answer is yes. You simply need to specify the types of manifests you would like to collect, such as HungVM for example.

To list the available Manifests and their respective IDs, you can manually perform this operation once by opening a browser and specifying the following URL:

https://192.168.1.149/cgi-bin/vm-support.cgi?listmanifests=true

To list the available Groups and their respective IDs, you can specify the following URL:

https://192.168.1.149/cgi-bin/vm-support.cgi?listgroups=true

Here is an example URL constructed using some of these params:

https://192.168.1.149/cgi-bin/vm-support.cgi?manifests=HungVM:Coredump_VM%20HungVM:Suspend_VM&groups=Fault%20Hardware%20Logs%20Network%20Storage%20System%20Userworld%20Virtual&vm=FULL_PATH_TO_VM_VMX

The following VMware KB 2005715 may also be useful, as it provides some additional examples of using these parameters.
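If you are constructing these URLs from a script, the parameters can be assembled with the Python standard library. The helper below is hypothetical and simply reproduces the URL format shown above; the host, manifest and group names are taken from the earlier example.

try:
    from urllib import quote          # Python 2
except ImportError:
    from urllib.parse import quote    # Python 3

def build_vm_support_url(host, manifests=None, groups=None, vm_vmx_path=None):
    # Hypothetical helper: assembles a vm-support.cgi URL from the given pieces
    params = []
    if manifests:
        # Multiple manifests/groups are space separated (%20 when encoded)
        params.append('manifests=' + quote(' '.join(manifests), safe=':'))
    if groups:
        params.append('groups=' + quote(' '.join(groups)))
    if vm_vmx_path:
        params.append('vm=' + quote(vm_vmx_path))
    return 'https://%s/cgi-bin/vm-support.cgi?%s' % (host, '&'.join(params))

print(build_vm_support_url('192.168.1.149',
                           manifests=['HungVM:Coredump_VM', 'HungVM:Suspend_VM'],
                           groups=['Fault', 'Hardware', 'Logs', 'Network', 'Storage']))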

Categories // Automation, ESXi, vSphere Tags // ESXi, performance, python, pyVmomi, support bundle, vm-support, vm-support.cgi, vSphere API

How to run a script from a vCenter Alarm action in the VCSA?

06.07.2016 by William Lam // 9 Comments

As more and more customers transition from vCenter Server for Windows to the vCenter Server Appliance (VCSA), a question I get from time to time is: how do I run a script using a vCenter Alarm from the VCSA? Traditionally, customers would create and run Windows-specific scripts, such as a simple batch script or a more powerful PowerShell/PowerCLI script. However, since the VCSA is a Linux-based operating system, these Windows-specific scripts would no longer function, at least not out of the box.

There are a couple of options:

  1. If you have quite a few scripts and you prefer not to re-write them to run on the VCSA, then you can instead have them executed on a remote system.
    • Instead of running the Windows scripts on the VCSA itself, a simplified script could be created to remotely call into another system, which would then run the actual scripts. An example of this could be creating a Python script that uses WMI to remotely run a script on a Windows system.
    • Similar to the previous option, instead of running a local script you could send an SNMP trap to a remote system, which would then run the actual Windows scripts. Here is an example using vCenter Orchestrator, which has a PowerCLI plugin that can be triggered by receiving an SNMP trap.
  2. Re-write or create new scripts that run in the VCSA. You can leverage pyvmomi (vSphere SDK for Python), which is already included in the VCSA. This allows you to access the vSphere API just as you would with PowerCLI on a vCenter Server for Windows, if you had it installed. A minimal connection sketch is shown below.
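Since pyvmomi ships with the VCSA, a script can connect to the local vCenter Server in just a few lines. The sketch below is illustrative only; the hostname and credentials are placeholders, and certificate verification is disabled for lab use.

import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder values - replace with your own vCenter details
context = ssl._create_unverified_context()  # lab use only; trust the VC cert in production
si = SmartConnect(host='vcenter.primp-industries.com',
                  user='administrator@vghetto.local',
                  pwd='VMware1!',
                  sslContext=context)
print('Connected to %s' % si.content.about.fullName)
Disconnect(si)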

Given there are already multiple ways of achieving #1 and I am sure there are other options if you just did a quick Google search, I am going to focus on #2 which will demonstrate how to create a pyvmomi script that can be triggered by a vCenter Alarm action running on the VCSA.

Disclaimer: It is generally recommended that you do not install or run additional tools or scripts on the vCenter Server (Windows or VCSA) itself, as they may potentially impact the system. If possible, a remote system using programmatic APIs to provide centralized management and configuration is preferred, as it also limits the number of users that have direct access to the vCenter Server.

The following example is based on a real use case from one of our customers. The requirement was to automatically generate and collect an ESXi performance bundle (part of the support bundle) for all hosts in a given vSphere Cluster when a specific vCenter Alarm is triggered. In this example, we will create a simple vCenter Alarm on our vSphere Cluster that triggers our script when a particular VM is modified, as this is probably one of the easiest ways to test the alarm itself.

UPDATE (06/14/16) - For generating an ESXi performance support bundle, please have a look at this article, which can then be applied to this article in terms of running the script in the VCSA.

When a vCenter Alarm is triggered, a bunch of metadata is passed from the vCenter Alarm itself and stored in a set of VMware-specific environment variables, which you can find more details about in the documentation here. These environment variables can then be accessed and used directly from within the script itself. Here is an example of what the environment variables look like for my vCenter Alarm, which triggers off of the VM reconfigured event:

VMWARE_ALARM_EVENT_DVS=
VMWARE_ALARM_TRIGGERINGSUMMARY=Event: VM reconfigured (224045)
VMWARE_ALARM_OLDSTATUS=Gray
VMWARE_ALARM_ID=alarm-302
VMWARE_ALARM_EVENT_DATACENTER=NUC-Datacenter
VMWARE_ALARM_EVENT_NETWORK=
VMWARE_ALARM_DECLARINGSUMMARY=([Event alarm expression: VM reconfigured; Status = Red])
VMWARE_ALARM_EVENT_DATASTORE=
VMWARE_ALARM_TARGET_ID=vm-606
VMWARE_ALARM_NAME=00 - Collect all ESXi Host Logs
VMWARE_ALARM_EVENT_USERNAME=VGHETTO.LOCAL\Administrator
VMWARE_ALARM_EVENTDESCRIPTION=Reconfigured DummyVM on 192.168.1.190 in NUC-Datacenter
VMWARE_ALARM_TARGET_NAME=DummyVM
VMWARE_ALARM_EVENT_COMPUTERESOURCE=Non-VSAN-Cluster
VMWARE_ALARM_EVENT_HOST=192.168.1.190
VMWARE_ALARM_NEWSTATUS=Red
VMWARE_ALARM_EVENT_VM=DummyVM
VMWARE_ALARM_ALARMVALUE=Event details

The variable that we care about in this particular example is VMWARE_ALARM_EVENT_COMPUTERESOURCE, as this provides us with the name of the vSphere Cluster whose ESXi hosts we will need to iterate over to generate the ESXi support bundles. I have created the following pyvmomi script called generate_esxi_support_bundle_from_vsphere_cluster.py, which you will need to download and upload to your VCSA. The script has been modified so that the credentials are stored within the script (lines 44 and 50) instead of being prompted for on the command line (you can easily tweak it for testing purposes). By default, the script will store the logs under /storage/log, but if you wish to place them elsewhere, you can modify line 55.
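For reference, here is a small sketch of how a script can pick up the cluster name that the alarm passes in. The variable name is the one shown above; everything else is illustrative and not taken from the downloadable script.

import os

cluster_name = os.environ.get('VMWARE_ALARM_EVENT_COMPUTERESOURCE')
if not cluster_name:
    raise SystemExit('No cluster name found - was this run from a vCenter Alarm?')
print('Cluster passed from VC Alarm: %s' % cluster_name)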

Note: If you wish to manually test this script before implementing the vCenter Alarm, you will probably want to comment out line 105 and manually specify the vSphere Cluster on line 106.

Below are the instructions on configuring the vCenter Alarm and the python script to run:

Step 1 - Upload the generate_esxi_support_bundle_from_vsphere_cluster.py script to your VCSA (I stored it in /root) and make it executable by running the following command:

chmod +x generate_esxi_support_bundle_from_vsphere_cluster.py

Step 2 - Create a new vCenter Alarm under a vSphere Cluster which has some ESXi hosts attached. The trigger event will be VM reconfigured and the alarm action will be the path to the script such as /root/generate_esxi_support_bundle_from_vsphere_cluster.py as shown in the screenshot below.

[Screenshot: run-script-from-vcenter-alarm-vcsa-0]
Step 3 - Go ahead and modify the VM to trigger the alarm. I found the easiest way, with the least number of clicks, is to just update the VM's annotation 😉 If everything was configured successfully, you should see the vCenter Alarm activate as well as a task to start the log bundle collection. Since the script is called non-interactively, if you want to see its progress you can log in to the VCSA; all details from the script are logged to its own log file at /var/log/vcenter_alarms.log.

Here is a snippet of what you would see:

2016-06-05 17:37:54;INFO;Cluster passed from VC Alarm: Non-VSAN-Cluster
2016-06-05 17:37:54;INFO;Generating support bundle
2016-06-05 17:40:55;INFO;Downloading https://192.168.1.190:443/downloads/vmsupport-527c584f-f4cf-cd6d-fc3e-963a7a375bc9.tgz to /storage/log/esxi-support-logs/vmsupport-192.168.1.190.tgz
2016-06-05 17:40:55;INFO;Downloading https://192.168.1.191:443/downloads/vmsupport-521774a8-4fba-7ce4-42a5-e2f0a244f237.tgz to /storage/log/esxi-support-logs/vmsupport-192.168.1.191.tgz
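The semicolon-delimited entries above can be produced with a standard Python logging configuration along these lines (a sketch only; the actual script may set this up differently):

import logging

logging.basicConfig(filename='/var/log/vcenter_alarms.log',
                    level=logging.INFO,
                    format='%(asctime)s;%(levelname)s;%(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S')
logging.info('Cluster passed from VC Alarm: %s', 'Non-VSAN-Cluster')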

Hopefully this gives you an idea of how you might run custom scripts triggered from a vCenter Alarm from within the VCSA. With a bit of vSphere API knowledge and the fact that pyvmomi is installed on the VCSA by default, you can create some pretty powerful workflows triggered from various events using a vCenter Alarm in the VCSA!

Categories // Automation, PowerCLI, VCSA Tags // alarm, python, pyVmomi, vcenter server appliance, VCSA, vSphere API

Getting Started with Tech Preview of Docker Volume Driver for vSphere

05.31.2016 by William Lam // 8 Comments

A couple of weeks ago, I got an early sneak peek at some of the work being done in VMware's Storage and Availability Business Unit (SABU) on providing storage persistence for Docker Containers in a vSphere-based environment. Today, VMware has open sourced a new Docker Volume Driver for vSphere (Tech Preview) that will enable customers to easily take advantage of their existing vSphere Storage (VSAN, VMFS and NFS) and provide persistent storage access to Docker Containers running on top of the vSphere platform. Both Developers and vSphere Administrators will have familiar interfaces for managing and interacting with these Docker Volumes from vSphere, which we will explore further below.

The new Docker Volume Driver for vSphere consists of two components. The first is the vSphere Docker Volume Plugin, which is installed inside the Docker Host (VM) and allows you to instantiate new Docker Volumes. The second is the vSphere Docker Volume Driver, which is installed on the ESXi host and handles the VMDK creation and the mapping of Docker Volume requests back to the Docker Hosts. If you have shared storage across your ESXi hosts, a VM on one ESXi host can create a Docker Volume and a completely different VM on another ESXi host can mount the exact same Docker Volume. Below is a diagram to help illustrate the different components that make up the Docker Volume Driver for vSphere.

[Diagram: docker-volume-driver-for-vsphere-00]
Below is a quick tutorial on how to get started with the new Docker Volume Driver for vSphere.

Pre-Requisites

  • vSphere ESXi 6.0+
  • vSphere Storage (VSAN, VMFS or NFS) for ESXi host (shared storage required for multi-ESXi host support)
  • Docker Host (VM) running Docker 1.9+ (recommend using VMware Photon 1.0 RC OVA but Ubuntu 10.04 works as well)

Getting Started

Step 1 - Download the vSphere Docker Volume Plugin (RPM or DEB) and vSphere Docker Volume Driver VIB for ESXi

Step 2 - Install the vSphere Docker Volume Driver VIB on the ESXi host by SCP'ing the VIB to the host and then running the following command, specifying the full path to the VIB:

esxcli software vib install -v /vmware-esx-vmdkops-0.1.0.tp.vib -f
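To confirm the VIB installed successfully, you can list the installed VIBs and filter on the driver name (the filter string below is simply a substring of the VIB file name and is given as an example):

esxcli software vib list | grep vmdkops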

Step 3 - Install the vSphere Docker Volume Plugin by SCP'ing the RPM or DEB file to your Docker Host (VM) and then running one of the following commands:

rpm -ivh docker-volume-vsphere-0.1.0.tp-1.x86_64.rpm
dpkg -i docker-volume-vsphere-0.1.0.tp-1.x86_64.deb


Creating Docker Volumes on vSphere (Developer)

To create your first Docker Volume on vSphere, a Developer only needs access to a Container Host (VM), such as PhotonOS, that has the vSphere Docker Volume Plugin installed. They then use the familiar Docker CLI to create a Docker Volume as they normally would; there is nothing they need to know about the underlying infrastructure.

Run the following command to create a new Docker Volume called vol1 with a capacity of 10GB using the new vmdk driver:

docker volume create --driver=vmdk --name=vol1 -o size=10gb

We can list all the Docker Volumes that are available by running the following command:

docker volume ls

We can also inspect a specific Docker Volume by running the following command and specifying the name of the volume:

docker volume inspect vol1

Let's actually do something with this volume now by attaching it to a simple Busybox Docker Container using the following command:

docker run --rm -it -v vol1:/mnt/volume1 busybox

[Screenshot: docker-volume-driver-for-vsphere-4]
As you can see from the screenshot above, I have now successfully accessed the Docker Volume that we created earlier and I am able to write to it. If you have another VM that resides on the same underlying shared storage, you can also mount the Docker Volume you just created from that other system.
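For example, on a second Docker Host (VM) backed by the same shared datastore and with the plugin installed, the volume should already be visible and can be mounted with the same commands shown earlier:

docker volume ls
docker run --rm -it -v vol1:/mnt/volume1 busybox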

Pretty straightforward and easy, right? Happy Developers 🙂

Managing Docker Volumes on vSphere (vSphere Administrator)

For the vSphere Administrators, you must be wondering: did I just give my Developers full access to the underlying vSphere Storage to consume as much of it as they like? Of course not, we have not forgotten about our VI Admins, and we have some tools to help. Today, there is a CLI utility located at /usr/lib/vmware/vmdkops/bin/vmdkops_admin.py which runs directly in the ESXi Shell (hopefully this will turn into an API in the future) and provides visibility into how much storage is being consumed (provisioned and used) by the individual Docker Volumes, as well as who is creating them and their respective Virtual Machine mappings.

Let's take a look at a quick example by logging into the ESXi Shell. To view the list of Docker Volumes that have been created, run the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls

You should see the name of the Docker Volume that we created earlier and the respective vSphere Datastore to which it was provisioned. At the time of writing, these are the only two properties displayed out of the box. You can add additional columns by simply using the -c option, as in the following command:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py ls -c volume,datastore,created-by,policy,attached-to,capacity,used

Now we get a bunch more information, such as which VM created the Docker Volume, the BIOS UUID of the VM the Docker Volume is currently attached to, the VSAN VM Storage Policy that was used (applicable to VSAN environments only), and the provisioned and used capacity. In my opinion, this should be the default set of columns, and this is feedback I have given to the team, so perhaps it will be the default when the Tech Preview is released.

One thing to be aware of is that the Docker Volumes (VMDKs) are automatically provisioned onto the same underlying vSphere Datastore as the Docker Host VM (which makes sense, given it needs to be able to access them). In the future, it may be possible to specify where you want your Docker Volumes to be provisioned. If you have any feedback on this, be sure to leave a comment on the Issues page of the GitHub project.

Docker Volume Role Management

Although not yet implemented in the Tech Preview, it looks like VI Admins will also have the ability to create Roles that restrict the types of Docker Volume operations a given set of VMs can perform, as well as the maximum amount of storage that can be provisioned.

Here is an example of what the command would look like:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py role create --name DevLead-Role --volume-maxsize 100GB --rights create,delete,mount --matches-vm photon-docker-host-*

Docker Volume VSAN VM Storage Policy Management

Since VSAN is one of the supported vSphere Storage backends for the new Docker Volume Driver, VI Admins will also have the ability to create custom VSAN VM Storage Policies that can then be specified during Docker Volume creation. Let's take a look at how this works.

To create a new VSAN Policy, you will need to specify the name of the policy and provide the set of VSAN capabilities, formatted using the same syntax found in the esxcli vsan policy getdefault command. Here is a mapping of the VSAN capability descriptions to their attribute keys:

VSAN Capability Description           VSAN Capability Key
Number of failures to tolerate        hostFailuresToTolerate
Number of disk stripes per object     stripeWidth
Force provisioning                    forceProvisioning
Object space reservation              proportionalCapacity
Flash read cache reservation          cacheReservation

Run the following command to create a new VSAN Policy called FTT=0, which sets Failures to Tolerate to 0 and Force Provisioning to true:

/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy create --name FTT=0 --content '(("hostFailuresToTolerate" i0) ("forceProvisioning" i1))'

If we now go back to our Docker Host, we can create a second Docker Volume called vol2 with a capacity of 20GB, this time also specifying our new VSAN Policy called FTT=0, by running the following command:

docker volume create --driver=vmdk --name=vol2 -o size=20gb -o vsan-policy-name=FTT=0

We can also easily see which VSAN Policies are in use by simply listing all policies. The original post showed this in a screenshot; the command below is a presumed example based on the tool's other subcommands and may differ in your build:
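/usr/lib/vmware/vmdkops/bin/vmdkops_admin.py policy ls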
All VSAN Policies and Docker Volumes (VMDKs) that are created are stored under a folder called dockvols in the root of the vSphere Datastore, as shown in the screenshot below.

[Screenshot: docker-volume-driver-for-vsphere-8]
Hopefully this gave you a nice overview of what the Docker Volume Driver for vSphere can do in its first release. Remember, this is still a Tech Preview, and our Engineers would love to get your feedback on the things you like, new features you would like to see, or things that we can improve on. The project is on GitHub, which you can visit here, and if you have any questions or run into bugs, be sure to submit an issue here or contribute back!

Categories // Automation, Cloud Native, Docker, ESXi, VSAN, vSphere Tags // cloud native apps, container, Docker, docker volume, ESXi, nfs, vmdkops_admin.py, vmfs, VSAN

