Getting started with the new VSAN 6.2 Management API

03.17.2016 by William Lam // 4 Comments

As I have previously written, with the release of VSAN 6.2 (vSphere 6.0 Update 2), a new VSAN Management API has been introduced that allows developers, partners and administrators to automate all aspects of VSAN functionality, including the complete lifecycle (install, upgrade, patch), monitoring (including RVC and VSAN Observer capabilities), configuration and troubleshooting. Simply put, anything that you can do from the vSphere Web Client UI or the RVC CLI from a VSAN standpoint, you will be able to completely automate using one of the four new VSAN Management SDKs: Python, Ruby, Java and C#.

In this article, I will show you how to quickly get started with the new VSAN Management API by exercising two of the VSAN Management SDKs: Python and Ruby. Another resource worth bookmarking is the VSAN Management API Reference Guide, which provides more detail on the individual APIs and how they work.

Step 1 - Download the VSAN Management SDK of your choice. You can find the VSAN Management SDK downloads in either of two locations:

  • VMware Developer Center, under the SDK tab
  • vSphere Download page under Automation Tools & SDK(s) Tab

In this example, I will be using the VSAN Management SDK for Python and Ruby.

Step 2 - Extract the VSAN Management SDK zip file, which should give you a directory that contains a README on how to set up the SDK and three folders, as shown in the screenshot below:

[Screenshot: contents of the extracted VSAN Management SDK directory]

The bindings directory contains the language-specific library for the VSAN Management API. The docs folder contains an offline copy of the VSAN Management API Reference Guide, and lastly, the samplecode directory contains a basic sample for connecting to a VSAN Cluster as well as to an individual ESXi host contributing to a VSAN Cluster.
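Based on the folder names referenced above and used in the commands that follow, the extracted layout looks roughly like this (a sketch, using the Python SDK as the example; the top-level directory name and exact file names may vary by SDK release):

vsan-sdk-python/
├── README            (setup instructions)
├── bindings/         (vsanmgmtObjects.py, the language-specific binding)
├── docs/             (offline VSAN Management API Reference Guide)
└── samplecode/       (vsanapisamples.py)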

Step 3 - Each of the VSAN Management SDKs extends the existing vSphere Management SDKs. This means that you will need to have the appropriate vSphere Management SDK installed on your system before you can proceed further. In our example, Python requires pyvmomi (vSphere SDK for Python) and Ruby requires rbvmomi (vSphere SDK for Ruby). If you are on Mac OS X, it is pretty easy to install these packages. Make sure you are running the latest version of these SDKs.

Installing pyvmomi:

pip install pyvmomi

Upgrading pyvmomi (if you already have it installed):

pip install --upgrade pyvmomi

Installing rbvmomi:

gem install rbvmomi
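
Before moving on, it is worth confirming that Python can actually import pyvmomi. Here is a minimal sanity check (my own snippet, not part of the SDK; the printed path will differ on your system):

import os

# an ImportError on either line means pyvmomi is missing or mis-installed
import pyVmomi
from pyVim.connect import SmartConnect

# show which copy of pyvmomi is on the library path
print(os.path.dirname(pyVmomi.__file__))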

Step 4 - Copy the VSAN Management SDK library file over to the samplecode directory.

VSAN Mgmt SDK for Python:

cp bindings/vsanmgmtObjects.py samplecode/

VSAN Mgmt SDK for Ruby:

cp bindings/vsanmgmt.api.rb samplecode/

Step 5 - At this point, we can quickly verify that everything was set up correctly by going into the samplecode directory and running one of the commands below. If everything is working as expected, you should see the usage information printed out. If you do not, the issue is most likely with the vSphere Management SDK: either it is not the latest version, or it is not on the default library path where the sample can access it.

VSAN Mgmt SDK for Python:

python vsanapisamples.py

[Screenshot: usage output of the Python sample]

VSAN Mgmt SDK for Ruby:

ruby vsanapisamples.rb

[Screenshot: usage output of the Ruby sample]

Step 6 - Now that we have verified the VSAN Management SDK installation was successful, we can connect to a real VSAN Cluster. To do so, run the following command and specify your vCenter Server, the credentials, and the name of the VSAN Cluster. If successful, you should see each of your VSAN hosts and their current state, as seen in the screenshot below.

VSAN Mgmt SDK for Python:

python vsanapisamples.py -s 192.168.1.139 -u '*protected email*' -p 'VMware1!' --cluster VSAN-Cluster

[Screenshot: Python sample output against the VSAN Cluster]

VSAN Mgmt SDK for Ruby:

ruby vsanapisamples.rb -o 192.168.1.139 -u '*protected email*' -k -p 'VMware1!' VSAN-Cluster

[Screenshot: Ruby sample output against the VSAN Cluster]

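If you are curious what the Python sample is doing under the hood, the connection pattern looks roughly like the sketch below. To be clear, this is my paraphrase rather than the sample verbatim: the IP address and cluster name come from the example above, the username is a placeholder (it is obfuscated above), the API version string may vary by SDK release, and QueryClusterHealthSummary is just one of several health APIs exposed by the binding.

import atexit
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim, SoapStubAdapter

import vsanmgmtObjects  # importing the SDK binding registers the VSAN types with pyVmomi

# lab-only: skip SSL certificate verification
context = ssl._create_unverified_context()

si = SmartConnect(host='192.168.1.139', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=context)
atexit.register(Disconnect, si)

# find the cluster named in the example above
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == 'VSAN-Cluster')

# the VSAN management services live on a separate /vsanHealth endpoint on
# vCenter Server; build a second stub there and reuse the session cookie
vsanStub = SoapStubAdapter(host='192.168.1.139', port=443, path='/vsanHealth',
                           version='vim.version.version10', sslContext=context)
vsanStub.cookie = si._stub.cookie

# one of the VSAN managed objects: the cluster health system
vhs = vim.cluster.VsanVcClusterHealthSystem('vsan-cluster-health-system', vsanStub)
summary = vhs.QueryClusterHealthSummary(cluster=cluster, includeObjUuids=True,
                                        fetchFromCache=False)
print(summary.overallHealth)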
Step 7 - Each individual ESXi host that participates in the VSAN Cluster also exposes a VSAN Management API endpoint. We can use the exact same sample to connect to one of the hosts to get some additional information. To do so, run the following command and specify your ESXi host along with the credentials.

VSAN Mgmt SDK for Python:

python vsanapisamples.py -s 192.168.1.190 -u root -p vmware123

[Screenshot: Python sample output against an ESXi host]

VSAN Mgmt SDK for Ruby:

ruby vsanapisamples.rb -o 192.168.1.190 -u root -p vmware123

[Screenshot: Ruby sample output against an ESXi host]

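Connecting directly to an ESXi host follows the same pattern, except that the VSAN endpoint on the host is served from /vsan instead of /vsanHealth. Again, this is a sketch under the same assumptions as the previous snippet:

import ssl

from pyVim.connect import SmartConnect
from pyVmomi import vim, SoapStubAdapter

import vsanmgmtObjects

context = ssl._create_unverified_context()
si = SmartConnect(host='192.168.1.190', user='root', pwd='vmware123',
                  sslContext=context)

# on ESXi the VSAN API endpoint lives at /vsan rather than /vsanHealth
esxStub = SoapStubAdapter(host='192.168.1.190', port=443, path='/vsan',
                          version='vim.version.version10', sslContext=context)
esxStub.cookie = si._stub.cookie

# the host-side VSAN health system managed object (MOID per the SDK bindings)
hhs = vim.host.VsanHealthSystem('ha-vsan-health-system', esxStub)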
As you can see, it is pretty straightforward to get the new VSAN Management SDK up and running. The provided sample only scratches the surface of what is possible; for a complete list of capabilities within the new VSAN Management API, be sure to check out the VSAN Management API Reference document. I am really looking forward to seeing what solutions our customers and partners develop using this new API. If you would like to contribute code samples back to the community, or just to find new samples, be sure to check out the VMware Developer Center Sample Exchange.

Categories // Automation, VSAN Tags // python, pyVmomi, rbvmomi, ruby, Virtual SAN, VSAN 6.2, vSphere 6.0 Update 2, vSphere API

Apple Mac Pro 6,1 PCIe SSD issue resolved w/ESXi 6.0 Update 2

03.15.2016 by William Lam // 6 Comments

Early last year, the new Apple Mac Pro 6,1 (aka the black cylinder design) was certified and fully supported on vSphere 6.0, which I had blogged about here. Several months later, customers discovered that some of the newer Mac Pro 6,1 units were shipping with a different model of PCIe SSD device than what was originally released at GA. This was problematic because ESXi was not aware of this newer device and could not detect it during or after installation. Although a workaround was identified for customers looking to install either ESXi 5.x or 6.x on the newer Apple Mac Pros, it definitely was not ideal.

It has taken a bit longer than expected, but the issue has now been resolved with the latest release of ESXi 6.0 Update 2. A similar fix will be available for customers running ESXi 5.5 in a future update. You can find the direct download for ESXi 6.0 Update 2 in the link below, which also includes a pointer to the release notes in case you are interested in the other fixes in this release.

  • vSphere ESXi 6.0u2 - https://my.vmware.com/web/vmware/details?downloadGroup=ESXI60U2&productId=491&rPId=10348

Categories // Apple, ESXi, vSphere Tags // apple, ESXi, mac pro, ssd, vSphere 6.0 Update 2

Quick Tip - iPerf now available on ESXi

03.15.2016 by William Lam // 25 Comments

The other day I was looking to get a baseline of the built-in ethernet adapter of my recently upgraded vSphere home lab running on the Intel NUC. I decided to use iPerf for my testing, a commonly used command-line tool for measuring network performance. I also found a couple of helpful articles on this topic from well-known VMware community members Erik Bussink and Raphael Schitz. Erik's article here outlines how to run the iPerf client/server using a pair of Virtual Machines running on top of two ESXi hosts. Although the overhead of the VMs should be negligible, I was looking for a way to benchmark the ESXi hosts directly. Raphael's article here looked promising, as he found a way to create a custom iPerf VIB which can run directly on ESXi.

I was about to download the custom VIB when I remembered that the VSAN Health Check plugin in the vSphere Web Client also provides some proactive network performance tests that can be run in your environment. I was curious about what tool was being leveraged for this capability, and after a quick search of the ESXi filesystem, I found that it was actually iPerf. The iPerf binary is located at /usr/lib/vmware/vsan/bin/iperf and, from what I can tell, looks to have been bundled as part of ESXi starting with the vSphere 6.0 release.

UPDATE (10/19/23) - As of ESXi 8.x, you may run into the following error when attempting to run iperf:

iperf3: running in appDom(30): ipAddr = ::, port = 5201: Access denied by vmkernel access control policy

To work around this, you can run the following command:

esxcli system secpolicy domain set -n appDom -l disabled

Once you have completed your iPerf test on both the server and client ESXi hosts, re-enable the policy by setting the value back to "enforcing":

esxcli system secpolicy domain set -n appDom -l enforcing

UPDATE (09/20/22) - As of ESXi 7.0 Update 3 (build 20036589, possibly earlier), you no longer need to make a copy of the iperf3 utility. You can simply run it from /usr/lib/vmware/vsan/bin/iperf3, and you also do NOT have to lower security by changing the ESXi Advanced Setting execInstalledOnly to FALSE.

UPDATE (10/02/18) - It looks like iPerf3 is now back in both ESXi 6.5 Update 2 and the upcoming ESXi 6.7 Update 1 release. You can find the iPerf binary under /usr/lib/vmware/vsan/bin/iperf3.

One interesting thing that I found when trying to run iPerf in "server" mode is that you would always get the following error:

bind failed: Operation not permitted

The only way I found to fix this issue was to copy the iPerf binary to another file, such as iperf3.copy, which then allowed me to start iPerf in "server" mode. You can do so by running the following command in the ESXi Shell:

cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy

Running iPerf in "client" mode works as expected; the copy is only needed when running in "server" mode. To perform the test, I used both my Apple Mac Mini and the Intel NUC, each running ESXi with no VMs.

I ran the iPerf server on the Intel NUC with the following command:

/usr/lib/vmware/vsan/bin/iperf3.copy -s -B [IPERF-SERVER-IP]

Note: If you have multiple network interfaces, you can specify which interface to use with the -B option by passing the IP Address of that interface.

I ran the iPerf client on the Mac Mini with the following command, specifying the address of the iPerf server:

/usr/lib/vmware/vsan/bin/iperf3 -n 800M -c [IPERF-SERVER]

I also disabled the ESXi firewall before running the test, which you can do by running the following command:

esxcli network firewall set --enabled false

Here is a screenshot of my iPerf test running between my Mac Mini and Intel NUC. Hopefully this will come in handy for anyone needing to run some basic network performance tests between two ESXi hosts without having to set up additional VMs. (Once you are done, remember to re-enable the ESXi firewall by running the same esxcli command with --enabled true.)

[Screenshot: iPerf test running between the Mac Mini and Intel NUC]

Categories // ESXi, vSphere 6.0 Tags // ESXi, iperf, network, performance, vSphere 6.0 Update 1, vSphere 6.0 Update 2
