2 Hidden Virtual Machine Gems in the vSphere 5 API

07.28.2011 by William Lam // 3 Comments

I was recently going through the new vSphere 5 API reference guide and stumbled across two interesting new virtual machine features that were not mentioned in the vSphere 5 beta documentation.

The first feature is support for a new e1000e virtual ethernet adapter, which emulates a PCI Express-based adapter. This new virtual device type was added in vSphere 5 and is only supported on virtual machines running Virtual Hardware 8.

The interesting caveat about this feature is that it is not available as an option in the vSphere Client when adding a virtual network adapter to a virtual machine. You only have the options of pcnet, vmxnet2, vmxnet3, or e1000, but not the new e1000e. Since the feature is in the vSphere API, I updated an old vSphere SDK for Perl script called vmNICManagement.pl to support updating an existing virtual network adapter to an e1000e.

Here is an example of a VM that is running Virtual Hardware 8 with a normal e1000 virtual network adapter which we will then convert into an e1000e adapter.

We then run the script, select the virtual network adapter to update, choose the "updatenictype" operation, and specify the nictype, which is e1000e in this example.

Once the virtual network adapter has been updated, you can view the virtual machine's settings once again and you will see the new e1000e adapter.
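For those curious, the conversion itself is just a standard VM reconfiguration. Below is a minimal, hedged sketch of the kind of call vmNICManagement.pl makes under the covers (this is not the script itself): it copies the existing adapter's key, backing and MAC address into a new VirtualE1000e device object and issues an "edit" device change. The VM name is a placeholder and the sketch assumes the VM has a single e1000 adapter.

#!/usr/bin/perl
# Hedged sketch only -- vmNICManagement.pl handles argument parsing, validation, etc.
use strict;
use warnings;
use VMware::VIRuntime;

Opts::parse();
Opts::validate();
Util::connect();

my $vm = Vim::find_entity_view(
   view_type => 'VirtualMachine',
   filter    => { name => 'MyVM' }    # placeholder VM name
);

# Locate the existing e1000 adapter to convert
my ($old_nic) = grep { ref($_) eq 'VirtualE1000' } @{$vm->config->hardware->device};

# Build a VirtualE1000e that reuses the existing key, backing and MAC address
my $new_nic = VirtualE1000e->new(
   key           => $old_nic->key,
   controllerKey => $old_nic->controllerKey,
   unitNumber    => $old_nic->unitNumber,
   backing       => $old_nic->backing,
   addressType   => $old_nic->addressType,
   macAddress    => $old_nic->macAddress,
);

# Issue an "edit" device change so the adapter is converted in place
my $spec = VirtualMachineConfigSpec->new(
   deviceChange => [ VirtualDeviceConfigSpec->new(
      operation => VirtualDeviceConfigSpecOperation->new('edit'),
      device    => $new_nic,
   ) ],
);

$vm->ReconfigVM(spec => $spec);
Util::disconnect();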

There are also two additional device types that have been introduced in vSphere 5: VirtualUSBXHCIController, which is the virtual USB eXtensible Host Controller Interface (USB 3.0) controller, and VirtualHdAudioCard (check out Kendrick Coleman's post here).

The second feature is the ability to configure the boot device order for a virtual machine, which is available through the vSphere API but not through the vSphere Client. This feature does not actually change the BIOS boot device order, but instead lets you create an ordered list of preferred boot devices. Once this list has been exhausted, the virtual machine falls back to the BIOS boot device order. This feature also requires the virtual machine to be running Virtual Hardware 8.

When viewing the configuration of a virtual machine using the vSphere Client, you have a few options to configure under "Boot Options". The only new vSphere 5 option exposed here in the vSphere Client is the ability to select the boot firmware, which now includes EFI support. Nowhere in the vSphere Client is there an option to control the boot device order.

The only way to view this new setting is through the vSphere API, and using the vSphere MOB we can see that by default it is not enabled/configured.

I of course decided to write a simple vSphere SDK for Perl script called updateVMBootOrder.pl which allows you to configure a list of preferred boot devices. The supported devices are cdrom, floppy, hard disk, and virtual NIC; the last two also require specifying which hard disk or virtual NIC to use, since a virtual machine can be configured with several.

The script supports a "list" operation which queries an existing virtual machine to check whether boot options have been configured; if so, they are listed in the order in which they were defined.
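Under the covers, the "list" operation simply reads the bootOptions property on the virtual machine's config. Here is a minimal sketch (the VM name is a placeholder, and the usual VMware::VIRuntime connection setup is omitted):

my $vm = Vim::find_entity_view(
   view_type => 'VirtualMachine',
   filter    => { name => 'MyVM' }    # placeholder VM name
);

my $bootOptions = $vm->config->bootOptions;
if (defined $bootOptions && defined $bootOptions->bootOrder) {
   # Each entry is a VirtualMachineBootOptionsBootableDevice subtype
   foreach my $dev (@{$bootOptions->bootOrder}) {
      print ref($dev), "\n";    # e.g. VirtualMachineBootOptionsBootableCdromDevice
   }
} else {
   print "No boot order has been configured\n";
}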

We have confirmed that no boot options have been configured for this VM. If you decide to select a hard disk or virtual network adapter, you will need to run some additional queries to find the "deviceKey" that identifies a particular virtual device. If you select cdrom or floppy, these additional steps are not necessary.

To view the virtual network adapter device keys, you will need to run the "listnic" operation.

To view the hard disk device keys, you will need to run the "listdisk" operation.
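Both "listnic" and "listdisk" boil down to walking the virtual machine's hardware inventory and printing the key of each network adapter or disk. A minimal sketch, assuming the same connection setup and $vm lookup as the earlier snippet:

foreach my $device (@{$vm->config->hardware->device}) {
   if ($device->isa('VirtualEthernetCard')) {
      print "NIC:  ", $device->deviceInfo->label, " deviceKey: ", $device->key, "\n";
   } elsif ($device->isa('VirtualDisk')) {
      print "Disk: ", $device->deviceInfo->label, " deviceKey: ", $device->key, "\n";
   }
}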

Now let's say you want to configure the following boot device order:

  1. cdrom
  2. ethernet
  3. disk

You will need to use the "update" operation and specify the devices and their order using the --bootorder flag, along with the respective --nickey and --diskkey values.
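The reconfiguration behind the "update" operation is a small bootOptions spec. Here is a hedged sketch for the cdrom, ethernet, disk ordering above; the deviceKey values (4000 and 2000) are placeholders and should come from the "listnic" and "listdisk" output:

my $bootOrder = [
   VirtualMachineBootOptionsBootableCdromDevice->new(),
   VirtualMachineBootOptionsBootableEthernetDevice->new( deviceKey => 4000 ),   # placeholder NIC key
   VirtualMachineBootOptionsBootableDiskDevice->new( deviceKey => 2000 ),       # placeholder disk key
];

my $spec = VirtualMachineConfigSpec->new(
   bootOptions => VirtualMachineBootOptions->new( bootOrder => $bootOrder ),
);

$vm->ReconfigVM(spec => $spec);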

If we now re-run the "list" operation, we will see the configured boot devices for this virtual machine.

We can also verify this by checking the vSphere MOB for this particular virtual machine; the bootOptions parameter should now be populated.

If you would like, you can manually add these entries to a virtual machine's .vmx configuration file, but it is definitely not recommended. The following entries map to the example above:

As you can see, it pays off to poke around in the vSphere API reference documentation 😀 You should try it sometime!

Categories // Uncategorized Tags // api, boot option, e1000e, ESXi 5.0, vSphere 5.0

Automating Storage DRS & Datastore Cluster Management in vSphere 5

07.27.2011 by William Lam // 1 Comment

Storage DRS is probably one of the coolest, if not the coolest, features in vSphere 5. Storage DRS allows you to group your datastores into what is known as a datastore cluster (storage pod) and automatically balances both storage I/O and capacity, just like DRS does with your compute. The user interface is extremely easy to use, but as always, if you need to click through several screens to get to the outcome, some automation can never hurt 🙂

I decided to create a vSphere SDK for Perl script called datastoreClusterManagement.pl which allows you to automate all aspects of creating and managing your storage pod/cluster. You will need a system that has the vCLI installed or you can use VMware vMA 5 to run the script. You will also need to connect to a vCenter Server 5 for all SDRS operations.

The script supports eight different operations, which are described below:

Operation          Description
list               List all available datastore clusters
query              Query details for a specific datastore cluster
create             Create a datastore cluster
delete             Delete a datastore cluster (datastores are left intact)
add_datastore      Add datastore(s) to an existing datastore cluster
remove_datastore   Remove datastore(s) from an existing datastore cluster
ent_maint          Put a datastore into maintenance mode
exi_maint          Take a datastore out of maintenance mode

Here is an example of performing the "list" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation list

Here is an example of performing the "query" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation query --pod homer-NFS-pod

Here is an example of performing the "create" operation w/single datastore:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore himalaya-NFS-moe-primp-1

Here is an example of performing the "create" operation w/datastore file:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore_file dsfile
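For those interested in what the "create" operation does behind the scenes, it is essentially three vSphere 5 API calls: create the StoragePod, move the member datastore(s) into it, and enable Storage DRS on it. The following is a minimal, hedged sketch rather than the script itself; the datacenter, pod, and datastore names come from the example above, and the SDRS options shown are only a subset of what the script exposes:

#!/usr/bin/perl
# Hedged sketch only -- datastoreClusterManagement.pl handles all option parsing and error checking
use strict;
use warnings;
use VMware::VIRuntime;

Opts::parse();
Opts::validate();
Util::connect();

my $dc = Vim::find_entity_view(
   view_type => 'Datacenter',
   filter    => { name => 'MN-physical' }
);
my $ds_folder = Vim::get_view(mo_ref => $dc->datastoreFolder);

# 1. Create the datastore cluster (StoragePod) in the datacenter's datastore folder
my $pod = Vim::get_view(mo_ref => $ds_folder->CreateStoragePod(name => 'moe-NFS-pod'));

# 2. Move the member datastore(s) into the pod (a StoragePod is a folder subtype)
my $ds = Vim::find_entity_view(
   view_type => 'Datastore',
   filter    => { name => 'himalaya-NFS-moe-primp-1' }
);
$pod->MoveIntoFolder(list => [$ds]);

# 3. Enable Storage DRS on the pod via the StorageResourceManager
my $srm = Vim::get_view(mo_ref => Vim::get_service_content()->storageResourceManager);
my $podConfigSpec = StorageDrsPodConfigSpec->new(
   enabled              => 1,
   defaultVmBehavior    => 'automated',
   ioLoadBalanceEnabled => 1,
);
$srm->ConfigureStorageDrsForPod(
   pod    => $pod,
   spec   => StorageDrsConfigSpec->new( podConfigSpec => $podConfigSpec ),
   modify => 1,
);

Util::disconnect();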

Here is an example of performing the "delete" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation delete --pod moe-NFS-pod

Here is an example of performing the "add_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation add_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-2

Here is an example of performing the "remove_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation remove_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-1

Note: Both the "add_datastore" and "remove_datastore" operations support a single datastore and/or a datastore file.

Here is an example of performing the "ent_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation ent_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

Here is an example of performing the "exi_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation exi_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

There are also complete perldocs for this script, which can be viewed using the following command:

perldoc datastoreClusterManagement.pl

Categories // Automation, vSphere Tags // ESXi 5.0, SDRS, storage drs, storagePod, vSphere 5.0

vi-fastpass esxcli and resxtop bug resolved in vMA 5

07.27.2011 by William Lam // 2 Comments

A while back I wrote about a resxtop bug in vMA 4.1 in which resxtop no longer worked with vMA's vi-fastpass component and still required you to provide a username and password even though vi-fastpass had been initialized for a given target. There was also a slight quirk when using esxcli with vi-fastpass: you had to additionally specify the --server of your ESX(i) host in order to make use of vi-fastpass.

With the latest release of vMA 5, both of these issues have been resolved for both ESXi 5 and ESX(i) 4.x. I would highly recommend downloading the latest version if you would like to make use of the vi-fastpass component in vMA.

Here is an example of using vi-fastpass with resxtop:

vi-admin@vma50-1:~> vifptarget -s himalaya.primp-industries.com
vi-admin@vma50-1:~[himalaya.primp-industries.com]> resxtop

Here is an example of using vi-fastpass with esxcli:

vi-admin@vma50-1:~> vifptarget -s himalaya.primp-industries.com
vi-admin@vma50-1:~[himalaya.primp-industries.com]> esxcli

Categories // Uncategorized Tags // esxcli, ESXi 5.0, resxtop, vMA5, vSphere 5.0
