WilliamLam.com

New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 3

07.28.2011 by William Lam // Leave a Comment

Continuing from New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 2

15. Another way to run the dcui utility is to use dcuiweasel. I'm not exactly sure what the difference is between this and the dcui utility, but I suspect it has something to do with weasel (the ESXi installer) also being loaded.

16. You can run gdbserver for debugging processes; I suspect this may be intended for VMware engineers/support to use.

17. To view or modify the security policy under /etc/vmware/secpolicy, including VMCI modifications, you can use the secpolicytools utility.

18. Networking details about the various filters can be viewed using the summarize-dvfilter utility.
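
Running it with no arguments dumps all of the dvfilter agents along with the ports they are attached to; if you only care about a particular VM, you can always pipe the output through grep (the VM name below is just a placeholder):

~ # summarize-dvfilter
~ # summarize-dvfilter | grep -A2 "MyVM"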

19. There are two utilities that deal with managing devices but don't provide much help output: vmkdevmgr and vmkmkdev. I suspect these may be as useful as another vmk* utility (vmkchdev), but I haven't explored either of them.

20. If you have VMkernel or VM core dumps, you can use the nifty vmkdump_extract utility to extract various bits of information, including the logs within the core dump. This tool may come in very handy for troubleshooting purposes.
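
As a quick sketch of how you might use it: running vmkdump_extract without any arguments prints its usage, and the VMkernel log can then be pulled out of a zdump file. The path below is just an example (your core dump location may differ), and you should confirm the exact log-extraction flag from the usage output on your build:

~ # vmkdump_extract
~ # vmkdump_extract -l /var/core/vmkernel-zdump.1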

21. There is a new esxcfg-* command that is only available in the ESXi Shell called esxcfg-fcoe, which, as you can guess from the name, allows you to manage and configure your FCoE devices.

~ # esxcfg-fcoe
No action provided
esxcfg-fcoe <action> [<options>]

Where <action> is one of:

 -d|--discover=vmnicX [<options>]     Initiate FCoE adapter discovery on the given NIC
 -r|--remove-adapter=vmhbaXYZ         Destroy the specified FCoE adapter
 -x|--deactivate-nic=vmnicW           Deactivate FCOE configuration for given NIC
 -l|--list-vnports                    List discovered VNPorts associated with this host
 -N|--list-fcoe-nics                  List FCoE-capable NICs with detailed information
 -n|--compact-list-fcoe-nics          List FCoE-capable NICs each on a single line,
                                      with limited information
 -e|--enable                          Enable an FCoE-capable NIC if it is disabled
 -D|--disable                         Disable an FCoE-capable NIC if it is enabled (requires
                                      reboot to take effect)
 -h|--help                            Show this message

And <options> are a set of:

 -p|--priority=[0-7]                  Priority class to use for FCoE traffic
 -v|--vlan=id                         VLAN ID to use for FCoE traffic
 -a|--macaddress=xx:xx:xx:xx:xx:xx    MAC address to use for the underlying FCoE controller

Examples:

To discover FCoE adapters on a given NIC, using default settings
esxcfg-fcoe -d vmnicX

To discover FCoE adapters on a given NIC, specifying only MAC address
esxcfg-fcoe -d vmnicX -a MA

To discover FCoE adapters on a given NIC, specifying all settings
esxcfg-fcoe -d vmnicX -p priority -v vlan -a MA

To remove an FCoE adapter
esxcfg-fcoe -r vmhbaXYZ

To enable FCoE for a given NIC, specifying bandwidth and MAC address
esxcfg-fcoe -e vmnicX -a MA

To disable FCoE for a given NIC
esxcfg-fcoe -D vmnicX

To deactivate FCoE for a given NIC
esxcfg-fcoe -x vmnicX

22. For more details on hbrfilterctl check out the blog post here

23. For more details on apply-host-profiles, applyHostProfile, esxhpcli and esxhpedit check out the blog post here.

24. On the VCVA (vCenter Virtual Appliance), you can quickly list the port configuration by running the following command:

vcenter50-2:~ # /usr/lib/vmware-vpx/py/vccfg.py -v defaults
VC_ROOT_SSH=yes
VC_PORT_QS_HTTPS=10443
VC_ESXI_AUTODEPLOY_MAX_SIZE=2
VC_PORT_NETDUMPER=6500
VC_ESXI_NETDUMPER_DIR_MAX=2
VC_PORT_HTTPS=443
VC_PORT_WEB_SVC_HTTPS=8443
VC_PORT_HEARTBEAT=902
VC_PORT_AUTODEPLOY=6502
VC_PORT_LDAP=389
VC_PORT_SYSLOG=514
VC_PORT_QS_HTTP=10080
VC_PORT_WEB_SVC_HTTP=8080
VC_PORT_HTTP=80
VC_PORT_SYSLOG_SSL=1514
VC_PORT_QS_XDB=10109
VC_CFG_RESULT=0
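
If you are only after a single setting, you can of course pipe the output through grep; for example, to grab just the HTTPS port:

vcenter50-2:~ # /usr/lib/vmware-vpx/py/vccfg.py -v defaults | grep VC_PORT_HTTPS=
VC_PORT_HTTPS=443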

Categories // Uncategorized Tags // ESXi 5.0, vSphere 5.0

2 Hidden Virtual Machine Gems in the vSphere 5 API

07.28.2011 by William Lam // 3 Comments

I was recently going through the new vSphere 5 API reference guide and stumbled across two interesting new virtual machine features that I did not see mentioned anywhere in the vSphere 5 beta documentation.

The first feature is support for a new e1000e virtual ethernet adapter, which provides an emulated PCI Express-based network adapter. This is a new virtual device type that has been added in vSphere 5 and is only supported on virtual machines running Virtual Hardware 8.

The interesting caveat about this feature is that it is not available as an option through the vSphere Client when adding a virtual network adapter to a virtual machine. You only have the option of pcnet, vmxnet2, vmxnet3 or e1000, but not the new e1000e. Since the feature is in the vSphere API, I updated an old vSphere SDK for Perl script called vmNICManagement.pl to support updating an existing virtual network adapter to an e1000e.

Here is an example of a VM that is running Virtual Hardware 8 with a normal e1000 virtual network adapter, which we will then convert into an e1000e adapter.

We then run the script, selecting the virtual network adapter to update and the "updatenictype" operation, and specify the nictype, which is e1000e in this example.
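
The invocation looks roughly like the following (the --vmname and --nic flag names are my assumptions based on how these vSphere SDK for Perl scripts are typically called; check the script's built-in help for the exact parameter names):

./vmNICManagement.pl --server vcenter50-1 --username root --vmname MyVM --operation updatenictype --nic "Network adapter 1" --nictype e1000e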

Once the virtual network adapter has been updated, you can view the virtual machine's settings once again and you will see the new e1000e adapter.

There are also two additional device types that have been introduced in vSphere 5: VirtualUSBXHCIController, which is a virtual USB Extensible Host Controller Interface (USB 3.0), and VirtualHdAudioCard (check out Kendrick Coleman's post here).

The second feature is the ability to configure the boot device order for a virtual machine, which is available through the vSphere API but not through the vSphere Client. This feature does not actually change the BIOS boot device order, but rather provides the ability to create an ordered list of preferred boot devices. Once this list has been exhausted, the virtual machine will fall back to the BIOS list of boot devices. This feature also requires the virtual machine to be running Virtual Hardware 8.

When viewing the configuration of a virtual machine using the vSphere Client, you have a few options to configure under "Boot Options". The only new vSphere 5 feature that has been exposed in the vSphere Client is the ability to select the boot firmware, which now includes EFI support. Nowhere in the vSphere Client is there an option to control the boot device order.

The only way to view this new feature is through the vSphere API, and using the vSphere MOB we can see that by default it is not enabled/configured.

I of course decided to write a simple vSphere SDK for Perl script called updateVMBootOrder.pl which allows you to configure a list of preferred boot devices. The supported devices are cdrom, floppy, hard disk and virtual NIC; for the last two you also need to specify the particular hard disk or virtual NIC, since a virtual machine can be configured with several.

The script supports a "list" operation which queries an existing virtual machine to check whether boot options have been configured; if so, they are listed in the order in which they were defined.
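
A "list" run would look something like this (the --vmname flag is an assumption on my part; consult the script's perldoc for the exact parameter names):

./updateVMBootOrder.pl --server vcenter50-1 --username root --vmname MyVM --operation list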

We have confirmed that no boot options have been configured for this VM. If you decide to select a hard disk or virtual network adapter, you will need to run some additional queries to identify the "deviceKey" which is used to identify a particular virtual device. If you select cdrom or floppy, then these additional steps are not necessary.

To view the virtual network adapter device keys, you will need to run the "listnic" operation.

To view the hard disk device keys, you will need to run the "listdisk" operation.
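
Both lookups follow the same pattern as the other operations (again, the --vmname flag is assumed):

./updateVMBootOrder.pl --server vcenter50-1 --username root --vmname MyVM --operation listnic
./updateVMBootOrder.pl --server vcenter50-1 --username root --vmname MyVM --operation listdisk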

Now let's say you want to configure the following boot device order:

  1. cdrom
  2. ethernet
  3. disk

You will need to use the "update" operation, specify the devices and their order using the --bootorder flag, and also provide the respective --nickey and --diskkey.
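
Putting it all together, an "update" run for the cdrom/ethernet/disk order above would look roughly like this (the comma-separated --bootorder format, the --vmname flag and the device key values 4000 and 2000 are illustrative assumptions; use the keys returned by the "listnic" and "listdisk" operations):

./updateVMBootOrder.pl --server vcenter50-1 --username root --vmname MyVM --operation update --bootorder cdrom,ethernet,disk --nickey 4000 --diskkey 2000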

If we now re-run the "list" operation, we will see the configured boot devices for this virtual machine.

We can also verify by checking the vSphere MOB for this particular virtual machine; the bootOptions parameter should now be populated.

If you would like to manually add the corresponding entries to a virtual machine's .vmx configuration file you can, but it is definitely not recommended.

As you can see it pays off to poke around in the vSphere API reference documentation 😀 You should try it sometime!

Categories // Uncategorized Tags // api, boot option, e1000e, ESXi 5.0, vSphere 5.0

Automating Storage DRS & Datastore Cluster Management in vSphere 5

07.27.2011 by William Lam // 1 Comment

Storage DRS is probably one of the coolest features in vSphere 5, if not the coolest. Storage DRS allows you to cluster your datastores into what is known as a datastore cluster (storage pod) and automatically balances both your storage I/O and capacity, just like DRS does with your compute. The user interface is extremely easy to use, but as always, if you need to click through several screens to get to the outcome, some automation can never hurt 🙂

I decided to create a vSphere SDK for Perl script called datastoreClusterManagement.pl which allows you to automate all aspects of creating and managing your storage pods/clusters. To run the script, you will need a system that has the vCLI installed, or you can use VMware vMA 5. You will also need to connect to a vCenter Server 5 for all SDRS operations.

The script supports 8 different operations, which are described below:

Operation                 Description
List                      List all available datastore clusters
Query                     Query details for a specific datastore cluster
Create                    Create a datastore cluster
Delete                    Delete a datastore cluster (datastores are left intact)
Add Datastore             Add datastore(s) to an existing datastore cluster
Remove Datastore          Remove datastore(s) from an existing datastore cluster
Enter Maintenance Mode    Put a datastore into maintenance mode
Exit Maintenance Mode     Take a datastore out of maintenance mode

Here is an example of performing the "list" operation: 

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation list

Here is an example of performing the "query" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation query --pod homer-NFS-pod

Here is an example of performing the "create" operation w/single datastore:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore himalaya-NFS-moe-primp-1

Here is an example of performing the "create" operation w/datastore file:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation create --datacenter MN-physical --enable_sdrs true --enable_sdrs_iometric true --pod moe-NFS-pod --sdrs_automation automated --sdrs_evaluate_period 480 --sdrs_imbal_thres 30 --sdrs_latency 15 --sdrs_util_diff 20 --sdrs_util_space 60 --datastore_file dsfile

Here is an example of performing the "delete" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation delete --pod moe-NFS-pod

Here is an example of performing the "add_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation add_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-2

Here is an example of performing the "remove_datastore" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation remove_datastore --pod moe-NFS-pod --datastore himalaya-NFS-moe-primp-1

Note: Both the "add_datastore" and "remove_datastore" operations support a single datastore and/or a datastore file

Here is an example of performing the "ent_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation ent_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

Here is an example of performing the "ext_maint" operation:

./datastoreClusterManagement.pl --server vcenter50-1 --username root --operation exi_maint --pod homer-NFS-pod --datastore himalaya-NFS-moe-primp-5

There are also complete perl docs for this script, which can be viewed using the following command:

perldoc datastoreClusterManagement.pl

Categories // Automation, vSphere Tags // ESXi 5.0, SDRS, storage drs, storagePod, vSphere 5.0

