How to Automate Host Cache Configuration in ESXi 5

07.28.2011 by William Lam // 1 Comment

ESXi 5.0 now supports a new feature called Host Cache, which allows a user to offload the virtual machine's swap onto a local SSD device for performance. This can be very helpful for VMware View deployments or other VDI-type deployments on vSphere 5.

Currently this is a manual process in which a VMFS volume must be created on a local SSD device and then configured as a Host Cache datastore under the host configuration section. There are two ways of automating this: during the kickstart process (which I am a fan of) or as part of a post-install process.

Method #1

In the first option, the process involves formatting and creating a VMFS volume on a local SSD device and then using a little Python to connect to the vSphere MOB to perform the host cache configuration. Here is a snippet of what the kickstart would look like as part of the %firstboot section:
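(A minimal sketch: the device path, volume label and sizes are example values you would substitute, and the MOB call is outlined in comments rather than fully implemented.)

%firstboot --interpreter=busybox

# Example value -- substitute the device path of your local SSD
SSD_DEVICE="/vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX"
HC_VOLUME="ssd-hostcache"

# Create a GPT label with a single partition spanning the usable sectors
# (the GUID is the standard VMFS partition type)
END_SECTOR=$(partedUtil getUsableSectors "${SSD_DEVICE}" | awk '{print $2}')
partedUtil setptbl "${SSD_DEVICE}" gpt "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"

# Lay down a VMFS-5 volume on the new partition
vmkfstools -C vmfs5 -S "${HC_VOLUME}" "${SSD_DEVICE}:1"

# Finally, configure host cache through the vSphere MOB. Outline only: a
# complete script must first authenticate to https://localhost/mob, keep the
# session cookie, and then invoke the host's cache configuration manager to
# enable host cache on the new datastore with the desired swap size.
python << 'EOF'
# MOB login + host cache configuration call goes here (see comments above)
EOF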

The script uses partedUtil to format the local SSD, then vmkfstools to lay down a VMFS volume, and finally connects to the vSphere MOB to configure host cache.

Method #2

In the second option, I wrote a vSphere SDK for Perl script, hostCacheManagement.pl, using the vSphere APIs to manage and configure host cache datastores after an ESXi 5 host has been built. The script supports three operations: list, enable and disable, and it will also validate that the datastores being specified are SSD datastores.

Download the hostCacheManagement.pl script here.

Here is an example of listing all SSD datastores and whether or not they are being used for host caching:
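(A representative invocation; the script-specific flag names are assumptions, while --server and --username are the standard vSphere SDK for Perl connection options.)

./hostCacheManagement.pl --server esxi5-1.primp-industries.com --username root --operation list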

Here is an example of enabling an SSD datastore for host cache:
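(The --datastore flag name and MB units for --swapsize are assumptions.)

./hostCacheManagement.pl --server esxi5-1.primp-industries.com --username root --operation enable --datastore ssd-hostcache --swapsize 40960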

Note: Make sure your "--swapsize" is less than or equal to the size of your SSD datastore, else an error will be thrown. VMFS does take up some space for its metadata, etc.

Here is an example of disabling an SSD datastore for host cache:
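(Same assumed flag names as above.)

./hostCacheManagement.pl --server esxi5-1.primp-industries.com --username root --operation disable --datastore ssd-hostcache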

If you try to specify a non-SSD datastore, an error will be thrown:
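(Hypothetical datastore name and error text, for illustration only.)

./hostCacheManagement.pl --server esxi5-1.primp-industries.com --username root --operation enable --datastore datastore1 --swapsize 4096
Error: datastore1 is not an SSD datastore!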

Categories // Uncategorized Tags // ESXi 5.0, host cache, ssd, vSphere 5.0

New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 3

07.28.2011 by William Lam // Leave a Comment

Continuing from New vSphere 5 CLI Utilities/Tricks Marketing Did Not Tell You About Part 2

15. Another way to run the dcui utility is using dcuiweasel. I'm not exactly sure what the difference is between this and the dcui utility, but I suspect it has something to do with weasel (the ESXi installer) also being loaded.

16. You can run gdbserver for debugging processes; I suspect this may be for VMware engineers/support to use.

17. To view/modify the security policy under /etc/vmware/secpolicy, including VMCI modifications, you can use the secpolicytools utility.

18. Networking details about the various dvfilters can be viewed using the summarize-dvfilter utility.
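The utility takes no arguments; just run it from the ESXi Shell:

~ # summarize-dvfilter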

19. There are two utilities that deal with managing devices but don't have a whole lot of help: vmkdevmgr and vmkmkdev. I suspect these may be as useful as another vmk* utility (vmkchdev), but I haven't explored either utility.

20. If you have VMkernel or VM core dumps, you can use the nifty vmkdump_extract utility to extract various bits of information, including the logs within the core dump. This tool may come in very handy for troubleshooting purposes.
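For example, to pull the VMkernel log out of a zdump file (treat the -l flag as an assumption and check the utility's help output first):

~ # vmkdump_extract -l vmkernel-zdump.1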

21. There is a new esxcfg-* command that is only available in the ESXi Shell called esxcfg-fcoe, which, as you can guess from the name, allows you to manage and configure your FCoE devices.

~ # esxcfg-fcoe
No action provided
esxcfg-fcoe <action> [<action options>]

Where <action> is one of:

-d|--discover=vmnicX [<discover options>] Initiate FCoE adapter discovery on the given NIC
-r|--remove-adapter=vmhbaXYZ Destroy the specified FCoE adapter
-x|--deactivate-nic=vmnicW Deactivate FCOE configuration for given NIC
-l|--list-vnports List discovered VNPorts associated with this host
-N|--list-fcoe-nics List FCoE-capable NICs with detailed information
-n|--compact-list-fcoe-nics List FCoE-capable NICs each on a single line,
with limited information
-e|--enable Enable an FCoE-capable NIC if it is disabled
-D|--disable Disable an FCoE-capable NIC if it is enabled (requires
reboot to take effect)
-h|--help Show this message

And <discover options> are a set of:

-p|--priority=[0-7] Priority class to use for FCoE traffic
-v|--vlan=id VLAN ID to use for FCoE traffic
-a|--macaddress=xx:xx:xx:xx:xx:xx MAC address to use for the underlying FCoE controller

Examples:

To discover FCoE adapters on a given NIC, using default settings
esxcfg-fcoe -d vmnicX

To discover FCoE adapters on a given NIC, specifying only MAC address
esxcfg-fcoe -d vmnicX -a MA

To discover FCoE adapters on a given NIC, specifying all settings
esxcfg-fcoe -d vmnicX -p priority -v vlan -a MA

To remove an FCoE adapter
esxcfg-fcoe -r vmhbaXYZ

To enable FCoE for a given NIC, specifying bandwidth and MAC address
esxcfg-fcoe -e vmnicX -a MA

To disable FCoE for a given NIC
esxcfg-fcoe -D vmnicX

To deactivate FCoE for a given NIC
esxcfg-fcoe -x vmnicX

22. For more details on hbrfilterctl, check out the blog post here.

23. For more details on apply-host-profiles, applyHostProfile, esxhpcli and esxhpedit, check out the blog post here.

24. On the VCVA (vCenter Virtual Appliance), you can quickly list the port configuration by running the following command:

vcenter50-2:~ # /usr/lib/vmware-vpx/py/vccfg.py -v defaults
VC_ROOT_SSH=yes
VC_PORT_QS_HTTPS=10443
VC_ESXI_AUTODEPLOY_MAX_SIZE=2
VC_PORT_NETDUMPER=6500
VC_ESXI_NETDUMPER_DIR_MAX=2
VC_PORT_HTTPS=443
VC_PORT_WEB_SVC_HTTPS=8443
VC_PORT_HEARTBEAT=902
VC_PORT_AUTODEPLOY=6502
VC_PORT_LDAP=389
VC_PORT_SYSLOG=514
VC_PORT_QS_HTTP=10080
VC_PORT_WEB_SVC_HTTP=8080
VC_PORT_HTTP=80
VC_PORT_SYSLOG_SSL=1514
VC_PORT_QS_XDB=10109
VC_CFG_RESULT=0

Categories // Uncategorized Tags // ESXi 5.0, vSphere 5.0

2 Hidden Virtual Machine Gems in the vSphere 5 API

07.28.2011 by William Lam // 3 Comments

I was recently going through the new vSphere 5 API reference guide and stumbled across two interesting new virtual machine features that I did not see mentioned in the vSphere 5 beta documentation.

The first feature is support for a new e1000e virtual ethernet adapter, which provides support for PCI-Express adapters. This is a new virtual device type that has been added in vSphere 5 and is only supported on virtual machines running Virtual Hardware 8.

The interesting caveat about this feature is that it is not available as an option through the vSphere Client when adding a virtual network adapter to a virtual machine. You only have the option of pcnet, vmxnet2, vmxnet3 or e1000, but not the new e1000e. Since the feature is in the vSphere API, I updated an old vSphere SDK for Perl script called vmNICManagement.pl to support updating an existing virtual network adapter to an e1000e.

Here is an example of a VM that is running Virtual Hardware 8 with a normal e1000 virtual network adapter, which we will then convert into an e1000e adapter.

We then run the script, select the virtual network adapter to update, choose the operation "updatenictype" and specify the nictype, which is e1000e in this example:
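(A representative invocation; the script's own flag names and the adapter label are assumptions, while --server and --username are the standard vSphere SDK for Perl connection options.)

./vmNICManagement.pl --server vcenter50-1.primp-industries.com --username root --vmname TestVM --operation updatenictype --vnic "Network adapter 1" --nictype e1000e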

Once the virtual network adapter has been updated, you can view the virtual machine's settings once again and you will see the new e1000e adapter.

There are also two additional device types that have been introduced in vSphere 5: VirtualUSBXHCIController, which is a virtual USB Extensible Host Controller Interface (USB 3.0), and VirtualHdAudioCard (check out Kendrick Coleman's post here).

The second feature is the ability to configure the boot device order for a virtual machine, which is available through the vSphere API but not through the vSphere Client. This feature does not actually change the BIOS boot device order but provides the ability to create an ordered list of preferred boot devices. Once this list has been exhausted, the VM will fall back to the BIOS list of boot devices. This feature also requires the virtual machine to be running Virtual Hardware 8.

When viewing the configuration of a virtual machine using the vSphere Client, you have a few options when it comes to the "Boot Options". The only new vSphere 5 feature that has been exposed in the vSphere Client is the ability to select the boot firmware, which now includes EFI support. Nowhere in the vSphere Client is there an option to control the boot device order.

The only way to view this new feature is through the vSphere API; using the vSphere MOB, we can see that by default this feature is not enabled/configured.

I of course decided to write a simple vSphere SDK for Perl script called updateVMBootOrder.pl which allows you to configure a list of preferred boot devices. The supported options are cdrom, floppy, hard disk and virtual NIC; the last two also require specifying the specific hard disk or virtual NIC, as a virtual machine can be configured with several.

The script supports a "list" operation which will query an existing virtual machine to check whether boot options have been configured; if so, they will be listed in the order they were defined.
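(The script-specific flag names are assumptions.)

./updateVMBootOrder.pl --server vcenter50-1.primp-industries.com --username root --vmname TestVM --operation list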

We have confirmed that no boot options have been configured for this VM. If you decide to select a hard disk or virtual network adapter, you will need to run some additional queries to identify the "deviceKey" which is used to identify a particular virtual device. If you select cdrom or floppy, then these additional steps are not necessary.

To view the virtual network adapter device keys, you will need to run the "listnic" operation.

To view the hard disk device keys, you will need to run the "listdisk" operation.
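(Same assumed flag names as above.)

./updateVMBootOrder.pl --server vcenter50-1.primp-industries.com --username root --vmname TestVM --operation listnic
./updateVMBootOrder.pl --server vcenter50-1.primp-industries.com --username root --vmname TestVM --operation listdisk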

Now let's say you want to configure the following boot device order:

  1. cdrom
  2. ethernet
  3. disk

You will need to use the "update" operation, specify the devices and their order using the --bootorder flag, and also provide the respective --nickey and --diskkey.
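Putting it together (the comma-separated --bootorder syntax is an assumption, and the device keys are example values: the first virtual disk is typically key 2000 and the first virtual NIC key 4000, but use the keys reported by the list operations above):

./updateVMBootOrder.pl --server vcenter50-1.primp-industries.com --username root --vmname TestVM --operation update --bootorder cdrom,ethernet,disk --nickey 4000 --diskkey 2000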

If we now re-run the "list" operation, we will see the configured boot devices for this virtual machine.

We can also verify by checking the vSphere MOB for this particular virtual machine; the bootOptions parameter should now be populated.

If you would like to manually add this to a virtual machine's .vmx configuration file, you can, but it is definitely not recommended. The following entries map to the example above:
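(Presumably something along these lines; treat the exact serialization as an assumption, with bios.bootOrder being the vmx option generally associated with a preferred boot device list.)

bios.bootOrder = "cdrom,ethernet0,hdd"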

As you can see, it pays off to poke around in the vSphere API reference documentation 😀 You should try it sometime!

Categories // Uncategorized Tags // api, boot option, e1000e, ESXi 5.0, vSphere 5.0
