Exploring the vSphere Flash Read Cache (vFRC) APIs Part 2

11.12.2013 by William Lam // Leave a Comment

Continuing from Part 1 of Exploring the vSphere Flash Read Cache (vFRC) APIs, we will now explore the vSphere APIs needed to set up and configure vFRC on your ESXi host. There are two workflows for creating your Virtual Flash Resource. The first is to simply add all valid SSDs, as you would in the vSphere Web Client, which automatically creates a VFFS (Virtual Flash File System) volume to manage the underlying SSD devices. The second is to start with a single SSD and manually create the VFFS volume, which you can then extend by adding additional SSD devices. We will go over both workflows and the vSphere APIs required to perform these operations.

To automate the configuration of vFRC on your ESXi hosts, you will need access to both the vFlashManager and storageSystem managed objects (a short connection sketch follows the list below) along with the following vSphere API methods:

  • QueryAvailableSsds
  • ConfigureVFlashResourceEx_Task
  • DestroyVffs
  • FormatVffs
  • HostConfigureVFlashResource
  • ExtendVffs
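
For reference, here is a minimal vSphere SDK for Perl sketch (this is just an illustration, not the actual vflashHostMgmt.pl code) showing how one might connect and retrieve the two managed objects listed above from the host's configManager. The hostname is a placeholder, and the later snippets in this post reuse the $storage_system and $vflash_manager views created here:

use strict;
use warnings;
use VMware::VIRuntime;

# Standard vSphere SDK for Perl connection handling
Opts::parse();
Opts::validate();
Util::connect();

# Find the ESXi host (hostname is just an example)
my $host_view = Vim::find_entity_view(
   view_type => 'HostSystem',
   filter    => { 'name' => 'vesxi55-4.primp-industries.com' },
);

# Retrieve the host's storageSystem and vFlashManager managed objects
my $storage_system = Vim::get_view(mo_ref => $host_view->configManager->storageSystem);
my $vflash_manager = Vim::get_view(mo_ref => $host_view->configManager->vFlashManager);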

To demonstrate the functionality of these vSphere APIs, I have created a vSphere SDK for Perl sample script called vflashHostMgmt.pl, which supports the following operations: query, listssd, add, format, extend and destroy.

Workflow 1 - Add all valid SSD devices

In the vSphere Web Client, you configure a Virtual Flash Resource for your ESXi host by clicking the "Add Capacity" button and selecting all valid SSD devices for that particular ESXi host, as seen in the screenshot below.

To automate the same workflow, we first need to be able to identify the list of available SSD devices that could be used for either vFRC or even VSAN. There is a nice vSphere API method under the storageSystem called QueryAvailableSsds which has been implemented in the script as the "listssd" operation.
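
Here is a rough sketch of what that call looks like with the vSphere SDK for Perl (reusing the $storage_system view from the snippet above; the properties printed are standard ScsiDisk fields):

# Query all SSD devices eligible for VFFS/vFRC (or VSAN)
my $ssds = $storage_system->QueryAvailableSsds();

foreach my $ssd (@{$ssds || []}) {
   # Each entry is a ScsiDisk; devicePath is what the "add"/"format" operations expect
   print $ssd->displayName . " => " . $ssd->devicePath . "\n";
}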

Here is an example execution of the "listssd" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation listssd

As you can see from the output, we have three available SSD devices matching the vSphere Web Client output. To add these SSD devices and create your Virtual Flash Resource, you will need to use the "add" operation within the script, which accepts a comma-separated list of the SSD device paths shown in the output above. The "add" operation then calls the vFlashManager's ConfigureVFlashResourceEx_Task method, which accepts an array of SSD device paths to automatically configure and add the Virtual Flash Resource.
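
Here is roughly what that call looks like in a vSphere SDK for Perl sketch (reusing the $vflash_manager view from the first snippet; the devicePath parameter name is based on my reading of the method signature, so verify it against the vSphere 5.5 API reference):

# SSD device paths as returned by the "listssd" operation
my @ssd_paths = (
   '/vmfs/devices/disks/naa.6000c297de55bcf0471f311abc865449',
   '/vmfs/devices/disks/naa.6000c2992cfbf14a2d827303c48632fa',
   '/vmfs/devices/disks/naa.6000c2989357b5d31eb20256e39f9338',
);

# Creates the VFFS and configures the Virtual Flash Resource in a single task
my $task = $vflash_manager->ConfigureVFlashResourceEx_Task(devicePath => \@ssd_paths);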

Here is an example execution of the "add" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation add --disk /vmfs/devices/disks/naa.6000c297de55bcf0471f311abc865449,/vmfs/devices/disks/naa.6000c2992cfbf14a2d827303c48632fa,/vmfs/devices/disks/naa.6000c2989357b5d31eb20256e39f9338

We can confirm that our Virtual Flash Resource was successfully created by running the "query" operation.

Here is an example execution of the "query" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation query

From the output we can see that a VFFS was automatically created for us, including its name and UUID, and that it contains the three SSD devices we added earlier. We can also confirm this by logging into the vSphere Web Client, where we should see the same information.

In preparation for the next workflow, we can easily destroy our VFFS, which is equivalent to clicking the "Remove All" button in the vSphere Web Client. To do so, we need to use the storageSystem's DestroyVffs method. In the script, this has been implemented as the "destroy" operation.

Here is an example execution of the "destroy" operation:

As you can see, workflow 1 is pretty straightforward if you have an ESXi host that contains all the SSD devices you wish to add to your Virtual Flash Resource. In workflow 2, we will take a look at starting with a single SSD and manually creating the VFFS, which can then be extended. If you have an existing Virtual Flash Resource and would like to extend it, the set of APIs shown in workflow 2 will also aid in that use case.

Workflow 2 - Create VFFS using single SSD device / Extend VFFS

When going through workflow 1, the VFFS volume is automatically created for the user and is not something one would need to think about unless you would like to extend an existing VFFS. In this workflow, we start by adding a single SSD device, which requires creating the VFFS volume, and then we extend that VFFS with additional SSD devices so we end up in the same end state as workflow 1.

To create a VFFS, you will need to use the FormatVffs API method, which accepts a single SSD device and a VFFS label, and then use the HostConfigureVFlashResource API method to mount the VFFS volume on the ESXi host. This has been implemented as the "format" operation, which is similar to the "add" operation but requires an additional --vffs parameter that denotes the VFFS volume label.

Here is an example execution of the "format" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation format --vffs vghetto-vffs --disk /vmfs/devices/disks/naa.6000c297de55bcf0471f311abc865449

As part of the result, it will return the VFFS UUID, which is required when extending a VFFS. You can also get this information by using the "query" operation, which also shows the label we assigned to our VFFS.
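
For reference, the underlying API flow for the "format" operation looks roughly like the sketch below (again reusing the views from the first snippet). The HostVffsSpec fields and the vffsUuid property here are assumptions on my part, so double-check them against the vSphere 5.5 API reference and the vflashHostMgmt.pl source:

# Build the VFFS creation spec for a single SSD device (field names are assumptions)
my $vffs_spec = HostVffsSpec->new(
   devicePath   => '/vmfs/devices/disks/naa.6000c297de55bcf0471f311abc865449',
   volumeName   => 'vghetto-vffs',
   majorVersion => 1,
);

# FormatVffs creates the VFFS volume and returns it (including its UUID)
my $vffs = $storage_system->FormatVffs(createSpec => $vffs_spec);

# HostConfigureVFlashResource then mounts the VFFS as the host's Virtual Flash Resource
my $res_spec = HostVFlashManagerVFlashResourceConfigSpec->new(vffsUuid => $vffs->uuid);
$vflash_manager->HostConfigureVFlashResource(spec => $res_spec);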

To add additional SSD devices to an existing VFFS, whether it was created using workflow 1 or 2, you will need to use the ExtendVffs API method, which requires the VFFS UUID and the SSD device you wish to add. This has been implemented as the "extend" operation within the script.
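
At the API level, the "extend" operation is roughly the following call (whether ExtendVffs expects the raw UUID or a /vmfs/volumes/<uuid> style path is an assumption on my part, so verify this against the script):

# Extend the existing VFFS with an additional SSD device
$storage_system->ExtendVffs(
   vffsPath   => '/vmfs/volumes/527fc6e6-249cdb69-d502-005056adfa73',
   devicePath => '/vmfs/devices/disks/naa.6000c2992cfbf14a2d827303c48632fa',
);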

Here is an example execution of the "extend" operation:

./vflashHostMgmt.pl --config .vcenter55-1 --vihost vesxi55-4.primp-industries.com --operation extend --vffs_uuid 527fc6e6-249cdb69-d502-005056adfa73 --disk /vmfs/devices/disks/naa.6000c2992cfbf14a2d827303c48632fa

We can confirm our changes by using the "query" operation as well as looking at our Virtual Flash Resource using the vSphere Web Client. We should see the two SSD devices that we have added to our VFFS.

 

In Part 3 of exploring the vSphere Flash Read Cache (vFRC) APIs, we will take a look at migrating a virtual machine which has vFRC configured and the options we have in terms of either migrating or dropping the vFRC cache.

Categories // Uncategorized Tags // ESXi 5.5, vffs, vflash, vFRC, virtual flash file system, vSphere 5.5, vSphere Flash Read Cache

w00t! VMware Tools for Nested ESXi!

11.11.2013 by William Lam // 42 Comments

I have been working with Nested ESXi since its original inception and this technology has greatly benefited me and the entire VMware community, especially when it comes to learning about VMware software and being able to easily prototype something before installing it on actual hardware. However, one thing that I felt had been missing for a while now is the ability to run an instance of VMware Tools within a Nested ESXi VM. I have personally been asking for this feature for a couple of years and I know many in the VMware community have expressed interest as well.

I am super excited to announce that VMware has just released a new Fling that provides a VIB allowing you to install VMware Tools inside a Nested ESXi host. I originally showed a demo of this at VMworld Barcelona in my vBrownBag Tech Talk and, as I mentioned there, we would be releasing this as a VMware Fling very soon. So here it is!

UPDATE (08/20/15) - An updated version of VMware Tools for Nested ESXi was just published; make sure to download the latest version, and you can find more details here.

Requirements:

  • Nested ESXi running 5.0, 5.1 or 5.5 

Installation:

To install the VIB, you simply need to download it, upload it to your Nested ESXi datastore and then run the following commands:

esxcli system maintenanceMode set -e true
esxcli software vib install -v /vmfs/volumes/[VMFS-VOLUME-NAME]/esx-tools-for-esxi-9.7.0-0.0.00000.i386.vib -f
esxcli system shutdown reboot -r "Installed VMware Tools"

You can also install the VIB directly from VMware.com if you have direct or proxy internet connectivity from your ESXi host by running the following commands:

esxcli network firewall ruleset set -e true -r httpClient
esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxi_tools_for_guests/esx-tools-for-esxi-9.7.0-0.0.00000.i386.vib -f
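
If you only opened up the httpClient ruleset for this install, you can optionally disable it again afterwards:

esxcli network firewall ruleset set -e false -r httpClient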

Once the VIB has been successfully installed, you will need to reboot the host for the changes to take effect. To verify, you can log in to either the vSphere Web or C# Client and you should now see the VMware Tools status for your Nested ESXi host showing green, along with the IP address of the Nested ESXi host.

So why would you want to do this? Well, there are a couple of reasons actually. The first one is pretty basic: when I need to reboot or shutdown a Nested ESXi VM, instead of having to jump into the VM console or SSH into the ESXi host, I can just right click in the vSphere Web/C# Client and select shutdown or reboot. I also tend to do all sorts of craziness in my lab (I'm sure this is an understatement for folks that know me) and may often break networking connectivity to my Nested ESXi VM. In vSphere 5.0, we introduced the Guest Operations API (formerly known as the VIX API) which is now part of the vSphere API. This API is actually quite handy as it allows you to perform guest operations within the VM without needing network connectivity, since it only relies on VMware Tools running (pretty cool stuff!).

Here is a screenshot demonstrating the execution of vmkfstools through the Guest Operations API against one of my Nested ESXi VMs:
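
As a rough (untested) illustration of what this looks like programmatically, here is a minimal vSphere SDK for Perl sketch that runs vmkfstools inside a Nested ESXi VM through the Guest Operations API; the VM name, credentials and vmkfstools arguments are all placeholders:

use strict;
use warnings;
use VMware::VIRuntime;

Opts::parse();
Opts::validate();
Util::connect();

# Locate the Nested ESXi VM (name is just a placeholder)
my $vm = Vim::find_entity_view(
   view_type => 'VirtualMachine',
   filter    => { 'name' => 'Nested-ESXi-VM' },
);

# Guest Operations API: retrieve the guest process manager
my $guest_ops = Vim::get_view(mo_ref => Vim::get_service_content()->guestOperationsManager);
my $proc_mgr  = Vim::get_view(mo_ref => $guest_ops->processManager);

# Authenticate to the guest using the ESXi root credentials (placeholders)
my $auth = NamePasswordAuthentication->new(
   username           => 'root',
   password           => 'VMware1!',
   interactiveSession => 0,
);

# Run vmkfstools inside the guest, no network connectivity to the VM required
my $spec = GuestProgramSpec->new(
   programPath => '/sbin/vmkfstools',
   arguments   => '-P /vmfs/volumes/datastore1',
);
my $pid = $proc_mgr->StartProgramInGuest(vm => $vm, auth => $auth, spec => $spec);
print "Started vmkfstools in the guest with PID: $pid\n";

Util::disconnect();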

A couple of things to note:

  • If you install VMware Tools on a Nested ESXi VM, you will NOT be able to just right click in the UI and select install/upgrade
  • If you wish to integrate this into your ESXi image, you can take a look at a community tool called ESXi-Customizer created by Andreas Peetz, which I have used in the past and works great. Image Builder does not support raw VIBs, only zip files, which may need to contain additional metadata information. If you want to create an offline bundle instead and then use Image Builder to create your custom ISO, Andreas has a new tool you can take a look at here.

Finally, if you have any feedback (likes/dislikes), thanks or comments, please head over to the VMware Fling page for VMware Tools for Nested ESXi and leave a comment. I am sure Jim Mattson, the engineer who built this Fling, would greatly appreciate any feedback you may have.

Categories // ESXi, Nested Virtualization Tags // ESXi 5.0, ESXi 5.1, ESXi 5.5, nested, nested virtualization, vmware tools, vSphere 5.0, vSphere 5.1, vSphere 5.5

ESXi 5.5 introduces a new Native Device Driver Architecture Part 2

11.07.2013 by William Lam // 4 Comments

Following up from Part 1 where I provided an overview of the new Native Device Driver architecture introduced in ESXi 5.5, we will now take a deeper look at how this new device driver model works in ESXi. A new concept of driver priority loading is introduced with the Native Device Driver model and the diagram below provides the current ordering of how device drivers are loaded.

As you can see, OEM drivers have the highest priority and, by default, Native Drivers are loaded before "legacy" vmklinux drivers. On a clean installation of ESXi 5.5 you should see at least these two directories: /etc/vmware/default.map.d/ and /etc/vmware/driver.map.d/, which contain the driver map files pertaining to Native Device and "legacy" vmklinux drivers.

Here is a screenshot of the map files for both of these directories on an ESXi host:
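
If you want to poke around yourself, listing the two directories from an SSH session on the ESXi host will show the same map files:

ls /etc/vmware/default.map.d/
ls /etc/vmware/driver.map.d/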

The following inbox Native Drivers are included in the default installation of ESXi 5.5:

Device                 Device Driver Name
Emulex 10GbE NIC       elxnet
Emulex FC              lpfc
LSI MegaRAID           lsi_mr3
LSI mptsas             lsi_msgpt3
Micron SSD             mtip32xx_native
QLogic FC              qlnativefc
SAS/SATA               rste
vmxnet3 & graphics     vmkernel

As I mentioned earlier, Native Drivers will always load before vmklinux drivers by default. However, if you need to perform some troubleshooting, one option is to disable the specific driver in question using ESXCLI, which is applicable to both Native Drivers and vmklinux drivers.

To do so, run the following ESXCLI command:

esxcli system module set --enabled=false --module=[DRIVER-NAME]
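
Once you are done troubleshooting, the same command with --enabled=true re-enables the driver, and you can verify its state with the module list command:

esxcli system module set --enabled=true --module=[DRIVER-NAME]
esxcli system module list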

Categories // Uncategorized Tags // ESXi 5.5, native device driver, nddk, vmklinux, vSphere 5.5
