How to automate vFRC configurations using the command-line in ESXi

11.20.2013 by William Lam // 1 Comment

While working on my vSphere Flash Read Cache (vFRC) articles last week, I wanted to be able to quickly build out my vSphere environment so that vFRC was fully configured as part of my ESXi installation using a Kickstart script. This would allow me to simply add my ESXi hosts into vCenter Server and not have to go through the vSphere Web Client to configure vFRC on each host. Of course, the vSphere Web Client is not the only option for configuring vFRC; you can also use the vSphere APIs by creating your own script, or even use the new vFRC PowerCLI cmdlets as an alternative.

However, I was interested in creating a very simple script that I could easily integrate with my kickstart deployment, as that is what I am using for automated provisioning of my Nested ESXi hosts. With a bit of research and some trial and error, I have come up with a process that can be fully automated from the ESXi command-line. In my environment I have a Nested ESXi host that contains three SSDs (4GB each) which will be used to construct my Virtual Flash Resource.

Note: Jump to the very bottom for a completely automated script to configure vFRC for your ESXi host.

Step 1 - You will want to list out the available SSD devices on your ESXi host, which you can do by using the following ESXCLI command:

esxcli storage vflash device list

You will need to make a note of the device IDs as they will be required in the subsequent steps.
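
If you are scripting Step 1, the device names can also be captured programmatically. Here is a minimal sketch, assuming the devices are reported with naa-style identifiers as in my environment (local devices may show up with other prefixes such as mpx or t10):

# Collect the device names reported by the vFlash device list
SSD_DEVICES=$(esxcli storage vflash device list | grep "^naa" | awk '{print $1}')
echo "Devices to use for the Virtual Flash Resource: ${SSD_DEVICES}"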

Step 2 - Next, we need to partition our devices before we can create the VFFS (Virtual Flash File System) volume, and we need to calculate the end sector if we wish to consume the entire device. To do so, we will use the partedUtil command with the "getptbl" option to retrieve the device geometry.

partedUtil getptbl /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009

You will need to make a note of the first three numbers, which represent the number of cylinders, number of heads, and number of sectors per track. To calculate the end sector, the equation is the following: (Number of Cylinders x Number of Heads x Number of Sectors Per Track) - 1

In our example we have (522*255*63)-1, which gives us 8385929.
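
In a script, the same arithmetic can be derived directly from the partedUtil output instead of being worked out by hand. Here is a minimal sketch, assuming an unpartitioned device whose geometry appears on the second line of the "getptbl" output:

# Example device from Step 1 (substitute your own)
DEVICE=/vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009

# Geometry line is "cylinders heads sectors-per-track total-sectors"
GEOMETRY=$(partedUtil getptbl ${DEVICE} | sed -n '2p')
CYLINDERS=$(echo ${GEOMETRY} | awk '{print $1}')
HEADS=$(echo ${GEOMETRY} | awk '{print $2}')
SECTORS=$(echo ${GEOMETRY} | awk '{print $3}')

# End sector = (cylinders * heads * sectors-per-track) - 1
END_SECTOR=$(expr ${CYLINDERS} \* ${HEADS} \* ${SECTORS} - 1)
echo "End sector for ${DEVICE}: ${END_SECTOR}"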

To create the partition, we will again use partedUtil, this time specifying the "setptbl" option, by running the following command (be sure to replace the end sector value with your own):

partedUtil setptbl /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009 "gpt" "1 2048 8385929 AA31E02A400F11DB9590000C2911D1B8 0"

For more details on using the partedUtil command, please refer here and here.

Since my other two devices are exactly the same size, I can just re-use the command and replace the device path. Ensure all devices that you wish to use in your Virtual Flash Resource are partitioned before moving on to the next step.
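
For multiple devices, the geometry lookup and the "setptbl" call can be wrapped in a simple loop. Here is a minimal sketch, reusing the SSD_DEVICES variable from the Step 1 sketch and the same end sector calculation shown above:

for DEVICE_NAME in ${SSD_DEVICES}; do
    DEVICE=/vmfs/devices/disks/${DEVICE_NAME}
    GEOMETRY=$(partedUtil getptbl ${DEVICE} | sed -n '2p')
    END_SECTOR=$(expr $(echo ${GEOMETRY} | awk '{print $1}') \* $(echo ${GEOMETRY} | awk '{print $2}') \* $(echo ${GEOMETRY} | awk '{print $3}') - 1)
    # Create a single GPT partition spanning the device, using the VFFS partition GUID
    partedUtil setptbl ${DEVICE} "gpt" "1 2048 ${END_SECTOR} AA31E02A400F11DB9590000C2911D1B8 0"
done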

Step 3 - We will now create our VFFS volume, which only needs to be created on one of the devices. In this example, I have chosen to use the first SSD device as shown in "esxcli storage vflash device list". To create the VFFS volume we will use the vmkfstools utility just like we would when creating a VMFS volume, but instead specify the "vmfsl" filesystem type.

Run the following command to create your VFFS volume. You will need to append :1 to the end of the SSD device path to specify the partition you created earlier, as well as provide a display name for the volume; I chose vffs-$(hostname -s), which uses the short hostname of the ESXi host.

vmkfstools -C vmfsl /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009:1 -S vffs-$(hostname -s)

Step 4 - Once you have your VFFS volume created, you can extend it with additional SSD devices by using vmkfstools and specifying the -Z option. The syntax for the command is the SSD device partition you wish to add followed by the source SSD device containing the VFFS volume.

Here is an example of the command:

vmkfstools -Z /vmfs/devices/disks/naa.6000c29498be5c56231d631d9c6cbee8:1 /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009:1

You will be prompted on whether you want to extend the volume; to confirm, enter a value of 0.

You will need to do this for all SSD devices you partitioned earlier so that they become part of the same VFFS volume.
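
To perform the extends without any interaction, the confirmation can be piped in while looping over the remaining devices. Here is a minimal sketch, assuming the first device from the example above already holds the VFFS volume and that the prompt accepts a "0" answer on standard input:

# Partition that already contains the VFFS volume (created in Step 3)
VFFS_PARTITION=/vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009:1

for DEVICE_NAME in ${SSD_DEVICES}; do
    DEVICE_PARTITION=/vmfs/devices/disks/${DEVICE_NAME}:1
    # Skip the device that already contains the VFFS volume
    if [ "${DEVICE_PARTITION}" = "${VFFS_PARTITION}" ]; then
        continue
    fi
    # Answer the extend confirmation prompt with 0
    echo 0 | vmkfstools -Z ${DEVICE_PARTITION} ${VFFS_PARTITION}
done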

Step 5 - To confirm that everything was configured correctly, we will use vmkfstools to query our VFFS volume by running the following command and specifying the path to our VFFS volume:

vmkfstools -Ph /vmfs/volumes/vffs-vesxi55-10

From the output, we should see that the filesystem for the volume is of type VFFS, and we should also see the three SSD devices that are backing this VFFS volume.
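
For an automated build, the same check can be reduced to a quick test in the script. Here is a minimal sketch, assuming the volume was labeled vffs-$(hostname -s) as in Step 3:

# Fail loudly if the new volume is not reported as a VFFS filesystem
vmkfstools -Ph /vmfs/volumes/vffs-$(hostname -s) | grep -qi "vffs" || echo "ERROR: VFFS volume was not created correctly"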

Step 6 - Finally, to make this new VFFS volume visible to the ESXi host, we need to refresh the ESXi storage system, which we can do by running the following vim-cmd:

vim-cmd hostsvc/storage/refresh

At this point, we now have a fully configured VFFS volume. If you jump right into the vSphere Web Client expecting to see your new Virtual Flash Resource on your newly configured ESXi host, you might be in for a surprise! You will actually NOT see the VFFS volume that we just configured, which stumped me initially.

It turns out simply creating a VFFS volume does not automatically equate to configuring a Virtual Flash Resource. You still need to configure the ESXi host to add the Virtual Flash Resource based on your VFFS volume, which in my opinion is quite odd and counter-intuitive. Today there is no CLI command to add the Virtual Flash Resource; you would need to use either the vSphere Web Client or the vFRC vSphere API. If you log in to the vSphere Web Client and configure a Virtual Flash Resource, you will see the VFFS volume that we created; you just need to select it and it will automatically be added.

This is not ideal if you want to completely automate vFRC configurations, so I decided to leverage my knowledge of the vFRC vSphere APIs and create a very simple Python script that calls into the ESXi host's MOB and issues the HostConfigureVFlashResource() method. This was a quick and dirty way to call the vSphere API and add the Virtual Flash Resource.

Disclaimer: These scripts are provided as examples, please test these scripts in your development/test environment before running them in production.

To make this really useful, I have created two scripts that can be embedded into a kickstart script or executed manually. They automatically perform the operations above as well as configure the Virtual Flash Resource without any user input or intervention.

The main script, configurevFRC.sh, is a shell script that performs the majority of the work; it then calls the Python script, addVirtualFlashResource.py (ensure you change the password variable in the script), to add the Virtual Flash Resource. You need to download both scripts and run them in the ESXi Shell.

The full contents of configurevFRC.sh are available via the download links above; a sample execution of the script simply walks through the steps covered in this article and finishes by adding the Virtual Flash Resource.

In the future I hope we can completely automate vFRC configurations from the command-line just as we can using the vSphere Web Client or vSphere APIs. For now, this solution will help you get around the limitations of the current command-line utilities.

Categories // Uncategorized Tags // ESXi 5.5, vFRC, vmfsl, vmkfstools, vSphere 5.5, vSphere Flash Read Cache

Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?

11.19.2013 by William Lam // 28 Comments

Many of us who run Nested ESXi in our home labs for development/testing purposes are pretty familiar with the requirements to properly set up a Nested ESXi environment, such as CPUs supporting Intel VT-x with EPT or AMD-V with RVI, and enabling both Promiscuous Mode and Forged Transmits on the portgroup that your Nested ESXi VM is connected to. Though these requirements have become second nature to most of us, it may not always be obvious why they are required, especially for new users of Nested ESXi.

UPDATE 09/01/2014 - Take a look at this article for an updated solution to the problem mentioned below.

I specifically wanted to focus on the networking requirements where both Promiscuous Mode and Forged Transmits are required to be enabled. At a high level, most of us have understood this as a prerequisite for proper network connectivity for the Nested Virtual Machines running inside of your Nested ESXi host, but why is that?

Promiscuous Mode:
Both VMware VSS (Virtual Standard Switch) and VDS (vSphere Distributed Switch) do not implement MAC Learning like a traditional network switch, since the vSphere platform already knows which MAC addresses are assigned to a particular Virtual Machine. This means that the virtual switch will only forward network packets to a Virtual Machine if the destination MAC Address matches that Virtual Machine's vNIC MAC Address.

In a Nested ESXi environment where you can have Nested Virtual Machines, the destination MAC Address for network packets destined to those Virtual Machines will differ from the Nested ESXi vmnic's MAC Address. Because of this, the physical ESXi host's virtual switch will drop the packet if Promiscuous Mode is not enabled. Promiscuous Mode allows the Nested ESXi VM's vmnic to monitor all traffic on the virtual switch it is connected to, thus providing connectivity to the underlying Nested Virtual Machines.

An interesting observation was recently made by Anthony Spiteri in his article about Reduced Network Throughput with Promiscuous Mode PortGroups. Since Promiscuous Mode allows all traffic from the virtual switch to be visible on the configured portgroup, there is definitely going to be some amount of overhead when enabling this setting. If you drive a large amount of network traffic for your regular Virtual Machines, you may want to consider separating out your Nested ESXi environment.

Forged Transmits:
Chris Wahl has already written an excellent article on Forged Transmits and its implications for Nested ESXi. I highly recommend you check out his blog post for the details.
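
For reference, both settings can also be enabled from the physical ESXi host's command line when a standard vSwitch is used (for a VDS portgroup, use the vSphere Web Client or the vSphere API instead). Here is a minimal sketch, assuming a portgroup named "NestedESXi" backing the Nested ESXi VMs (substitute your own portgroup name):

# Enable Promiscuous Mode and Forged Transmits on the portgroup
esxcli network vswitch standard portgroup policy security set -p NestedESXi --allow-promiscuous=true --allow-forged-transmits=true

# Verify the effective security policy for the portgroup
esxcli network vswitch standard portgroup policy security get -p NestedESXi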

Additional Resources:

  • How to enable Nested ESXi using VXLAN
  • Having Difficulties Enabling Nested ESXi in vSphere?

Categories // Uncategorized Tags // distributed virtual switch, forged transmit, nested, nested virtualization, promiscuous mode, virtual switch

Exploring the vSphere Flash Read Cache (vFRC) APIs Part 3

11.18.2013 by William Lam // Leave a Comment

To conclude our three-part blog series exploring the vSphere Flash Read Cache (vFRC) APIs, in this last article we will take a look at the vSphere APIs required to live migrate a running Virtual Machine that has been configured with vSphere Flash Read Cache. If you perform this migration using the vSphere Web Client, you will see that you are now presented with a new option in the migration wizard on whether or not to migrate the vFRC cache.

If you choose to migrate the vFRC cache, you have the option to migrate all of the Virtual Machine's vFRC cache, or you can specify this on a per-VMDK basis by selecting the "Advanced" option. If you migrate the vFRC cache, the migration will of course take longer, as it needs to transfer not only the running Virtual Machine but also the vFRC cache.

To automate this operation from an API perspective, you will need to use the Virtual Machine RelocateVM_Task method. As part of creating the migration spec, there is a new property called migrateCache which can now be specified on each VMDK.

To demonstrate the use of this vSphere API, I have created a vSphere SDK for Perl sample script called migratevFRCVM.pl which accepts the name of the Virtual Machine to migrate, the name of the ESXi host to migrate to and an optional parameter on whether or not to migrate the vFRC cache. By default, the script will migrate the vFRC cache if no option is selected.

Here is an example of migrating a VM along with its configured vFRC cache:

./migratevFRCVM.pl --config .vcenter55-1 --vmname Test-VM --dst_vihost vesxi55-10.primp-industries.com --migrate_cache true

Note: I have simplified the script to apply the --migrate_cache to ALL VMDKs, but as mentioned earlier you can specify this on a per-VMDK basis so you can migrate the vFRC cache for specific virtual disks if you choose to.

Hopefully you have enjoyed this series and now have a better understanding of the vSphere Flash Read Cache APIs. Be sure to check out the other two articles if you have not already.

  • Exploring the vSphere Flash Read Cache (vFRC) APIs Part 1
  • Exploring the vSphere Flash Read Cache (vFRC) APIs Part 2

Categories // Uncategorized Tags // ESXi 5.5, vffs, vflash, vFRC, virtual flash file system, vmotion, vSphere 5.5, vSphere Flash Read Cache
