WilliamLam.com

Restoring VSAN VM Storage Policies without vCenter Part 1: Using cmmds-tool

11.22.2013 by William Lam // 5 Comments

A scenario that I have been looking into recently while testing VSAN in my lab is what happens when vCenter Server is no longer available and the impact that might have on your environment. We know that VSAN, from a configuration perspective, works very similarly to vSphere HA, where vCenter Server is only required for the initial VSAN Cluster configuration. Once the ESXi hosts have been added to the VSAN Cluster, vCenter Server is no longer part of the picture from a functional perspective and the ESXi hosts will know how to communicate with each other within the VSAN Cluster. We can even build a single VSAN node to help bootstrap vCenter Server itself for greenfield deployments.

So what does that leave us with? Well, the Virtual Machines of course. The Virtual Machines will continue to run without any impact whether or not vCenter Server is available. VSAN will continue to govern and maintain compliance for the VM Storage Policies that have been assigned to each and every Virtual Machine. However, in the scenario where you cannot restore vCenter Server, which is primarily where the VM Storage Policies are stored, and you need to build out a new environment, how do you go about restoring the VM Storage Policies?

Well, it turns out that vCenter Server is not the only place where the VM Storage Policies are stored. To ensure that VSAN can continue enforcing the policies that have been assigned to each Virtual Machine and their associated VMDKs, a copy of the VM Storage Policies is distributed amongst all the ESXi hosts within the VSAN Cluster. In this first article I will demonstrate how to recover the VM Storage Policies for a particular Virtual Machine running on an ESXi host where vCenter Server is no longer available, using a utility located in the ESXi Shell called cmmds-tool. In part two of the article I will demonstrate the same recovery process leveraging the vSphere API, which is more user friendly.

Disclaimer: The cmmds-tool is not meant for general use; you should only use it for troubleshooting under VMware GSS/Engineering supervision. If you choose to use it, do so at your own risk.

In the ESXi Shell, there is a nifty little VSAN utility called cmmds-tool, which stands for Cluster Monitoring, Membership and Directory Services. This tool allows you to perform a variety of operations and queries against the VSAN nodes and their associated objects. One interesting command is the "find" operation, which will allow us to look up a specific VM Storage Policy; a bit more on this later.

Let's say we have a Virtual Machine called VSAN-VM-1 and it is associated with three VM Storage Policies called Copper, Aluminum and Platinum: one for the VM Home and one for each of the two VMDKs. Here is a screenshot of what that looks like in the vSphere Web Client:

Now let's say vCenter Server is somehow lost or unrecoverable for whatever reason, but we still have access to the ESXi host and the running Virtual Machine. Let's go ahead and recover the VM Storage Policies so we can then rebuild a new vCenter Server and re-create the policies.

Step 1 - We need to first identify a couple of pieces of information. The first is the UUID of the VM Home directory (VSAN uses UUIDs for all of its objects). Log in to the ESXi Shell of the ESXi host that is currently hosting the Virtual Machine and run the following command:

vim-cmd vmsvc/getallvms | grep [DISPLAY-NAME-OF-YOUR-VM]

The VM Home directory UUID will be part of the Virtual Machine directory name, which can be seen in the screenshot above highlighted in green. Make a note of that UUID as you will need it in a later step. You should also make a note of the Virtual Machine MoRef ID, which is the first numeric value on the left-hand side of the output. In this example, the MoRef ID is 1.
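
As a rough illustration using this example's values (the trailing columns of the output are omitted), the matching line looks something like this:

vim-cmd vmsvc/getallvms | grep VSAN-VM-1
1    VSAN-VM-1    [vsanDatastore] 51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1.vmx    ...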

Step 2 - Next we need to identify the UUID for each of the VMDKs for that given Virtual Machine. To do so, we need to take a look at the descriptor file for each of the VMDKs in the Virtual Machine home directory. You can use vim-cmd vmsvc/get.filelayout [VM-MOREF-ID] to get the VMDK paths or you can change into the Virtual Machine directory and cat out the files. In my example I have the following two VMDK descriptor files:

/vmfs/volumes/vsanDatastore/51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1.vmdk
/vmfs/volumes/vsanDatastore/51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1_1.vmdk
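
Alternatively, using the MoRef ID recorded in Step 1 (1 in this example), the vim-cmd route looks like this, and its output includes the full paths to the VMDK descriptor files:

vim-cmd vmsvc/get.filelayout 1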

You can just grep for the keyword "vsan" by using the following command (replacing the path with that of your VMDK):

grep "vsan" /vmfs/volumes/vsanDatastore/51108952-6e91-b30b-a5ab-005056ad9acf/VSAN-VM-1.vmdk

From the output you will see a vsan:// URI containing the UUID associated with each VMDK; please make a note of the UUID for each VMDK. We are now ready to query the VM Storage Policy configuration, which will help us rebuild the policy in our new vCenter Server.

Step 3 - To look up the VM Home VM Storage Policy, run the following command, specifying the UUID of the VM Home directory from Step 1:

cmmds-tool find -t POLICY -u 51108952-6e91-b30b-a5ab-005056ad9acf -f json

The VM Storage Policy configuration is stored in the "content" field and you will need to translate the properties back to the VSAN policy you have defined. As part of the output you will also see a property called spbmProfileId, which is the unique identifier for the VM Storage Policy and which you can query if you are using the VM Storage Policy APIs that were introduced in vSphere 5.5.

Here is a table that will help you translate the keys to the appropriate VSAN Policies:

VSAN Capability Description          VSAN Capability Key
Number of failures to tolerate       hostFailuresToTolerate
Number of disk stripes per object    stripeWidth
Force provisioning                   forceProvisioning
Object space reservation             proportionalCapacity
Flash read cache reservation         cacheReservation

Step 4 - To look up the VMDK VM Storage Policies, run the same command, replacing the UUID with each of the VMDK UUIDs recorded in Step 2.
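
For example, using one of the VMDK UUIDs noted in Step 2:

cmmds-tool find -t POLICY -u [VMDK-UUID-FROM-STEP-2] -f json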

Once you have recorded the configuration for each VM Storage Policy, you can then head over to your new vCenter Server, re-create the VM Storage Policies and then re-associate the policies with the Virtual Machines.

As you can see, the steps to recover a VSAN VM Storage Policy are not too difficult, but they can be a bit tedious. In the next article, we will simplify this by leveraging the vSphere API, which has access to the same CMMDS system but makes querying the VM Storage Policy super easy by only requiring the user to provide the name of the Virtual Machine.

Categories // VSAN Tags // cmmds-tool, ESXi 5.5, Virtual SAN, vm storage policy, vm storage profile, VSAN, vSphere 5.5

How cool is that!? Using VMware Workstation to manage your ESXi hosts (including Free ESXi) & VMs

11.21.2013 by William Lam // 9 Comments

To be completely honest, I have not played with VMware Workstation in quite a while as my day-to-day job primarily revolves around our Enterprise suite of products. In a recent meeting, I picked up on some interesting tidbits about the latest version of VMware Workstation 10 and after giving it a try in my lab, I thought I would share one very cool feature that you may not be aware of (there are actually a lot of cool features in the latest release; check out what's new here).

The very first thing I noticed is that, unlike other downloads from VMware where you need to register the product and get an evaluation key, VMware Workstation can be downloaded without any registration and you can start the 30-day free trial immediately after installation! I think that is really slick and it can also come in handy if you need to install Workstation right away for something. Make sure you download from this page here by clicking on "Try for Free" instead of going to www.vmware.com/downloads.

One of the capabilities that Workstation introduced probably a couple of releases ago was the ability to connect to a remote system, whether that is another Workstation instance, a vCenter Server or even an ESXi host. At the time I assumed this was to enable users to easily cold migrate a Virtual Machine that was created locally onto one of these remote targets.

What I did not realize was that you could do a lot more with this capability than just copy offline Virtual Machines. To my surprise I found that you can fully manage the Virtual Machines on these remote targets, including changing virtual hardware configurations such as memory, CPU, disks and the guest OS, as well as provision new Virtual Machines. The VM Console is fully functional leveraging VMRC and you can even connect to Free ESXi instances and get the same capabilities you had with the legacy vSphere C# Client. The other neat thing is that you can also manage your Virtual Hardware 10 VMs even though the latest vSphere C# Client does not allow this, because VMware Workstation 10 is vHW10 aware.

Here is a screenshot of managing my Free ESXi host, which is running on my Apple Mac Mini, as well as my vCenter Server. As you can see, you can have multiple connections open at once, which is quite useful, especially if you have a couple of Free ESXi hosts that you would like a single pane of glass to manage.

Another nice feature is the amount of backwards compatibility it provides for vSphere. You can go as far back as vSphere 4.1 (vCenter Server & ESXi). To prove this in my environment, I provisioned Nested ESXi hosts running vSphere 4.1, 5.0, 5.1 and 5.5 and connected them all to Workstation. This is another great way to manage standalone ESXi hosts if you still need to run older versions.

Lastly, you do not need to be running the Windows version of VMware Workstation to get these benefits. You can also do the same using Workstation for Linux and here is a screenshot of running Workstation on an Ubuntu desktop.

As you can see, this is just one of many new and cool capabilities of VMware Workstation 10 and I have to say, for $250 this is a steal to be able to easily manage not only your VMs running locally but also remote systems like vCenter Server and ESXi hosts, including Free ESXi, which is a huge deal IMHO. The Workstation team really knocked it out of the park and I am glad I had the opportunity to check out their latest release. I also hope VMware Fusion will be getting these capabilities in the near future! Simon, I hope you see this 😉

Categories // Uncategorized Tags // ESXi 5.5, free esxi, vSphere 5.5, workstation

How to automate vFRC configurations using the command-line in ESXi

11.20.2013 by William Lam // 1 Comment

While working on my vSphere Flash Read Cache (vFRC) articles last week, I wanted to be able to quickly build out my vSphere environment so that vFRC was fully configured as part of my ESXi installation using a Kickstart script. This would allow me to simply add my ESXi hosts into vCenter Server and not have to go through the vSphere Web Client to configure vFRC for each host. Now of course the vSphere Web Client is not the only option to configure vFRC; you can also use the vSphere APIs by creating your own script or even use the new vFRC PowerCLI cmdlets as an alternative.

However, I was interested in creating a very simple script that I could easily integrate with my kickstart deployment, as that is what I am using for automated provisioning of my Nested ESXi hosts. With a bit of research and some trial/error, I have come up with a process that can be fully automated from the command-line of ESXi. In my environment I have a Nested ESXi host that contains three SSDs (4GB each) which will be used to construct my Virtual Flash Resource.

Note: Jump to the very bottom for a completely automated script to configure vFRC for your ESXi host.

Step 1 - You will want to list the available SSD devices on your ESXi host; you can do so by using the following ESXCLI command:

esxcli storage vflash device list

You will need to make a note of the device IDs as they will be required in the subsequent steps.
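
If you plan to script the remaining steps, a small helper like the following can capture the device IDs. This is just a sketch: it assumes the device ID is the first column of the output and begins with naa., as it does in this environment.

# Sketch: collect SSD device IDs for later steps (assumes the device ID is
# the first column of the output and starts with "naa.")
SSD_DEVICES=$(esxcli storage vflash device list | awk '/^naa\./ {print $1}')
echo $SSD_DEVICES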

Step 2 - Next we will need to partition our devices before we can create the VFFS (Virtual Flash File System) volume, and we will need to calculate the end sector if we wish to consume the entire device. To do so, we will use the partedUtil command with the "getptbl" option to retrieve the device geometry.

partedUtil getptbl /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009

You will need to make a note of the first three numbers, which represent the number of cylinders, the number of heads and the number of sectors per track. To calculate the end sector, the equation is the following: (Number of Cylinders x Number of Heads x Number of Sectors Per Track) - 1

In our example we have (522*255*63)-1 which gives us 8385929
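
You can also let the shell do the math; using the geometry from this example:

# (number of cylinders * number of heads * sectors per track) - 1
echo $(( (522 * 255 * 63) - 1 ))    # prints 8385929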

To create the partition, we will again use partedUtil, this time with the "setptbl" option, by running the following command (ensure you replace the end sector value with your own):

partedUtil setptbl /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009 "gpt" "1 2048 8385929 AA31E02A400F11DB9590000C2911D1B8 0"

For more details on using the partedUtil command, please refer here and here.

Since my other two devices are exactly the same size, I can just re-use the command and replace the device path. Ensure all devices that you wish to use in your Virtual Flash Resource are partitioned before moving on to the next step.
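
As a sketch, the remaining devices could be partitioned in a small loop; the first device ID below is the second SSD from this example, the second is a placeholder, and the end sector value only applies if the devices really are identical in size:

# Partition the remaining identically-sized SSDs (replace the placeholder
# with your own device ID; recalculate the end sector if the sizes differ)
for DEV in naa.6000c29498be5c56231d631d9c6cbee8 [THIRD-SSD-DEVICE-ID]; do
   partedUtil setptbl /vmfs/devices/disks/${DEV} "gpt" "1 2048 8385929 AA31E02A400F11DB9590000C2911D1B8 0"
done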

Step 3 - We will now create our VFFS volume, which only needs to be created on one of the devices. In this example, I have chosen to use the first SSD device as shown in "esxcli storage vflash device list". To create the VFFS volume we will use the vmkfstools utility just like we would if we were creating a VMFS volume, but instead specify the "vmfsl" type.

Run the following command to create your VFFS volume. You will need to append :1 to the end of the SSD device to specify the partition you created earlier, as well as provide a display name for the volume; I chose vffs-$(hostname -s), which uses the short hostname of the ESXi host.

vmkfstools -C vmfsl /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009:1 -S vffs-$(hostname -s)

Step 4 - Once you have your VFFS volume created, you can extend it with additional SSD devices by using vmkfstools with the -Z option. The syntax for the command is the SSD device partition you wish to add, followed by the source SSD device partition containing the VFFS volume.

Here is an example of the command:

vmkfstools -Z /vmfs/devices/disks/naa.6000c29498be5c56231d631d9c6cbee8:1 /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009:1

You will be prompted on whether you want to extend the volume; to confirm, enter a value of 0.

You will need to do this for each SSD device you partitioned earlier that should be part of the same VFFS volume.
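
For example, bringing a hypothetical third device (placeholder ID) into the same VFFS volume would look like this:

vmkfstools -Z /vmfs/devices/disks/[THIRD-SSD-DEVICE-ID]:1 /vmfs/devices/disks/naa.6000c2932c4ed8a540b6e9f0be9e1009:1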

Step 5 - To confirm that everything was configured correctly, we will use vmkfstools to query our VFFS volume by running the following command and specifying the path to our VFFS volume:

vmkfstools -Ph /vmfs/volumes/vffs-vesxi55-10

From the output we should see that the filesystem for the volume is of type VFFS, and we should also see the three SSD devices that are backing this VFFS volume, as shown in the screenshot above.

Step 6 - Finally, to make this new VFFS volume visible to the ESXi host, we will need to refresh the ESXi storage system, which we can do by running the following vim-cmd:

vim-cmd hostsvc/storage/refresh

At this point, we now have a fully configured VFFS volume. If you jump right into the vSphere Web Client expecting to see your new Virtual Flash Resource on your newly configured ESXi host, you might be in for a surprise! You will actually NOT see the VFFS volume that we just configured, which stumped me initially.

It turns out that simply creating a VFFS volume does not automatically equate to configuring a Virtual Flash Resource. You still need to configure the ESXi host to add the Virtual Flash Resource based on your VFFS volume, which in my opinion seems quite odd and counter-intuitive. Today there is no CLI command to add the Virtual Flash Resource; you would need to use either the vSphere Web Client or the vFRC vSphere API. If you log in to the vSphere Web Client and configure a Virtual Flash Resource, you will see the VFFS volume that we created; you just need to select it and it will automatically be added.

This is not very ideal if you want to completely automate vFRC configuration, so I decided to leverage my knowledge of the vFRC vSphere APIs and create a very simple Python script that calls into the ESXi host's MOB and issues the HostConfigureVFlashResource() method. This was sort of a quick/dirty way to call the vSphere API and add in the Virtual Flash Resource.

Disclaimer: These scripts are provided as examples, please test these scripts in your development/test environment before running them in production.

To make this really useful I have created two scripts that can be embedded into a kickstart script or executed manually. The scripts will automatically perform the operations above as well as configure the Virtual Flash Resource without any user input/intervention.

The main script is called configurevFRC.sh, a shell script that performs the majority of the work; it then calls the Python script, addVirtualFlashResource.py (ensure you change the password variable in the script), to add the Virtual Flash Resource. You need to download both scripts and run them in the ESXi Shell.

Here is the contents of configurevFRC.sh (you can download both scripts using the links above):
Here is a sample execution of configurevFRC.sh script:

In the future I hope we can completely automate vFRC configurations from the command-line as we can using the vSphere Web Client or vSphere APIs. For now, this solution will help get you around the limitations we have in the command-line utilities.


Categories // Uncategorized Tags // ESXi 5.5, vFRC, vmfsl, vmkfstools, vSphere 5.5, vSphere Flash Read Cache
