WilliamLam.com

Quick Tip - Don't always assume your local HDs will be claimed correctly

03.21.2014 by William Lam // Leave a Comment

For whatever reason I woke up super early this morning, around 4am PST, and since I could not go back to sleep, I figured I might as well finish re-building one of my lab environments. I was trying to create a VSAN Disk Group using ESXCLI, but ran into the following error: "Stamping non-local disks requires a pre-created vsan cluster". If I run the following ESXCLI command, we can clearly see that the Hitachi disks I have are not being recognized as local devices:

esxcli storage core device list

[Screenshot: hard-disk-claim-rules-0]
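If you only want to check a specific suspect device rather than scroll through the full list, you can narrow the output down to the field that matters here; for example (using the same [DEVICE] placeholder as the commands later in this post):

esxcli storage core device list -d [DEVICE] | grep "Is Local"
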
I already know that some disks may not be recognized as local devices, which VSAN requires, and this is nothing new; in fact, Duncan Epping even shows you how to get around this in this article. Seeing that it was so early in the US, I figured Duncan was probably awake, and I wanted to run a few things by him. It was while he was looking around in the system that he noticed an even stranger issue.
[Screenshot: hard-disk-claim-rules-1]
It turns out my local Hitachi disks were being claimed by the VMW_SATP_DEFAULT_AA (default Active/Active) SATP plugin, which was kind of strange, as I would have expected them to be claimed by VMW_SATP_LOCAL instead. I decided to take a look at the SATP claim rules to see why this was the case by running the following ESXCLI command:

esxcli storage nmp satp rule list

It turns out that, for identifying Hitachi storage devices, the SATP rules are quite generic and key off of the vendor name only, and hence the default Active/Active SATP plugin gets chosen.
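On a system with many claim rules the full list is quite long; the Hitachi entries shown below can be pulled out with a simple filter, for example:

esxcli storage nmp satp rule list | grep HITACHI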

VMW_SATP_DEFAULT_AA          HITACHI                                                                   system      inq_data[128]={0x44 0x46 0x30 0x30}  VMW_PSP_RR
VMW_SATP_DEFAULT_AA          HITACHI                                                                   system

In my opinion this can be problematic, as the plugin can potentially cause strange IO or pathing behaviors because it expects one thing but is getting something completely different. I suspect this plugin was probably meant for a Hitachi storage array such as an HDS, rather than a local device. Once I was able to narrow this down, I just needed to create a new rule that would override the system defaults.

To verify this, I created a new rule based on the vendor name, just for testing purposes, which will take priority over the default system rules. When creating custom SATP rules, you can filter on a variety of attributes such as the model, transport, controller, target, adapter, etc. Depending on your use case, you may want to be generic or quite specific. The following ESXCLI command will create the new rule and assign it the VMW_SATP_LOCAL plugin:

esxcli storage nmp satp rule add -V HITACHI -P VMW_PSP_FIXED -s VMW_SATP_LOCAL

[Screenshot: hard-disk-claim-rules-3]
Once that rule has been added, I can now finish my original task, which is to mark my Hitachi drives as "local" and reclaim these devices by running the following ESXCLI commands:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -o "enable_local" -d [DEVICE]
esxcli storage core claiming reclaim -d [DEVICE]

[Screenshot: hard-disk-claim-rules-4]
If we now take a look at our device, we can see that our rule was activated, and if we run the following ESXCLI command, we can now see the device is "local":

esxcli storage core device list

[Screenshot: hard-disk-claim-rules-5]
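
Since the HITACHI rule above was created purely for testing, it is worth noting that it can be backed out later with the matching remove command; the options below simply mirror what was used for the add and should be adjusted to match whatever rule you actually created:

esxcli storage nmp satp rule remove -V HITACHI -P VMW_PSP_FIXED -s VMW_SATP_LOCAL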

The lesson here: do not assume your local devices, and potentially even remote devices, will be claimed correctly. There may be vendor best practices that differ from the out-of-the-box rules, and you should always do your due diligence to either verify yourself or work with your vendors to confirm the configurations.

Categories // ESXi, VSAN, vSphere Tags // esxcli, ESXi, SATP, vSphere

Quick Tip - Increasing capacity on a Nested VSAN Datastore

03.21.2014 by William Lam // 2 Comments

The other day I needed to increase the capacity on one of my Nested VSAN Datastores, as one of our users required a larger VSAN datastore than it was initially configured for. I was expecting to be able to just increase the size of the underlying VMDKs, like I would for a traditional Nested ESXi environment, and rescan in ESXi to pick up the new capacity without any downtime. It turns out this was not exactly the case for a Nested VSAN environment.

[Screenshot: increase-capacity-nested-vsan-datastore-0]
Disclaimer: Nested Virtualization is not officially supported by VMware

When you first set up VSAN, regardless of how the disks were claimed, VSAN will consume the entire device (SSD or MD). The capacity that VSAN initially detects will then be used to create the necessary partitions as part of the VSAN Disk Group creation. VSAN assumes that the capacity of the underlying devices will never change since, in the "real" world, disks do not auto-magically get larger 🙂 and this is a valid assumption. In a Nested ESXi environment, however, they can auto-magically get larger, but VSAN was not built for this use case. What ends up happening is that the underlying devices can be "hot-extended", but the existing VSAN Disk Group cannot detect this new capacity.
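
You can see exactly which devices VSAN has claimed for its disk groups by running the following ESXCLI command on one of the Nested ESXi hosts, which is a handy sanity check before and after any resize attempt:

esxcli vsan storage list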

Having said that, there are two ways you can increase your VSAN datastore:

Option 1 - If you wish to preserve your VSAN Datastore, you can hot-add additional VMDK(s) to your existing VSAN Disk Group, or, if it is full, you can create a new disk group and add the additional VMDK(s) there. This will modify your setup slightly if you wanted a particular set of disk groups, but it will allow you to preserve your data.

Option 2 - The latter option requires the deletion and re-creation of the VSAN Datastore, which is not ideal if you already have data on it. You will need to increase the capacity of the underlying VMDKs and then re-create your VSAN Datastore, but this way you can keep the existing number of disks and disk groups that you initially created your Nested ESXi environment with.

In my scenario, I could not destroy the VSAN Datastore as I had someone using it, so I opted for option #1. Here is what my configuration looked like before, which was a single VSAN Disk Group with 1xSSD and 1xMD:

[Screenshot: increase-capacity-nested-vsan-datastore-1]
I then added an additional 10GB VMDK to each of my Nested ESXi hosts and issued a rescan so the ESXi hosts would pick up the new device:

[Screenshot: increase-capacity-nested-vsan-datastore-2]
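For reference, the same rescan can also be triggered from the ESXi Shell of each Nested ESXi host; a command along these lines should do it:

esxcli storage core adapter rescan --all
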
In just a few seconds, I can see my new storage device. I can now head over to the VSAN management page, which is located at the vSphere Cluster level, and once I refresh, I can see that VSAN has automatically added the new "MD" into the existing disk group and my storage has automatically expanded!

[Screenshot: increase-capacity-nested-vsan-datastore-3]
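
Note that the new MD was pulled into the existing disk group on its own, presumably because my cluster was configured for automatic disk claiming. If your VSAN cluster is set to manual mode, you would need to add the new device to a disk group yourself, either through the vSphere Web Client or with something along these lines, where [DEVICE] is the identifier of the new MD (treat this as a sketch rather than a tested recipe):

esxcli vsan storage add -d [DEVICE]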

Categories // Nested Virtualization, VSAN Tags // nested virtualization, VSAN, vSphere 5.5

Exploring VSAN APIs Part 6 – Modifying Virtual Machine VM Storage Policy

03.20.2014 by William Lam // 6 Comments

One of the biggest benefits of VSAN is the ability to specify granular storage policies on a per-Virtual Machine basis. These storage policies are managed through VMware's Storage Policy Based Management (SPBM) system and are automatically enforced by VSAN to ensure compliance. A VM Storage Policy can be assigned during the initial deployment of a Virtual Machine, or it can be modified afterwards, for example if the Virtual Machine's SLAs have changed because the workload has changed. From the vSphere Web Client, modifying a Virtual Machine's VM Storage Policy is simply a matter of selecting the new VM Storage Policy and re-applying it, and the same operation is also available programmatically through the vSphere API.

Using the vSphere API method ReconfigVM_Task(), you will be able to modify the VM Storage Policy for the VM Home Namespace and/or individual Virtual Disks. To modify the VM Home Namespace, there is a property defined at the root of the Virtual Machine config spec called vmProfile, which accepts the VM Storage Policy ID extracted from the SPBM API. To modify the VM Storage Policy for an individual Virtual Disk, you will need to set the profile property, which is exposed on the Virtual Disk's device config spec, with the VM Storage Policy ID. To demonstrate this functionality, I have created a sample vSphere SDK for Perl script called changeVMStoragePolicy.pl.
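
To give a feel for how the vmProfile portion of the reconfigure spec fits together, here is a minimal vSphere SDK for Perl sketch; it is not the full changeVMStoragePolicy.pl script, and the VM name and profile ID are just the example values used later in this post:

#!/usr/bin/perl
# Minimal sketch: change the VM Storage Policy for the VM Home Namespace
# by setting vmProfile in a reconfigure spec (VM name and profile ID are examples)
use strict;
use warnings;
use VMware::VIRuntime;

Opts::parse();
Opts::validate();
Util::connect();

# Locate the Virtual Machine by name
my $vm = Vim::find_entity_view(view_type => 'VirtualMachine',
                               filter    => { 'name' => 'VM1' });

# Wrap the SPBM VM Storage Policy ID in a VirtualMachineDefinedProfileSpec
my $profileSpec = VirtualMachineDefinedProfileSpec->new(
    profileId => 'cd6908b2-0704-4733-ad9b-a9a8f200ab0a');

# vmProfile applies the policy to the VM Home Namespace
my $spec = VirtualMachineConfigSpec->new(vmProfile => [ $profileSpec ]);

# Reconfigure the VM; SPBM/VSAN will enforce the new policy requirements
my $task = $vm->ReconfigVM_Task(spec => $spec);

Util::disconnect();

For an individual Virtual Disk, the same VirtualMachineDefinedProfileSpec would instead go into the profile property of the corresponding VirtualDeviceConfigSpec within the reconfigure spec.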

Disclaimer:  These scripts are provided for informational and educational purposes only. They should be thoroughly tested before you attempt to use them in a production environment.

In my environment, I have a Virtual Machine called VM1 which has been defined with a VM Storage Policy called "Copper", as seen in the screenshot below:

[Screenshot: change-vm-storage-policy-0]
Let's say I want to change the Virtual Machine's VM Storage Policy to another policy called "Aluminum". I first need to extract the VM Storage Policy ID from the SPBM API and then pass it into the script like the following:

./changeVMStoragePolicy.pl --server vcenter55-1.primp-industries.com --username root --vmname VM1 --profileid cd6908b2-0704-4733-ad9b-a9a8f200ab0a

[Screenshot: change-vm-storage-policy-1]
Once the Virtual Machine has been reconfigured, we can take a look in the vSphere Web Client and see that the VM Storage Policy has now been changed, and VSAN will automatically enforce these new requirements.

[Screenshot: change-vm-storage-policy-2]
If you wish to assign a VM Storage Policy as part of a new Virtual Machine creation, you just need to set the vmProfile and profile properties, which is similar to a reconfiguration operation.

  1. Exploring VSAN APIs Part 1 – Enable VSAN Cluster
  2. Exploring VSAN APIs Part 2 – Query available SSDs
  3. Exploring VSAN APIs Part 3 – Enable VSAN Traffic Type
  4. Exploring VSAN APIs Part 4 – VSAN Disk Mappings
  5. Exploring VSAN APIs Part 5 – VSAN Host Status
  6. Exploring VSAN APIs Part 6 – Modifying Virtual Machine VM Storage Policy
  7. Exploring VSAN APIs Part 7 – VSAN Datastore Folder Management
  8. Exploring VSAN APIs Part 8 – Maintenance Mode
  9. Exploring VSAN APIs Part 9 – VSAN Component count
  10. Exploring VSAN APIs Part 10 – VSAN Disk Health

Categories // VSAN Tags // spbm, vm storage policy, vm storage profile, VSAN, vSphere 5.5, vSphere API
