Quick Tip - Don't always assume your local HDs will be claimed correctly

03.21.2014 by William Lam // Leave a Comment

For whatever reason I woke up super early this morning around 4am PST and since I could not go back to sleep, I figured I might as well finish re-building one of my lab environments. I was trying to create a VSAN Disk Group using ESXCLI, but ran into the following error:

Stamping non-local disks requires a pre-created vsan cluster

By running the following ESXCLI command, I could clearly see that my Hitachi disks were not being recognized as local devices:

esxcli storage core device list

[Screenshot: hard-disk-claim-rules-0]
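
If you want to spot-check this across all of your devices without scrolling through the full listing, a simple grep against the same output works as well. This is just an illustrative one-liner and assumes your devices use NAA-style identifiers (adjust the pattern if yours show up as mpx.* or t10.*):

esxcli storage core device list | grep -E "^naa|Is Local"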
I already knew that some disks may not be recognized as local devices, which VSAN requires, and this is nothing new; in fact, Duncan Epping even shows you how to get around this in his article here. Seeing that it was so early in the US, I figured Duncan was probably awake, so I wanted to run a few things by him. It was while he was looking around in the system that he noticed an even stranger issue.
[Screenshot: hard-disk-claim-rules-1]
It turns out my local Hitachi disks were being claimed by the VMW_SATP_DEFAULT_AA (Active/Active) SATP plugin, which was kind of strange as I would have expected them to be claimed by VMW_SATP_LOCAL instead. I decided to take a look at the SATP claim rules to see why this was the case by running the following ESXCLI command:

esxcli storage nmp satp rule list

It turns out that the SATP rule for identifying Hitachi storage devices is quite generic and keys off of the vendor name only, which is why the VMW_SATP_DEFAULT_AA plugin gets chosen:

VMW_SATP_DEFAULT_AA          HITACHI                                                                   system      inq_data[128]={0x44 0x46 0x30 0x30}  VMW_PSP_RR
VMW_SATP_DEFAULT_AA          HITACHI                                                                   system

In my opinion this can be problematic, as the plugin can potentially cause strange IO or pathing behaviors when it expects one thing but gets something completely different. I suspect this rule was probably meant for a Hitachi storage array such as an HDS rather than a local device. Once I was able to narrow this down, I just needed to create a new rule that would override the system defaults.

To verify this, I created a new rule based on the vendor name just for testing purposes, which will take priority over the default system rules. When creating custom SATP rules, you can filter on a variety of attributes such as the model, transport, controller, target, adapter, etc. Depending on your use case, you may want to be generic or quite specific. The following ESXCLI command will create the new rule and assign it the VMW_SATP_LOCAL plugin:

esxcli storage nmp satp rule add -V HITACHI -P VMW_PSP_FIXED -s VMW_SATP_LOCAL

[Screenshot: hard-disk-claim-rules-3]
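
To double-check that the new rule actually landed, you can list the SATP rules again and filter on the vendor name; the custom entry should show up with "user" in the Rule Group column alongside the two "system" defaults shown earlier:

esxcli storage nmp satp rule list | grep HITACHI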
Once that rule has been added, I can now finish my original task, which is to mark my Hitachi drives as "local" and reclaim these devices by running the following ESXCLI commands:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -o "enable_local" -d [DEVICE]
esxcli storage core claiming reclaim -d [DEVICE]
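
If you have more than one of these disks going into the same VSAN Disk Group, you can simply repeat the two commands for each device. Here is a minimal sketch of what that could look like from the ESXi Shell; the naa.* identifiers below are made up for illustration, so substitute your own device IDs:

# The naa.* device IDs below are placeholders for illustration only
for DEVICE in naa.5000cca012345678 naa.5000cca012345679; do
   esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -o "enable_local" -d ${DEVICE}
   esxcli storage core claiming reclaim -d ${DEVICE}
done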

[Screenshot: hard-disk-claim-rules-4]
If we now take a look at our device, we can see that our rule was activated, and if we run the following ESXCLI command, we can see the device is now reported as "local":

esxcli storage core device list

[Screenshot: hard-disk-claim-rules-5]
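
If you only care about a single device, you can also narrow the output down to just that device and field; the device ID below is hypothetical:

esxcli storage core device list -d naa.5000cca012345678 | grep "Is Local"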

The lesson here: do not assume your local devices, and potentially even remote devices, will be claimed correctly. There may be vendor best practices that differ from the out-of-the-box rules, and you should always do your due diligence to either verify the configuration yourself or work with your vendors to confirm it.

Categories // ESXi, VSAN, vSphere Tags // esxcli, ESXi, SATP, vSphere

A kitten dies, every time you query the VCDB

03.19.2014 by William Lam // 14 Comments

[Image: vcdb1]
Okay, maybe I am being a bit dramatic 😉 However, this is what crosses my mind every time I hear of someone trying to query for something in the VCDB (vCenter Server Database) instead of using the vSphere API. Every so often I see a customer request asking for a specific SQL query to find something in the VCDB. It just baffles me why someone would want to go through such pain, not to mention that this is neither supported nor a recommended best practice.

Disclaimer: No kittens were harmed during the writing of this article

There are many disadvantages to going directly to the VCDB instead of through the vSphere API:

  • There are no guarantees on backwards compatibility of the database schema or database views from release to release
  • The data is in a very raw form; there is none of the abstraction that an API usually provides to make consumption easier
  • Risk of crafting queries that could negatively impact the performance of vCenter Server
  • Not officially supported by VMware, unless directed by VMware GSS
  • Pulling hair out while trying to understand the internal relationships of various objects, especially with a complex system like vCenter Server

Although the vSphere data is eventually persisted in the VCDB, the main purpose of the vSphere API, like that of many other software APIs, is to provide a well-defined interface for interacting with the underlying platform. I think there are still many vSphere/System Administrators who fear the word "API". Instead of being afraid of APIs, we should all embrace them and start to better understand how they work, as they are just a way of interacting with the underlying system.

I can speak from my own personal experience: my first task out of college and into the real world was to build an application impact assessment report for when a vSphere HA event occurred. We knew which Virtual Machines were restarted, but we did not have a good idea of which applications were affected. I had to correlate the impacted Virtual Machines with a list of applications and application owners from our homegrown CMDB, which ran on MS SQL Server. At the time, I did not know any better and thought the best way of getting this information was to go directly to the VCDB (at the time this was VirtualCenter 2.5). I found myself creating super complex SQL queries and trying to reverse engineer the relationships between all the different tables just to get basic information out of vCenter Server.

If I had known then what I know now, I would have gone straight to learning the VMware APIs, as I could have gotten everything I needed in just a couple of lines of code. After completing that project, along with a few others in which I leveraged the VCDB directly, I realized that I needed to learn the VMware API. This early lesson has greatly helped me in my career to further automate IT/VMware infrastructure and helped us move towards this new world of the Software-Defined Datacenter.

The diagram below provides an overview of some of the ways the vSphere API can be consumed by customers. One example is the vSphere Web Client and the legacy vSphere C# Client: these are UIs that customers can use to interact with the vSphere platform, and they in turn use the vSphere API. There are also a variety of CLIs and scripting/programming languages built for easily extracting information and performing operations against vSphere, which can be as simple as a one-liner. The vSphere API can also be consumed by 3rd-party companies to further abstract and provide new and interesting ways of interacting with vSphere; a great example of this is the CloudPhysics Card Builder platform, which makes creating custom vSphere reports a breeze.

[Diagram: vcdb-vs-api]
Note: The diagram above is just an example of some of the SDKs/CLIs/UIs that VMware provides to our customers. It is not meant to be an exhaustive list, and you can find all SDKs/CLIs for vSphere here.
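
To make the contrast a bit more concrete, here is a minimal sketch of pulling a basic VM inventory report through the vSphere API using RbVmomi, the same Ruby bindings that RVC is built on. The hostname and credentials are placeholders, and this only walks the top-level VM folder, but it replaces what would otherwise be a join-heavy SQL query against the VCDB:

require 'rbvmomi'

# Placeholder connection details -- substitute your own vCenter Server
vim = RbVmomi::VIM.connect host: 'vcenter.example.com',
                           user: 'administrator@vsphere.local',
                           password: 'VMware1!',
                           insecure: true

# Walk the inventory through the API instead of querying the VCDB tables directly
dc = vim.serviceInstance.find_datacenter('Datacenter') or abort 'Datacenter not found'
dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine).each do |vm|
  puts "#{vm.name}\t#{vm.runtime.powerState}"
end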

Hopefully with this information, customers will better understand the benefits of using the vSphere API versus going directly to the VCDB.

Categories // vSphere Tags // SQL, vcdb, vCenter Server Database, vSphere API, vSphere SDK

Extending RVC to support renaming VM Storage Policies

03.18.2014 by William Lam // 1 Comment

I was recently using RVC (Ruby vSphere Console) to set up one of my VSAN lab environments and noticed that in the SPBM namespace you can create and delete a VM Storage Policy, but you cannot rename an existing one. The great thing about RVC is that it is very extensible, and I thought it would be useful to have an spbm.profile_rename command, so I decided to build it!

The management of VM Storage Policies is performed through the SPBM API, which provides a method called PbmUpdate() that allows you to rename an existing VM Storage Policy. In my environment, I exclusively use the VCSA (vCenter Server Appliance), and in the /root directory you should see a .rvc directory. To extend the SPBM namespace, you just need to create a new file called spbm.rb containing the following snippet of code:

opts :profile_rename do
  summary "Rename a VM Storage Profile"
  # Look up the existing policy object by its inventory path
  arg :profile, nil, :lookup => RbVmomi::PBM::PbmCapabilityProfile
  arg :name, "New name", :type => :string
end

def profile_rename profile, name
  _catch_spbm_resets(nil) do
    # Grab the PBM connection and ProfileManager from the looked-up profile
    pbm = profile.instance_variable_get(:@connection)
    pm = pbm.serviceContent.profileManager
    # Build an update spec that only changes the name, then apply it via PbmUpdate()
    spec = PBM::PbmCapabilityProfileUpdateSpec(
      :name => name,
    )
    pm.PbmUpdate(:profileId => profile.profileId, :updateSpec => spec)
  end
end

Once you have saved the file, you can now connect to RVC and you should see a new command called spbm.profile_rename which takes an existing VM Storage Policy and the new name of the policy.

Here is an example of what that would look like where I have a VM Storage Policy called "Platinum" and I want to rename it to "Adamantium":

spbm.profile_rename localhost/Datacenter/storage/vmprofiles/Platinum/ Adamantium
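
Since RVC presents the inventory as a virtual filesystem, you can quickly confirm the rename afterwards by listing the vmprofiles folder with RVC's built-in ls command (using the same inventory path as above):

ls localhost/Datacenter/storage/vmprofiles/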

Categories // VSAN, vSphere Tags // rvc, spbm, vm storage policy, vm storage profile
