VSAN Flash/MD capacity reporting

04.29.2014 by William Lam // Leave a Comment

One of the capabilities available with VSAN when creating a VM Storage Policy is the ability to specify the amount of Flash to reserve for a Virtual Machine object as a read cache. For Virtual Machines that require high levels of performance, you can assign this policy to the Virtual Machine and VSAN will ensure a percentage of the Flash capacity is provided to your workload.

[Image: vsan-flash-md-capacity-report-3]

A couple of weeks back I was asked whether it was possible to report on the total amount of Flash capacity available to a VSAN Cluster, including what has been reserved and what is in use. I thought this was a great idea, as users would probably want to be able to see their utilization over time and ensure they do not over-provision their Flash capacity.

For those of you who have used RVC, this information is somewhat available today using the vsan.disks_stats command. The only problem is that this information is only provided at a per-device level for each ESXi host and not in an aggregate view for the entire VSAN Cluster.

[Image: vsan-flash-md-capacity-report-0]

Leveraging the work I had done earlier exploring the VSAN API and looking at the VSAN component count, I was able to extract the information I was looking for to provide an aggregate view. To demonstrate this functionality, I have created two sample scripts: a vSphere SDK for Perl script called vsanFlashAndMDCapacity.pl and a PowerCLI script called vsanFlashAndMDCapacity.ps1.

Disclaimer: These scripts are provided for informational and educational purposes only. They should be thoroughly tested before attempting to use them in a production environment.

Both scripts work exactly the same way; you just need to connect to a vCenter Server that has at least one VSAN Cluster. The script will automatically search for all VSAN-enabled vSphere Clusters and report the following information (a rough PowerCLI sketch of the aggregation approach is shown after the list):

  • Total SSD Capacity
  • Total SSD Reserved Capacity
  • Total SSD Used Capacity
  • Total MD Capacity
  • Total MD Reserved Capacity
  • Total MD Used Capacity
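
For illustration, here is a rough PowerCLI sketch of how this kind of aggregation can be done. This is not the sample script itself: the cluster name is hypothetical, an existing Connect-VIServer session is assumed, and it relies on the vSphere 5.5 VSAN API method QueryPhysicalVsanDisks() on each host's VsanInternalSystem, which returns per-disk statistics as JSON (treat the property names and byte units as assumptions to verify in your environment):

  # Rough sketch: aggregate VSAN SSD/MD capacity across a cluster.
  # "VSAN-Cluster" is a hypothetical name; capacity values are assumed to be bytes.
  $seen  = @{}
  $stats = @{ ssdCap = 0; ssdRsvd = 0; ssdUsed = 0; mdCap = 0; mdRsvd = 0; mdUsed = 0 }

  foreach ($vmhost in (Get-Cluster "VSAN-Cluster" | Get-VMHost)) {
      $vis  = Get-View $vmhost.ExtensionData.ConfigManager.VsanInternalSystem
      # QueryPhysicalVsanDisks() returns a JSON string keyed by disk UUID
      $json = $vis.QueryPhysicalVsanDisks(@("isSsd","capacity","capacityUsed","capacityReserved"))
      foreach ($prop in (ConvertFrom-Json $json).PSObject.Properties) {
          if ($seen.ContainsKey($prop.Name)) { continue }   # skip disks already counted via another host
          $seen[$prop.Name] = $true
          $d = $prop.Value
          if ($d.isSsd) {
              $stats.ssdCap += $d.capacity; $stats.ssdRsvd += $d.capacityReserved; $stats.ssdUsed += $d.capacityUsed
          } else {
              $stats.mdCap  += $d.capacity; $stats.mdRsvd  += $d.capacityReserved; $stats.mdUsed  += $d.capacityUsed
          }
      }
  }

  "Total SSD Capacity          : {0:N2} GB" -f ($stats.ssdCap  / 1GB)
  "Total SSD Reserved Capacity : {0:N2} GB" -f ($stats.ssdRsvd / 1GB)
  "Total SSD Used Capacity     : {0:N2} GB" -f ($stats.ssdUsed / 1GB)
  "Total MD Capacity           : {0:N2} GB" -f ($stats.mdCap   / 1GB)
  "Total MD Reserved Capacity  : {0:N2} GB" -f ($stats.mdRsvd  / 1GB)
  "Total MD Used Capacity      : {0:N2} GB" -f ($stats.mdUsed  / 1GB)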

Here is an example screenshot for the vSphere SDK for Perl script:

[Image: vsan-flash-md-capacity-report-1]

Here is an example screenshot for the PowerCLI script:

[Image: vsan-flash-md-capacity-report-2]

One question I had while looking at the results was what the "Used" property actually means. After learning the details from engineering, I think this is best explained with an example.

Let's say there are two VSAN objects:

  • Object1: Configured size: 100GB, space reservation 10%, actual data written 5GB.
  • Object2: Configured size: 100GB, space reservation 10%, actual data written 15GB.

This would mean:

Object1:
Configured/Provisioned: 100GB
Reserved: 10GB
Physical Used: 5GB
Used: 10GB

Object2:
Configured/Provisioned: 100GB
Reserved: 10GB
Physical Used: 15GB
Used: 15GB

The "Used" property is then calculated as the MAX(Physical Used, Reserved). I have also shared this information with engineering, perhaps they may consider adding this information to RVC 🙂 If you think this is something you would like to see in RVC, please leave a comment.

Categories // Automation, VSAN Tags // ESXi 5.5, flash, PowerCLI, ssd, VSAN, vSphere 5.5, vSphere API

Handy VSAN VOBs for creating vCenter Alarms

04.22.2014 by William Lam // 3 Comments

There have been quite a few questions lately around vCenter Server Alarms for VSAN; one in particular that I have noticed is around individual disk failures. Outside of the generic default datastore alarms, there seem to be only two VSAN-specific alarms:

[Image: vsan-default-alarms]

I figured there must be other useful alarms that we could create, especially after showing how you can create a vCenter Server Alarm to monitor the VSAN component count threshold based on a particular VSAN VOB. I took a look around and found the following VSAN-specific VOBs, which could be useful for creating additional vCenter Alarms.

VOB ID                                        VOB Description
esx.audit.vsan.clustering.enabled             VSAN clustering services have been enabled.
esx.clear.vob.vsan.pdl.online                 VSAN device has come online.
esx.clear.vsan.clustering.enabled             VSAN clustering services have now been enabled.
esx.clear.vsan.vsan.network.available         VSAN now has at least one active network configuration.
esx.clear.vsan.vsan.vmknic.ready              A previously reported vmknic now has a valid IP.
esx.problem.vob.vsan.lsom.componentthreshold  VSAN Node: Near node component count limit.
esx.problem.vob.vsan.lsom.diskerror           VSAN device is under permanent error.
esx.problem.vob.vsan.lsom.diskgrouplimit      Failed to create a new disk group.
esx.problem.vob.vsan.lsom.disklimit           Failed to add disk to disk group.
esx.problem.vob.vsan.pdl.offline              VSAN device has gone offline.
esx.problem.vsan.clustering.disabled          VSAN clustering services have been disabled.
esx.problem.vsan.lsom.congestionthreshold     VSAN device Memory/SSD congestion has changed.
esx.problem.vsan.net.not.ready                A vmknic added to VSAN network configuration doesn't have valid IP. Network is not ready.
esx.problem.vsan.net.redundancy.lost          VSAN doesn't have any redundancy in its network configuration.
esx.problem.vsan.net.redundancy.reduced       VSAN is operating on reduced network redundancy.
esx.problem.vsan.no.network.connectivity      VSAN doesn't have any networking configuration for use.
esx.problem.vsan.vmknic.not.ready             A vmknic added to VSAN network configuration doesn't have valid IP. It will not be in use.
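
If you are curious which of these VOBs have already fired in your environment, you can query the vCenter event log from PowerCLI. Here is a quick sketch; VOBs surface in vCenter as EventEx/ExtendedEvent events, so filtering on EventTypeId should catch them (adjust the sample size as needed):

  # List recent VSAN-related VOB events recorded by vCenter
  Get-VIEvent -MaxSamples 1000 |
      Where-Object { $_.EventTypeId -like "esx.*vsan*" } |
      Select-Object CreatedTime, EventTypeId, FullFormattedMessage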

Looking at the list above, the following two VOBs seem like they would be the most useful for alerting on a disk failure:

  • esx.problem.vob.vsan.lsom.diskerror
  • esx.problem.vob.vsan.pdl.offline

Disclaimer: There are no guarantees that a disk error or failure will automatically trigger these VOBs due to the unpredictable nature of how a disk may fail, especially if it fails intermittently.

Even though we cannot simulate a disk error on a physical disk, we can still do some magic using a Nested VSAN environment. The worst-case scenario you could run into is that one of the disks just goes completely offline. We can simulate similar behavior in a Nested ESXi environment by removing one of the virtual disks from the Virtual Machine (not deleting it).

To demonstrate this scenario, here are the steps to create a vCenter Alarm for the two VOBs above:

Step 1 - Create a new vCenter Alarm and give it a name. Select "Hosts" for Monitor and "Specific event occurring …" for Monitor for:

[Image: vsan-disk-failure-alarm-0]

Step 2 - Add the two VOBs above into the Event trigger:

[Image: vsan-disk-failure-alarm-1]

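If you would rather script Steps 1 and 2, the same alarm can be created through the vSphere API's AlarmManager. Here is a minimal PowerCLI sketch, assuming an existing Connect-VIServer session; the alarm name and settings are arbitrary choices of mine, not values mandated by the API:

  # Build one EventAlarmExpression per VOB ID (VOBs surface as EventEx events)
  $exprs = foreach ($vobId in @("esx.problem.vob.vsan.lsom.diskerror",
                                "esx.problem.vob.vsan.pdl.offline")) {
      $e = New-Object VMware.Vim.EventAlarmExpression
      $e.EventType   = "EventEx"
      $e.EventTypeId = $vobId
      $e.ObjectType  = "HostSystem"
      $e.Status      = "red"
      $e
  }

  $spec             = New-Object VMware.Vim.AlarmSpec
  $spec.Name        = "VSAN Disk Failure"   # arbitrary alarm name
  $spec.Description = "Triggers when a VSAN device errors or goes offline"
  $spec.Enabled     = $true
  $spec.Expression  = New-Object VMware.Vim.OrAlarmExpression
  $spec.Expression.Expression = $exprs
  $spec.Setting     = New-Object VMware.Vim.AlarmSetting   # required by the API
  $spec.Setting.ReportingFrequency = 0
  $spec.Setting.ToleranceRange     = 0

  # Define the alarm at the root folder so it applies to all hosts
  $si       = Get-View ServiceInstance
  $alarmMgr = Get-View $si.Content.AlarmManager
  $alarmMgr.CreateAlarm($si.Content.RootFolder, $spec) | Out-Null
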
Step 3 - Remove one of the Virtual Disks (SSD or MD) from the Nested ESXi VM.
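
This disk pull can also be done from PowerCLI, for example with something along these lines (the VM name is hypothetical; note the absence of -DeletePermanently so the backing VMDK is kept):

  # Detach (but do not delete) the last hard disk of the Nested ESXi VM
  Get-VM "NestedESXi-1" | Get-HardDisk |
      Select-Object -Last 1 |
      Remove-HardDisk -Confirm:$false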

Step 4 - There are two ways you can trigger the alarm: either create a new Virtual Machine that will try to write to the Nested ESXi VM from which you removed the Virtual Disk, or rescan the storage adapter on the Nested ESXi VM. In my environment, I happened to have a VM running on an NFS datastore, and I performed a Storage vMotion of the VM onto my VSAN Datastore using the default FTT=1 policy on a three-node VSAN Cluster. This immediately triggered the alarm, as seen in the screenshots below:

[Image: vsan-disk-failure-alarm-2]

[Image: vsan-disk-failure-alarm-3]

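For reference, the Storage vMotion I used to trigger the alarm can also be kicked off from PowerCLI with a one-liner along these lines (the VM and datastore names are from my environment and will differ in yours):

  # Storage vMotion a VM from its NFS datastore onto the VSAN datastore
  Get-VM "TestVM" | Move-VM -Datastore (Get-Datastore "vsanDatastore")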

Categories // VSAN Tags // alarm, ESXi 5.5, vob, VSAN, vSphere 5.5

OVF template for creating Nested ESXi 3 or 32 node VSAN Cluster

04.15.2014 by William Lam // 14 Comments

Last week I had to build a couple of Nested VSAN environments for testing and of course I used my VSAN Nested ESXi OVF template to help expedite the deployment. After deploying the OVF for the third time to get my three Nested ESXi nodes, it hit me. Why am I doing this each time when I know I will need a minimum of three nodes for a proper VSAN environment? Not sure why I did not think of this earlier, but why not create a vApp that contains three Nested ESXi VM templates?

By leveraging the Dynamic Disk feature in OVF, I was able to create two tiny vApps (40KB & 410KB respectively) based on my original Nested VSAN ESXi OVF template:

  • Nested ESXi 3-Node VSAN OVF template
  • Nested ESXi 32-Node VSAN OVF template

The only difference with these OVF templates is that you can now easily and quickly deploy a single OVF that contains anywhere from the minimum number of VSAN nodes up to the maximum supported, which is 32.

Disclaimer: Nested Virtualization is not officially supported by VMware; please use at your own risk.

Prerequisites:

  • vSphere Web Client
    • To deploy either the single VSAN Nested ESXi OVF template or these new ones, you need to make sure you deploy using the vSphere Web Client. The reason for this is that the lossless OVF import/export feature is only available when using the vSphere Web Client; otherwise the import will not capture all the settings the OVF template was configured with.
  • vSphere Cluster w/DRS enabled
    • vApp creation is only possible when DRS is enabled (see the PowerCLI check after this list)
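
A quick way to verify (and, if needed, enable) the DRS prerequisite from PowerCLI, assuming a hypothetical cluster named "VSAN-Cluster":

  # Check whether DRS is enabled on the cluster
  Get-Cluster "VSAN-Cluster" | Select-Object Name, DrsEnabled

  # Enable DRS if it is not (required before a vApp can be created)
  Set-Cluster -Cluster "VSAN-Cluster" -DrsEnabled:$true -Confirm:$false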

Step 1 - Deploy the OVF template using the vSphere Web Client and make sure you select "Accept extra configuration options" which contains extra parameters needed to run ESXi and VSAN in a nested environment.

[Image: nested-esxi-vsan-3-node-template-0]

Step 2 - Go through the OVF deployment wizard as you normally would. When you get to "Customize Template" you will notice each Nested ESXi VM is in its own category, as seen in the screenshot below. Here you can leave the defaults for a minimal VSAN deployment, which provides a 2GB disk for the ESXi installation, a 4GB disk for an "emulated" SSD, and an 8GB disk for the MD, or you can specify the size of each disk.

[Image: nested-esxi-vsan-3-node-template-1]

In just a couple of seconds, you will have a vApp that contains either a 3-node Nested ESXi environment or, if you want to go big, a 32-node Nested ESXi environment.

[Image: nested-esxi-vsan-3-node-template-2]

Note: Please note that there may be other configuration changes, such as this one, and/or increases in VM resources required to run larger VSAN Clusters.

I know these OVF templates will come in handy for me when I need to quickly deploy VSAN in a Nested ESXi environment, and hopefully they will benefit others in the community as well!

Categories // Nested Virtualization, VSAN Tags // nested, nested virtualization, ovf, vapp, VSAN, vSphere 5.5
