Why is my VSAN Component maximum showing less than 3000?

01.28.2015 by William Lam

This is a question that I have seen come up on several occasions in both the VMTN Community forums as well as in our internal Socialcast group. I have not seen anyone blog about this topic yet and figured I would share the answer, since this was a question I had asked myself when I initially set up VSAN. If you are not familiar with VSAN Components, I highly recommend you check out Cormac Hogan's blog article VSAN Part 4: Understanding Objects and Components.

In vSphere 5.5 Update 1, the maximum number of supported components for VSAN is 3000, which is a per-ESXi-host maximum. What some folks are noticing when they run the RVC vsan.check_limits command on their VSAN Cluster is that the maximum comes up much lower, as seen in the example below.

/localhost/VSAN-Datacenter/computers> vsan.check_limits VSAN-Cluster/
2015-01-28 15:34:25 +0000: Gathering stats from all hosts ...
2015-01-28 15:34:27 +0000: Gathering disks info ...
+--------------------------------+-------------------+-------------------------------------------+
| Host                           | RDT               | Disks                                     |
+--------------------------------+-------------------+-------------------------------------------+
| vesxi55-3.primp-industries.com | Assocs: 30/20000  | Components: 8/750                         |
|                                | Sockets: 17/10000 | naa.6000c2932c3f51f04e4cd395f4a11752: 8%  |
|                                | Clients: 3        | naa.6000c294f6496a99ad756857b9b06f01: 0%  |
|                                | Owners: 5         |                                           |
| vesxi55-2.primp-industries.com | Assocs: 10/20000  | Components: 8/750                         |
|                                | Sockets: 13/10000 | naa.6000c294bde5987d60398e0305978b00: 9%  |
|                                | Clients: 0        | naa.6000c292a964255b82410099360a9b27: 0%  |
|                                | Owners: 0         |                                           |
| vesxi55-1.primp-industries.com | Assocs: 24/20000  | Components: 8/750                         |
|                                | Sockets: 15/10000 | naa.6000c298b69006b820e367b5fde97cbf: 11% |
|                                | Clients: 3        | naa.6000c29db3f272cfb7fb4d08bffad3ab: 0%  |
|                                | Owners: 3         |                                           |
+--------------------------------+-------------------+-------------------------------------------+

The reason for this is actually the amount of physical memory available to each ESXi host. In the example above, I am running VSAN in a Nested ESXi environment with only 8GB of memory configured for each ESXi host. The number of supported VSAN Components will definitely differ from an actual physical host with more memory, and the nice thing about the vsan.check_limits command is that it is dynamic in nature, based on the actual available resources. Funny enough, the majority of the questions actually came from folks who ran VSAN in a Nested Environment, so this would explain why the question keeps popping up.
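Since the advertised component limit scales with host memory, a quick way to compare memory across your hosts is a PowerCLI one-liner like the one below (a minimal sketch; the cluster name is a placeholder):

# List the physical memory per ESXi host in the cluster
Get-Cluster -Name VSAN-Cluster | Get-VMHost | Select-Object Name, MemoryTotalGB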

If I run the same RVC command in an environment where VSAN is running on real hardware with a decent amount of memory, which most modern systems these days have, then I can see the VSAN Component maximum properly displaying the 3000 limit, as expected in the example below.

/localhost/datacenter01/computers> vsan.check_limits vsan-cluster01/
2015-01-28 15:28:47 +0000: Querying limit stats from all hosts ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx021.vmwcs.com (may take a moment) ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx022.vmwcs.com (may take a moment) ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx024.vmwcs.com (may take a moment) ...
2015-01-28 15:28:51 +0000: Done fetching VSAN disk infos
+---------------------------+--------------------+---------------------------------------------------------------------------------+
| Host                      | RDT                | Disks                                                                           |
+---------------------------+--------------------+---------------------------------------------------------------------------------+
| esx021.vmwcs.com          | Assocs: 223/45000  | Components: 97/3000                                                             |
|                           | Sockets: 132/10000 | t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATRC061926: 18% |
|                           | Clients: 14        | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017C69____: 0%   |
|                           | Owners: 29         |                                                                                 |
| esx022.vmwcs.com          | Assocs: 252/45000  | Components: 96/3000                                                             |
|                           | Sockets: 143/10000 | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017CA2____: 0%   |
|                           | Clients: 14        | t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATRC050466: 19% |
|                           | Owners: 38         |                                                                                 |
| esx024.vmwcs.com          | Assocs: 197/45000  | Components: 96/3000                                                             |
|                           | Sockets: 122/10000 | t10.ATA_____ST2000DL0032D9VT166__________________________________5YD73PRP: 8%   |
|                           | Clients: 17        | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017C5B____: 0%   |
|                           | Owners: 22         |                                                                                 |
+---------------------------+--------------------+---------------------------------------------------------------------------------+

The lesson here is that even though I am a huge supporter of using Nested ESXi to learn about new products and features and how they work from a functional perspective, no amount of Nested ESXi testing can ever replace actual testing on real hardware.

Categories // ESXi, VSAN, vSphere 5.5 Tags // components, rvc, Virtual SAN, VSAN, vsan.check_limits

How to move a VSAN Cluster from one vCenter Server to another?

09.26.2014 by William Lam

I recently caught an interesting VMTN thread where a user wanted to move an existing VSAN Cluster from one vCenter Server to another vCenter Server with minimal impact to the ESXi hosts and running Virtual Machines. The great news is that this can be done without any impact to your ESXi hosts and, more importantly, without any impact to your workloads. I have personally performed this operation on several occasions without any problems, and the process is actually quite straightforward, so I thought I would walk you through it since it is literally a couple of steps.

The main reason this is not a challenge is that VSAN has been architected not to rely on vCenter Server for its normal operations. It is true that vCenter Server is required for the configuration and management of the VSAN Cluster and VM Storage Policies, but once those configurations have been applied, the vCenter Server is no longer in the picture from an operational point of view. This means that if you need to move your VSAN Cluster from a development vCenter Server to a production vCenter Server, or if you accidentally destroyed your original vCenter Server, the VSAN Cluster can easily be re-created on a new vCenter Server.

To demonstrate the process, I have a 3 Node VSAN Cluster with a running Virtual Machine on vCenter Server (vcenter55-1) and I have built a new vCenter Server (vcenter55-3) which I would like to move the existing VSAN Cluster over to.

UPDATE2 (11/02/17) - There was a question a couple of weeks back on whether the procedure outlined below could also apply to a vSAN Stretched Cluster. I did not see any technical reasons preventing this, and one of our GSS Engineers recently validated it with a customer, successfully moving a vSAN Stretched Cluster. I asked if he could share the modified instructions in case others were interested:

  1. Copy all VDS settings to new cluster
  2. Enable vSAN on new cluster (follow Step 2 below)
  3. Disable stretched cluster
  4. Move each host
  5. Move witness
  6. Re-enable stretched cluster (follow Step 4 below)

Step 1 - Deploy a new vCenter Server and create a vSphere Cluster with VSAN Enabled.

[screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-0]
Step 2 -

UPDATE1 (05/02/17) - Updated to include vSAN 6.6-specific instructions.

Pre-vSAN 6.6 - Disconnect one of the ESXi hosts from your existing VSAN Cluster and then add it to the VSAN Cluster in your new vCenter Server.

Note: Technically, you do not even have to disconnect the ESXi hosts from the old vCenter Server. You could just add the ESXi hosts to the new vCenter Server, and once you have confirmed you wish to move an ESXi host, it will automatically be disconnected once added. This actually saves you an extra step.
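If you are scripting this step, adding a host that is still managed by the old vCenter Server can be done with PowerCLI as shown below (a sketch; the hostname, cluster name and credentials are placeholders). The -Force flag is what allows the host to be taken over from the old vCenter Server:

# Adding the host to the new vCenter Server automatically disconnects it from the old one
Add-VMHost -Name vesxi55-1.primp-industries.com -Location (Get-Cluster -Name VSAN-Cluster) -User root -Password 'VMware1!' -Force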

vSAN 6.6 - An additional configuration needs to be applied to all ESXi hosts PRIOR to disconnecting them from the original vCenter Server and adding them into the new vCenter Server. Below are a few examples of how to apply the ESXi Advanced Setting, which should be set to a value of 1:

Here is an example using ESXCLI (local or remotely) on an individual ESXi host:

esxcli system settings advanced set -o /VSAN/IgnoreClusterMemberListUpdates -i 1
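You can then read the option back to confirm it was applied:

esxcli system settings advanced list -o /VSAN/IgnoreClusterMemberListUpdates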

Here is an example of using PowerCLI to apply the setting across all ESXi hosts if the original vCenter Server is still available:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
}

Here is an example of using PowerCLI to apply the setting directly to an ESXi host if the original vCenter Server is no longer available:

Get-VMHost -Name 192.168.1.100 | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
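To double-check the value across all hosts before you start disconnecting them, you can read the setting back with the same cmdlets (a small sketch; the cluster name is a placeholder):

# Read back the setting from every host in the cluster
Get-Cluster -Name VSAN-Cluster | Get-VMHost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Select-Object Entity, Value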

[screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-1]
Once you have successfully added the ESXi host, you should see a warning within the VSAN Configuration page stating "Misconfiguration detected", which is expected. What is happening is that this ESXi host has been configured in an existing VSAN Cluster, and the ESXi hosts that it is supposed to be able to communicate with are not part of this VSAN Cluster. Once we add the remaining ESXi hosts, the VSAN Cluster will be happy and this error will go away.

Note: If you try to add all of the ESXi hosts from the existing VSAN Cluster to the new VSAN Cluster at once, you will see an error regarding a UUID mismatch. The trick is to add one host first; once that has been done, you can then bulk add the remaining ESXi hosts without an issue. This is handy if you are trying to automate this process, as shown in the sketch below.
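For example, once the first host is in, the rest can be bulk added with a simple loop like this (a sketch; the hostnames, cluster name and credentials are placeholders):

# Bulk add the remaining ESXi hosts after the first host has joined
$remainingHosts = @("vesxi55-2.primp-industries.com", "vesxi55-3.primp-industries.com")
Foreach ($name in $remainingHosts) {
    Add-VMHost -Name $name -Location (Get-Cluster -Name VSAN-Cluster) -User root -Password 'VMware1!' -Force
}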

Step 3 - Add the remaining ESXi hosts to the VSAN Cluster in the new vCenter Server. Once all hosts have been added to the new VSAN Cluster, you will see the warning icons disappear, and your VSAN Cluster is now fully managed by the new vCenter Server. We can also confirm that there are no network partitions, as all original VSAN configurations have been retained on the ESXi hosts.

UPDATE1 (05/02/17)

Step 4 - This last step is ONLY applicable to vSAN 6.6 hosts. Once all hosts have been successfully added to the new vCenter Server and you have verified that the cluster status is healthy and there are no network partitions, we need to update the ESXi Advanced Setting we set earlier from a value of 1 back to a value of 0.

Here is a PowerCLI snippet which, given a vSAN Cluster, will automatically go through all ESXi hosts and update the setting:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 0 -Confirm:$false
}

[screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-2]
Disclaimer: As mentioned, there is no impact to the ESXi hosts (other than not being able to manage them while you disconnect and re-connect on the new vCenter Server) and there is no impact to the running Virtual Machines; any VM Storage Policies that have been applied to the VMs will still be enforced by each of the ESXi hosts. However, one thing to be aware of is that the VM Storage Policies in your original vCenter Server will not be available in the new vCenter Server. You will need to re-create each of the VM Storage Policies and re-attach them to the existing Virtual Machines. This can of course be automated by using the vSphere API or by leveraging the new PowerCLI 5.8 R1 release, which includes VM Storage Policy cmdlets.

Here is an example of exporting a VM Storage Policy named "FTT=1" to a file called policy.xml on your desktop:

Export-SpbmStoragePolicy -StoragePolicy (Get-SpbmStoragePolicy -Name FTT=1) -FilePath C:\Users\Administrator\Desktop\policy.xml
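On the new vCenter Server, the exported policy can then be imported and re-attached to a VM. Here is a sketch of what that could look like using the same family of PowerCLI storage policy cmdlets, assuming the VM names are unchanged (the policy name, file path and VM name are placeholders):

# Import the policy on the new vCenter Server
$policy = Import-SpbmStoragePolicy -Name FTT=1 -FilePath C:\Users\Administrator\Desktop\policy.xml

# Re-apply the policy to the VM Home of an existing VM (repeat with -HardDisk for individual VMDKs)
$vm = Get-VM -Name VSAN-VM-1
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -VM $vm) -StoragePolicy $policy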

Currently this is the only impact of moving a VSAN Cluster from one vCenter Server to another, and of course this assumes you have created VM Storage Policies aside from the default policies.

I received a couple of questions regarding the networking setup for my VSAN Cluster. In the above example I was using a VSS (Virtual Standard Switch). I did, however, retest this scenario completely on a VDS (Virtual Distributed Switch) and the results were the same. When all ESXi hosts have been added to the new vCenter Server, you will see a warning about a proxy host switch. The key to properly migrating the networks (VMkernel & VM Portgroup) is to add each ESXi host to the new VDS that you will need to create. If your original vCenter Server is still available, you can export and import the VDS configuration. If it is not available, then you will need to manually re-create the Distributed Portgroups before proceeding.
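If the original vCenter Server is still up, the VDS export/import can also be done from PowerCLI (a sketch; the switch name, datacenter name and file path are placeholders):

# On the old vCenter Server: export the VDS configuration, including its portgroups
Export-VDSwitch -VDSwitch (Get-VDSwitch -Name VDS-01) -Destination C:\Users\Administrator\Desktop\vds-backup.zip

# On the new vCenter Server: re-create the VDS from the backup file
New-VDSwitch -BackupPath C:\Users\Administrator\Desktop\vds-backup.zip -Location (Get-Datacenter -Name Datacenter)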

The first step is to go to the Networking view, right-click, and select "Add and Manage Hosts".

[screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-3]
Go ahead and walk through the guided wizard, making sure you only add one host at a time, as I saw issues when trying to add multiple hosts at once. Once the ESXi host has been added to the new VDS and its uplinks, VMkernel interfaces, and VM Portgroups are all connected, you should see two VDS under the Networking view of the ESXi host under "Manage".

[screenshot: migrate-vsan-cluster-from-one-vcenter-to-another-4]
This can be seen clearly using the vSphere C# Client, as it allows you to view both on the same screen. Once you have confirmed that everything looks good, you can go ahead and remove the old VDS switch as shown in the screenshot above. At this point, your ESXi host's networking is now running on the new VDS. Continue this same workflow for the remaining ESXi hosts until they have all been migrated over to the new VDS.
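If you would rather script the host-by-host migration, the first part of the wizard (attaching the host to the new VDS and moving an uplink over) roughly corresponds to the PowerCLI sketch below; the switch, host and vmnic names are placeholders, and migrating the VMkernel interfaces is easiest left to the wizard:

# Attach a single host to the new VDS and move a free uplink over
$vds = Get-VDSwitch -Name VDS-02
Add-VDSwitchVMHost -VDSwitch $vds -VMHost vesxi55-1.primp-industries.com
Get-VMHostNetworkAdapter -VMHost vesxi55-1.primp-industries.com -Physical -Name vmnic1 | Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -Confirm:$false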

Categories // ESXi, VSAN, vSphere, vSphere 5.5 Tags // ESXi, Virtual SAN, VSAN, VSAN 6.6

Restoring VSAN VM Storage Policy without vCenter Part 2: Using vSphere API

11.25.2013 by William Lam

Last week I demonstrated a manual method of recovering a VSAN VM Storage Policy when vCenter Server is no longer available by using a VSAN command-line utility found in the ESXi Shell called cmmds-tool. Though this approach works, it can also be quite tedious and error-prone, as you have to manually go through various configuration files and extract out the individual VSAN Object UUIDs. Luckily, one can automate the process outlined in the previous article by leveraging the vSphere API to connect directly to an ESXi host and access the VSAN internal CMMDS system (Cluster Monitoring, Membership and Directory Service).
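For reference, the manual approach from that article boils down to running queries like the following in the ESXi Shell (the object UUID is a placeholder):

cmmds-tool find -t POLICY -u 528e1f95-1234-5678-90ab-cdef01234567 -f json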

Disclaimer: This script is provided as a sample; please ensure it is properly tested before using it in a production environment.

I have created a sample vSphere SDK for Perl script called queryVSANVMStoragePolicyMapping.pl to demonstrate this vSphere API functionality. To use the script, you will need to have either the vCLI or the vSphere SDK for Perl 5.5 installed on a system, or you can use the vMA 5.5 appliance. You will also need to install an additional Perl module called JSON, which is used by the script.

In my environment, I have a Virtual Machine called VSAN-VM-1 which has the following VM Storage Policies assigned to it.

Let's say our vCenter Server is now gone; how do we go about recovering the VM Storage Policy configurations to rebuild them on our new vCenter Server? With this script, it is quite simple to recover the information by connecting to the ESXi host and specifying the name of the Virtual Machine.

Here is a sample execution of the script for my VSAN-VM-1:

From the output, we can see that the script automatically identifies the VSAN Object UUID for the VM Home directory as well as all VMDKs associated with that Virtual Machine. The script then passes that information to the QueryCmmds API method, which is part of the vsanInternalSystem manager, to perform the query. The output is returned as a JSON string, which the script parses to display the VM Storage Policy MoRef ID for each corresponding Virtual Machine component along with their configured VSAN policies.
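If Perl is not your thing, the same CMMDS query can also be issued from PowerCLI. Here is a minimal sketch of the QueryCmmds call, assuming a direct connection to the ESXi host (the host address, credentials and object UUID are placeholders):

# Connect directly to the ESXi host and retrieve its vsanInternalSystem manager
Connect-VIServer -Server 192.168.1.100 -User root -Password 'VMware1!'
$esx = Get-VMHost -Name 192.168.1.100
$vis = Get-View $esx.ExtensionData.ConfigManager.VsanInternalSystem

# Build a CMMDS query for the POLICY entry of a given VSAN Object UUID
$query = New-Object VMware.Vim.HostVsanInternalSystemCmmdsQuery
$query.Type = "POLICY"
$query.Uuid = "528e1f95-1234-5678-90ab-cdef01234567"

# QueryCmmds returns its results as a JSON string
$vis.QueryCmmds(@($query)) | ConvertFrom-Json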

The VM Home directory maps to our "Copper" policy which looks like the following in the vSphere Web Client:

The first VMDK maps to our "Platinum" policy which looks like the following in the vSphere Web Client:

The final VMDK maps to our "Aluminum" policy which looks like the following in the vSphere Web Client:

Categories // VSAN, vSphere 5.5 Tags // cmmds-tool, ESXi 5.5, Virtual SAN, vm storage policy, vm storage profile, VSAN, vSphere 5.5
