This is a question that I have seen come up on several occasions in both the VMTN Community forums as well as in our internal Socialcast group. I have not seen anyone blog about this topic yet, so I figured I would share the answer, since this was a question I had asked myself when I initially set up VSAN. If you are not familiar with VSAN components, I highly recommend you check out Cormac Hogan's blog article VSAN Part 4: Understanding Objects and Components.
In vSphere 5.5 Update 1, the maximum number of supported components for VSAN is 3000, which is a per-ESXi-host maximum. What some folks are noticing, however, is that when they run the RVC vsan.check_limits command against their VSAN Cluster, the maximum comes up much lower, as seen in the example below.
/localhost/VSAN-Datacenter/computers> vsan.check_limits VSAN-Cluster/
2015-01-28 15:34:25 +0000: Gathering stats from all hosts ...
2015-01-28 15:34:27 +0000: Gathering disks info ...
+--------------------------------+-------------------+-------------------------------------------+
| Host                           | RDT               | Disks                                     |
+--------------------------------+-------------------+-------------------------------------------+
| vesxi55-3.primp-industries.com | Assocs: 30/20000  | Components: 8/750                         |
|                                | Sockets: 17/10000 | naa.6000c2932c3f51f04e4cd395f4a11752: 8%  |
|                                | Clients: 3        | naa.6000c294f6496a99ad756857b9b06f01: 0%  |
|                                | Owners: 5         |                                           |
| vesxi55-2.primp-industries.com | Assocs: 10/20000  | Components: 8/750                         |
|                                | Sockets: 13/10000 | naa.6000c294bde5987d60398e0305978b00: 9%  |
|                                | Clients: 0        | naa.6000c292a964255b82410099360a9b27: 0%  |
|                                | Owners: 0         |                                           |
| vesxi55-1.primp-industries.com | Assocs: 24/20000  | Components: 8/750                         |
|                                | Sockets: 15/10000 | naa.6000c298b69006b820e367b5fde97cbf: 11% |
|                                | Clients: 3        | naa.6000c29db3f272cfb7fb4d08bffad3ab: 0%  |
|                                | Owners: 3         |                                           |
+--------------------------------+-------------------+-------------------------------------------+
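For those who have not used RVC before, here is a quick sketch of how you would get to the point of running the command above. The vCenter Server hostname below is just a placeholder, and the inventory path will of course reflect your own datacenter and cluster names:

rvc administrator@vcenter.primp-industries.com
cd /localhost/VSAN-Datacenter/computers
vsan.check_limits VSAN-Cluster/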
The reason for this is the amount of physical memory available to each ESXi host. If you are running VSAN in a Nested ESXi environment like I am in the example above, where each ESXi host is configured with only 8GB of memory, the number of supported VSAN components will definitely differ from an actual physical host with more memory. The nice thing about the vsan.check_limits command is that it is dynamic in nature, based on the actual available resources. Funny enough, the majority of the questions actually came from folks running VSAN in a Nested ESXi environment, which would explain why this question keeps popping up.
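If you want to double check exactly how much physical memory an ESXi host has to work with, a quick way is to use esxcli from the ESXi Shell or over SSH. Here is a rough sketch of what I would expect one of my 8GB Nested ESXi hosts to report (the exact byte count will vary slightly from host to host):

~ # esxcli hardware memory get
   Physical Memory: 8589934592 Bytes
   Reliable Memory: 0 Bytes
   NUMA Node Count: 1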
If I run the same RVC command in an environment where VSAN is running on real hardware with a decent amount of memory, which most modern systems these days have, then the VSAN component maximum properly displays the expected 3000 limit, as seen in the example below. Notice too that the component maximum is not the only limit that differs: the RDT associations maximum is 20000 on my 8GB Nested ESXi hosts versus 45000 on the physical hosts.
/localhost/datacenter01/computers> vsan.check_limits vsan-cluster01/
2015-01-28 15:28:47 +0000: Querying limit stats from all hosts ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx021.vmwcs.com (may take a moment) ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx022.vmwcs.com (may take a moment) ...
2015-01-28 15:28:49 +0000: Fetching VSAN disk info from esx024.vmwcs.com (may take a moment) ...
2015-01-28 15:28:51 +0000: Done fetching VSAN disk infos
+---------------------------+--------------------+---------------------------------------------------------------------------------+
| Host                      | RDT                | Disks                                                                           |
+---------------------------+--------------------+---------------------------------------------------------------------------------+
| esx021.vmwcs.com          | Assocs: 223/45000  | Components: 97/3000                                                             |
|                           | Sockets: 132/10000 | t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATRC061926: 18% |
|                           | Clients: 14        | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017C69____: 0%   |
|                           | Owners: 29         |                                                                                 |
| esx022.vmwcs.com          | Assocs: 252/45000  | Components: 96/3000                                                             |
|                           | Sockets: 143/10000 | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017CA2____: 0%   |
|                           | Clients: 14        | t10.ATA_____WDC_WD1002FAEX2D00Z3A0________________________WD2DWCATRC050466: 19% |
|                           | Owners: 38         |                                                                                 |
| esx024.vmwcs.com          | Assocs: 197/45000  | Components: 96/3000                                                             |
|                           | Sockets: 122/10000 | t10.ATA_____ST2000DL0032D9VT166__________________________________5YD73PRP: 8%   |
|                           | Clients: 17        | t10.ATA_____KINGSTON_SH103S3480G__________________00_50026B7226017C5B____: 0%   |
|                           | Owners: 22         |                                                                                 |
+---------------------------+--------------------+---------------------------------------------------------------------------------+
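If you just want to eyeball the component usage across all of your hosts without wading through the full table, one option is to save the RVC output to a file and filter it with standard shell tools. This is just a sketch, and check_limits.txt is simply a filename I made up for the example:

# Print only the host name and component usage columns from the saved output
grep "Components:" check_limits.txt | awk -F'|' '{print $2, $4}'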
The lesson here is that even though I am a huge supporter of using Nested ESXi to learn about new products and features and how they work from a functional perspective, no amount of Nested ESXi testing can ever replace actual testing on real hardware.