To get the full benefits and support of the new vSAN 8 Express Storage Architecture (ESA), you will need modern hardware from the official vSAN ESA Ready Node HCL. However, from an education and learning point of view, you may want to explore some of the vSAN ESA workflows, and the easiest way to do that ... well, you probably know the answer: it is using Nested ESXi, of course!
You can use my recently published vSphere and vSAN 8 Lab Deployment Script as the base, along with the Nested ESXi 8.0 IA Virtual Appliance, to set up vSAN 8 ESA using virtual hardware 😀
There are only a couple of things to be aware of prior to setting up vSAN ESA using Nested ESXi.
Before you can enable vSAN ESA, each ESXi host within the vSphere cluster must have a minimum of 16GB of memory; otherwise, you will see an error during the pre-check, as shown in the screenshot above. Two other warnings will also show up: one regarding the use of vSphere Lifecycle Manager (vLCM) if the cluster has not been set up, which you can safely ignore, and a network pre-check that expects the physical (virtual, in our case) network adapter to be 25GbE or greater. For Nested ESXi, the virtual network adapter will report only 10GbE, which you can also ignore and proceed once the memory requirement, a hard pre-check, has been met.
Note: Thanks to reader Alaa, it looks like the 16GB memory minimum only covers a single disk used with vSAN ESA. If you wish to add a second disk, the memory needs to be increased to at least 18GB, and potentially more if you plan to add additional disks. vSAN ESA officially requires 512GB of memory for a supported configuration, and while that is going to be overkill for a Nested environment, you may need to play with the memory allocation if you are running into enablement issues, especially if you plan to add additional disks.
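If your Nested ESXi VMs were deployed with less memory, one quick way to adjust them is with PowerCLI. The snippet below is only a rough sketch and is not part of the lab deployment script itself; the vCenter address and the "Nested-ESXi-8*" name filter are placeholders for whatever your environment actually uses.

Connect-VIServer -Server vcenter.lab.local

# Power off the nested hosts (hard power-off; shut them down gracefully first if the cluster is already in use),
# bump the memory (16GB is enough for a single vSAN ESA disk per host, 18GB+ for two or more disks) and power back on
Get-VM -Name "Nested-ESXi-8*" |
    Stop-VM -Confirm:$false |
    Set-VM -MemoryGB 18 -Confirm:$false |
    Start-VM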
During the disk claiming part of the wizard, you will see more warnings. This is expected because we are not using "hardware" from the vSAN ESA HCL, so auto-claim cannot be used, but we can still select the virtual disks that we wish to use and click next to proceed.
Once you reach the end of the wizard, you will be presented with a summary and enablement of vSAN ESA will begin.
Everything should be fully configured in a few minutes and you now have vSAN ESA running using Nested ESXi!
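As a side note, once vSAN ESA is enabled you can also add additional virtual disks to a host's storage pool directly from the ESXi Shell rather than going back through the UI, which is exactly what reader Alaa does further down in the comments. A minimal sketch (the naa.* device identifier is just a placeholder for your own disk):

# Identify the device ID of the newly added virtual disk
esxcli storage core device list

# Claim it into the vSAN ESA storage pool (device ID is a placeholder)
esxcli vsan storagepool add -d naa.6000c29834a5cfb0f5955dbe7015d246

# Confirm the device now shows up in the pool
esxcli vsan storagepool list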
Again, this is purely for educational and learning purposes, especially for familiarizing yourself with the vSAN ESA enablement workflow. Outside of the initial workflow, I am not sure there is any real benefit to using vSAN ESA in a Nested ESXi environment, especially compared to the vSAN Original Storage Architecture (OSA), which can be enabled with just 8GB of memory per host versus the 16GB required by vSAN ESA.
Andy says
I'm trying to do this on vCloud Director, which in my environment is still on 6.7U2, so I need to change the VM hardware level to 15 or lower. Any time I do that by editing the OVF file, updating the HW version and editing the manifest with a new checksum, I can never get the uploaded OVF files to boot; I just get a BIOS screen asking what to boot. Any ideas, or is the HW version too far out of sync to even do this nested on 6.7U2?
William Lam says
You don’t need to mess w/checksum. Convert from OVA->OVF and delete the .mf file. Make your change and convert back to OVA or just use OVF+VMDK as-is
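For reference, a rough sketch of that workflow using ovftool; the filenames and the exact hardware version string are illustrative, not taken from the actual appliance.

# Unpack the OVA into an OVF + VMDK
ovftool Nested_ESXi8.0_IA_Appliance.ova Nested_ESXi8.0_IA_Appliance.ovf

# Remove the manifest so the edited descriptor is not rejected by checksum validation
rm Nested_ESXi8.0_IA_Appliance.mf

# Edit the .ovf and lower the virtual hardware version, e.g. change
#   <vssd:VirtualSystemType>vmx-20</vssd:VirtualSystemType>
# to vmx-15, then either upload the OVF + VMDK as-is or repackage into an OVA:
ovftool Nested_ESXi8.0_IA_Appliance.ovf Nested_ESXi8.0_IA_Appliance-hw15.ova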
Danilo says
Great Post William, thank you very much!
I created a nested vSAN cluster in my home lab. I have three ESXi hosts + vCenter Server with the vSAN ESA architecture enabled.
I created a Virtual Machine and I checked the physical disk placement for this VM. I'm using the vSAN Default Storage Policy for this VM.
I don't understand why the concatenation components were created in this case. For example:
[vsanDatastore] 35a74863-cae2-a058-6dea-000c29f65804/windows10-4e1f16ed.vswp
DOM Object: d2a74863-52bb-b824-5fe6-000c29d28ffe (v17, owner: host102.lab.vsan, proxy owner: None, policy: stripeWidth = 1, cacheReservation = 0, proportionalCapacity = 0, hostFailuresToTolerate = 1, forceProvisioning = 1, spbmProfileId = aa6d5a82-1c88-45da-85d3-3d74b91a5bad, spbmProfileGenerationNumber = 0, CSN = 1, spbmProfileName = vSAN Default Storage Policy)
Concatenation
RAID_1
Component: d2a74863-a97c-8b26-9b69-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52e05d45-9130-948f-56f4-50ec2acf7694, cache: ,
votes: 2, usage: 0.0 GB, proxy component: false)
Component: d2a74863-55ef-d226-f15a-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52af91a1-7d7c-faf3-23df-583598a780aa, cache: ,
votes: 1, usage: 0.0 GB, proxy component: false)
RAID_1
RAID_0
Component: d2a74863-65d6-d526-23a6-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52c36d1c-a8bb-086d-041f-dd018ae41028, cache: ,
votes: 1, usage: 0.0 GB, proxy component: false)
Component: d2a74863-a912-d726-cf53-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52cdb989-a6fd-a290-000b-f6b780182e2d, cache: ,
votes: 1, usage: 0.0 GB, proxy component: false)
Component: d2a74863-6d7c-d926-596e-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52cdb989-a6fd-a290-000b-f6b780182e2d, cache: ,
votes: 1, usage: 0.0 GB, proxy component: false)
RAID_0
Component: d2a74863-c523-da26-07c9-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52b40444-5921-86dc-e0bd-0ecfd0a9ca8c, cache: ,
votes: 1, usage: 0.0 GB, proxy component: false)
Component: d2a74863-09e7-db26-0cee-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52e5f0bd-c3ca-1d9f-c3ec-a04140027e01, cache: ,
votes: 1, usage: 0.0 GB, proxy component: false)
Component: d2a74863-c5f3-dc26-e08c-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52b40444-5921-86dc-e0bd-0ecfd0a9ca8c, cache: ,
votes: 1, usage: 0.0 GB, proxy component: false)
Witness: d2a74863-8df0-de26-cbf0-000c29d28ffe (state: ACTIVE (5), host: host101.lab.vsan, capacity: 5244e06d-e293-adaf-03de-1d006acca396, cache: ,
votes: 4, usage: 0.0 GB, proxy component: false)
Could you help me with this situation, please?
Thank you William,
Danilo.
Alaa says
Hello William,
First, thank you a lot for sharing.
Can you confirm whether you have tested two or more disks per host (with 16 GB memory)?
I've tested vSAN ESA in a nested environment on VMware Workstation. I can create the vSAN datastore with just one disk (SSD or NVMe) per ESXi host. But when I choose more than one disk per host I get these errors:
“General vSAN error. Failed to add disks to vSAN.”
“A general system error occurred: Failed to create storage pool disk(s) with exception: Failed to invoke create FS on disk: naa.6000c29834a5cfb0f5955dbe7015d246 with error: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details.: Sysinfo error: Out of memorySee VMkernel log for details.”
After a lot of checks and analysis, I found that the issue is related to the memory heap.
Example of the logs:
vmkwarning: cpu1:263779 opID=edbab0dc)WARNING: Heap: 1082: Could not create a pagepool for heap spl-logical-78a8480d2d72b982: Admission check failed for memory resource
vmkwarning: cpu1:263779 opID=edbab0dc)WARNING: WOBTREE: platform_heap_create_with_initial_size:60: Failed to create heap: Out of memory
The solution is:
1- Increase the heap memory size + reboot.
2- Example:
[root@esxi-01a:~] esxcfg-advcfg -g /LSOM/heapSize
Value of heapSize is 256
[root@esxi-01a:~] esxcfg-advcfg -s 2047 /LSOM/heapSize
Value of heapSize is 2047
[root@esxi-01a:~] reboot
It's possible to just increase the host memory instead, but that is not very helpful for a lab environment.
Thanks.
rmbki says
I had similar issues with a General vSAN error. I'm having to "probe" the functional limits to find the smallest configuration that still works in my lab. "16GB per host, boom, done" isn't cutting it.
Alaa says
Hello, I confirm I had the same error again after the hosts rebooted. I can now confirm that the minimum memory needed to use more than one disk (two or three max) per host is 18 GB. So the best solution is to increase the memory.
@William do you have a solution? Thanks.
William Lam says
Alaa - I've not forgotten about this thread and am still trying to get an answer. The official response I've gotten thus far is that vSAN ESA requires a minimum of 512GB for a properly supported configuration and anything less than that is YMMV and not supported. It sounds like you were able to confirm that simply updating the ESXi VM memory from 16GB->18GB allowed you to configure vSAN ESA w/o messing with the heap memory?
Alaa says
Thank you William for your response. Yep, I confirm that it is not related to the heap memory but to the host memory. With 18 GB of RAM I can add up to two disks per host; if I want to add more I must increase the RAM. Below is some information about the configuration, with the error I got when trying to add a third disk.
*****
[root@esxi-01a:~] vmware -v
VMware ESXi 8.0.0 build-20513097
[root@esxi-01a:~] vim-cmd hostsvc/hosthardware | grep -i "numcpu*"
numCpuPackages = 2,
numCpuCores = 2,
numCpuThreads = 2,
[root@esxi-01a:~] vim-cmd hostsvc/hosthardware | grep -i "memorySize*"
memorySize = 19326255104,
memorySize = 19324616704,
memorySize = 0
[root@esxi-01a:~] esxcfg-advcfg -g /LSOM/heapSize
Value of heapSize is 256
[root@esxi-01a:~] esxcli vsan storagepool list
naa.6000c29375de58c98723c3b6d2e1a973
Device: naa.6000c29375de58c98723c3b6d2e1a973
Display Name: naa.6000c29375de58c98723c3b6d2e1a973
vSAN UUID: 5282f918-8d0f-f5bf-28a0-4b5515129fbf
Used by this host: true
In CMMDS: true
On-disk format version: 17
Checksum: 4267661042492715910
Checksum Ok: true
Is Mounted: true
Is Encrypted: false
Disk Type: singleTier
Creation Time: Tue Oct 25 10:24:42 2022
[root@esxi-01a:~] esxcli vsan storagepool add -d naa.6000c29520e7449debbf5172b28ed6cb
[root@esxi-01a:~] esxcli vsan storagepool list
naa.6000c29375de58c98723c3b6d2e1a973
Device: naa.6000c29375de58c98723c3b6d2e1a973
Display Name: naa.6000c29375de58c98723c3b6d2e1a973
vSAN UUID: 5282f918-8d0f-f5bf-28a0-4b5515129fbf
Used by this host: true
In CMMDS: true
On-disk format version: 17
Checksum: 4267661042492715910
Checksum Ok: true
Is Mounted: true
Is Encrypted: false
Disk Type: singleTier
Creation Time: Tue Oct 25 10:24:42 2022
naa.6000c29520e7449debbf5172b28ed6cb
Device: naa.6000c29520e7449debbf5172b28ed6cb
Display Name: naa.6000c29520e7449debbf5172b28ed6cb
vSAN UUID: 52caba60-39db-df74-a9f0-1c294c7d4697
Used by this host: true
In CMMDS: true
On-disk format version: 17
Checksum: 1995621253246329509
Checksum Ok: true
Is Mounted: true
Is Encrypted: false
Disk Type: singleTier
Creation Time: Wed Oct 26 10:21:45 2022
[root@esxi-01a:~] esxcli vsan storagepool list | grep -i cmmds
In CMMDS: true
In CMMDS: true
[root@esxi-01a:~] esxcli vsan storagepool add -d naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
Unable to add device: Failed to create storage pool disk(s) with exception: Failed to invoke create FS on disk: naa.6000c29b5cdd3b779bfd6ef11cd2c7fd with error: Failed to prepare createFS disk op for storage pool disk naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
[root@esxi-01a:~] esxcli vsan storagepool list
naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
Device: naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
Display Name: naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
vSAN UUID: 52080173-19c0-92e4-a1c4-f4a1932d34d1
Used by this host: true
In CMMDS: false
On-disk format version: 17
Checksum: 10402518381846082570
Checksum Ok: true
Is Mounted: true
Is Encrypted: false
Disk Type: singleTier
Creation Time: Wed Oct 26 22:44:50 2022
naa.6000c29375de58c98723c3b6d2e1a973
Device: naa.6000c29375de58c98723c3b6d2e1a973
Display Name: naa.6000c29375de58c98723c3b6d2e1a973
vSAN UUID: 5293a251-a465-1855-ba32-851e1b45f0ee
Used by this host: true
In CMMDS: true
On-disk format version: 17
Checksum: 16445638087944010071
Checksum Ok: true
Is Mounted: true
Is Encrypted: false
Disk Type: singleTier
Creation Time: Wed Oct 26 22:38:44 2022
naa.6000c29520e7449debbf5172b28ed6cb
Device: naa.6000c29520e7449debbf5172b28ed6cb
Display Name: naa.6000c29520e7449debbf5172b28ed6cb
vSAN UUID: 52e09666-6dfe-c242-4d7d-76f3aa9c6953
Used by this host: true
In CMMDS: true
On-disk format version: 17
Checksum: 451530547096230681
Checksum Ok: true
Is Mounted: true
Is Encrypted: false
Disk Type: singleTier
Creation Time: Wed Oct 26 22:38:44 2022
[root@esxi-01a:~] esxcli vsan storagepool list | grep -i cmmds
In CMMDS: false
In CMMDS: true
In CMMDS: true
*****
Michael says
Hello William! Thanks a lot for your amazing work. I am trying to install VCSA 8 with vSAN ESA from the command line, but I have an issue and I don't know why or how to correct it. During the pre-check I get this error: Task 'Running Pre-check: vSAN Cluster Health Checks.' execution failed because", "[(vim.fault.VsanFault) { dynamicType = , dynamicProperty =", "(vmodl.DynamicProperty) [], msg = 'Failed to replace old HCL DB with new". I can't find anything on the web for this error. If you have an idea....
Regards,
Michael
Jim says
I ran into the same issue. There do not seem to be any good examples of a WORKING vCSA_with_cluster_on_ESXi.json file.
William Lam says
This is actually due to parsing of the hardware after the HCL DB has been downloaded. While the error is a bit misleading, it was an issue I had run into with one of my environments, and after filing a bug, it looks like this will be resolved in the upcoming 8.0 Update 1 release.
fabio071975 says
HI William,
I created a nested vSAN ESA cluster on ESXi 8 (three hosts, each with one disk for vSAN). After enabling vSAN, I see a warning on "Operation Health" and all disks are in "Disk Mounting - Please wait for disk mounting, no action is required". Is it all OK? 🙂
Ashish Chorge says
If possible, please add this prerequisite to the thread as a note, i.e. you cannot upgrade in place from vSAN 7 to vSAN 8 Express Storage Architecture (ESA). Going from vSAN 7 to vSAN 8 ESA is a migration. In other words, you will need to stand up your new vSAN 8 ESA cluster separately from your existing vSAN 7 cluster and move the data.
Reference URL: https://core.vmware.com/blog/upgrading-vmware-vsan-7-vsan-8#:~:text=You%20cannot%20upgrade%20in%2Dplace,cluster%20and%20move%20the%20data
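If both clusters end up being managed by the same vCenter Server, the actual data move can be as simple as a vMotion/Storage vMotion per VM. A rough PowerCLI sketch, where the cluster and datastore names are placeholders for your own environment:

# Move every VM from the old vSAN 7 (OSA) cluster to the new vSAN 8 ESA cluster and its datastore
Get-VM -Location (Get-Cluster -Name "vSAN7-Cluster") |
    Move-VM -Destination (Get-Cluster -Name "vSAN8-ESA-Cluster") -Datastore (Get-Datastore -Name "vsanDatastore-ESA")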
raydoom says
If the host has 16GB RAM, vSAN ESA will use about 10GB per host, but if the host has 64GB RAM, then vSAN ESA will use about 30GB per host. Why?
cwindomsr says
Hey Will,
Long time no talk or see. Just a piece of gear you might be interested in checking out:
https://www.amazon.com/MOGINSOK-Firewall-Appliance-Barebone-4xIntel/dp/B0CYH12C6G/ref=sr_1_5?crid=1M2LGKUVCD731&dib=eyJ2IjoiMSJ9.4pvsXvMWXzxVHcJu8aD1eD_T3BfCBuHQtkHzvGXehVUw0vfOOjP20EkIqfN3Bs7te7lDkCO7A-7fQhX74n2Ef7ftiXZ4f7rodjtWpP7oZmAn4fNZPTF3DjoLVLiuec59_80v07hLqF1we3lffGShja8qNesiS99g-tuIIWbYwYvoBr8c_H1ZsX-Q9S6EG5CJ-eRCypA6WHhKqfdeaSwRk7_TuWhiogi3SJ1GQ_wMIq8G72DrutV5FfIun_aQTfUAZwEC6w640z1uNR6ktuyyLYN8nr6bSG_qXUT1us4crsM.h54mDaUYoqOnR3zJMbTMlFeeKUdVekCPG-Q7b0VmS6w&dib_tag=se&keywords=Moginsok&qid=1712329598&s=electronics&sprefix=moginsok%2Celectronics%2C138&sr=1-5&th=1
cwindomsr says
I should've posted this link instead of the previous Amazon one. This is the unit that supports an i5 12-core CPU, 64GB, NVMe, 4x 2.5GbE NICs as well as 2x 10GbE SFP+ NICs. I use this unit with ESXi 8 installed, with pfSense running as a virtual machine. It is available in several configurations.
https://www.moginsok.com/products/updated-12th-gen-micro-firewall-appliance-10gbe-nas-mini-pc-with-sfp-2-intel-82599es-10gb-2xddr4-ram-m-2-pcie-nvme-ssd-4xintel-i226-v-2-5gbe-network-card-firewall-router-2xsata-slot-1xconsole?syclid=cp2n9f4h33ns73fq2vj0&utm_campaign=emailmarketing_164356391196&utm_medium=email&utm_source=shopify_email