
Enabling vSAN 8 Express Storage Architecture (ESA) using Nested ESXi

10.13.2022 by William Lam // 12 Comments

To get the full benefits and support of the new vSAN 8 Express Storage Architecture (ESA), you will need modern hardware from the official vSAN ESA Ready Node HCL. However, from an education and learning point of view, you may want to explore some of the vSAN ESA workflows, and the easiest way to do that ... well, you probably know the answer: it is using Nested ESXi, of course!

With my recently published vSphere and vSAN 8 Lab Deployment Script, you can use that as the base, along with the Nested ESXi 8.0 IA Virtual Appliance, to set up vSAN 8 ESA using virtual hardware 😀


There are only a couple of things to be aware of prior to setting up vSAN ESA using Nested ESXi.

Before you can enable vSAN ESA, each ESXi host within the vSphere Cluster must have a minimum of 16GB of memory; if it does not, the pre-check will fail with an error as shown in the screenshot above. Two other warnings will also show up: one regarding the use of vSphere Lifecycle Manager (vLCM) if the cluster has not been set up with it, which you can safely ignore, and a network pre-check ensuring the physical (virtual, in our case) network adapter is at least 25GbE. For Nested ESXi, the virtual network adapter will only show 10GbE, which you can also ignore and proceed past after meeting the memory requirement, which is a hard pre-check.
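If you want to double-check these requirements from the shell of each Nested ESXi host before running the wizard, a minimal sanity check might look like the following (the output values are illustrative for a host with 16GB of memory and a single 10GbE vmxnet3 adapter):

[root@esxi:~] esxcli hardware memory get
   Physical Memory: 17179869184 Bytes
   Reliable Memory: 0 Bytes
   NUMA Node Count: 1

[root@esxi:~] esxcli network nic list
Name    PCI Device    Driver    Admin Status  Link Status  Speed  Duplex  ...
------  ------------  --------  ------------  -----------  -----  ------  ---
vmnic0  0000:0b:00.0  nvmxnet3  Up            Up           10000  Full    ...

The Speed column showing 10000 for the nested vmxnet3 adapter is exactly why the 25GbE network pre-check throws a warning.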

Note: Thanks to reader Alaa, it looks like the 16GB memory minimum only covers a single disk being used with vSAN ESA. If you wish to add a second disk, the memory needs to be increased to at least 18GB, and potentially more if you plan to add additional disks. vSAN ESA officially requires 512GB of memory for a supported configuration, and while that is going to be overkill for a Nested environment, you may need to experiment with the memory allocation if you are running into enablement issues, especially if you plan to add additional disks.
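For reference, claiming an additional disk into a host's ESA storage pool can also be done from the ESXi shell, which makes it easy to test where your memory allocation falls over; the device name below is just a placeholder, and on a host with only 16GB you would expect this second add to fail with the out-of-memory error Alaa describes in the comments below:

[root@esxi:~] esxcli vsan storagepool add -d naa.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[root@esxi:~] esxcli vsan storagepool list | grep -i cmmds
In CMMDS: true
In CMMDS: true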

During the disk claiming part of the wizard, you will see more warnings. This is expected because we are not using "hardware" from the vSAN ESA HCL, and hence auto-claim cannot be used, but we can still select the virtual disks that we wish to use and click next to proceed.
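If you would rather sanity check from the shell which virtual disks vSAN considers eligible before walking through the wizard, the vdq utility on each ESXi host can report this; the device name and values below are illustrative:

[root@esxi:~] vdq -q
[
   {
      "Name"     : "mpx.vmhba0:C0:T1:L0",
      "VSANUUID" : "",
      "State"    : "Eligible for use by VSAN",
      "Reason"   : "None",
      "IsSSD"    : "1",
      "IsCapacityFlash": "0",
      "IsPDL"    : "0",
   },
   ...
]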


Once you reach the end of the wizard, you will be presented with a summary and enablement of vSAN ESA will begin.


Everything should be fully configured in a few minutes and you now have vSAN ESA running using Nested ESXi!
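A quick way to verify the result from the shell of any of the hosts is shown below; since storage pools only exist with ESA, seeing your claimed devices in esxcli vsan storagepool list is a good confirmation that ESA (rather than OSA) is in use. The output is illustrative:

[root@esxi:~] esxcli vsan cluster get | grep -i enabled
   Enabled: true
[root@esxi:~] esxcli vsan storagepool list | grep -i cmmds
   In CMMDS: true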


Again, this is purely for educational and learning purposes, especially for familiarizing yourself with the vSAN ESA enablement workflow. Outside of that initial workflow, I am not sure there is any real benefit to using vSAN ESA in a Nested ESXi environment, especially compared to the vSAN Original Storage Architecture (OSA), which can be enabled with just 8GB of memory per host versus the 16GB required by vSAN ESA.


Categories // Home Lab, Nested Virtualization, VSAN, vSphere 8.0 Tags // Express Storage Architecture, Nested ESXi, nested virtualization, VSAN 8, vSphere 8.0

Comments

  1. Andy says

    10/13/2022 at 12:46 pm

I'm trying to do this on vCloud Director, which in my environment is still on 6.7U2, so I need to change the VM hardware level to 15 or lower. Any time I do that by editing the ovf file, updating the HW version, and editing the manifest with a new checksum, I can never get the uploaded OVF files to boot; I just get a BIOS screen asking me to select what to boot. Any ideas, or is the HW version too far out of sync to even do this nested on 6.7U2?

    Reply
    • William Lam says

      10/13/2022 at 1:44 pm

      You don’t need to mess w/checksum. Convert from OVA->OVF and delete the .mf file. Make your change and convert back to OVA or just use OVF+VMDK as-is
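      For anyone who wants the concrete steps, here is a rough sketch: an OVA is simply a tar archive, so you can unpack it, remove the manifest, edit the hardware version in the .ovf, and optionally repackage it with ovftool. The filenames below are illustrative:

      # An OVA is a tar archive containing the .ovf, .mf and .vmdk files
      tar xf Nested_ESXi8.0_IA_Appliance.ova
      # Remove the manifest so the modified OVF is not checksum-validated
      rm Nested_ESXi8.0_IA_Appliance.mf
      # In the .ovf, lower the virtual hardware version, e.g.:
      #   <vssd:VirtualSystemType>vmx-20</vssd:VirtualSystemType>  ->  vmx-15
      # Upload the OVF + VMDK as-is, or repackage into a single OVA:
      ovftool Nested_ESXi8.0_IA_Appliance.ovf Nested_ESXi8.0_IA_hw15.ova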

      Reply
  2. Danilo says

    10/13/2022 at 5:15 pm

    Great Post William, thank you very much!

I created a nested vSAN cluster in my home lab. I have three ESXi hosts + vCenter Server with the vSAN ESA architecture enabled.

    I created a Virtual Machine and I checked the physical disk placement for this VM. I'm using the vSAN Default Storage Policy for this VM.

I don't understand why the concatenation components were created in this case. For example:

[vsanDatastore] 35a74863-cae2-a058-6dea-000c29f65804/windows10-4e1f16ed.vswp
DOM Object: d2a74863-52bb-b824-5fe6-000c29d28ffe (v17, owner: host102.lab.vsan, proxy owner: None, policy: stripeWidth = 1, cacheReservation = 0, proportionalCapacity = 0, hostFailuresToTolerate = 1, forceProvisioning = 1, spbmProfileId = aa6d5a82-1c88-45da-85d3-3d74b91a5bad, spbmProfileGenerationNumber = 0, CSN = 1, spbmProfileName = vSAN Default Storage Policy)
   Concatenation
      RAID_1
         Component: d2a74863-a97c-8b26-9b69-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52e05d45-9130-948f-56f4-50ec2acf7694, cache: , votes: 2, usage: 0.0 GB, proxy component: false)
         Component: d2a74863-55ef-d226-f15a-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52af91a1-7d7c-faf3-23df-583598a780aa, cache: , votes: 1, usage: 0.0 GB, proxy component: false)
      RAID_1
         RAID_0
            Component: d2a74863-65d6-d526-23a6-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52c36d1c-a8bb-086d-041f-dd018ae41028, cache: , votes: 1, usage: 0.0 GB, proxy component: false)
            Component: d2a74863-a912-d726-cf53-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52cdb989-a6fd-a290-000b-f6b780182e2d, cache: , votes: 1, usage: 0.0 GB, proxy component: false)
            Component: d2a74863-6d7c-d926-596e-000c29d28ffe (state: ACTIVE (5), host: host100.lab.vsan, capacity: 52cdb989-a6fd-a290-000b-f6b780182e2d, cache: , votes: 1, usage: 0.0 GB, proxy component: false)
         RAID_0
            Component: d2a74863-c523-da26-07c9-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52b40444-5921-86dc-e0bd-0ecfd0a9ca8c, cache: , votes: 1, usage: 0.0 GB, proxy component: false)
            Component: d2a74863-09e7-db26-0cee-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52e5f0bd-c3ca-1d9f-c3ec-a04140027e01, cache: , votes: 1, usage: 0.0 GB, proxy component: false)
            Component: d2a74863-c5f3-dc26-e08c-000c29d28ffe (state: ACTIVE (5), host: host102.lab.vsan, capacity: 52b40444-5921-86dc-e0bd-0ecfd0a9ca8c, cache: , votes: 1, usage: 0.0 GB, proxy component: false)
   Witness: d2a74863-8df0-de26-cbf0-000c29d28ffe (state: ACTIVE (5), host: host101.lab.vsan, capacity: 5244e06d-e293-adaf-03de-1d006acca396, cache: , votes: 4, usage: 0.0 GB, proxy component: false)

Could you help me with this situation, please?

    Thank you William,

    Danilo.

    Reply
  3. Alaa says

    10/19/2022 at 9:17 am

    Hello William,
First, thank you a lot for sharing.
Can you confirm whether you have tested two disks or more per host (with 16 GB memory)?
I've tested vSAN ESA in a nested environment on VMware Workstation. I can create the vSAN Datastore with just one disk (SSD or NVMe) per ESXi host. But when I choose more than one disk per host I get these errors:
“General vSAN error. Failed to add disks to vSAN.”
“A general system error occurred: Failed to create storage pool disk(s) with exception: Failed to invoke create FS on disk: naa.6000c29834a5cfb0f5955dbe7015d246 with error: Unable to complete Sysinfo operation. Please see the VMkernel log file for more details.: Sysinfo error: Out of memory. See VMkernel log for details.”

After a lot of checks and analysis, I found that the issue is related to the memory heap.

Example of logs:

    vmkwarning: cpu1:263779 opID=edbab0dc)WARNING: Heap: 1082: Could not create a pagepool for heap spl-logical-78a8480d2d72b982: Admission check failed for memory resource
    vmkwarning: cpu1:263779 opID=edbab0dc)WARNING: WOBTREE: platform_heap_create_with_initial_size:60: Failed to create heap: Out of memory

The solution is:
1. Increase the heap memory size, then reboot.
2. Example:

[root@esxi:~] esxcfg-advcfg -g /LSOM/heapSize
Value of heapSize is 256
[root@esxi:~] esxcfg-advcfg -s 2047 /LSOM/heapSize
Value of heapSize is 2047
[root@esxi:~] reboot

It's also possible to just increase the host memory, but that is not very helpful for a lab environment.

    Thanks.

    Reply
    • rmbki says

      10/25/2022 at 7:59 am

      I had similar issues with a General vSAN error. I'm having to "probe" the functional limits to find the smallest configuration that still works in my lab. "16GB per host, boom, done" isn't cutting it.

      Reply
      • Alaa says

        10/25/2022 at 8:05 am

Hello, I confirm I had the same error again after the hosts rebooted. I can now confirm that the minimum memory needed to use more than one disk (two or three max) per host is 18 GB. So the best solution is to increase the memory.
@William do you have a solution? Thanks.

        Reply
        • William Lam says

          10/25/2022 at 8:31 am

Alaa - I've not forgotten about this thread and am still trying to get an answer. The official response I've gotten thus far is that vSAN ESA requires a minimum of 512GB for a properly supported configuration, and anything less than that is YMMV and not supported. It sounds like you were able to confirm that simply updating the ESXi VM memory from 16GB->18GB allowed you to configure vSAN ESA w/o messing with heap memory?

          Reply
          • Alaa says

            10/27/2022 at 8:51 am

Thank you William for your response. Yep, I confirm that it is not related to the heap memory but to the host memory. With 18 GB of RAM I can add up to two disks per host; if I want to add more I must increase the RAM. Below is some information about the configuration, along with the error I got when trying to add a third disk.

            *****

[root@esxi:~] vmware -v
            VMware ESXi 8.0.0 build-20513097

            [[email protected]:~] vim-cmd hostsvc/hosthardware | grep -i "numcpu*"
            numCpuPackages = 2,
            numCpuCores = 2,
            numCpuThreads = 2,

            [[email protected]:~] vim-cmd hostsvc/hosthardware | grep -i "memorySize*"
            memorySize = 19326255104,
            memorySize = 19324616704,
            memorySize = 0

[root@esxi:~] esxcfg-advcfg -g /LSOM/heapSize
            Value of heapSize is 256

[root@esxi:~] esxcli vsan storagepool list
            naa.6000c29375de58c98723c3b6d2e1a973
            Device: naa.6000c29375de58c98723c3b6d2e1a973
            Display Name: naa.6000c29375de58c98723c3b6d2e1a973
            vSAN UUID: 5282f918-8d0f-f5bf-28a0-4b5515129fbf
            Used by this host: true
            In CMMDS: true
            On-disk format version: 17
            Checksum: 4267661042492715910
            Checksum Ok: true
            Is Mounted: true
            Is Encrypted: false
            Disk Type: singleTier
            Creation Time: Tue Oct 25 10:24:42 2022

[root@esxi:~] esxcli vsan storagepool add -d naa.6000c29520e7449debbf5172b28ed6cb

[root@esxi:~] esxcli vsan storagepool list
            naa.6000c29375de58c98723c3b6d2e1a973
            Device: naa.6000c29375de58c98723c3b6d2e1a973
            Display Name: naa.6000c29375de58c98723c3b6d2e1a973
            vSAN UUID: 5282f918-8d0f-f5bf-28a0-4b5515129fbf
            Used by this host: true
            In CMMDS: true
            On-disk format version: 17
            Checksum: 4267661042492715910
            Checksum Ok: true
            Is Mounted: true
            Is Encrypted: false
            Disk Type: singleTier
            Creation Time: Tue Oct 25 10:24:42 2022

            naa.6000c29520e7449debbf5172b28ed6cb
            Device: naa.6000c29520e7449debbf5172b28ed6cb
            Display Name: naa.6000c29520e7449debbf5172b28ed6cb
            vSAN UUID: 52caba60-39db-df74-a9f0-1c294c7d4697
            Used by this host: true
            In CMMDS: true
            On-disk format version: 17
            Checksum: 1995621253246329509
            Checksum Ok: true
            Is Mounted: true
            Is Encrypted: false
            Disk Type: singleTier
            Creation Time: Wed Oct 26 10:21:45 2022

[root@esxi:~] esxcli vsan storagepool list | grep -i cmmds
            In CMMDS: true
            In CMMDS: true

[root@esxi:~] esxcli vsan storagepool add -d naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
            Unable to add device: Failed to create storage pool disk(s) with exception: Failed to invoke create FS on disk: naa.6000c29b5cdd3b779bfd6ef11cd2c7fd with error: Failed to prepare createFS disk op for storage pool disk naa.6000c29b5cdd3b779bfd6ef11cd2c7fd

[root@esxi:~] esxcli vsan storagepool list
            naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
            Device: naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
            Display Name: naa.6000c29b5cdd3b779bfd6ef11cd2c7fd
            vSAN UUID: 52080173-19c0-92e4-a1c4-f4a1932d34d1
            Used by this host: true
            In CMMDS: false
            On-disk format version: 17
            Checksum: 10402518381846082570
            Checksum Ok: true
            Is Mounted: true
            Is Encrypted: false
            Disk Type: singleTier
            Creation Time: Wed Oct 26 22:44:50 2022

            naa.6000c29375de58c98723c3b6d2e1a973
            Device: naa.6000c29375de58c98723c3b6d2e1a973
            Display Name: naa.6000c29375de58c98723c3b6d2e1a973
            vSAN UUID: 5293a251-a465-1855-ba32-851e1b45f0ee
            Used by this host: true
            In CMMDS: true
            On-disk format version: 17
            Checksum: 16445638087944010071
            Checksum Ok: true
            Is Mounted: true
            Is Encrypted: false
            Disk Type: singleTier
            Creation Time: Wed Oct 26 22:38:44 2022

            naa.6000c29520e7449debbf5172b28ed6cb
            Device: naa.6000c29520e7449debbf5172b28ed6cb
            Display Name: naa.6000c29520e7449debbf5172b28ed6cb
            vSAN UUID: 52e09666-6dfe-c242-4d7d-76f3aa9c6953
            Used by this host: true
            In CMMDS: true
            On-disk format version: 17
            Checksum: 451530547096230681
            Checksum Ok: true
            Is Mounted: true
            Is Encrypted: false
            Disk Type: singleTier
            Creation Time: Wed Oct 26 22:38:44 2022

[root@esxi:~] esxcli vsan storagepool list | grep -i cmmds
            In CMMDS: false
            In CMMDS: true
            In CMMDS: true

            *****

  4. Michael says

    12/23/2022 at 4:14 am

Hello William! Thanks a lot for your amazing work. I'm trying to install VCSA 8 with vSAN ESA via the command line, but I have an issue and I don't know why or how to correct it. During the pre-check I get this error: Task 'Running Pre-check: vSAN Cluster Health Checks.' execution failed because", "[(vim.fault.VsanFault) { dynamicType = , dynamicProperty =", "(vmodl.DynamicProperty) [], msg = 'Failed to replace old HCL DB with new". I can't find anything on the web for this error. If you have an idea....
Regards,
Michael

    Reply
    • Jim says

      03/23/2023 at 8:08 pm

I ran into the same issue. There don't seem to be any good examples of a WORKING vCSA_with_cluster_on_ESXi.json file.

      Reply
      • William Lam says

        03/24/2023 at 5:38 am

This is actually due to the parsing of the hardware after the HCL DB has been downloaded. While the error is a bit misleading, it was an issue I had run into with one of my environments, and after filing a bug, it looks like this will be resolved in the upcoming 8.0 Update 1 release.

        Reply
  5. fabio071975 says

    02/07/2023 at 8:37 am

Hi William,
I created a nested vSAN ESXi 8 ESA cluster (three hosts, each with one disk for vSAN). After enabling vSAN I see a warning on "Operation Health" and all disks are in "Disk Mounting - Waiting for disk mounting, no action is required". Is this all ok? 🙂

    Reply
