
How to bootstrap vSAN Express Storage Architecture (ESA) on unsupported hardware?

01.19.2023 by William Lam

I was recently chatting with a colleague who asked an interesting question about the memory overhead of running vSAN Original Storage Architecture (OSA) versus the new vSAN Express Storage Architecture (ESA) from a VMware Homelab perspective. I honestly did not know the answer, as I am only using vSAN OSA in my personal homelab. I was curious myself, especially about the implications for small form factor (SFF) systems, which typically max out at 64GB of memory.

Today, vSAN ESA is only officially supported on vSAN ESA Ready Nodes, which are all listed in the vSAN ESA HCL, and the minimum amount of memory is 512GB. For the best possible experience and a supported configuration, customers should only use approved vSAN ESA hardware; any other systems will not yield the same benefits or outcomes. As an aside, a fantastic resource for all things vSAN ESA is the vSAN ESA TechZone page, which I highly recommend bookmarking as there are a lot of in-depth technical resources and collateral.

Disclaimer: This is not officially supported by VMware and is purely for educational purposes; use at your own risk.

With the quick disclaimer out of the way, I was curious whether I could bootstrap vSAN ESA on one of my recent Intel NUC 12 Pro systems, which has 64GB of memory and two M.2 NVMe devices, both of which are NOT on the VMware HCL. While I could attempt to use the VCSA Installer, which supports bootstrapping vSAN on a single ESXi host using the vSAN Easy Install method, it performs a validation check against the vSAN ESA HCL and my hardware would of course fail immediately. Instead, I decided to go with another method using the ESXi Shell, which can also be incorporated into ESXi Kickstart Automation.

Step 1 - Modify the default vSAN storage policy on the ESXi host on which you plan to provision your vCenter Server or other workloads. Once vCenter Server is up and running, you can then manage the vSAN Storage Policies there. Run the following ESXCLI commands in the ESXi Shell or via SSH to change the default vSAN Storage Policies, which allows us to deploy workloads on a single 1-Node vSAN ESA host:

esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
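
As a quick sanity check (not strictly required for the bootstrap), you can dump the current default policies to confirm the changes took effect:

esxcli vsan policy getdefault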

Step 2 - Enable vSAN traffic on your VMkernel interface(s) by running the following ESXCLI command:

esxcli vsan network ip add -i vmk0
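
If you would rather use a dedicated VMkernel interface for vSAN traffic, simply substitute it for vmk0. You can verify which interfaces are tagged for vSAN traffic with:

esxcli vsan network list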

Step 3 - Create a 1-Node vSAN ESA Cluster by running the following ESXCLI command:

esxcli vsan cluster new -x
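
At this point you can confirm the single-node cluster has been created and that the host has joined it:

esxcli vsan cluster get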


Step 4 - Next, add the desired SSD device(s) into the vSAN ESA Storage Pool by using the following ESXCLI command:

esxcli vsan storagepool add -d eui.0000000001000000e4d25cd114ab5001 -d t10.ATA_____Dogfish_SSD_256GB_______________________KC20200927448_______


Note: To identify the SSD device IDs, you can use the vdq -q command to query for the list of eligible SSD devices.
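
The device IDs shown above are just examples from my own host; yours will differ. Once the devices have been claimed, you should be able to confirm them by listing the contents of the storage pool:

esxcli vsan storagepool list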

If everything was set up correctly, you should now see a new vsanDatastore with the aggregated capacity of the SSD devices, and you can start deploying workloads like the vCenter Server Appliance (VCSA), as shown in the screenshot below running on my Intel NUC.


If you need to disable or delete the vSAN ESA setup, start by deleting all workloads from the vsanDatastore and then use the following ESXCLI commands to remove each SSD device, which must be done one at a time. Finally, perform the leave operation to completely disable vSAN ESA on your ESXi host.

esxcli vsan storagepool remove -d eui.0000000001000000e4d25cd114ab5001
esxcli vsan storagepool remove -d t10.ATA_____Dogfish_SSD_256GB_______________________KC20200927448_______
esxcli vsan cluster leave
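
As mentioned earlier, this method also lends itself nicely to ESXi Kickstart Automation. Below is a minimal, hypothetical sketch of what the %firstboot section of a kickstart file could look like to perform the same bootstrap; the VMkernel interface and device ID are placeholders that you would need to replace with your own values:

%firstboot --interpreter=busybox

# Allow provisioning on a single 1-Node vSAN ESA host
esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

# Tag the VMkernel interface for vSAN traffic
esxcli vsan network ip add -i vmk0

# Create the 1-Node vSAN ESA cluster
esxcli vsan cluster new -x

# Add the desired device(s) to the vSAN ESA storage pool (replace with your own device IDs)
esxcli vsan storagepool add -d eui.0000000001000000e4d25cd114ab5001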

Lastly, let's circle back to the original question about the memory overhead of vSAN OSA versus vSAN ESA on a small form factor kit like an Intel NUC, which only supports a maximum of 64GB of memory. Below is the breakdown when using the latest ESXi 8.0a release, which uses ~4% of the physical memory for the ESXi installation itself.

vSAN OSA enabled with no additional workloads takes ~29% of the physical memory or 18.26GB


vSAN ESA enabled with no additional workloads takes ~51% of the physical memory or 32.64GB


While the memory overhead may not make sense for most SFF homelab setups, it is possible to run vSAN ESA on unsupported hardware for those who are interested or have more capable systems with more memory. For production-grade hardware that is on the vSAN ESA HCL, the minimum amount of memory is 512GB, and the memory overhead is a tiny fraction of the overall physical memory; it is used for caching and may even be a fixed value.

Categories // Automation, ESXi, Not Supported, VSAN, vSphere 8.0 Tags // ESXi 8.0, Express Storage Architecture, VSAN 8, vSphere 8.0
