With the ability to share a single NVMe device for both NVMe Tiering and a local VMFS datastore ... I had an idea to push this further and see if I could also get an ESXi-OSData partition running on the same shared NVMe device! 🤔
Similar to the previous blog post, the underlying use case is really for dev/test environments where you may not have many NVMe devices to dedicate to the various ESXi functions. This is especially true for those using small form factor (SFF) systems like an ASUS NUC or similar, as most mainstream SFF systems come with only two, or maybe three, NVMe slots if you are lucky.
This technique allows you to boot ESXi off of a USB device and then consolidate key functions like ESXi-OSData and NVMe Tiering on a single shared NVMe device, freeing up the remaining NVMe devices for use with vSAN, which should always have dedicated devices whether you are considering vSAN OSA or ESA.
Disclaimer: This is not officially supported by VMware, please use at your own risk.
Step 1 - Ensure that you have an empty NVMe device; you cannot use a device that has any existing partitions. You can use the vdq -q command to identify and retrieve the SSD device name.
Step 2 - Download the createSharedNVMeTeiringOSDataAndVMFSPartitions.sh shell script to your ESXi host and update the four required variables:
- SSD_DEVICE - Name of the NVMe device from Step 1
- NVME_TIERING_SIZE_IN_GB - Amount of storage (in GB) to use for NVMe Tiering
- OSDATA_SIZE_IN_GB - Amount of storage (in GB) to use for ESXi-OSData
- VMFS_DATASTORE_NAME - Name of the VMFS datastore to create on the NVMe device
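Under the hood, the script has to translate these GB values into sector offsets for the partitioning commands. The exact commands are printed by the script itself when it runs, but the sizing arithmetic can be sketched in plain shell (a sketch only, assuming 512-byte sectors and a 1MiB-aligned first partition; the variable names mirror the script's inputs, the rest is illustrative):

```shell
# Sector math sketch (assumption: 512-byte sectors, first partition
# starting at sector 2048 for 1MiB alignment).
SECTORS_PER_GB=$((1024 * 1024 * 1024 / 512))    # 2097152 sectors per GB

NVME_TIERING_SIZE_IN_GB=256
OSDATA_SIZE_IN_GB=32
TOTAL_SECTORS=$((913 * SECTORS_PER_GB))          # example: ~913GB usable device

P1_START=2048                                    # NVMe Tiering partition
P1_END=$((P1_START + NVME_TIERING_SIZE_IN_GB * SECTORS_PER_GB - 1))
P2_START=$((P1_END + 1))                         # ESXi-OSData partition
P2_END=$((P2_START + OSDATA_SIZE_IN_GB * SECTORS_PER_GB - 1))
P3_START=$((P2_END + 1))                         # VMFS takes the remainder
P3_END=$((TOTAL_SECTORS - 1))

echo "NVMe Tiering: sectors $P1_START-$P1_END"
echo "ESXi-OSData:  sectors $P2_START-$P2_END"
echo "VMFS:         sectors $P3_START-$P3_END"
```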
Ensure the script is executable (chmod +x /tmp/createSharedNVMeTeiringOSDataAndVMFSPartitions.sh) before running it.
Note: Due to the complexity of the commands, the script will automatically print each command AND then run it, as shown in the screenshot below.
Here is an example of running the script on my setup, where I have a 1TB (913.15GB) NVMe device and I am allocating 256GB for NVMe Tiering, 32GB for ESXi-OSData, with the remaining space allocated to the VMFS datastore.
If you have ESXi running on a USB device, which is how my setup is configured, you will notice an existing ESXi-OSData volume running on a ramdisk, identified by the LOCKER-XXX volume label in the output of the esxcli storage filesystem list command. After running the script, you will see a secondary ESXi-OSData volume, and the very last command updates the ESXi-OSData configuration to point to our new partition.
Using the ESXi Host Client, we can see the three partitions that we have now created:
Step 4 - Enable the NVMe Tiering feature, if you have not already done so, by running the following ESXCLI command:
esxcli system settings kernel set -s MemoryTiering -v TRUE
Step 5 - Configure the desired NVMe Tiering percentage (25-400), based on your physical DRAM configuration, by running the following command:
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
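As I understand it, this percentage is interpreted relative to installed DRAM, so a value of 400 on a host with 64GB of DRAM would carve out 256GB of the NVMe device for tiering (which is why 256GB was allocated in the example above). A quick illustrative calculation (the DRAM size here is just an example):

```shell
# Illustrative: /Mem/TierNvmePct is a percentage of physical DRAM,
# so NVMe tier size = DRAM * pct / 100 (values below are examples).
DRAM_GB=64
TIER_NVME_PCT=400
TIER_SIZE_GB=$((DRAM_GB * TIER_NVME_PCT / 100))
echo "NVMe tier size: ${TIER_SIZE_GB}GB"
```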
Step 6 - Finally, reboot for the NVMe Tiering settings and the new ESXi-OSData configuration to take effect. Once your ESXi host reboots, you will have a single NVMe device supporting NVMe Tiering, ESXi-OSData, and a local VMFS datastore for your workloads!
Step 7 - If you initially installed ESXi on a USB device and did NOT configure an ESXi-OSData volume, one additional step is needed: copy the packages and vmware directories into the new ESXi-OSData volume. You can use the esxcli storage filesystem list command to view the LOCKER-* volume path; the OSDATA-* volume is your new ESXi-OSData volume, as shown in the earlier screenshot.
In this example, the LOCKER-6755b968-c118cea2-656e-88aedd7138d4 mount point is /vmfs/volumes/6755b968-c118cea2-656e-88aedd7138d4 and the OSDATA-6755c79c-20be01ee-f3e2-88aedd7138d4 mount point is /vmfs/volumes/6755c79c-20be01ee-f3e2-88aedd7138d4.
Run the following commands to copy the two directories:
cp -rf /vmfs/volumes/6755b968-c118cea2-656e-88aedd7138d4/packages /vmfs/volumes/6755c79c-20be01ee-f3e2-88aedd7138d4
cp -rf /vmfs/volumes/6755b968-c118cea2-656e-88aedd7138d4/vmware /vmfs/volumes/6755c79c-20be01ee-f3e2-88aedd7138d4
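If you would rather not eyeball the UUIDs, the lookup can be scripted as well. Here is a minimal sketch; the two sample lines stand in for real esxcli storage filesystem list output (on a live host you would capture the output instead), and echo is used so the copy commands are only printed, not executed:

```shell
# Sketch: extract the LOCKER-* and OSDATA-* mount points from sample
# `esxcli storage filesystem list` output. On a real host, capture it with:
#   FS_LIST=$(esxcli storage filesystem list)
# The two lines below are illustrative stand-ins for that output.
FS_LIST='/vmfs/volumes/6755b968-c118cea2-656e-88aedd7138d4  LOCKER-6755b968-c118cea2-656e-88aedd7138d4  true
/vmfs/volumes/6755c79c-20be01ee-f3e2-88aedd7138d4  OSDATA-6755c79c-20be01ee-f3e2-88aedd7138d4  true'

# Mount point is the first column, volume label the second
LOCKER_PATH=$(printf '%s\n' "$FS_LIST" | awk '$2 ~ /^LOCKER-/ {print $1}')
OSDATA_PATH=$(printf '%s\n' "$FS_LIST" | awk '$2 ~ /^OSDATA-/ {print $1}')

# Same two copies as above, with the paths resolved dynamically
echo cp -rf "$LOCKER_PATH/packages" "$OSDATA_PATH"
echo cp -rf "$LOCKER_PATH/vmware" "$OSDATA_PATH"
```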