As someone who is always on the lookout for interesting and clever ways to make the most out of a vSphere homelab investment, I was surprised there has not been more noise about the new NVMe Tiering capability in vSphere 8.0 Update 3!?
NVMe Tiering is currently in Tech Preview, and it enables ESXi to use an NVMe device as a secondary tier of memory for your workloads. IMHO, that makes it one of the killer features in vSphere 8.0 Update 3, especially with some interesting implications for homelabs!
As the old saying goes, a picture is worth a thousand words ...
The picture on the left shows a system with 64GB of memory (DRAM) available before enabling NVMe Tiering, and the one on the right shows the amount of memory available after enabling NVMe Tiering, which is a whopping 480GB! 🤯
For my initial setup, I used an older Intel NUC 12 Enthusiast, as it allows for up to 3 x NVMe devices, which I allocated to the ESXi installation, the workload datastore and NVMe Tiering. The maximum amount of physical DRAM the Intel NUC 12 Enthusiast supports is 64GB, which I have fully maxed out on the system, and I am using a 1TB NVMe device for NVMe Tiering, which is how I was able to get to 318GB of memory on my physical ESXi host running on the Intel NUC!
So how usable is the Intel NUC with the "extra" available memory? ... Well, I figured I should put it through a real test, and I was able to successfully deploy a fully operational VMware Cloud Foundation (VCF) Holodeck solution! 😎
Since the Intel NUC is a consumer platform, I was surprised at how responsive the deployment was and by its overall speed: it took a little over ~2hrs to complete, and the environment was fully accessible without any real noticeable performance degradation when logging into SDDC Manager or the vSphere UI.
My second experiment used a more recent hardware platform, the ASUS PN64-E1, which had 96GB of DRAM, and after enabling NVMe Tiering on the same 1TB NVMe device, I was able to reach 480GB of memory (which is the configuration shown in the screenshot at the very top of this blog post).
Note: I opted to leave all CPU cores enabled, and I did observe that the overall deployment took a bit longer than on the Intel 12th Generation CPU; I also had to retry the bringup operation a couple of times with Cloud Builder because the NSX VM had to be rebooted. It did eventually complete, so if you are using an Intel 13th Gen or later CPU, you may want to disable the E-Cores. Even though this system had more physical DRAM, the impact came more from the CPU than from memory, which speaks volumes about how robust the NVMe Tiering capability is!
While I was able to supercharge several of my consumer-grade systems, just imagine the possibilities with a more powerful system with server-grade CPU and memory, or what this could mean for the Edge!? The possibilities are truly endless, not to mention the types of workloads vSphere can now enable at a much lower cost! 🙌
Have I piqued your interest in upgrading to the latest vSphere 8.0 Update 3 and taking advantage of the new NVMe Tiering capability? What additional workloads might you be able to run now?
Below are the steps to configure NVMe Tiering:
Step 0 - Ensure that you have a single NVMe device that is not in use and has no existing partitions before enabling NVMe Tiering; the device can not be shared with any other functions. You should also review the KB 95944 article for additional considerations and restrictions before using NVMe Tiering. Ensure you also have a supported CPU, which must be either Intel Ice Lake or later or AMD Milan or later; if you are using an older CPU generation, you may run into issues when enabling and/or using NVMe Tiering.
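If you are unsure which device path to use or whether the device is truly unused, a quick sanity check from the ESXi Shell looks something like the following (a minimal sketch; the device identifier is simply the example used in Step 2 below, so substitute your own):

# list the raw disk device nodes and look for your NVMe device
ls /vmfs/devices/disks/ | grep -i nvme

# dump the partition table; an unused device should show no partition entries
partedUtil getptbl /vmfs/devices/disks/t10.NVMe____Samsung_SSD_960_EVO_1TB_________________8AC1B17155382500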
Step 1 - Enable the NVMe Tiering feature by running the following ESXCLI command:
esxcli system settings kernel set -s MemoryTiering -v TRUE
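Before rebooting, you can optionally confirm the kernel option was applied (just a verification step; the exact output formatting may vary by build):

esxcli system settings kernel list -o MemoryTiering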
Step 2 - Configure a specific NVMe device for use with NVMe Tiering by running the following command and providing the path to your NVMe device:
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_960_EVO_1TB_________________8AC1B17155382500
Note: After enabling NVMe Tiering for your NVMe device, you can see which device is configured by using "esxcli system tierdevice list". This is a one-time operation, which means that even if you reinstall ESXi or move the NVMe device, it will still contain the partition that marks the device for NVMe Tiering.
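For reference, the verification looks like this, and you can also inspect the device afterwards to see the partition that now marks it for NVMe Tiering (reusing the example device path from Step 2):

# show the device(s) claimed for NVMe Tiering
esxcli system tierdevice list

# the device should now carry a partition reserved for NVMe Tiering
partedUtil getptbl /vmfs/devices/disks/t10.NVMe____Samsung_SSD_960_EVO_1TB_________________8AC1B17155382500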
Step 3 - Configure the desired NVMe Tiering percentage (25-400) based on your physical DRAM configuration by running the following command:
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
Note: To learn more about the NVMe Tiering percentage configuration, please see the PDF document at the bottom of the KB 95944 article.
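To put the percentage into perspective using the numbers from this post: the tier size is your physical DRAM multiplied by TierNvmePct/100, so on the 64GB Intel NUC a value of 400 yields a 256GB tier and roughly 64 + 256 = 320GB of total memory (the ~318GB shown earlier once overhead is accounted for), while on the 96GB ASUS PN64-E1 it yields 96 + 384 = 480GB. You can confirm the currently configured value with:

esxcli system settings advanced list -o /Mem/TierNvmePct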
Step 4 - Reboot the ESXi host for the changes to take effect. After ESXi fully boots up, you will see the updated memory capacity that has been enabled by your NVMe device.
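If you prefer to perform the reboot from the ESXi Shell rather than the UI, something along these lines should work (assuming no running workloads; adjust for your environment):

# place the host in maintenance mode, reboot, then exit maintenance mode once it is back up
esxcli system maintenanceMode set --enable true
esxcli system shutdown reboot --reason "Enabling NVMe Tiering"
esxcli system maintenanceMode set --enable false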
Dennis Faucher says
That's a great feature. Thanks for the post.
Paul Braren says
I agree, this looks rather promising for tinkerers.
Simon SHAW says
How can you afford vSphere ESXi at home now there's no free trial?
Bard says
The cheapest option is VMUG Advantage; it's a $180 license ($200, but it's often on discount) for home/lab use of ALL VMware products, including of course ESXi and vCenter (but also vSAN, Tanzu, etc.). It might not include Horizon anymore though, since they are selling it AFAIK (or have already sold it?), but I have not verified that.
Steffen says
+1 on what Bard wrote about VMUG Advantage.
And only ESXi was free, not vCenter, vSAN etc., so the change introduced by Broadcom isn't that big of a thing IMHO. With VMUG Advantage you have access to a plethora of products for personal use and knowledge gaining; it's been a great option for a long time but is still surprisingly unknown.
David Nixon says
Tiering is awesome. Not ready for production (no RAID options), but for labs it is a game changer! In previous testing, we could run 1:4 (RAM:TIER) and clients would not see any change. With RAM being half the cost of the hardware...
Bruce Ferrell says
Gee Whiz! Isn't this simply swapping?
*IX has been doing this for decades. It's sneered at now, but it's still there AND used.
Everything old is new again... Especially if it gets a fresh new coat of bike-shed paint.
David Nixon says
Nope. Swapping is not intelligent. This moves inactive pages rather than just clobbering pages until the system is under a threshold. Plus, with swapping a page moves down and then has to be copied back before use. There is no copy-back here.
Jason says
Well, looks like Linux has memory tiering as well.
Fred says
Hi,
I have 3 NUC 12 Pro units, each with 1 x 256GB SSD (ESXi OS) + 1 x 1TB NVMe (vSAN cache) + 1 x 4TB SSD (vSAN data) + 64GB RAM.
In your opinion, what is the best config to implement the RAM tiering?
Fred says
I would like to use vSAN as well.
William Lam says
There are a few options ... With an Intel NUC or any other kit that has Thunderbolt 3/4, you could get a TB chassis and add more devices that can then be used. If that isn't something you're interested in AND you're looking to use vSAN, then you'll need to default to vSAN ESA, since OSA requires at least two disks and that'll limit your ability to use NVMe Tiering, and we typically recommend ESXi run on a reliable device. If you can get an NVMe device that supports multiple namespaces https://williamlam.com/2023/03/ssd-with-multiple-nvme-namespaces-for-vmware-homelab.html then you could slice up the device for additional functions, OR run ESXi on USB, which would leave you with a slot for NVMe Tiering.
arvindjagannath says
Memory tiering is more fine-grained in classifying pages, actively promotes and demotes pages, and tries to dampen the effects of page faults on performance.
(Memory tiering with vSAN is under discussion.)
arvindjagannath says
Thanks for the post, William
ryzenlike says
Thank you for the interesting article. Regarding the performance of the 13th generation CPUs, disabling the E-cores reduces the total CPU frequency. Is performance still better with only the P-cores?
William Lam says
As with anything in tech ... it'll depend on your use case and workloads on whether you need more cores or simply the performance of the P-Cores. Definitely worth testing both and seeing what works the best for your setup
ryzenlike says
thank you for your reply.
I understand that high core count and high frequency alone do not guarantee performance.
mattheldstab says
Cool stuff -- thanks for the post, William!
Nathan Daggett says
Can I provision an NVMe-oF datastore from PowerStore and have my hosts all access it? Can this be used to improve VDI performance in my healthcare enterprise?
Alex says
Isn't this going to wear out a consumer-grade SSD really fast?
If I use 25% of a 2TB SSD, is it wise to use the rest of the SSD for VMs, or better to leave it unused for TRIM / block reallocation / whatever?
Will this work with SATA SSDs too? It won't break any speed records, but maybe I want to test something overnight that requires more RAM than I have in my NUC12WS.
SemoTech says
Hey William, this is great stuff, but I noticed something strange. In ESXi 7.0U3q on a MacMini 2014 with an internal SATA SSD and an external 250GB USB drive (backup storage for a VM), the host automatically shows that "Virtual Flash" is enabled with a capacity of 19.75GB. Yet in ESXi 8.0U3 on a MacMini 2018 with the internal PCIe NVMe and both TB3 and USB connected storage, the "Virtual Flash" feature is not working. Any ideas why, or how to enable Virtual Flash for the ESXi 8 MacMini 2018 as well? I tried to "Add Capacity" using vCenter for the host, but no available drives were shown even though all storage is already visible under Devices and marked as SSD/Flash. Do you happen to have a tutorial on enabling a host's "Virtual Flash" for ESXi 8? Thanks.
Loren says
This is great stuff! Just enabled this on my home lab server. Took the memory from 256GB to just over 1TB! Going to put it through its paces. Nested VCF here I come!
Sho says
Nice to meet you.
I set up NVMe Tiering using the method above and deployed VCF 5.1, but after a few hours the nested ESXi hosts hung, and after rebooting, vCenter, NSX, etc. were damaged and the cluster did not start normally.
If you know, could you please let me know how you got the VCF environment running?
Thank you for reading.
Rafael says
Pretty much have retired ESXi from my home lab.
Sho says
Thanks for the great article.
I have one question: the technical guide says that using vmxnet3 will slow down the network.
Do you know how much of an impact this has, based on actual measurements using the VLC lab?
Thomas says
Hi William,
If I were to buy a spare NVME, get a compatible enclosure, and connect it to my laptop, will this allow me to take advantage of the extra RAM? Very interested in this for my home lab.
William Lam says
It must be seen as a PCIe device (e.g. no USB).
Thomas says
Awesome! Thank you so much for replying. Will this also work in my nested homelab environment that's running in Workstation?
David Nixon says
I tried it with a server (FC disks) in a nested environment and it worked.
Thomas says
Awesome, thank you so much! I am gonna get another NVME and give it a shot!
Duncan says
Wow, this is amazing. If only it were possible to allow the NVMe device to do double duty and still operate as part of vSAN. I'm assuming it's one or the other?
David Nixon says
vSAN and memory tiering are mutually exclusive by host.
Jason says
Tried it with a 32GB Optane NVMe on a NUC6 with 64GB RAM, and tried different percentages between 24 and 45. While the commands work and the tier device is seen, there is no difference in the RAM detected. Would be curious if there is a minimum requirement for the NVMe, etc.
William Lam says
Jason - It should just work. Can you provide the vm-support bundle with a direct download link? I can get this to Engineering to see what's going on.
Jason says
Sent you the link to a Google Drive share a couple of days ago. Didn't get a confirmation whether you received it.
William Lam says
I didn’t get anything, can I ask how you’d shared it?
Jason says
I sent the link to this email address - info[dot]virtuallyghetto[at]gmail[dot]com. The file is shared on Google Drive with no login required.
William Lam says
Just emailed you
Chris Childerhose says
Hi Jason,
Did you ever get this to work? I have older NUCs with NVMe drives in each of them to use for tiering, and mine does the same. Set it up and rebooted, but the RAM size never changes.
Chris
Jason says
No, I didn't. I swapped mine out from one of those with a dual-core i5 or i7 that I bought by mistake.
Every command works, but it just doesn't see the tier memory. ESXi sees the NVMe as "NVMe Optane Memory"; not sure if that's the usual description. Anyway, the 32GB really only has 28GB of capacity, and it would be cool to test out, but otherwise it's just not worth that much effort. Put in an actual NVMe SSD and everything just works. However, there are a few considerations.
Memory reservations don't work - makes sense.
Suspending a VM doesn't work - kind of sxxxs.
Tim says
I had some older 800GB Fusion-io Mezz cards lying around; I think this is a perfect use for them. Thanks!
Former Vmware Fanboy says
I can't even get access to our business licenses, never mind a homelab... VMware is such a joke now.
vDudeJon says
Does this consume the whole disk or can you use the remainder as a local datastore?
William Lam says
Updated the blog post, see Step 0 🙂
MtGimper says
It's a shame it takes the whole drive. Hopefully that's something that changes as the product matures.
justinmpaul says
Being able to see the shipment trends for RAM per box that goes out the door here at HPE, I have to wonder if anyone except home lab folks really cares about this? Don't get me wrong, it's cool for home labs, but it just seems like something added in to try to lure back some of the fanboys that have split up with Broadcom.
David Nixon says
Non-homelab user here. I'm SUPER EXCITED about this. In testing PMEM we found that only 25% of our allocated RAM is actually active (really active, not the misleading active memory counter). However, the lack of RAID or RAID-ish functionality is keeping me from even testing it in our enterprise. I can't have a drive go out and take down the entire server. I hear Windows doesn't take kindly to half of its RAM going offline. Hopefully vSphere Next will fix this. Taking a 2TB RAM server down to 512GB or 1TB would cut 30-50% off the server price.
George says
If the NVMe SSD is not in a RAID configuration, it is better not to set the percentage to 400. You will find that memory-heavy or DB write workloads become very slow (my NVMe SSD does 7200MB/s). ESXi also recommends not exceeding the upper limit of the memory capacity, so 25 - 100 is best. In addition, VMs that use a GPU and need to reserve their memory cannot use NVMe Tiering.