As someone who is always on the lookout for interesting and clever ways to make the most out of a vSphere homelab investment, I was surprised there has not been more noise about the new NVMe Tiering capability in vSphere 8.0 Update 3!
NVMe Tiering, currently in Tech Preview, enables ESXi to use an NVMe device as a secondary tier of memory for your workloads, which IMHO makes it one of the killer features in vSphere 8.0 Update 3, especially with some interesting implications for homelabs!
As the old saying goes, a picture is worth a thousand words ...
The picture on the left shows a system with 64GB of memory (DRAM) available before enabling NVMe Tiering, and the one on the right shows the amount of memory available after enabling NVMe Tiering, which is a whopping 480GB!
For my initial setup, I used an older Intel NUC 12 Enthusiast, as it allows for up to 3 x NVMe devices, which I allocated to the ESXi installation, the workload datastore and NVMe Tiering. The Intel NUC 12 Enthusiast maxes out at 64GB of physical DRAM, which I have fully populated, and I am using a 1TB NVMe device for NVMe Tiering, which is how I was able to get to 318GB of memory on this physical ESXi host!
So how usable is the Intel NUC with the "extra" available memory? ... Well, I figured I should put it through a real test, and I was able to successfully deploy a fully operational VMware Cloud Foundation (VCF) Holodeck solution!
Since the Intel NUC is a consumer platform, I was surprised by how responsive the deployment was: it took a little over two hours to complete, and the environment was fully accessible, with no noticeable performance degradation when logging into SDDC Manager or the vSphere UI.
My second experiment used a more recent hardware platform, the ASUS PN64-E1, which had 96GB of DRAM. After enabling NVMe Tiering on the same 1TB NVMe device, I was able to reach 480GB (which is the screenshot at the very top of this blog post).
Note: I opted to leave all CPU cores enabled, and I did observe that the overall deployment took a bit longer than on the Intel 12th Generation CPU; I also had to retry the bringup operation a couple of times with Cloud Builder because the NSX VM had to be rebooted. It eventually did complete, so if you are using an Intel 13th Gen or later CPU, you may want to disable the E-Cores. Even though this system had more physical DRAM, the impact was more about the CPU than actual memory, which speaks volumes about how robust the NVMe Tiering capability is!
While I was able to supercharge several of my consumer-grade systems, just imagine the possibilities with a more powerful system and server-grade CPU and memory, or what this could mean for the Edge! The possibilities are truly endless, not to mention the types of workloads vSphere can now enable at a much lower cost!
Have I piqued your interest in upgrading to the latest vSphere 8.0 Update 3 and taking advantage of the new NVMe Tiering capability? What additional workloads might you be able to run now?
Below are the steps to configure NVMe Tiering:
Step 0 - Ensure that you have a single NVMe device that is not in use or partitioned before enabling NVMe Tiering; you cannot share the device with any existing functions. You should also review the KB 95944 article for additional considerations and restrictions before using NVMe Tiering.
UPDATE (09/13/24) - Ensure you also have a supported CPU, or you may not be able to use NVMe Tiering even after successfully configuring it. Please see this blog post HERE for more details.
Step 1 - Enable the NVMe Tiering feature by running the following ESXCLI command:
esxcli system settings kernel set -s MemoryTiering -v TRUE
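Before moving on, it can be helpful to confirm the kernel setting was accepted. Assuming the option name matches what you just set, reading it back should show a configured value of TRUE (a quick sanity check, not a required step):

esxcli system settings kernel list -o MemoryTiering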
Step 2 - Configure a specific NVMe device for use with NVMe Tiering by running the following command and providing the path to your NVMe device:
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_960_EVO_1TB_________________8AC1B17155382500
Note: After enabling NVMe Tiering for your NVMe device, you can see which device is configured by using "esxcli system tierdevice list". This is a one-time operation, meaning that if you reinstall ESXi or move the NVMe device, it will still contain the partition that marks the device for NVMe Tiering.
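If you are not sure which device path to pass to the tierdevice create command, one quick way (assuming your NVMe device shows up as a standard disk device) is to list the entries under /vmfs/devices/disks/ or filter the storage device list for NVMe entries, and then copy the full t10.NVMe____ path:

ls /vmfs/devices/disks/
esxcli storage core device list | grep -i nvme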
Step 3 - Configure the desired NVMe Tiering percentage (25-400) based on your physical DRAM configuration by running the following command:
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
Note: To learn more about the NVMe Tiering percentage configuration, please see the PDF document at the bottom of the KB 95944 article.
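As a rough sizing example based on the systems in this post: the tier size is a percentage of your physical DRAM, so 64GB of DRAM with TierNvmePct set to 400 yields roughly 64GB + (64GB x 4) = 320GB of total memory (in line with the ~318GB seen on the Intel NUC), while 96GB of DRAM at 400% works out to 96GB + 384GB = 480GB, as shown in the screenshot at the top of the post. You can also read the configured value back with:

esxcli system settings advanced list -o /Mem/TierNvmePct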
Step 4 - Reboot the ESXi host for the changes to take effect. After ESXi fully boots up, you will see the updated memory capacity that has been enabled by your NVMe device.
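If you prefer to verify from the ESXi Shell rather than the vSphere UI, the memory capacity reported by the host, which with NVMe Tiering active should reflect the expanded amount shown in the UI, can be checked with:

esxcli hardware memory get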
Dennis Faucher says
That's a great feature. Thanks for the post.
Paul Braren says
I agree, this looks rather promising for tinkerers.
Simon SHAW says
How can you afford vSphere ESXi at home now there's no free trial?
Bard says
The cheapest option is VMUG Advantage; it's a $180 license ($200, but it's often on discount) for home/lab use of ALL VMware products, including of course ESXi and vCenter (but also vSAN, Tanzu, etc.). It might not include Horizon anymore though, since they are selling it afaik (or have sold it already?), but I have not verified that.
Steffen says
+1 on what Bard wrote about VMUG Advantage.
And only ESXi was free, not vCenter, vSAN, etc., so the change introduced by Broadcom isn't that big of a thing IMHO. With VMUG Advantage you have access to a plethora of products for personal use and knowledge building; it's been a great option for a long time but is still so unknown.
David Nixon says
Tiering is awesome. Not ready for production (no RAID options), but for labs it is a game changer! In previous testing, we found we can run 1:4 (RAM:TIER) and clients will not see any change. With RAM being half the cost of hardware...
Bruce Ferrell says
Gee Whiz! Isn't this simply swapping?
*IX has been doing this for decades. It's sneered at now, but it's still there AND used.
Everything old is new again... Especially if it gets a fresh new coat of bike shed paint.
David Nixon says
Nope. Swapping is not intelligent. This is moving inactive pages rather than just clobbering until under a threshold. Plus, with swapping, a page moves down and then has to be copied back before use. There is no copy-back here.
Jason says
Well, looks like Linux has memory tiering as well.
Mehran Dehghan says
If it only used memory pages on NVMe after running out of memory, then you would be right, but I think it uses NVMe as a 2nd or 3rd tier and buffers data on it to give VMs a better chance of keeping their active data in RAM.
Fred says
Hi,
I have 3 x NUC 12 Pro, each with 1 x 256GB SSD (ESXi OS) + 1 x 1TB NVMe (vSAN cache) + 1 x 4TB SSD (vSAN data) + 64GB RAM.
In your opinion, what is the best config to implement the RAM tiering?
Fred says
I would like to use vSAN as well.
William Lam says
There are a few options ... With an Intel NUC or any other kit that has Thunderbolt 3/4, you could get a TB chassis and add more devices that can then be used. If that isn't something you're interested in AND you're looking to use vSAN, then you'll need to default to vSAN ESA, since OSA requires at least two disks and that'll limit your ability to use NVMe Tiering, and we typically recommend ESXi run on a reliable device. If you can get an NVMe device that supports multiple namespaces https://williamlam.com/2023/03/ssd-with-multiple-nvme-namespaces-for-vmware-homelab.html then you could slice up the device for additional functions, OR run ESXi on USB, which would leave you with a slot for NVMe Tiering.
arvindjagannath says
Memory tiering is more fine-grained in classifying pages; it actively promotes and demotes pages and tries to dampen the effect of page faults on performance.
(Memory tiering with vSAN is under discussion)
arvindjagannath says
Thanks for the post, William
ryzenlike says
Thank you for the interesting article. Regarding the performance of 13th generation CPUs, disabling the E-cores reduces the total CPU frequency. Is performance still better with only the P-cores?
William Lam says
As with anything in tech ... it'll depend on your use case and workloads on whether you need more cores or simply the performance of the P-Cores. Definitely worth testing both and seeing what works the best for your setup
ryzenlike says
thank you for your reply.
I understand that high core count and high frequency alone do not guarantee performance.
mattheldstab says
Cool stuff -- thanks for the post, William!
Nathan Daggett says
Can I provision an NVMe-oF datastore from PowerStore and have my hosts all access it? Can this be used to improve VDI performance in my healthcare enterprise?
Alex says
Isn't this going to wear out a consumer-grade SSD really fast?
If I use 25% of a 2TB SSD, is it wise to use the rest of the SSD for VMs, or better to leave it unused for TRIM / block reallocation / whatever?
Will this work with SATA SSDs too? It won't break any speed records, but maybe I want to test something overnight that requires more RAM than I have in my NUC12WS?
SemoTech says
Hey William, this is great stuff, but I noticed something strange: in ESXi 7.0U3q on a MacMini 2014 with an internal SATA SSD and an external 250GB USB drive (backup storage for a VM), the host automatically shows that "Virtual Flash" is enabled with a capacity of 19.75 GB. Yet in ESXi 8.0U3 on a MacMini 2018 with the internal PCIe NVMe and both TB3- and USB-connected storage, the "Virtual Flash" feature is not working. Any ideas why, or how to enable Virtual Flash for the ESXi 8 MacMini 2018 as well? I tried to "Add Capacity" using vCenter for the host, but no available drives were shown even though all storage is already visible under Devices and marked as SSD/Flash. Do you happen to have a tutorial on enabling a host's "Virtual Flash" for ESXi 8? Thanks.
Loren says
This is great stuff! Just enabled this on my home lab server. Took the memory from 256GB to just over 1TB! Going to put it through its paces. Nested VCF here I come!
Sho says
Nice to meet you.
I set up memory tiering using the method above and deployed VCF 5.1, but after a few hours the nested ESXi hung, and after rebooting, vCenter, NSX, etc. were damaged and the cluster did not start normally.
If you know, could you please let me know how you got the VCF environment running?
Thank you for reading.
Rafael says
Pretty much have retired ESXi from my home lab.
Sho says
Thanks for the great article.
I have one question: the technical guide says that using vmxnet3 will slow down the network.
Do you know how much of an impact this has based on actual measurements using VLC LAB?
Thomas says
Hi William,
If I were to buy a spare NVME, get a compatible enclosure, and connect it to my laptop, will this allow me to take advantage of the extra RAM? Very interested in this for my home lab.
William Lam says
Must be seen as a PCIe device (e.g. no USB)
Thomas says
Awesome! Thank you so much for replying. Will this also work in my nested homelab environment that's running in Workstation?
David Nixon says
I tried it with a server (FC disks) in a nested environment and it worked.
Thomas says
Awesome, thank you so much! I am gonna get another NVME and give it a shot!
Duncan says
Wow, this is amazing. If only it was possible to allow the NVMe to do double duty and still operate as part of vSAN. I'm assuming it's one or the other?
David Nixon says
vSAN and memory tiering are mutually exclusive by host.
Jason says
Tried it with a 32GB Optane NVMe on a NUC6 with 64GB RAM, and tried different percentages between 24 and 45. While the commands work and the tier device is seen, there is no difference in the RAM detected. Would be curious if there is a minimum requirement for the NVMe, etc.
William Lam says
Jason - It should just work. Can you provide the vm-support bundle with a direct download link? I can get this to Engr to see what's going on.
Jason says
Sent you the link to the Google Drive share a couple of days ago. Didn't get a confirmation whether you received it.
William Lam says
I didn't get anything, can I ask how you'd shared it?
Jason says
I sent the link to this email address - info[dot]virtuallyghetto[at]gmail[dot]com. File shared on Google Drive without login required.
William Lam says
Just emailed you
Chris Childerhose says
Hi Jason,
Did you ever get this to work? I have older NUCs but NVMe drives in each of them to use for tiering, and mine is the same. Set it up, reboot, but the RAM size never changes.
Chris
Jason says
No I didn't. I swapped mine out from one of those with a dual-core i5 or i7 that I bought by mistake.
Every command works, but it just doesn't see the tier memory. ESXi sees the NVMe as "NVMe Optane Memory", not sure if that's the usual description. Anyway, the 32GB really just has 28GB of capacity and it would be cool to test out, but otherwise just not worth that much effort. Put in an actual NVMe SSD and everything just works. However, there are a few considerations.
Memory reservation doesn't work - makes sense.
Suspending a VM doesn't work - kind of sxxxs
Chris Childerhose says
I am using a 1TB NVMe drive and it does not work. My NUCs are Skull Canyon with an i7-6770HQ CPU, so I am assuming I will need to wait until I can bypass some settings as William mentions in the other blog post. Oh well, worth a try.
Tim says
I had some older 800GB Fusion-io mezz cards lying around; I think this is a perfect use for them. Thanks!
Former Vmware Fanboy says
I can't even get access to our business licenses, never mind a homelab... VMware is such a joke now.
vDudeJon says
Does this consume the whole disk or can you use the remainder as a local datastore?
William Lam says
Updated the blog post, see Step 0
MtGimper says
That's a shame it takes the whole drive. Hopefully something that changes as the product matures.
justinmpaul says
Being able to see the shipment trends for RAM per box that goes out the door here at HPE, I have to wonder if anyone except home lab folks really cares about this? Don't get me wrong, it's cool for home labs, but it just seems like something added in to try to lure back some of the fanboys that have split up with Broadcom.
David Nixon says
Non-homelab user here. I'm SUPER EXCITED about this. In testing PMEM, we found that only 25% of our allocated RAM is actually active (really active, not the misleading active memory counter). However, the lack of RAID or RAID-ish functionality is keeping me from even testing it in our enterprise. I can't have a drive go out and take down the entire server. I hear Windows doesn't take kindly to half of its RAM going offline. Hopefully vSphere Next will fix this. Taking a server from 2TB of RAM down to 512GB or 1TB will cut 30-50% off the server price.
George says
If the NVMe SSD is not in a RAID configuration, it is better not to set it to 400. You will find that in-memory systems or DB writes are very slow (my NVMe SSD is 7200MB/s). ESXi also recommends not exceeding the upper limit of the memory capacity, so 25-100 is best. In addition, VMs that use a GPU and need to reserve memory cannot use NVMe Tiering.
PeterGibbins says
Ran the commands and they all executed correctly, but the additional RAM is not showing after reboot.
William Lam says
I suspect you're facing the following https://williamlam.com/2024/09/quick-tip-nvme-tiering-configured-but-not-working.html
PetterGibbons says
Yes, looks like the CPU in the host I was using does not support vMMR. Thanks for the info.
Luis says
This is interesting. I'm running on "Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz" and I can enable NVMe Memory Tiering, but I lose Intel Virtualization. Once I try to boot the nested ESXi I get the typical "Failed - VMware ESX does not support nested virtualization on this host."
Errors
VMware ESX does not support nested virtualization on this host.
Module 'VMMon' power on failed.
Failed to start the virtual machine."
Chris B says
Really looking forward to trying this. I can see some real use cases for this, and it could be pretty impactful at the edge or in subscale deployments. I note the KB says not to run it in a vSAN environment; has anyone tried it in a vSAN ESA environment? Does anyone know the vSAN support roadmap?
Fred says
Hi,
It's working for me!!
Intel NUC 12 Pro with a UGREEN TB4 40Gb/s enclosure and a 1TB Samsung 990 PRO NVMe
https://www.ugreen.com/collections/enclosures/products/ugreen-40gbps-m-2-nvme-enclosure-with-cooling-fan
chris says
Hi, fantastic solution for VCF! One question: if I use a 2TB external NVMe (UGREEN 40Gbps enclosure), how much memory will I add? About 1TB considering the 400% setting?
William Lam says
Please re-read the blog post, this is explained in detail.
Shariful says
Hi,
In your setup, do those esxi1-4 hosts run any VMs? If they are running VMs, did you enable nested virtualization? According to the Broadcom KB article, nested virtualization is not supported with NVMe tiering.
Fred says
I have 64GB in my NUC.
So 64 + 64x4 = 320GB.
Renato says
Works perfectly also on the AMD ChangWang CW56-58; I purchased 3 after reading Will's review https://williamlam.com/2023/01/esxi-on-amd-changwang-cw56-58.html .
320GB !!!
k d says
Although the system indicates it has 490GB of memory, I am unable to reserve, for example, 48GB for a VM. The VM fails to boot, displaying the error "The host does not have sufficient memory resources to satisfy the reservation." I suspect this may be a bug. If ESXi labels NVMe-tiered memory as 'memory,' then it should permit users to reserve it; note I tried setting the memory reservation in vCenter. I am now persuaded that memory tiering is ineffective for a homelab setup where I need to run large memory-intensive VMs, despite its potential to increase overall throughput and reduce ownership costs when dealing with numerous small VMs of equal importance.
William Lam says
Can you provide support bundle?
k d says
https://drive.google.com/file/d/1LvWbdMqdMK57xCRfQOe_uqLYhBeYDsjU/view?usp=drive_link
William Lam says
Thanks. I've shared this w/lead Engr. Let's see what he comes back with.
csmith334f5a0272 says
Hi William,
Is the Tech Preview memory tiering feature removed in the ESXi 8.0d release?
I can't use it anymore after the update.
William Lam says
Not afaik. What issue are you seeing?
csmith334f5a0272 says
esxcli system settings kernel set -s MemoryTiering -v TRUE
is not recognized as a command anymore.
esxcli system settings kernel set -s swMemoryTiering -v TRUE
does it.
But esxcli system tierdevice create -d /vmfs/devices/disks/[your device]
is also not recognized anymore.
I am looking for the new command to create the tiering device...
William Lam says
Let me ping Engr about this
William Lam says
Can you please provide link to support bundle post-upgrade?
csmith334f5a0272 says
I used the newest ISO from the Broadcom Portal available under my entitlement.
VMware-VMvisor-Installer-8.0d-24118393.x86_64.iso
https://support.broadcom.com/web/ecx/solutiondetails?patchId=5484
But now I see that it is a release that is not intended for general use. The release is specifically for compliance with Common Criteria assurance components.
Maybe that is the problem.
I guess I will go back one version to 8.0U3b, or do you have another idea?
William Lam says
Are you a customer or partner? The latest 8.0U3 release can be found at https://knowledge.broadcom.com/external/article/316595/build-numbers-and-versions-of-vmware-esx.html
David Nixon says
8.0d-24118393.x86 doesn't have it. You need:
ESXi 8.0.3 P04 (ESXi 8.0 Update 3b, released 2024/09/17, build 24280767)
William Lam says
David is correct. 8.0d is NOT part of the 8.0 Update 3 branch, so you basically went backwards in your update.
This is why I pointed to the KB, which outlines all current ESXi releases: if you're on 8.0 Update 3, then the latest in that branch is 3b (as noted by David). NVMe Tiering was introduced in 8.0 Update 3 and later, so this would explain why you're not seeing the commands.
csmith334f5a0272 says
That is exactly where I got it from. I am a customer with an entitlement for ESXi.
csmith334f5a0272 says
I guess I'll wait for the Engr's reply to you.
William Lam says
Need support bundle as requested
csmith334f5a0272 says
Will provide it this evening
Chris says
Ah ok. Got it. I was looking more at the release date and less at the release version. I will then use the 8.0 Update 3 version.
Thank you and David for your explanation and help!
devil-it says
Is it possible to use this mechanism and then run nested ESXi hosts on such a physical host? Or will it not work?
William Lam says
Did you read the blog post? It's using NVMe Tiering w/Nested ESXi
Plamen Iliev says
Hi William, is there still a requirement for 2 NICs on that single box, or is one enough?