
NVMe Tiering in vSphere 8.0 Update 3 is a Homelab game changer!

08.05.2024 by William Lam // 88 Comments

As someone who is always on the lookout for interesting and clever ways to get the most out of a vSphere homelab investment, I was surprised there has not been more noise about the new NVMe Tiering capability in vSphere 8.0 Update 3!?

NVMe Tiering is currently in Tech Preview and it enables ESXi to use an NVMe device as a secondary tier of memory for your workloads, which IMHO makes it one of the killer features in vSphere 8.0 Update 3, especially with some interesting implications for Homelabs!

As the old saying goes, a picture is worth a thousand words ...


The picture on the left shows a system with 64GB of memory (DRAM) available before enabling NVMe Tiering, and the one on the right shows the amount of memory available after enabling NVMe Tiering, which is a whopping 480GB! 🤯

For my initial setup, I used an older Intel NUC 12 Enthusiast, as it allows for up to 3 x NVMe devices, which I have allocated to the ESXi installation, the workload datastore and NVMe Tiering. The maximum amount of physical DRAM that the Intel NUC 12 Enthusiast supports is 64GB, which I have fully maxed out, and I am using a 1TB NVMe device for NVMe Tiering, which is how I was able to get to 318GB of memory on my physical ESXi host running on the Intel NUC!
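
If you are wondering why that number is ~320GB rather than the full 1TB of the NVMe device, it is because the tier size is derived from the NVMe Tiering percentage (see Step 3 below) applied to the physical DRAM, not from the capacity of the device. The rough math for my NUC looks like this (the small gap to the reported 318GB is presumably just system overhead):

64GB DRAM + (64GB x 400%) NVMe tier = 64GB + 256GB = ~320GB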

So how usable is the Intel NUC with the "extra" available memory? ... Well, I figured I should put it through a real test, and I was able to successfully deploy a fully operational VMware Cloud Foundation (VCF) Holodeck solution! 😎


Since the Intel NUC is a consumer platform, I was surprised at how responsive and quick the overall deployment was: it took a little over ~2hrs to complete, and the environment was fully usable, with no real performance degradation noticeable when logging into SDDC Manager or the vSphere UI.

My second experiment used a more recent hardware platform, the ASUS PN64-E1, which has 96GB of DRAM; after enabling NVMe Tiering on the same 1TB NVMe device, I was able to reach the 480GB shown in the screenshot at the very top of this blog post.

Note: I opted to leave all CPU cores enabled, and I did observe that the overall deployment took a bit longer than on the Intel 12th Generation CPU; I also had to retry the bringup operation a couple of times with Cloud Builder, as the NSX VM had to be rebooted. It eventually did complete, so if you are using an Intel 13th Gen or later CPU, you may want to disable the E-Cores. Even though I had more physical DRAM, the impact was more about the CPU than the actual memory, which speaks volumes about how robust the NVMe Tiering capability is!

While I was able to supercharge several of my consumer-grade systems, just imagine the possibilities with a more powerful system with a server-grade CPU and memory, or what this could mean for the Edge!? The possibilities are truly endless, not to mention the types of workloads vSphere can now enable at a much lower cost! 🙌

Have I piqued your interest in upgrading to the latest vSphere 8.0 Update 3 and taking advantage of the new NVMe Tiering capability? What additional workloads might you be able to run now?

Below are the steps to configure NVMe Tiering:

Step 0 - Ensure that you have a single NVMe device that is not in use or partitioned before enabling NVMe Tiering; you cannot share the device with any existing functions. You should also review the KB 95944 article for additional considerations and restrictions before using NVMe Tiering.

UPDATE (09/13/24) - Ensure you also have a supported CPU, or you may not be able to use NVMe Tiering even after successfully configuring it. Please see this blog post HERE for more details.
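
If you need to identify the device path that you will pass to the command in Step 2 below, one quick way (just an example; any method that shows the t10.NVMe____ identifier of your spare device will do) is to list the disk devices from the ESXi Shell:

ls /vmfs/devices/disks/ | grep -i nvme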

Step 1 - Enable the NVMe Tiering feature by running the following ESXCLI command:

esxcli system settings kernel set -s MemoryTiering -v TRUE
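
If you would like to confirm the setting took before moving on, it should be possible to read it back using the same ESXCLI namespace (shown here as an optional sanity check):

esxcli system settings kernel list -o MemoryTiering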

Step 2 - Configure a specific NVMe device for use with NVMe Tiering by running the following command and providing the path to your NVMe device:

esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_960_EVO_1TB_________________8AC1B17155382500

Note: After enabling NVMe Tiering on your NVMe device, you can see which device is configured by using "esxcli system tierdevice list". This is also a one-time operation, which means that if you reinstall ESXi or move the NVMe device, it will still contain the partition that marks the device for NVMe Tiering.

Step 3 - Configure the desired NVMe Tiering percentage (25-400) based on your physical DRAM configuration by running the following command:

esxcli system settings advanced set -o /Mem/TierNvmePct -i 400

Note: To learn more about the NVMe Tiering percentage configuration, please see the PDF document at the bottom of the KB 95944 article.
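
Similarly, if you want to double-check the percentage you just configured before rebooting, the advanced option can be read back with:

esxcli system settings advanced list -o /Mem/TierNvmePct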


Step 4 - Reboot the ESXi host for the changes to take effect. After ESXi fully boots up, you will see the updated memory capacity that has been enabled by your NVMe device.
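
If the additional capacity does not show up after the reboot, a quick sanity check from the ESXi Shell is to confirm the tier device is still configured (this is the same command referenced in the note under Step 2):

esxcli system tierdevice list

If the device is listed but the memory capacity has not changed, refer to the UPDATE under Step 0 regarding supported CPUs.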


Categories // ESXi, Home Lab, Nested Virtualization, VMware Cloud Foundation, vSphere 8.0 Tags // NVMe, VMware Cloud Foundation, vSphere 8.0 Update 3

Comments

  1. Dennis Faucher says

    08/05/2024 at 9:29 am

    That's a great feature. Thanks for the post.

    Reply
    • Paul Braren says

      08/05/2024 at 7:46 pm

      I agree, this looks rather promising for tinkerers.

      Reply
      • Simon SHAW says

        08/06/2024 at 6:28 pm

        How can you afford vSphere ESXi at home now there's no free trial?

        Reply
        • Bard says

          08/07/2024 at 2:44 am

          The cheapest option is VMUG Advantage; it's a $180 (normally $200, but it's often on discount) license for home/lab use of ALL VMware products, including of course ESXi and vCenter (but also vSAN, Tanzu, etc). It might not include Horizon anymore, though, since they are selling it afaik (or sold it already?), but I have not verified that.

          Reply
        • Steffen says

          08/18/2024 at 6:35 am

          +1 on what Bard wrote about VMUG Advantage.

          And only ESXi was free, not vCenter, vSAN, etc., so the change introduced by Broadcom isn't that big of a thing IMHO. With VMUG Advantage you have access to a plethora of products for personal use and knowledge gaining; it's been a great option for a long time but is still surprisingly unknown.

          Reply
  2. David Nixon says

    08/05/2024 at 9:32 am

    Tiering is awesome. Not ready for production (no RAID options), but for labs, it is a game changer! In previous testing, we found we can run 1:4 (RAM:TIER) and clients will not see any change. With RAM being half the cost of hardware...

    Reply
  3. Bruce Ferrell says

    08/05/2024 at 9:52 am

    Gee Whiz! Isn't this simply swapping?
    *IX has been doing this for decades. It's sneered at now, but it's still there AND used.

    Everything old is new again... Especially if it gets a fresh new coat of bike shed paint.

    Reply
    • David Nixon says

      08/05/2024 at 10:01 am

      Nope. Swapping is not intelligent. This is moving inactive pages rather than just clobbering until you're under a threshold. Plus, with swapping, a page moves down and then has to be copied back before use. There is no copy-back here.

      Reply
      • Jason says

        08/24/2024 at 10:52 am

        Well, looks like Linux has memory tiering as well.

        Reply
    • Mehran Dehghan says

      09/28/2024 at 2:35 pm

      If it only used memory pages on NVMe after running out of memory, then you would be right, but I think it uses NVMe as a 2nd or 3rd tier and buffers data on it to give VMs a better chance of keeping their active data in RAM.

      Reply
  4. Fred says

    08/05/2024 at 2:24 pm

    Hi,

    I have 3 NUC 12 Pro units, each with 1 SSD 256GB (ESXi OS) + 1 NVMe 1TB (vSAN cache) + 1 SSD 4TB (vSAN data) + 64GB RAM.
    In your opinion, what is the best config to implement the RAM tiering?

    Reply
    • Fred says

      08/05/2024 at 2:25 pm

      I would like to use vSAN also.

      Reply
    • William Lam says

      08/05/2024 at 6:33 pm

      There are a few options ... With the Intel NUC or any other kit that has Thunderbolt 3/4, you could get a TB chassis and add more devices that can then be used. If that isn't something you're interested in AND you're looking to use vSAN, then you'll need to default to vSAN ESA, since OSA requires at least two disks, and that'll limit your ability to use NVMe Tiering (and we typically recommend ESXi run on a reliable device). If you can get an NVMe device that supports multiple namespaces https://williamlam.com/2023/03/ssd-with-multiple-nvme-namespaces-for-vmware-homelab.html then you could slice up the device for additional functions, OR run ESXi on USB, which would leave you with a slot for NVMe Tiering

      Reply
  5. arvindjagannath says

    08/05/2024 at 2:54 pm

    Memory tiering is more finely granular in classifying pages, actively promotes and demotes pages, and tries to dampen the effects of page faults on performance.
    (Memory tiering with vSAN is under discussion)

    Reply
    • arvindjagannath says

      08/05/2024 at 2:54 pm

      Thanks for the post, William

      Reply
  6. ryzenlike says

    08/05/2024 at 3:53 pm

    Thank you for the interesting article. Regarding the performance of the 13th generation CPU, disabling the E-cores reduces the total CPU frequency. Is performance still better with only the P-cores?

    Reply
    • William Lam says

      08/05/2024 at 6:35 pm

      As with anything in tech ... it'll depend on your use case and workloads on whether you need more cores or simply the performance of the P-Cores. Definitely worth testing both and seeing what works the best for your setup

      Reply
      • ryzenlike says

        08/05/2024 at 11:47 pm

        thank you for your reply.
        I understand that high core count and high frequency alone do not guarantee performance.

        Reply
  7. mattheldstab says

    08/05/2024 at 7:34 pm

    Cool stuff -- thanks for the post, William!

    Reply
  8. Nathan Daggett says

    08/05/2024 at 8:07 pm

    Can I provision an NVMe-oF datastore from PowerStore and have my hosts all access it? Can this be used to improve VDI performance in my healthcare enterprise?

    Reply
  9. Alex says

    08/06/2024 at 1:58 am

    Isn't this going to wear out a consumer-grade SSD really fast?
    If I use 25% of a 2 TB SSD, is it wise to use the rest of the SSD for VMs, or better to leave it unused for TRIM / block reallocation / whatever?
    Will this work with SATA SSDs too? It won't break any speed records, but maybe I want to test something overnight that requires more RAM than I have in my NUC12WS?

    Reply
  10. SemoTech says

    08/06/2024 at 10:14 am

    Hey William, this is great stuff but I noticed something strange: in ESXi 7.0U3q on a MacMini 2014 with an internal SATA SSD and an external 250GB USB drive (backup storage for a VM), the host automatically shows that "Virtual Flash" is enabled with a capacity of 19.75 GB. Yet in ESXi 8.0U3 on a MacMini 2018 with the internal PCIe NVMe and both TB3 and USB connected storage, the "Virtual Flash" feature is not working. Any ideas why, or how to enable Virtual Flash for the ESXi 8 MacMini 2018 as well? I tried to "Add Capacity" using vCenter for the host but no available drives were shown, even though all storage is already visible under Devices and marked as SSD/Flash. Do you happen to have a tutorial on enabling a host's "Virtual Flash" for ESXi 8? Thanks.

    Reply
  11. Loren says

    08/06/2024 at 6:59 pm

    This is great stuff! Just enabled this on my home lab server. Took the memory from 256GB to just over 1TB! Going to put it through its paces. Nested VCF here I come!

    Reply
    • Sho says

      08/15/2024 at 11:10 pm

      Nice to meet you.

      I set up memory tiering using the method above and deployed VCF 5.1, but after a few hours the nested ESXi hung, and after rebooting, vCenter, NSX, etc. were damaged and the cluster did not start normally.

      If you know, could you please let me know how you got the VCF environment running?

      Thank you for reading.

      Reply
  12. Rafael says

    08/06/2024 at 8:25 pm

    Pretty much have retired ESXi from my home lab.

    Reply
  13. Sho says

    08/06/2024 at 8:49 pm

    Thanks for the great article.

    I have one question: the technical guide says that using vmxnet3 will slow down the network.

    Do you know how much of an impact this has based on actual measurements using VLC LAB?

    Reply
  14. Thomas says

    08/07/2024 at 1:22 pm

    Hi William,

    If I were to buy a spare NVME, get a compatible enclosure, and connect it to my laptop, will this allow me to take advantage of the extra RAM? Very interested in this for my home lab.

    Reply
    • William Lam says

      08/07/2024 at 2:05 pm

      Must be seen as PCIe device (eg no USB)

      Reply
      • Thomas says

        08/07/2024 at 2:46 pm

        Awesome! Thank you so much for replying. Will this also work in my nested homelab environment that's running in workstation as well?

        Reply
        • David Nixon says

          08/07/2024 at 3:00 pm

          I tried it with a server (FC disks) into a nested environment and it worked.

          Reply
          • Thomas says

            08/07/2024 at 8:45 pm

            Awesome, thank you so much! I am gonna get another NVME and give it a shot!

  15. Duncan says

    08/08/2024 at 6:40 am

    Wow, this is amazing. If only it was possible to allow the NVMe to do double duty and still operate as part of vSAN. I'm assuming it's one or the other?

    Reply
    • David Nixon says

      08/08/2024 at 6:45 am

      vSAN and memory tiering are mutually exclusive by host.

      Reply
  16. Jason says

    08/08/2024 at 7:03 am

    Tried it with a 32G Optane NVMe on a NUC6 with 64G RAM, and tried different percentages between 24 and 45. While the commands work and the tier device is seen, there is no difference in the RAM detected. Would be curious if there is a minimum requirement for the NVMe, etc.

    Reply
    • William Lam says

      08/08/2024 at 12:13 pm

      Jason - It should just work. Can you provide the vm-support bundle with direct download link, I can get this to Engr to see what's going on

      Reply
      • Jason says

        08/13/2024 at 11:16 am

        Sent you the link to the Google Drive share a couple of days ago. Didn't get a confirmation whether you received it.

        Reply
        • William Lam says

          08/13/2024 at 11:22 am

          I didn't get anything, can I ask how you'd shared it?

          Reply
          • Jason says

            08/13/2024 at 4:43 pm

            I sent the link to this email address - info[dot]virtuallyghetto[at]gmail[dot]com. The file is shared on Google Drive without login required

          • William Lam says

            08/13/2024 at 5:37 pm

            Just emailed you

    • Chris Childerhose says

      08/16/2024 at 11:21 am

      Hi Jason,

      Did you ever get this to work? I have older NUCs but NVMe drives in each of them to use for tiering, and mine is the same: set it up, reboot, but the RAM size never changes.

      Chris

      Reply
      • Jason says

        08/24/2024 at 11:02 am

        No I didn't. I swapped mine out from one of those with a dual-core i5 or i7 that I bought by mistake.
        Every command works but it just doesn't see the tier memory. ESXi sees the NVMe as "NVMe Optane Memory", not sure if that's the usual description. Anyway, the 32G really just has 28G of capacity; it would be cool to test out but otherwise just not worth that much effort. Put in an actual NVMe SSD and everything just works. However, there are a few considerations.
        Memory reservation doesn't work - makes sense.
        Suspending a VM doesn't work - kind of sxxxs

        Reply
        • Chris Childerhose says

          09/23/2024 at 12:14 pm

          I am using a 1TB NVMe drive and it does not work. My NUCs are Skull Canyon with the i7-6770HQ CPU, so I am assuming I will need to wait until I can bypass some settings as William mentions in the other blog. Oh well, worth a try.

          Reply
  17. Tim says

    08/08/2024 at 10:41 am

    I had some older 800GB Fusion-io Mezz cards lying around; I think this is a perfect use for them. Thanks!

    Reply
  18. Former Vmware Fanboy says

    08/08/2024 at 2:20 pm

    i can't even get access to our business licenses, never mind homelab...vmware is such a joke now

    Reply
  19. vDudeJon says

    08/22/2024 at 6:53 am

    Does this consume the whole disk or can you use the remainder as a local datastore?

    Reply
    • William Lam says

      08/22/2024 at 8:28 am

      Updated the blog post, see Step 0 🙂

      Reply
      • MtGimper says

        08/22/2024 at 1:18 pm

        That's a shame it takes the whole drive. Hopefully something that changes as the product matures.

        Reply
  20. justinmpaul says

    08/26/2024 at 2:46 pm

    Being able to see the shipment trends for RAM/box that goes out the door here at HPE, I have to wonder if anyone except home lab folks really cares about this? Don't get me wrong, it's cool for home labs, but it just seems like something added in to try to lure back some of the fanboys that have split up with Broadcom.

    Reply
    • David Nixon says

      08/26/2024 at 5:03 pm

      Non-homelab user here. I'm SUPER EXCITED about this. In testing PMEM we found that only 25% of our allocated RAM is actually active (really active, not the misleading active memory counter). However, the lack of RAID or RAID-ish functionality is keeping me from even testing it in our enterprise. I can't have a drive go out and take down the entire server. I hear Windows doesn't take kindly to half of its RAM going offline. Hopefully vSphere Next will fix this. Taking a 2TB-of-RAM server down to 512GB or 1TB will cut 30-50% of the server price.

      Reply
  21. George says

    08/29/2024 at 11:48 pm

    If the NVMe SSD is not in a RAID, it is better not to go all the way up to 400. You will find that in-memory systems or DB writes become very slow (my NVMe SSD is 7200MB/s). ESXi also recommends not exceeding the upper limit of the memory capacity, so 25-100 is best. In addition, VMs that use a GPU and need to reserve memory cannot use NVMe Tiering.

    Reply
  22. PeterGibbins says

    09/12/2024 at 10:47 pm

    Ran the commands and they all executed correctly, but the additional RAM is not showing after reboot.

    Reply
    • William Lam says

      09/13/2024 at 5:42 pm

      I suspect you're facing the following https://williamlam.com/2024/09/quick-tip-nvme-tiering-configured-but-not-working.html

      Reply
      • PetterGibbons says

        09/15/2024 at 7:12 pm

        Yes, looks like the cpu in the host I was using does not support vMMR. Thanks for the info

        Reply
        • Luis says

          10/04/2024 at 11:40 am

          This is interesting. I'm running on "Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz" and I can enable NVMe Memory Tiering, but I lose Intel Virtualization. Once I try to boot the nested ESXi I get the typical errors:
          VMware ESX does not support nested virtualization on this host.
          Module 'VMMon' power on failed.
          Failed to start the virtual machine.

          Reply
  23. Chris B says

    09/18/2024 at 4:14 am

    Really looking forward to trying this. I can see some real use cases for this and it could be pretty impactful at the edge or in subscale deployments. I note the KB says not to run in a vSAN environment, has anyone tried it in a vSAN ESA environment? Does anyone know the vSAN support roadmap?

    Reply
  24. Fred says

    09/24/2024 at 1:18 pm

    Hi,

    It's working for me!!
    Intel NUC 12 Pro with a Ugreen TB4 40Gbps enclosure and a Samsung 990 PRO 1TB NVMe
    https://www.ugreen.com/collections/enclosures/products/ugreen-40gbps-m-2-nvme-enclosure-with-cooling-fan

    Reply
  25. chris says

    09/25/2024 at 6:27 am

    Hi, fantastic solution for VCF! One question: if I use a 2TB external NVMe (Ugreen 40Gbps enclosure), how much memory will I add? About 1TB considering the 400% setting?

    Reply
    • William Lam says

      09/25/2024 at 8:06 am

      Please re-read blog post, this is explained in detail

      Reply
      • Shariful says

        10/08/2024 at 3:50 pm

        Hi,
        In your setup, do those esxi1-4 hosts run any VMs? If they are running VMs, did you enable nested virtualization? According to the Broadcom KB article, nested virtualization is not supported with NVMe tiering.

        Reply
    • Fred says

      09/26/2024 at 2:58 am

      I have 64GB in my NUC.
      So 64 + 64x4 = 320GB

      Reply
  26. Renato says

    10/05/2024 at 11:35 am

    Works perfectly also on the AMD ChangWang CW56-58; I purchased 3 after reading Will's review https://williamlam.com/2023/01/esxi-on-amd-changwang-cw56-58.html .
    320GB !!!

    Reply
  27. k d says

    10/17/2024 at 8:52 pm

    Although the system indicates it has 490GB of memory, I am unable to reserve, for example, 48GB for a VM. The VM fails to boot, displaying the error "The host does not have sufficient memory resources to satisfy the reservation." I suspect this may be a bug. If ESXi labels NVMe-tiered memory as 'memory', then it should permit users to reserve it; note I tried setting the memory reservation in vCenter. I am now persuaded that memory tiering is ineffective for a homelab setup where I need to run large memory-intensive VMs, despite its potential to increase overall throughput and reduce ownership costs when dealing with numerous small VMs of equal importance.

    Reply
    • William Lam says

      10/17/2024 at 8:54 pm

      Can you provide support bundle?

      Reply
      • k d says

        10/18/2024 at 7:44 am

        https://drive.google.com/file/d/1LvWbdMqdMK57xCRfQOe_uqLYhBeYDsjU/view?usp=drive_link

        Reply
        • William Lam says

          10/18/2024 at 8:32 am

          Thanks. I've shared this w/lead Engr. Let's see what he comes back with

          Reply
          • k d says

            01/09/2025 at 8:00 pm

            Any updates on this? It's been months. Thanks.

  28. csmith334f5a0272 says

    11/04/2024 at 4:22 pm

    Hi William,

    Is the tech preview feature memory tiering removed in the ESXi 8.0d release?
    I can't use it anymore after the update.

    Reply
    • William Lam says

      11/04/2024 at 4:32 pm

      Not afaik. What issue are you seeing?

      Reply
      • csmith334f5a0272 says

        11/04/2024 at 4:39 pm

        esxcli system settings kernel set -s MemoryTiering -v TRUE

        is not recognized as a command anymore.

        esxcli system settings kernel set -s swMemoryTiering -v TRUE

        does it.

        But esxcli system tierdevice create -d /vmfs/devices/disks/[your device]

        is also not recognized anymore.

        I am looking for the new command to create the tiering device...

        Reply
        • William Lam says

          11/04/2024 at 6:16 pm

          Let me ping Engr about this

          Reply
        • William Lam says

          11/04/2024 at 6:54 pm

          Can you please provide link to support bundle post-upgrade?

          Reply
          • csmith334f5a0272 says

            11/05/2024 at 1:31 am

            I used the newest ISO from the Broadcom Portal available under my entitlement.

            VMware-VMvisor-Installer-8.0d-24118393.x86_64.iso

            https://support.broadcom.com/web/ecx/solutiondetails?patchId=5484

            But now I see that this is a release that is not intended for general use. The release is specifically for compliance with Common Criteria assurance components.

            Maybe that is the problem.

            I guess I will go back one version to 8.0U3b, or do you have another idea?

          • William Lam says

            11/05/2024 at 4:09 am

            Are you a customer or partner? The latest 8.0U3 release can be found at https://knowledge.broadcom.com/external/article/316595/build-numbers-and-versions-of-vmware-esx.html

          • David Nixon says

            11/05/2024 at 4:58 am

            8.0d-24118393.x86 doesn't have it. You need ESXi 8.0.3 P04 (ESXi 8.0 Update 3b, 2024/09/17, build 24280767).

          • William Lam says

            11/05/2024 at 7:42 am

            David is correct. 8.0d is NOT part of the 8.0 Update 3 branch, so you basically went backwards with your update 🙂

            This is why I pointed to the KB which outlines all current ESXi releases; if you're on 8.0 Update 3, then the latest in that branch is 3b (as noted by David). NVMe Tiering was introduced in 8.0 Update 3 and later, so this would explain why you're not seeing the commands

  29. csmith334f5a0272 says

    11/05/2024 at 4:27 am

    That is exactly where I got it from. I am a customer with an entitlement for ESXi.

    Reply
  30. csmith334f5a0272 says

    11/05/2024 at 4:28 am

    I guess I'll wait for the Engr's reply to you

    Reply
    • William Lam says

      11/05/2024 at 4:36 am

      Need support bundle as requested

      Reply
      • csmith334f5a0272 says

        11/05/2024 at 4:43 am

        Will provide it this evening

        Reply
  31. Chris says

    11/05/2024 at 1:26 pm

    Ah ok. Got it. I was looking more at the release date and less at the release version 🙈 I will then use the 8.0 Update 3 version.
    Thank you and David for your explanation and help 👍

    Reply
  32. devil-it says

    11/13/2024 at 6:33 am

    Is it possible to use this mechanism to then run nested ESXi hosts on such a physical host? Or will it not work?

    Reply
    • William Lam says

      11/13/2024 at 12:14 pm

      Did you read the blog post? It's using NVMe Tiering w/Nested ESXi 🙂

      Reply
  33. Plamen Iliev says

    11/29/2024 at 4:07 am

    Hi William, is there still a requirement for 2 NICs on that single box, or is one enough?

    Reply
  34. Richard John Hughes says

    12/22/2024 at 5:32 pm

    Will this work on ARM? I have ESXi 7 & 8 running on the Radxa Rock5a right now. I'll have to try the Rock 5b that I have NVMe on.

    Reply
    • William Lam says

      12/22/2024 at 5:42 pm

      Good question! I'm not entirely sure ... I mean, it's the same code base, so it should just "work". This would be huge for some of the smaller kits, please report back!

      Reply
      • Richard John Hughes says

        12/24/2024 at 9:34 am

        I now have ESXi 8 running on both the Rock5A & Rock5B using the appropriate micro SD card containing https://github.com/edk2-porting/edk2-rk3588?tab=readme-ov-file

        The Rock5A does not need a USB NIC, the Rock5B needs a USB NIC.

        The Rock5B will boot off the NVMe, but you need to install ESXi 8 with the NVMe in a USB case & then attach it to the board. 🙁

        Reply
  35. Fred says

    05/04/2025 at 10:09 am

    Hi,

    I found this solution to work with a USB4 NVMe enclosure.
    My hardware design:
    Intel NUC 12 Pro Full Size with 1 SSD 256GB SATA (ESXi OS), 1 SSD 4TB SATA (vSAN data), 1 NVMe SSD 1TB PCIe (vSAN cache) and 1 NVMe SSD 1TB in a USB4 enclosure.

    1) Put the ESXi host in Maintenance Mode and evacuate all data.
    2) Delete the vSAN disk group from the node.
    3) SSH to the node.
    4) esxcli system settings kernel set -s MemoryTiering -v TRUE
    5) esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_990_PRO_1TB_________________8AC1B17155382500 (the vSAN cache NVMe SSD)
    6) esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
    7) Reboot the ESXi host
    8) Check the activation of the memory tiering -> OK
    9) Shut down the ESXi host
    10) Remove the NVMe 1TB (vSAN cache) from the NUC
    11) Put the new NVMe 1TB into the NUC (new vSAN cache)
    12) Put the NVMe 1TB (old vSAN cache) into the USB4 enclosure
    13) Connect the USB4 enclosure to the NUC
    14) Power on the NUC
    15) After ESXi is up, SSH to the node
    16) esxcli hardware usb passthrough device list
    17) esxcli hardware usb passthrough device disable -d 1:4:1058:1140 (Bus#:Dev#:vendorId:productId (e.g. 1:4:1058:1140))
    18) Reboot the ESXi host
    19) Check the activation of the memory tiering -> OK
    20) Recreate the disk group for vSAN

    I know this is not recommended, but it's been working fine for me without issue for 2 weeks now.

    Rgds

    Reply
    • William Lam says

      05/04/2025 at 10:50 am

      Good to know!

      You might be able to simplify the setup by following https://williamlam.com/2024/12/sharing-a-single-nvme-device-with-nvme-tiering-esxi-osdata-vmfs-datastore.html

      Reply
