
Quick Tip - NVMe Tiering configured but not working?

09.13.2024 by William Lam // 10 Comments

Since I published my NVMe Tiering in vSphere 8.0 Update 3 is a Homelab game changer blog post, the feedback and responses have been absolutely phenomenal!

It will be just a matter of time until we can start using RAID with NVMe Tiering!

When it happens, it will be a HUGE game changer!

BTW, I'm already using it in my Lab Environment!!

It's F**** awesome! pic.twitter.com/h6Np972RcQ

— Chris ✈️🇧🇷🇵🇹🇺🇸🌍 (@crismsantos) September 4, 2024

In fact, during VMware Explore, a number of users shared with me in person that they had not only updated to vSphere 8.0 Update 3 after learning about the feature, but were extremely happy that their hardware became even more capable with just a software upgrade, with workloads ranging from general infrastructure VMs to the full VMware Cloud Foundation (VCF) stack.

Right before VMware Explore, I did have a couple of users report that after successfully configuring NVMe Tiering and rebooting their ESXi host, the memory capacity did not change. After they shared the details along with vm-support bundles, Engineering identified the root cause.

The current implementation of NVMe Tiering requires the vMMR capability, which is found in datacenter processors such as Intel Broadwell, Skylake, Cascade Lake, Ice Lake and newer, or AMD Milan and newer. Some Intel consumer processors, while they may be in the same family (such as Skylake), actually do NOT contain the vMMR capability like their datacenter peers.

If your CPU does not have vMMR, then NVMe Tiering today will not work after it is enabled. If the following entries are found in the vmkernel log after you have enabled NVMe Tiering, it means your CPU is not capable of supporting vMMR:

grep MemHwCounters /var/log/vmkernel.log

2024-08-11T16:03:18.906Z In(182) vmkernel: cpu2:1048835)MemHwCounters: 501: Matching PCI device not found on this host. Exiting.
2024-08-11T16:03:18.907Z In(182) vmkernel: cpu2:1048835)MemHwCounters: 791: Error status = Not supported
2024-08-11T16:03:18.907Z In(182) vmkernel: cpu2:1048835)MemHwCounters: 1017: Error status = Not supported
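
For reference, enabling NVMe Tiering itself only takes a few esxcli commands. Below is a minimal sketch based on the commands shared in the comments of this post, where {id} is a placeholder for your own NVMe device:

# Dedicate an NVMe device as the tier device
esxcli system tierdevice create -d /vmfs/devices/disks/{id}

# Size the NVMe tier relative to DRAM (400 = 4x DRAM in this example)
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400

# Enable the Memory Tiering kernel setting; a reboot is required for it to take effect
esxcli system settings kernel set -s MemoryTiering -v TRUE
reboot

If the host memory capacity is still unchanged after the reboot, the vmkernel log entries above are the first place to look.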

The good news is that this limitation will be removed in the future. In fact, this feedback from our community has improved not only the implementation of NVMe Tiering, such that vMMR will no longer be required, but will also provide better logging output.

I still have my Intel NUC 10 (Frost Canyon), which is Skylake-based, and fortunately I was able to successfully enable NVMe Tiering with that kit, so it definitely varies among consumer Intel CPUs as to which may or may not have vMMR.


Categories // ESXi, Home Lab Tags // ESXi 8.0 Update 3, NVMe

Comments

  1. Cedric Jucker says

    09/18/2024 at 4:42 am

    My NUC Coffee Lake is working fine too, so you know. Updated to the 8.0.3sb version this morning...

  2. Dag Kvello says

    09/20/2024 at 4:45 am

    I'm testing this with "local" NVMe over Fabrics devices (32Gb Fibre Channel). I've set up dedicated volumes per ESXi host and forced them to be local.

    esxcli storage hpp device set --mark-device-local=1 --device=eui.800xxxxx

    This allowed me to enable Tiering 😀
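
    For what it's worth, you can double-check that the flag took effect by listing the HPP devices (assuming the standard HPP claiming shown above) and confirming the device is now reported as local:

    esxcli storage hpp device list

    The entry for the device should show Is Local: true after the change.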

    I have one question. Is there any way to see how much "tiering" I'm using per host?

  3. David Biacsi-Schön says

    09/23/2024 at 4:38 am

    I set it up in a cluster of two Minisforum MS-01 mini PCs. However, live storage vMotion does not seem to work; the log says:

    Migrate: FSR is not yet supported on a system with Software Memory Tiering enabled.

    The workaround was to choose "Change both compute resource and storage" and to select the cluster as the compute resource.

    Is this issue (FSR not supported with NVMe memory tiering) known?

    • William Lam says

      09/23/2024 at 6:43 am

      Yes, there’s a list of known considerations for the Tech Preview https://knowledge.broadcom.com/external/article?legacyId=95944, most of which will go away when the feature goes GA

  4. Abbed Sedkaoui says

    09/24/2024 at 10:39 am

    William, I figured out why the Nested ESXi OVA panics and exits with signal 8 on a Memory Tiering enabled host: the feature is too new, and the OVF template needs a bump in its Guest OS and Compatibility specification.
    Specifically:
    Guest OS "VMware ESXi 8.0 or later"
    line 24
    OperatingSystemSection ovf:id="104" ovf:version="8" vmw:osType="vmkernel8Guest"
    and
    Compatibility "ESXi 8.0 virtual machine"
    line 33
    vmx-20

    All Nested ESXi OVAs can benefit from the vSphere 8.0u3 Memory Tiering innovation by updating these 2 lines!

    Just out of curiosity, I tried the oldest Nested_ESXi6.0u3 appliance, which dates from 2017, and it runs on a Memory Tiering enabled host!

  5. PHNZ says

    10/26/2024 at 10:49 pm

    Do you have more details on confirming the specific CPU feature needed to support this? I found some mention of Optane Persistent Memory (PMem) support, but this might be for the tiering from ESXi 6.7 and up. For example, the Intel Xeon E-2400 is listed as NOT supporting Optane PMem, but I'm not sure whether it will work with this feature or not. I'm looking to replace an old Skylake-based home server which says it doesn't support this feature (from the log file message).

  6. Andrea T. says

    12/04/2024 at 8:11 am

    If someone is interested in using only a partition of a big device for memory tiering, the following commands should be used.

    First, create two partitions on the target device (warning: this will fully erase it):

    partedUtil setptbl /vmfs/devices/disks/{id} gpt \
    "1 2048 536870911 B3676DDDA38A4CD6B970718D7F873811 0" \
    "2 536870912 3907029134 AA31E02A400F11DB9590000C2911D1B8 0"

    On a 2 TB drive, this command will create one partition of 256 GB for use with memory tiering (the partition type GUID B3676DDDA38A4CD6B970718D7F873811 will do the magic);
    the second partition will be used for a datastore.
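
    (Sizing check: the first partition spans sectors 2048 through 536870911, i.e. 536,868,864 sectors × 512 bytes ≈ 256 GiB; adjust the end sector if you want a different tier size.)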

    Then create the VMFS datastore on the second partition with:

    vmkfstools -C vmfs6 -S datastore2 /vmfs/devices/disks/{id}:2

    esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
    esxcli system settings kernel set -s MemoryTiering -v TRUE
    reboot

    No need to execute esxcli system tierdevice create -d /vmfs/devices/disks/{id}

    Running

    esxcli system tierdevice list

    will show the new 256 GB tier device.

    Best
    Andrea

    • William Lam says

      12/09/2024 at 10:37 am

      Thanks for sharing Andrea! Just did a quick post https://williamlam.com/2024/12/sharing-a-single-nvme-device-with-nvme-tiering.html w/some tweaks to make it easier to consume more generically. Cheers!

