
ESXi on Intel NUC 12 Extreme (Dragon Canyon)

02.24.2022 by William Lam // 39 Comments

As teased back in January, Intel has been working on a new Intel NUC ...

1st native 10GbE Intel NUC! 🐉 🥳🤐🤫 pic.twitter.com/E4lyeaFhpU

— William Lam (@lamw) January 11, 2022

Today, Intel has officially launched one of their new 12th generation Intel NUCs, the Intel NUC 12 Extreme, formerly code-named Dragon Canyon. Some may notice that the Intel NUC 12 Extreme looks very similar to last year's Intel NUC 11 Extreme (Beast Canyon), but there are definitely a number of differences both internally and externally.

Here is your first look at the new Intel NUC 12 Extreme and what it means for those interested in using it for a VMware Homelab.

Compute

The Intel NUC 12 Extreme includes the new Intel 12th Generation Alder Lake CPU which is also the first consumer Intel CPU that introduces a new hybrid "big.LITTLE" CPU architecture. This new hybrid CPU architecture integrates two types of CPU cores: Performance-cores (P-cores) and Efficiency-cores (E-cores) into the same physical CPU die. To learn more about how this new hybrid CPU design works, check out this resource from Intel.

The Intel NUC 12 Extreme will be available in the following configurations starting in the second quarter of 2022 (per their press release):

  • NUC12DCMi9 - Intel Core i9-12900 Processor (up to 5.10 GHz)
    • 16-Core (8P+8E), 24-Thread, 30M cache
  • NUC12DCMi7 - Intel Core i7-12700 Processor (up to 4.90 GHz)
    • 12-Core (8P+4E), 20-Thread, 30M cache
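
The thread counts above fall out of simple arithmetic, since only the P-cores are Hyper-Threaded (two threads each) while the E-cores run a single thread each. A quick sanity check:

```shell
# P-cores are Hyper-Threaded (2 threads each); E-cores are not (1 each).

# NUC12DCMi9: 8 P-cores + 8 E-cores
echo "i9-12900 threads: $(( 8 * 2 + 8 ))"   # 24 threads

# NUC12DCMi7: 8 P-cores + 4 E-cores
echo "i7-12700 threads: $(( 8 * 2 + 4 ))"   # 20 threads
```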

Both systems will be able to support up to a maximum of 64GB of memory using two SO-DIMM memory modules, which is similar to the previous generation of Intel NUCs. Although the CPUs can technically support up to 128GB of memory, there has not been any confirmation or even rumors that we will be seeing a single 64GB SO-DIMM any time soon 🙁

The first question that I am sure many of you will have (or have already asked) is whether the ESXi CPU Scheduler will understand this new hybrid CPU architecture. For the answer, skip to the ESXi section at the bottom for more details.

Network

The most significant update, in my opinion, is the networking: the Intel NUC 12 Extreme re-introduces support for two onboard network adapters, which should come in handy for running a VMware Homelab. Furthermore, customers have been asking for a 10GbE option for quite some time now, and the Intel NUC 12 Extreme finally delivers!

The Intel NUC 12 Extreme includes a 2.5GbE interface (Intel I225-LM), which is only available on the i9 model, and a 10GbE interface (Marvell AQC113), as shown in the picture below.


Before folks get too excited, I do have some slightly bad news to share if you are considering ESXi with the 10GbE option. The inbox Marvell driver for ESXi does not currently support this particular consumer 10GbE network adapter. I had reached out to the Marvell team to see if they have any plans to support this device but unfortunately they currently do not. If this is something you would like to see supported, please reach out to Marvell directly and share your feedback with them.

Although the 10GbE interface cannot be leveraged by ESXi directly, all hope is not lost. Customers can still use the network adapter in passthrough mode and make it available to a specific VM. In my setup, I configured a Windows 10 VM and, after installing the required Marvell driver, I was able to use the 10GbE interface from within the VM.


For the 2.5GbE network adapter, we have better news. Although the network adapter is similar to the one used in the NUC 11, there were some minor differences that caused some initial issues. Luckily, we were able to get the device working, and you simply need an updated version of the Community Networking Driver for ESXi Fling (requires v1.2.7 or later) for enablement. If you need help creating a customized ESXi ISO that contains the Community Networking Driver, please see this blog post for more details.

Additional networking can also be added using a number of different options: 2 x PCIe slots, 2 x Thunderbolt 4 ports (see 10GbE Thunderbolt options for ESXi), and plenty of USB ports (see USB Networking options for ESXi).

Storage

The storage options are still plentiful with the latest Intel NUC 12 Extreme, especially for those interested in running vSAN or having additional VMFS datastores. Up to 3 x M.2 NVMe devices can be installed on the Intel NUC 12 Extreme supporting PCIe x4 Gen 4, two of which are installed inside of the NUC Compute Element right next to the CPU and Memory as shown below.


For those of you familiar with last year's Intel NUC 11 Extreme, you may recall it supports up to 4 x M.2 NVMe devices, which is fantastic for a VMware Homelab. The Intel NUC 12 Extreme regresses in this capability, most likely due to the increased size of the new CPU. With the Intel NUC 11 Extreme, 3 x M.2 devices could be installed within the NUC Compute Element, but as you can see with the Intel NUC 12 Extreme, we have lost one of those M.2 slots.

Additionally, the Intel NUC 12 Extreme has consolidated where additional M.2 devices can be installed. With the Intel NUC 11 Extreme, there was an easy-to-access slot beneath the chassis that could support up to an M.2 22x110 form factor. The Intel NUC 12 Extreme has removed the M.2 connector from that location; the physical slot still exists, but I am not sure what purpose it now serves.

The third M.2 slot in the Intel NUC 12 Extreme has been relocated to the back of the NUC Compute Element, as shown in the picture below. To access the M.2 slot, you will need to remove the side panel and the single screw that holds both the M.2 and the cover in place.


Even with these changes, there are still plenty of storage expandability options with the Intel NUC 12 Extreme. You can use either the 2 x PCIe slots and/or the 2 x Thunderbolt 4 ports (See Thunderbolt storage options for ESXi) to add additional storage.

Graphics

The Intel NUC 12 Extreme can support up to a 12-inch discrete GPU and is dual-slot capable for those with additional graphics requirements, from VDI and rendering to playing with AI/ML on Kubernetes. For integrated graphics, the Intel NUC 12 Extreme includes an Intel UHD Graphics 770, which shows up as an Alder Lake GT1. I was really crossing my fingers that iGPU passthrough would function out of the box, unlike on the previous generation Intel NUC 11.

UPDATE (11/17/22) - Please see this blog post here for updated details on how to use the iGPU in passthrough mode with an Ubuntu VM.


Using a fully patched Windows 10 VM, it automatically detected the iGPU and even prompted to install the Intel Graphics Control Center without having to manually load the device driver, which was quite nice. As you can see from the screenshot below, even the Windows Device Manager has properly detected the device.


Now, the real test is whether this will survive a VM reboot.
Sadly, it looks like we are still facing the same iGPU passthrough issue that we saw on the Intel NUC 11, but the behavior on the Intel NUC 12 Extreme is far more extreme (no pun intended). Rather than booting into Windows and seeing the typical Error Code 43 in Windows Device Manager, as on the NUC 11 Extreme, the Windows VM now BSODs (Blue Screen of Death) with the message "SYSTEM THREAD EXCEPTION NOT HANDLED".


Intel has already been made aware of these driver problems, but there is currently no workaround for these issues.

Form Factor

The chassis used in the Intel NUC 12 Extreme is the same as the Intel NUC 11 Extreme, coming in at 357 x 189 x 120 mm (8L). Check out this blog post for a more detailed look and size comparison to other Intel NUCs.

ESXi

Let me start off by answering the question that I posed at the beginning of this article: does the ESXi CPU Scheduler understand the new Intel Alder Lake hybrid CPU architecture? The short answer is no; ESXi is currently not aware of this new architecture and expects all cores within a CPU package to have uniform characteristics.

It is recommended to disable the E-cores within the Intel NUC BIOS, following the instructions HERE, to prevent ESXi from PSOD'ing due to the non-uniform CPU cores, which results in the error "Fatal CPU mismatch on feature". If for some reason you prefer not to disable either the P-cores or E-cores, you can instead add the ESXi kernel option cpuUniformityHardCheckPanic=FALSE to work around the issue; append it to the existing kernel line by pressing SHIFT+O during boot up. Please see this video HERE for detailed instructions on applying the workaround.
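
For reference, here is a sketch of the ways to apply the kernel option. The esxcli command is the one I share in the comment thread; the boot.cfg contents below are a mock copy (with a placeholder kernelopt line) so the edit itself can be sanity-checked anywhere, while on a real host the file normally lives at /bootbank/boot.cfg:

```shell
# 1) One-time, at the boot loader: press SHIFT+O and append
#      cpuUniformityHardCheckPanic=FALSE
#    to the existing kernel options line.
#
# 2) Persistent, from the ESXi shell once the host is up:
#      esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
#
# 3) Persistent, by editing boot.cfg directly (demonstrated on a mock copy):
cat > /tmp/boot.cfg <<'EOF'
kernel=b.b00
kernelopt=autoPartition=FALSE
EOF

# Append the option to whatever kernelopt line already exists.
sed -i 's/^kernelopt=.*/& cpuUniformityHardCheckPanic=FALSE/' /tmp/boot.cfg
grep '^kernelopt=' /tmp/boot.cfg
```

Note that the SHIFT+O step is still needed on the very first boot of the installer, before either persistent method can be applied.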

Below is a screenshot running the latest ESXi 7.0 Update 3 release on the Intel NUC 12 Extreme, which does require the Community Networking Driver for ESXi Fling (at least v.1.2.5) for proper networking functionality.


Although we can work around the PSOD, this is more of a hack, since we really do not know what the behavior will be; the ESXi CPU Scheduler was never designed to work with this new CPU architecture. From my very limited testing, running a Windows VM and other basic workloads, I have not seen any significant difference, but it may vary based on the type and number of workloads. One thing I did notice was that ESXi was using the P-core base frequency, which in my setup was 2.40GHz, whereas the E-core base frequency is 1.80GHz. With more workloads running, in theory you could see mixed performance if a single workload gets scheduled across the two different types of cores.
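
To put a rough number on that mismatch (this is illustrative arithmetic using the base frequencies from my setup, not how ESXi actually computes capacity): if all 16 cores of the i9-12900 are assumed to run at the P-core base frequency, the aggregate base capacity is overstated by about 14%:

```shell
# i9-12900: 8 P-cores @ 2400 MHz base, 8 E-cores @ 1800 MHz base.
assumed=$(( 16 * 2400 ))             # all 16 cores at P-core base
actual=$(( 8 * 2400 + 8 * 1800 ))    # true aggregate base capacity
echo "assumed: ${assumed} MHz, actual: ${actual} MHz"
echo "overstated by roughly $(( (assumed - actual) * 100 / actual ))%"
```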

Although it is unclear whether this new type of CPU architecture will be adopted in the Enterprise datacenter, we can certainly expect the trend to continue in the consumer space, which also includes Apple's recent Apple Silicon processors. I can definitely see this type of hybrid CPU architecture benefiting Edge deployments, and perhaps that is the next logical segment to see some form of Enterprise support.

More from my site

  • GPU Passthrough with Nested ESXi
  • VMware Cloud Foundation 5.0 running on Intel NUC
  • Frigate NVR with Coral TPU & iGPU passthrough using ESXi on Intel NUC
  • ESXi on Intel NUC 13 Pro (Arena Canyon)
  • How to disable the Efficiency Cores (E-cores) on an Intel NUC?

Categories // ESXi, Home Lab, vSphere 7.0 Tags // Dragon Canyon, Intel NUC

Comments

  1. Christopher T says

    02/24/2022 at 11:44 am

    Did you try booting with E-cores disabled? Is it supported in bios to disable e-cores?

    Reply
    • William Lam says

      02/25/2022 at 9:42 am

      No, I don't believe you can the disable E-Cores, at least with the BIOS version I've got, there's not an option. You can specify the number of cores and that could potentially even out between P/E Cores, not had a chance to dig further into that setting

      Reply
  2. Johnny says

    02/24/2022 at 3:04 pm

    This is quite the reason we switch to Proxmox, fully support straight out of the box. Why esxi can't do proper passthrough is quite bad decision.

    Reply
    • William Lam says

      02/24/2022 at 5:54 pm

      This is not an ESXi issue, it’s an issue with the graphics driver, which has already been reported to Intel

      Reply
  3. Michael Brassen says

    03/04/2022 at 9:48 am

    Question, how exactly did you get your hands on version 1.2.5 of the community-network-drivers. I only have access to 1.2.2.

    Reply
    • lamw says

      03/04/2022 at 10:34 am

      Hi Michael,

      We're currently working on getting v1.2.5 released (which contains the update to support Alder Lake based systems)

      Reply
      • Guo says

        03/07/2022 at 11:55 am

        Do you have an expected release date?

        Reply
        • William Lam says

          03/07/2022 at 7:25 pm

          No

          Reply
          • lamw says

            03/15/2022 at 9:09 am

            v1.2.7 Driver is now available https://flings.vmware.com/community-networking-driver-for-esxi

  4. Guo says

    03/11/2022 at 8:04 am

    Is it possible to make the Panic=FALSE permanent?

    I tried edit the boot.cfg like this page
    https://copydata.tips/2020/07/vsphere-esxi-7-0-installed-on-your-older-hardware-unsupported/

    but not lucky on a reboot.

    Reply
    • lamw says

      03/11/2022 at 10:22 am

Yes, you can make the change permanent after the system boots by running the following ESXCLI command: esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE

      I'll update the blog post with this info

      Reply
      • Guo says

        03/11/2022 at 9:48 pm

        Thanks, but it's not working for me.

        The shift+o at boot worked. Otherwise I wouldn't be looking for make it permanent .

        I typed "esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE" at SSH and console shell both.

        After a not lucky reboot, I checked "esxcli system settings kernel | less". The cpuUniformityHardCheckPanic is FALSE for both config and runtime while the default is a TRUE.

        Do you have any idea why this will happen? The shift+o will work while esxcli system settings not work on the same machine.

        Reply
        • Guo says

          03/12/2022 at 9:13 pm

          I make a fresh install and it works this time.

          Sad thing is the hyper threading is not active. But a E-core thread should perform much better than a hypered-thread.

          Reply
        • Al says

          11/12/2022 at 9:10 am

          NUC 12 wall street canyon - I'm having the exact same issue. For the life of me I can't seem to figure it out. Shift+O w/the Panic=FALSE works to get things started. I've entered the esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE command and verified that configured=false, runtime=false. Not sure if you can change the default = true. I've tried to build the 7.03 and 8.0. Both with the same issue

          Reply
          • William Lam says

            11/12/2022 at 9:21 am

            You need to apply kernel setting during the initial boot (before you install) AND after you've rebooted (so it'll boot for you to make the change permanently). You most likely missed the second occurrence

  5. Rico Roodenburg says

    05/14/2022 at 3:45 am

    Hi,

    Thanks for the cpuUniformityHardCheckPanic tip!

    Do you also see some 100% random spikes in the CPU monitor?
    12th Gen Intel(R) Core(TM) i7-12700 (8P+4E).

    I've installed vSphere 7 Update 3d.

    Can't figure it out, I don't think it is by the vm's, since they are "clean" installed without any workload (yes, they have tools installed).

    By the way, thanks for the great community network drivers (Ethernet Connection (17) I219-LM)!

    Greetings,
    Rico

    Reply
  6. Benjamin says

    08/12/2022 at 12:34 am

    Hello, Is there someone who tried to install ESXi on NUC12 using RAID1 with intel VMD? Seems that ESX installer do not show any disk even using custom image with last iavmd driver

    Reply
    • Jerome says

      12/05/2022 at 5:32 pm

      Hello Benjamin, did you find out any solution for NVMe RAID 1 using Intel RST VMD Controller? I tried the latest available drivers from Intel but they are dated from 2019... and I'm unable to find any solutions on the web.
      Best

      Reply
      • benjamin says

        12/06/2022 at 8:02 am

        Hi Jerome, unfortunately, I was not able to find any solution and I'm running now on one SSD only without raid...

        Reply
  7. Spencer says

    08/20/2022 at 4:59 pm

    Curious, given the CPU architectural differences and lack of official ESXI support, would a NUC 12 still outperform a NUC 11? I want to purchase a new NUC and I’m not sure what would be a better option. Any suggestions?

    Reply
  8. Paul says

    08/25/2022 at 10:39 am

    Adding my .2c

    Got ESXi 7.0U3 deployed on my i9-12900K. Disabled CPU uniformity check....all good.

    Issues crop up when you start loading up the host it up with VM's that require multiple vCPU's. (Think nested esxi hosts)
    At this point the physical host will PSOD randomly with the the same CPU mismatch error.

    So while the hack does allow you boot and run esxi on i9, there is instability when you load it up.

    Maybe ESXi 8 will have something that can accommodate this new CPU architecture. (here's hoping)

    Reply
  9. maxdxs says

    10/16/2022 at 12:44 pm

    IF you have to choose, to hold ESXI in a minipc what would you pick?nuc11, nuc12 or ryzen 6xxx?

    Reply
  10. xhomer says

    10/21/2022 at 10:37 am

    I guess if you run just 1 core VMs is not a problem and you can skip uniformcpu check but what happen when you run more than 1 core VM per example 2 core VM and the VM get scheduled with 1 performance core and 1 efficient core? Are your running 2-4 core VM with older OS like window 2018,2012,2016, windwos 10, linux, etc... I guess this is a problem for most OSes, because windows 10 had a problem with P+E cores and you have to use windows 11 so the SO understands P vs E cores.

    Reply
  11. gerd says

    10/30/2022 at 2:43 am

    Hello,
    Thanks for the detail information.
    Is it possible to disable the add on GPU (not the Intel onboard GPU) or a PCIex slot in BIOS.
    I am searching this feature for a long time now. Most time I do not need the discrete GPU and it would be nice to disable it in this situation for energy saving reasons...

    Reply
  12. maxdxs says

    11/06/2022 at 7:30 pm

    the e-cores issues was solved? do you recommend to use an i5 or i7 for 12th?

    Reply
  13. gbmaryland says

    12/30/2022 at 4:59 pm

    I've got ESXi fired up on a NUC 12 Pro i5 based system. So far it works well enough and I've not had any significant issues. I'm a little concerned in that I'm wondering what happens if you try to install VCF on a NUC 12 cluster with all of the E and P cores.

    Has anyone gotten the VCF nested ESXi environment to work with NUC 12s?

    Reply
  14. Lapaj Go says

    02/05/2023 at 3:13 am

    The iGPU passthrough works just fine with 31.0.101.4032, problem is that the stupid windows keep forcing 31.0.101.2079 driver which causes BSoD. You need to manually uninstall it using "pnputil.exe /remove-device oemxxx.inf" tool. I guess Microsoft and Intel are too lazy to figure it out or at least let us know.

    Reply
    • Lapaj Go says

      02/09/2023 at 4:43 pm

      Every single time Intel Driver & Support Assistant gets me up to date on my drivers, Microsoft Windows Update acts like a total a$$hole and drags me back to a driver that is almost a YEAR old. It just did it to me AGAIN. And it's like you can say "no" to windows update - it just does WHATEVER it wants with YOUR system.

      Reply
  15. Bob says

    02/13/2023 at 1:20 pm

    Just got myself a "Intel NUC 12 Extreme" with the latest BIOS update and esxi, be it 6.5, 6.7 or 7.0 it just hangs at " using simple 'offset' uefi rts mapping policy" screen, this is after apply cpuUniformityHardCheckPanic=False. Anything I did not do right to get it installed? Please help. Thanks.

    Reply
    • William Lam says

      02/13/2023 at 4:05 pm

      Have you checked your boot media? Try another USB device … I’ve seen this come up in past which isn’t specific to NUC

      Reply
      • Bob says

        02/16/2023 at 1:34 pm

        AFter 3 different USB sticks, it now can book into the installation properly. Thanks. Now just need to get the right driver for the 10G NIC as my NUC does not come with the 2.5G NIC.

        Reply
  16. Volker says

    02/14/2023 at 5:11 pm

    In Proxmox 7.3 they have no problems with E & P Kernel 😉 try and enjoy it!

    Reply
  17. Mark says

    03/12/2023 at 5:22 pm

    I noticed on the Wall Street Canyon that with the 101.4146 driver it doesn't throw the BSOD thread error. When it starts and having svga present it has error code 43 in dev mgr. If I disable and then reenable then it shows normal. but I can't seem to get any displays recognized. It seems really close to working but not sure what else it might be

    Reply
  18. ohhno says

    04/12/2023 at 11:46 am

    @volker
    do you use new nuc 12 exreme with proxmox?
    is it free and fully supported?
    can nuc 12 raid 1 with 2 x m.2 NVMe?

    Reply
  19. Amir says

    05/29/2023 at 12:33 pm

    Hi William,
    I'm using NUC12DCMi9 as my office lab, but I have an issue.
    Have you guys faced any issues with RAID?
    once I made a RAID 1, ESX couldn't reach the storage & it doesn't show up either in the storage or in storage devices. I tried ESXi 8 U1 & 8. even injected the VMD driver to ESXi 7.6. but none of them worked. will appreciate it if you can give me a tip to find any workaround.

    Reply
    • William Lam says

      05/29/2023 at 1:07 pm

      I don’t use any RAID or VMD, not sure it buys you much for lab env … also this feature is for Xeon-based CPU, which none of NUCs are, so YMMV. Suggest looking at https://core.vmware.com/blog/using-intel-vmd-driver-vsphere-create-nvme-raid1 if you’ve not and see if everything checks out

      Reply
      • Amir says

        05/30/2023 at 1:13 pm

        Thank you for your quick reply!

        Reply
  20. Ashton says

    07/29/2023 at 6:18 pm

    Hello,

    Thank you for posting. Is there a place we can track the status of the driver bug with intel or esxi official support?

    Thanks,
    Ashton

    Reply
    • William Lam says

      07/30/2023 at 7:58 am

      ESXi is not officially supported on any of the Intel NUCs. Hardware certifications is performed by hardware partners and submitted to VMware HCL. While I've shared the details on the Windows graphics issue, I haven't heard any plans to resolve it. For now, if you want to leverage the iGPU for passthrough, it'll need to be a Linux guests. You can always try posting on Intel forums but I suspect you'll get a "this is not supported" response

      Reply


Author

William is Distinguished Platform Engineering Architect in the VMware Cloud Foundation (VCF) Division at Broadcom. His primary focus is helping customers and partners build, run and operate a modern Private Cloud using the VMware Cloud Foundation (VCF) platform.


Copyright WilliamLam.com © 2025

 
