
ESXi on Intel NUC 12 Extreme (Dragon Canyon)

02.24.2022 by William Lam // 27 Comments

As teased back in January, Intel has been working on a new Intel NUC, the first with native 10GbE networking:

1st native 10GbE Intel NUC! 🐉 🥳🤐🤫 pic.twitter.com/E4lyeaFhpU

— William Lam (@lamw) January 11, 2022

Today, Intel officially launched one of their new 12th generation Intel NUCs, the Intel NUC 12 Extreme, formerly code-named Dragon Canyon. Some may notice that the Intel NUC 12 Extreme looks very similar to last year's Intel NUC 11 Extreme (Beast Canyon), but there are definitely a number of differences, both internally and externally.

Here is your first look at the new Intel NUC 12 Extreme and what it means for those interested in using it for a VMware Homelab.

Compute

The Intel NUC 12 Extreme includes the new Intel 12th Generation Alder Lake CPU, which is also the first consumer Intel CPU to introduce a hybrid "big.LITTLE" CPU architecture. This hybrid design integrates two types of CPU cores, Performance-cores (P-cores) and Efficiency-cores (E-cores), into the same physical CPU die. To learn more about how this new hybrid CPU design works, check out this resource from Intel.

The Intel NUC 12 Extreme will be available in the following configurations starting in the second quarter of 2022 (per their press release):

  • NUC12DCMi9 - Intel Core i9-12900 Processor (up to 5.10 GHz)
    • 16-Core (8P+8E), 24-Thread, 30M cache
  • NUC12DCMi7 - Intel Core i7-12700 Processor (up to 4.90 GHz)
    • 12-Core (8P+4E), 20-Thread, 25M cache

Both systems support up to a maximum of 64GB of memory using two SO-DIMM memory modules, similar to the previous generation of Intel NUCs. Although the CPUs can technically support up to 128GB of memory, there has been no confirmation, or even a rumor, that we will see a single 64GB SO-DIMM any time soon 🙁

The first question many of you will have (or have already asked) is whether the ESXi CPU Scheduler will understand this new hybrid CPU architecture. For the answer, skip to the ESXi section at the bottom for more details.

Network

The most significant update, in my opinion, is the networking: the Intel NUC 12 Extreme re-introduces support for two onboard network adapters, which should come in handy for running a VMware Homelab. Furthermore, something customers have been asking about for quite some time is a 10GbE option, and the Intel NUC 12 Extreme finally delivers!

The Intel NUC 12 Extreme includes 1 x 2.5GbE interface (Intel I225-LM), which is only available on the i9 model, and 1 x 10GbE interface (Marvell AQC113), as shown in the picture below.


Before folks get too excited, I do have some slightly bad news if you are considering ESXi with the 10GbE option. The inbox Marvell driver for ESXi does not currently support this particular consumer 10GbE network adapter. I reached out to the Marvell team to see if they have any plans to support this device, but unfortunately they currently do not. If this is something you would like to see supported, please reach out to Marvell directly and share your feedback with them.

Although the 10GbE interface cannot be leveraged by ESXi directly, all hope is not lost. Customers can still use the network adapter in passthrough mode and make it available to a specific VM. In my setup, I configured a Windows 10 VM and, after installing the required Marvell driver, I was able to use the 10GbE interface from within the VM.
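If you prefer to enable passthrough from the command line rather than the vSphere UI, here is a minimal sketch using ESXCLI on ESXi 7.0 or later (the PCI address below is hypothetical; use the actual address of the AQC113 from the list output):

# List PCI devices eligible for passthrough and find the Marvell AQC113 entry
esxcli hardware pci pcipassthru list
# Enable passthrough for that device (replace the address with yours)
esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e true -a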


For the 2.5GbE network adapter, we have better news. Although the network adapter is similar to the one used in the NUC 11, there were some minor differences that gave us some initial issues. Luckily, we were able to get the device working; you simply need an updated version of the Community Networking Driver for ESXi Fling (v1.2.7 or greater) for enablement. If you need help creating a customized ESXi ISO that contains the Community Networking Driver, please see this blog post for more details.
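If you would rather add the Fling to an already-installed host instead of building a custom ISO, here is a rough sketch (the datastore path and bundle filename are placeholders for wherever you copied the offline bundle downloaded from the Fling site):

# If the install complains about the acceptance level, lower it first:
esxcli software acceptance set --level=CommunitySupported
# Install the Fling offline bundle as a component (ESXi 7.0 and later)
esxcli software component apply -d /vmfs/volumes/datastore1/Net-Community-Driver-offline-bundle.zip
# Reboot so the new network driver is loaded
reboot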

Additional networking can also be added using a number of different options, including the 2 x PCIe slots, the 2 x Thunderbolt 4 ports (see 10GbE Thunderbolt options for ESXi), and plenty of USB ports (see USB Networking options for ESXi).

Storage

The storage options are still plentiful with the latest Intel NUC 12 Extreme, especially for those interested in running vSAN or having additional VMFS datastores. Up to 3 x M.2 NVMe devices (PCIe x4 Gen 4) can be installed in the Intel NUC 12 Extreme, two of which are installed inside the NUC Compute Element right next to the CPU and memory, as shown below.


For those of you familiar with last year's Intel NUC 11 Extreme, you may recall it supports up to 4 x M.2 NVMe devices, which is fantastic for a VMware Homelab. With the Intel NUC 12 Extreme, there is a regression in this capability, most likely due to the increased size of the new CPU. With the Intel NUC 11 Extreme, 3 x M.2 devices could be installed within the NUC Compute Element, but as you can see with the Intel NUC 12 Extreme, we have lost one of those M.2 slots.

Additionally, the Intel NUC 12 Extreme has consolidated where additional M.2 devices can be installed. With the Intel NUC 11 Extreme, there was an easy-to-access slot beneath the chassis that could support up to an M.2 22x110 form factor. The Intel NUC 12 Extreme has removed that M.2 connector; the physical slot still exists, but after opening it up I am not sure what purpose it now serves.

The third M.2 slot in the Intel NUC 12 Extreme has been relocated to the back of the NUC Compute Element, as shown in the picture below. To access it, you will need to remove the side panel and the single screw that holds both the M.2 device and the cover in place.


Even with these changes, there are still plenty of storage expandability options with the Intel NUC 12 Extreme. You can use the 2 x PCIe slots and/or the 2 x Thunderbolt 4 ports (see Thunderbolt storage options for ESXi) to add additional storage.

Graphics

The Intel NUC 12 Extreme can support up to a 12" length discrete GPU and is dual-slot capable for those with additional graphic requirements from VDI, rendering to playing with AI/ML with Kubernetes. For the integrated graphics, the Intel NUC 12 Extreme includes an Intel UHD Graphics 770, which shows up as an Alder Lake GT1. I was really crossing my fingers that the iGPU passthrough would function out of the box unlike the previous generations of the Intel NUC 11.

UPDATE (11/17/22) - Please see this blog post here for updated details on how to use the iGPU in passthrough mode with an Ubuntu VM.


Using a fully patched Windows 10 VM, it automatically detected the iGPU and even prompted me to install the Intel Graphics Control Center without having to manually load the device driver, which was quite nice. As you can see from the screenshot below, even the Windows Device Manager properly detected the device.


Now, the real test: will this survive a VM reboot?
Sadly, it looks like we are still facing the same iGPU passthrough issue we saw with the Intel NUC 11, but the behavior on the Intel NUC 12 Extreme is far more extreme (no pun intended). Instead of booting into Windows and seeing the typical Error Code 43 in Windows Device Manager, as with the NUC 11 Extreme, the Windows VM now BSODs (Blue Screen of Death) with the message "SYSTEM THREAD EXCEPTION NOT HANDLED" on the Intel NUC 12 Extreme.


Intel has already been made aware of these driver problems, but there is currently no workaround for these issues.

Form Factor

The chassis used in the Intel NUC 12 Extreme is the same as the Intel NUC 11 Extreme, coming in at 357 x 189 x 120 mm (8L). Check out this blog post for a more detailed look and size comparison to other Intel NUCs.

ESXi

Let me start off by answering the question I posed at the beginning of this article: does the ESXi CPU Scheduler understand the new Intel Alder Lake hybrid CPU architecture? The short answer is no. ESXi is currently not aware of this new architecture and expects all cores within a CPU package to have uniform characteristics.

In fact, if you attempt to boot ESXi on an Alder Lake CPU, it will PSOD (Purple Screen of Death) with a message about a "Fatal CPU mismatch on feature", which is due to the different CPU properties across the P-Cores and E-Cores. Luckily, there is a workaround to disable the CPU uniformity check that ESXi performs as part of its boot up: the kernel option cpuUniformityHardCheckPanic=FALSE needs to be appended during boot (press SHIFT+O during boot up, or simply add it to boot.cfg when creating your ESXi bootable installer).
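For the boot.cfg route, here is a minimal sketch of the change (the existing kernelopt value varies by image; the point is simply to append the option to that line in the installer media):

kernelopt=cdromBoot runweasel cpuUniformityHardCheckPanic=FALSE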

Note: Once ESXi has been successfully installed, you can permanently set the kernel option by running the following ESXCLI command: esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
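To double-check that the option has persisted after a reboot, you can query it:

esxcli system settings kernel list -o cpuUniformityHardCheckPanic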

Here is a video demonstrating the ESXi workaround for those interested in seeing it applied visually.

Below is a screenshot of the latest ESXi 7.0 Update 3 release running on the Intel NUC 12 Extreme, which does require the Community Networking Driver for ESXi Fling (v1.2.7 or greater) for proper networking functionality.


Although we can work around the PSOD, this is more of a hack, since we really do not know what the behavior will be; the ESXi CPU Scheduler was never designed to work with this new CPU architecture. From my very limited testing, running a Windows VM and other basic workloads, I have not seen any significant difference, but results may vary based on the type and number of workloads. One thing I did notice is that ESXi reports the P-Core base frequency, which in my setup was 2.40GHz, whereas the E-Core base frequency is 1.80GHz. With more workloads running, you could in theory see mixed performance if a single workload gets scheduled across the two different types of cores.
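If you are curious what ESXi reports for the cores, one quick way to look (a sketch; the exact output fields can vary between releases):

# Lists each CPU with its reported core speed; on Alder Lake these all
# reflect the P-Core base frequency rather than per-core values
esxcli hardware cpu list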

It is unclear whether this new type of CPU architecture will be adopted in the Enterprise datacenter, but we can certainly expect the trend to continue in the consumer space, which also includes Apple's recent Apple Silicon processors. I can definitely see this type of hybrid CPU architecture benefiting Edge deployments, and perhaps that is the next logical segment to see some form of Enterprise support.

More from my site

  • ESXi on Intel NUC 12 Enthusiast (Serpent Canyon)
  • Updated findings for passthrough of Intel NUC Integrated Graphics (iGPU) with ESXi
  • ESXi on Intel NUC 12 Pro (Wall Street Canyon)
  • Considerations for future vSphere Homelabs due to upcoming removal of SD card/USB support for ESXi
  • ESXi on Intel NUC 11 Extreme (Beast Canyon)

Categories // ESXi, Home Lab, vSphere 7.0 Tags // Dragon Canyon, Intel NUC

Comments

  1. Christopher T says

    02/24/2022 at 11:44 am

    Did you try booting with E-cores disabled? Is it supported in bios to disable e-cores?

    Reply
    • William Lam says

      02/25/2022 at 9:42 am

      No, I don't believe you can disable the E-Cores; at least with the BIOS version I've got, there's no option for it. You can specify the number of cores, which could potentially even things out between P/E cores; I've not had a chance to dig further into that setting

      Reply
  2. Johnny says

    02/24/2022 at 3:04 pm

    This is exactly the reason we switched to Proxmox: full support straight out of the box. That ESXi can't do proper passthrough is quite a bad decision.

    Reply
    • William Lam says

      02/24/2022 at 5:54 pm

      This is not an ESXi issue, it’s an issue with the graphics driver, which has already been reported to Intel

      Reply
  3. Michael Brassen says

    03/04/2022 at 9:48 am

    Question, how exactly did you get your hands on version 1.2.5 of the community-network-drivers. I only have access to 1.2.2.

    Reply
    • lamw says

      03/04/2022 at 10:34 am

      Hi Michael,

      We're currently working on getting v1.2.5 released (which contains the update to support Alder Lake based systems)

      Reply
      • Guo says

        03/07/2022 at 11:55 am

        Do you have an expected release date?

        Reply
        • William Lam says

          03/07/2022 at 7:25 pm

          No

          Reply
          • lamw says

            03/15/2022 at 9:09 am

            v1.2.7 Driver is now available https://flings.vmware.com/community-networking-driver-for-esxi

  4. Guo says

    03/11/2022 at 8:04 am

    Is it possible to make the Panic=FALSE permanent?

    I tried edit the boot.cfg like this page
    https://copydata.tips/2020/07/vsphere-esxi-7-0-installed-on-your-older-hardware-unsupported/

    but not lucky on a reboot.

    Reply
    • lamw says

      03/11/2022 at 10:22 am

      Yes, you can make the change permanent after the system boots by running the following ESXCLI command: esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE

      I'll update the blog post with this info

      Reply
      • Guo says

        03/11/2022 at 9:48 pm

        Thanks, but it's not working for me.

        The SHIFT+O at boot worked; otherwise I wouldn't be looking to make it permanent.

        I typed "esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE" at both the SSH and console shells.

        After an unsuccessful reboot, I checked "esxcli system settings kernel | less". The cpuUniformityHardCheckPanic is FALSE for both config and runtime, while the default is TRUE.

        Do you have any idea why this happens? SHIFT+O works while the esxcli system settings change does not on the same machine.

        Reply
        • Guo says

          03/12/2022 at 9:13 pm

          I made a fresh install and it works this time.

          The sad thing is that hyper-threading is not active. But an E-core thread should perform much better than a hyper-thread.

          Reply
        • Al says

          11/12/2022 at 9:10 am

          NUC 12 Wall Street Canyon - I'm having the exact same issue. For the life of me I can't seem to figure it out. SHIFT+O with Panic=FALSE works to get things started. I've entered the esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE command and verified that configured=false, runtime=false. Not sure if you can change default=true. I've tried to build both 7.0U3 and 8.0, both with the same issue.

          Reply
          • William Lam says

            11/12/2022 at 9:21 am

            You need to apply the kernel setting during the initial boot (before you install) AND after you've rebooted (so it'll boot for you to make the change permanent). You most likely missed the second occurrence.
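            Roughly, the sequence looks like this:

            # 1. Booting the ESXi installer: press SHIFT+O and append:
            cpuUniformityHardCheckPanic=FALSE
            # 2. First boot after installation: press SHIFT+O and append the same option
            # 3. Once ESXi is up, make it permanent:
            esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE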

  5. Rico Roodenburg says

    05/14/2022 at 3:45 am

    Hi,

    Thanks for the cpuUniformityHardCheckPanic tip!

    Do you also see some 100% random spikes in the CPU monitor?
    12th Gen Intel(R) Core(TM) i7-12700 (8P+4E).

    I've installed vSphere 7 Update 3d.

    Can't figure it out, I don't think it is by the vm's, since they are "clean" installed without any workload (yes, they have tools installed).

    By the way, thanks for the great community network drivers (Ethernet Connection (17) I219-LM)!

    Greetings,
    Rico

    Reply
  6. Benjamin says

    08/12/2022 at 12:34 am

    Hello, has anyone tried to install ESXi on a NUC 12 using RAID1 with Intel VMD? It seems the ESXi installer does not show any disks, even when using a custom image with the latest iavmd driver.

    Reply
    • Jerome says

      12/05/2022 at 5:32 pm

      Hello Benjamin, did you find any solution for NVMe RAID 1 using the Intel RST VMD controller? I tried the latest available drivers from Intel, but they are dated 2019... and I'm unable to find any solutions on the web.
      Best

      Reply
      • benjamin says

        12/06/2022 at 8:02 am

        Hi Jerome, unfortunately I was not able to find any solution, and I'm now running on a single SSD without RAID...

        Reply
  7. Spencer says

    08/20/2022 at 4:59 pm

    Curious, given the CPU architectural differences and lack of official ESXI support, would a NUC 12 still outperform a NUC 11? I want to purchase a new NUC and I’m not sure what would be a better option. Any suggestions?

    Reply
  8. Paul says

    08/25/2022 at 10:39 am

    Adding my .2c

    Got ESXi 7.0U3 deployed on my i9-12900K. Disabled CPU uniformity check....all good.

    Issues crop up when you start loading the host up with VMs that require multiple vCPUs (think nested ESXi hosts).
    At that point the physical host will PSOD randomly with the same CPU mismatch error.

    So while the hack does allow you to boot and run ESXi on the i9, there is instability when you load it up.

    Maybe ESXi 8 will have something that can accommodate this new CPU architecture. (here's hoping)

    Reply
  9. maxdxs says

    10/16/2022 at 12:44 pm

    If you had to choose a mini PC to run ESXi, what would you pick: a NUC 11, NUC 12, or Ryzen 6xxx?

    Reply
  10. xhomer says

    10/21/2022 at 10:37 am

    I guess if you run only 1-core VMs it's not a problem and you can skip the CPU uniformity check, but what happens when you run a VM with more than 1 core, for example a 2-core VM, and the VM gets scheduled onto 1 Performance core and 1 Efficiency core? Are you running 2-4 core VMs with older OSes like Windows Server 2012/2016, Windows 10, Linux, etc.? I guess this is a problem for most OSes, because Windows 10 had a problem with P+E cores and you have to use Windows 11 so the OS understands P vs E cores.

    Reply
  11. gerd says

    10/30/2022 at 2:43 am

    Hello,
    Thanks for the detailed information.
    Is it possible to disable the add-on GPU (not the Intel onboard GPU) or a PCIe slot in the BIOS?
    I've been searching for this feature for a long time now. Most of the time I do not need the discrete GPU, and it would be nice to be able to disable it for energy-saving reasons...

    Reply
  12. maxdxs says

    11/06/2022 at 7:30 pm

    Have the E-core issues been solved? Would you recommend an i5 or i7 for 12th gen?

    Reply
  13. gbmaryland says

    12/30/2022 at 4:59 pm

    I've got ESXi fired up on a NUC 12 Pro i5-based system. So far it works well enough and I've not had any significant issues. I am a little concerned, though, about what happens if you try to install VCF on a NUC 12 cluster with all of the E and P cores.

    Has anyone gotten the VCF nested ESXi environment to work with NUC 12s?

    Reply
  14. Lapaj Go says

    02/05/2023 at 3:13 am

    The iGPU passthrough works just fine with driver 31.0.101.4032; the problem is that stupid Windows keeps forcing the 31.0.101.2079 driver, which causes the BSoD. You need to manually uninstall it using the "pnputil.exe /remove-device oemxxx.inf" tool. I guess Microsoft and Intel are too lazy to figure it out or at least let us know.

    Reply

