As teased back in January, Intel has been working on a new Intel NUC ...
1st native 10GbE Intel NUC! 🐉 🥳🤐🤫 pic.twitter.com/E4lyeaFhpU
— William Lam (@lamw) January 11, 2022
Today, Intel officially launched one of its new 12th-generation Intel NUCs, the Intel NUC 12 Extreme, codenamed Dragon Canyon. Some may notice that the Intel NUC 12 Extreme looks very similar to last year's Intel NUC 11 Extreme (Beast Canyon), but there are definitely a number of differences, both internally and externally.
Here is your first look at the new Intel NUC 12 Extreme and what it means for those interested in using it for a VMware Homelab.
Compute
The Intel NUC 12 Extreme includes the new Intel 12th Generation Alder Lake CPU, the first consumer Intel CPU to introduce a hybrid "big.LITTLE" CPU architecture. This new hybrid design integrates two types of CPU cores, Performance-cores (P-cores) and Efficiency-cores (E-cores), on the same physical CPU die. To learn more about how this new hybrid CPU design works, check out this resource from Intel.
The Intel NUC 12 Extreme will be available in the following configurations starting in the second quarter of 2022 (per their press release):
- NUC12DCMi9 - Intel Core i9-12900 Processor (up to 5.10 GHz)
- 16-Core (8P+8E), 24-Thread, 30M cache
- NUC12DCMi7 - Intel Core i7-12700 Processor (up to 4.90 GHz)
- 12-Core (8P+4E), 20-Thread, 25M cache
Both systems support a maximum of 64GB of memory using two SO-DIMM memory modules, similar to the previous generation of Intel NUCs. Although the CPUs can technically support up to 128GB of memory, there has not been any confirmation, or even a rumor, that we will see a single 64GB SO-DIMM any time soon 🙁
The first question that I am sure many of you have (or have already asked) is whether the ESXi CPU Scheduler understands this new hybrid CPU architecture. For the answer, skip to the ESXi section at the bottom for more details.
Network
The most significant update, in my opinion, is the networking: the Intel NUC 12 Extreme re-introduces support for two onboard network adapters, which should come in handy for running a VMware Homelab. Furthermore, customers have been asking for a 10GbE option for quite some time now, and the Intel NUC 12 Extreme finally delivers!
The Intel NUC 12 Extreme includes 1 x 2.5GbE (Intel I225-LM) interface, which is only available on the i9 model, and 1 x 10GbE (Marvell AQC113) interface, as shown in the picture below.
Before folks get too excited, I have some slightly bad news to share if you are considering ESXi with the 10GbE option. The inbox Marvell driver for ESXi does not currently support this particular consumer 10GbE network adapter. I reached out to the Marvell team to see if they have any plans to support this device, but unfortunately they currently do not. If this is something you would like to see supported, please reach out to Marvell directly and share your feedback with them.
Although the 10GbE interface cannot be leveraged by ESXi directly, all hope is not lost. You can still use the network adapter in passthrough mode and make it available to a specific VM. In my setup, I configured a Windows 10 VM and, after installing the required Marvell driver, was able to use the 10GbE interface from within the VM.
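If you want to try this yourself, below is a rough sketch using the pcipassthru ESXCLI namespace available in ESXi 7.0 and later; the PCI address 0000:02:00.0 is a hypothetical placeholder, so substitute the address the list command reports for the Marvell AQC113:

# List PCI devices eligible for passthrough and note the AQC113's address
esxcli hardware pci pcipassthru list

# Enable passthrough for the 10GbE NIC (0000:02:00.0 is a placeholder address)
esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e true -a

Once enabled, the NIC can be added to the VM as a PCI device and the guest driver installed as usual.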
For the 2.5GbE network adapter, we have better news. Although the network adapter is similar to the one used in the NUC 11, there were some minor differences that gave us some initial issues. Luckily, we were able to get the device working; you just need an updated version of the Community Networking Driver for ESXi Fling (v1.2.7 or greater). If you need help creating a customized ESXi ISO that contains the Community Networking Driver, see this blog post for more details.
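For convenience, here is a minimal PowerCLI Image Builder sketch of that ISO workflow; the offline bundle and Fling zip file names, the profile name, and the vendor string are all placeholders for whatever versions you download (the Fling's package name is net-community):

# Load the ESXi offline bundle and the Community Networking Driver Fling zip
Add-EsxSoftwareDepot .\VMware-ESXi-7.0U3d-19482537-depot.zip
Add-EsxSoftwareDepot .\Net-Community-Driver_1.2.7.0-offline_bundle.zip

# Clone the standard profile; the Fling is CommunitySupported, so lower the acceptance level
New-EsxImageProfile -CloneProfile "ESXi-7.0U3d-19482537-standard" -Name "ESXi-7.0U3d-NUC" -Vendor "homelab" -AcceptanceLevel CommunitySupported
Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0U3d-NUC" -SoftwarePackage "net-community"

# Export the customized image profile as a bootable ISO
Export-EsxImageProfile -ImageProfile "ESXi-7.0U3d-NUC" -ExportToIso -FilePath .\ESXi-7.0U3d-NUC.iso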
Additional networking can also be added using a number of different options, including the 2 x PCIe slots, the 2 x Thunderbolt 4 ports (see 10GbE Thunderbolt options for ESXi), and plenty of USB ports (see USB Networking options for ESXi).
Storage
The storage options are still plentiful with the latest Intel NUC 12 Extreme, especially for those interested in running vSAN or adding additional VMFS datastores. Up to 3 x M.2 NVMe (PCIe x4 Gen 4) devices can be installed in the Intel NUC 12 Extreme, two of which are installed inside the NUC Compute Element right next to the CPU and memory, as shown below.
For those of you familiar with last year's Intel NUC 11 Extreme, you may recall it supports up to 4 x M.2 NVMe devices, which is fantastic for a VMware Homelab. With the Intel NUC 12 Extreme, this capability has regressed, most likely due to the increased size of the new CPU. With the Intel NUC 11 Extreme, 3 x M.2 could be installed within the NUC Compute Element, but as you can see with the Intel NUC 12 Extreme, we have lost one of those M.2 slots.
Additionally, the Intel NUC 12 Extreme has consolidated where additional M.2 devices can be installed. The Intel NUC 11 Extreme had an easy-to-access slot beneath the chassis that could support up to the M.2 22110 form factor. The Intel NUC 12 Extreme has removed that slot, or rather the M.2 connector: the slot opening still exists, but without the connector I am not sure what purpose it serves.
The third M.2 slot in the Intel NUC 12 Extreme has been relocated to the back of the NUC Compute Element, as shown in the picture below. To access the M.2 slot, you will need to remove the side panel and the single screw that holds both the M.2 device and the cover in place.
Even with these changes, there are still plenty of storage expansion options for the Intel NUC 12 Extreme. You can use the 2 x PCIe slots and/or the 2 x Thunderbolt 4 ports (see Thunderbolt storage options for ESXi) to add additional storage.
Graphics
The Intel NUC 12 Extreme can support up to a 12" length discrete GPU and is dual-slot capable for those with additional graphic requirements from VDI, rendering to playing with AI/ML with Kubernetes. For the integrated graphics, the Intel NUC 12 Extreme includes an Intel UHD Graphics 770, which shows up as an Alder Lake GT1. I was really crossing my fingers that the iGPU passthrough would function out of the box unlike the previous generations of the Intel NUC 11.
UPDATE (11/17/22) - Please see this blog post here for updated details on how to use the iGPU in passthrough mode with an Ubuntu VM.
A fully patched Windows 10 VM automatically detected the iGPU and even prompted to install the Intel Graphics Control Center without me having to manually load the device driver, which was quite nice. As you can see from the screenshot below, Windows Device Manager has also properly detected the device.
Now, the real test: will this survive a VM reboot?
Sadly, it looks like we are still facing the same iGPU passthrough issue we saw with the Intel NUC 11, but the behavior on the Intel NUC 12 Extreme is far more extreme (no pun intended). On the NUC 11 Extreme, the Windows VM would still boot and show the typical Error Code 43 in Windows Device Manager; on the Intel NUC 12 Extreme, the Windows VM now BSODs (Blue Screen of Death) with the message "SYSTEM THREAD EXCEPTION NOT HANDLED".
Intel has already been made aware of these driver problems, but there is currently no workaround.
Form Factor
The chassis used in the Intel NUC 12 Extreme is the same as the Intel NUC 11 Extreme, coming in at 357 x 189 x 120 mm (8L). Check out this blog post for a more detailed look and size comparison to other Intel NUCs.
ESXi
Let me start by answering the question I posed at the beginning of this article: does the ESXi CPU Scheduler understand the new Intel Alder Lake hybrid CPU architecture? The short answer is no. ESXi is currently not aware of this new architecture and expects all cores within a CPU package to have uniform characteristics.
It is recommended to disable the E-cores in the Intel NUC BIOS, following the instructions HERE, to prevent ESXi from PSOD'ing due to the non-uniform CPU cores, which results in the error "Fatal CPU mismatch on feature". If for some reason you prefer not to disable either the P-cores or E-cores, you can work around the issue with the ESXi kernel option cpuUniformityHardCheckPanic=FALSE, which needs to be appended to the existing kernel line by pressing SHIFT+O during boot. Please see this video HERE for detailed instructions on applying the workaround.
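To summarize the workaround, here is what the two steps look like in practice (the persistence command also comes up in the comments below):

# Step 1 - at the ESXi boot screen, press SHIFT+O and append to the existing boot options:
cpuUniformityHardCheckPanic=FALSE

# Step 2 - once ESXi is up, make the setting persist across reboots:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE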
Below is a screenshot of the latest ESXi 7.0 Update 3 release running on the Intel NUC 12 Extreme, which does require the Community Networking Driver for ESXi Fling (v1.2.7 or greater, as noted above) for proper networking functionality.
Although we can work around the PSOD, this is more of a hack: we really do not know what the behavior will be, since the ESXi CPU Scheduler was never designed to work with this CPU architecture. From my very limited testing, running a Windows VM and other basic workloads, I have not seen any significant difference, but results may vary based on the type and number of workloads. One thing I did notice is that ESXi reports the P-core base frequency, which on my setup is 2.40GHz, whereas the E-core base frequency is 1.80GHz. With more workloads running, you could in theory see mixed performance if a single workload gets scheduled across the two different types of cores.
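If you want to check which base frequency ESXi has picked up, esxcli can show it; this is a quick check, assuming the Core Speed field in the output reflects the frequency ESXi detected per logical CPU:

# Dump each logical CPU and filter for the reported core speed
esxcli hardware cpu list | grep "Core Speed"

This is how I noticed ESXi was using the P-core base frequency of 2.40GHz.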
It is unclear whether this new type of CPU architecture will be adopted in the enterprise datacenter, but we can certainly expect the trend to continue in the consumer space, which also includes Apple's recent Apple Silicon processors. I can definitely see this type of hybrid CPU architecture benefiting Edge deployments, and perhaps that is the next logical segment to see some form of enterprise support.
Christopher T says
Did you try booting with the E-cores disabled? Is there a BIOS option to disable the E-cores?
William Lam says
No, I don't believe you can disable the E-cores; at least with the BIOS version I've got, there's no such option. You can specify the number of cores, and that could potentially even things out between P/E cores, but I haven't had a chance to dig further into that setting.
Johnny says
This is exactly the reason we switched to Proxmox: full support straight out of the box. That ESXi can't do proper passthrough is quite a bad decision.
William Lam says
This is not an ESXi issue, it’s an issue with the graphics driver, which has already been reported to Intel
Michael Brassen says
Question: how exactly did you get your hands on version 1.2.5 of the community-network-drivers? I only have access to 1.2.2.
lamw says
Hi Michael,
We're currently working on getting v1.2.5 released (which contains the update to support Alder Lake based systems)
Guo says
Do you have an expected release date?
William Lam says
No
lamw says
v1.2.7 Driver is now available https://flings.vmware.com/community-networking-driver-for-esxi
Guo says
Is it possible to make the Panic=FALSE setting permanent?
I tried editing boot.cfg as described on this page
https://copydata.tips/2020/07/vsphere-esxi-7-0-installed-on-your-older-hardware-unsupported/
but had no luck on reboot.
lamw says
Yes, you can make the change permanent after the system boots by running the following ESXCLI command: esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
I'll update the blog post with this info
Guo says
Thanks, but it's not working for me.
The SHIFT+O at boot worked; otherwise I wouldn't be looking to make it permanent.
I typed "esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE" in both an SSH session and the console shell.
After an unsuccessful reboot, I checked "esxcli system settings kernel list | less". The cpuUniformityHardCheckPanic setting is FALSE for both Configured and Runtime, while the Default is TRUE.
Do you have any idea why this happens? SHIFT+O works while the esxcli setting does not, on the same machine.
Guo says
I did a fresh install and it works this time.
The sad thing is that hyper-threading is not active, but an E-core thread should perform much better than a hyper-threaded one.
Al says
NUC 12 Wall Street Canyon - I'm having the exact same issue, and for the life of me I can't figure it out. SHIFT+O with Panic=FALSE works to get things started. I've entered the esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE command and verified that Configured=FALSE and Runtime=FALSE. Not sure if you can change Default=TRUE. I've tried building 7.0U3 and 8.0, both with the same issue.
William Lam says
You need to apply the kernel setting during the initial boot (before you install) AND after you've rebooted (so the system boots and lets you make the change permanent). You most likely missed the second occurrence.
Rico Roodenburg says
Hi,
Thanks for the cpuUniformityHardCheckPanic tip!
Do you also see random 100% spikes in the CPU monitor?
12th Gen Intel(R) Core(TM) i7-12700 (8P+4E).
I've installed vSphere 7 Update 3d.
I can't figure it out. I don't think it is caused by the VMs, since they are "clean" installs without any workload (yes, they have Tools installed).
By the way, thanks for the great community network drivers (Ethernet Connection (17) I219-LM)!
Greetings,
Rico
Benjamin says
Hello, has anyone tried to install ESXi on the NUC 12 using RAID 1 with Intel VMD? It seems the ESXi installer does not show any disks, even with a custom image containing the latest iavmd driver.
Jerome says
Hello Benjamin, did you find any solution for NVMe RAID 1 using the Intel RST VMD controller? I tried the latest available drivers from Intel, but they are dated 2019... and I'm unable to find any solutions on the web.
Best
benjamin says
Hi Jerome, unfortunately I was not able to find any solution, and I'm now running on one SSD only, without RAID...
Spencer says
Curious: given the CPU architectural differences and the lack of official ESXi support, would a NUC 12 still outperform a NUC 11? I want to purchase a new NUC and I'm not sure which would be the better option. Any suggestions?
Paul says
Adding my 2c.
Got ESXi 7.0U3 deployed on my i9-12900K. Disabled the CPU uniformity check... all good.
Issues crop up when you start loading up the host with VMs that require multiple vCPUs (think nested ESXi hosts).
At this point the physical host will PSOD randomly with the same CPU mismatch error.
So while the hack does allow you to boot and run ESXi on the i9, there is instability once you load it up.
Maybe ESXi 8 will have something that can accommodate this new CPU architecture. (here's hoping)
maxdxs says
If you had to choose a mini PC to run ESXi, which would you pick: NUC 11, NUC 12, or a Ryzen 6xxx?
xhomer says
I guess if you run only 1-core VMs it is not a problem and you can skip the uniform-CPU check, but what happens when you run a VM with more than 1 core, for example a 2-core VM, and it gets scheduled on 1 Performance core and 1 Efficiency core? Are you running 2-4 core VMs with older OSes like Windows 2018, 2012, 2016, Windows 10, Linux, etc.? I guess this is a problem for most OSes, because Windows 10 had problems with P+E cores and you have to use Windows 11 so the OS understands P vs E cores.
gerd says
Hello,
Thanks for the detailed information.
Is it possible to disable the add-on GPU (not the Intel onboard GPU) or a PCIe slot in the BIOS?
I have been searching for this feature for a long time. Most of the time I do not need the discrete GPU, and it would be nice to disable it for energy-saving reasons...
maxdxs says
Has the E-cores issue been solved? Do you recommend an i5 or an i7 for 12th gen?
gbmaryland says
I've got ESXi fired up on a NUC 12 Pro i5-based system. So far it works well enough and I've not had any significant issues. I'm a little concerned, though, wondering what happens if you try to install VCF on a NUC 12 cluster with all of the E and P cores.
Has anyone gotten a VCF nested ESXi environment to work with NUC 12s?
Lapaj Go says
The iGPU passthrough works just fine with driver 31.0.101.4032; the problem is that stupid Windows keeps forcing the 31.0.101.2079 driver, which causes the BSoD. You need to manually uninstall it using "pnputil.exe /delete-driver oemxxx.inf /uninstall". I guess Microsoft and Intel are too lazy to figure it out, or at least to let us know.
Lapaj Go says
Every single time Intel Driver & Support Assistant gets my drivers up to date, Microsoft Windows Update acts like a total a$$hole and drags me back to a driver that is almost a YEAR old. It just did it to me AGAIN. And it's not like you can say "no" to Windows Update - it just does WHATEVER it wants with YOUR system.
Bob says
Just got myself an Intel NUC 12 Extreme with the latest BIOS update, and ESXi - be it 6.5, 6.7, or 7.0 - just hangs at the "using simple 'offset' uefi rts mapping policy" screen, even after applying cpuUniformityHardCheckPanic=FALSE. Is there anything I did not do right to get it installed? Please help. Thanks.
William Lam says
Have you checked your boot media? Try another USB device... I've seen this come up in the past, and it isn't specific to the NUC.
Bob says
After 3 different USB sticks, it can now boot into the installation properly. Thanks. Now I just need to get the right driver for the 10G NIC, as my NUC does not come with the 2.5G NIC.
Volker says
In Proxmox 7.3 the kernel has no problems with E & P cores 😉 try it and enjoy!
Mark says
I noticed on the Wall Street Canyon that the 101.4146 driver doesn't throw the BSOD thread error. When the VM starts with SVGA present, it shows Error Code 43 in Device Manager; if I disable and then re-enable the device, it shows as normal, but I can't seem to get any displays recognized. It seems really close to working, but I'm not sure what else it might be.
ohhno says
@Volker
Do you use the new NUC 12 Extreme with Proxmox?
Is it free and fully supported?
Can the NUC 12 do RAID 1 with 2 x M.2 NVMe?
Amir says
Hi William,
I'm using the NUC12DCMi9 as my office lab, but I have an issue.
Have you faced any issues with RAID?
Once I created a RAID 1, ESXi couldn't reach the storage; it doesn't show up in either storage or storage devices. I tried ESXi 8 U1 and 8, and even injected the VMD driver into ESXi 7.6, but none of them worked. I would appreciate any tip on finding a workaround.
William Lam says
I don’t use any RAID or VMD, not sure it buys you much for lab env … also this feature is for Xeon-based CPU, which none of NUCs are, so YMMV. Suggest looking at https://core.vmware.com/blog/using-intel-vmd-driver-vsphere-create-nvme-raid1 if you’ve not and see if everything checks out
Amir says
Thank you for your quick reply!
Ashton says
Hello,
Thank you for posting. Is there a place where we can track the status of the driver bug with Intel, or official ESXi support?
Thanks,
Ashton
William Lam says
ESXi is not officially supported on any of the Intel NUCs. Hardware certification is performed by hardware partners and submitted to the VMware HCL. While I've shared the details of the Windows graphics issue, I haven't heard of any plans to resolve it. For now, if you want to leverage the iGPU for passthrough, it'll need to be with a Linux guest. You can always try posting on the Intel forums, but I suspect you'll get a "this is not supported" response.