As teased back in January, Intel has been working on a new Intel NUC ...
1st native 10GbE Intel NUC! 🐉 🥳🤐🤫 pic.twitter.com/E4lyeaFhpU
— William Lam (@lamw) January 11, 2022
Today, Intel has officially launched one of their new 12th generation Intel NUCs called the Intel NUC 12 Extreme, formerly codenamed Dragon Canyon. Some may also notice that the Intel NUC 12 Extreme looks very similar to last year's Intel NUC 11 Extreme (Beast Canyon), but there are definitely a number of differences both internally and externally.
Here is your first look at the new Intel NUC 12 Extreme and what it means for those interested in using it for a VMware Homelab.
The Intel NUC 12 Extreme includes the new Intel 12th Generation Alder Lake CPU which is also the first consumer Intel CPU that introduces a new hybrid "big.LITTLE" CPU architecture. This new hybrid CPU architecture integrates two types of CPU cores: Performance-cores (P-cores) and Efficiency-cores (E-cores) into the same physical CPU die. To learn more about how this new hybrid CPU design works, check out this resource from Intel.
The Intel NUC 12 Extreme will be available in the following configurations starting in the second quarter of 2022 (per their press release):
- NUC12DCMi9 - Intel Core i9-12900 Processor (up to 5.10 GHz)
- 16-Core (8P+8E), 24-Thread, 30M cache
- NUC12DCMi7 - Intel Core i7-12700 Processor (up to 4.90 GHz)
- 12-Core (8P+4E), 20-Thread, 25M cache
Both systems will be able to support up to a maximum of 64GB of memory using two SO-DIMM memory modules, which is similar to the previous generation of Intel NUCs. Although the CPUs can technically support up to 128GB of memory, there has not been any confirmation or even a rumor that we will be seeing a single 64GB SO-DIMM any time soon 🙁
The first question that I am sure many of you will have (or have already asked) is whether the ESXi CPU Scheduler will understand this new hybrid CPU architecture. For the answer, skip to the ESXi section at the bottom for more details.
The most significant update in my opinion for the Intel NUC 12 Extreme is the networking as it re-introduces support for two onboard network adapters, which should come in handy for running a VMware Homelab. Furthermore, something that customers have been asking for quite some time now is support for a 10GbE option and the Intel NUC 12 Extreme finally delivers!
The Intel NUC 12 Extreme includes 1 x 2.5GbE (Intel I225-LM), which is only available on the i9 model, and 1 x 10GbE (Marvell AQC113) interface as shown in the picture below.
Before folks get too excited, I do have some slightly bad news to share if you are considering ESXi with the 10GbE option. The inbox Marvell driver for ESXi does not currently support this particular consumer 10GbE network adapter. I had reached out to the Marvell team to see if they have any plans to support this device but unfortunately they currently do not. If this is something you would like to see supported, please reach out to Marvell directly and share your feedback with them.
Although the 10GbE interface cannot be leveraged by ESXi directly, all hope is not lost. Customers can still use the network adapter in passthrough mode and make it available to a specific VM. In my setup, I configured a Windows 10 VM and, after installing the required Marvell driver, I was able to use the 10GbE interface from within the VM.
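As a rough sketch, enabling passthrough for the 10GbE adapter from the ESXi Shell might look like the following. The PCI address below is a placeholder; use the one reported on your system (the same can also be done from the vSphere UI under Hardware > PCI Devices):

```shell
# Locate the Marvell (formerly Aquantia) AQC113 10GbE adapter;
# the vendor/device string in the output may vary by release
esxcli hardware pci list | grep -i -B 2 -A 8 aqc

# Enable passthrough for the device (0000:01:00.0 is a hypothetical address)
esxcli hardware pci pcipassthru set -d 0000:01:00.0 -e true

# Confirm the device is now flagged for passthrough
esxcli hardware pci pcipassthru list
```

After toggling passthrough, add the PCI device to the VM's hardware configuration and install the guest driver as usual.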
For the 2.5GbE network adapter, we have better news. Although the network adapter is similiar to the one used in the NUC 11, there were some minor differences that gave us some initial issues. Luckily, we were able to get the device working and you simply just need to have an updated version of the Community Networking Driver for ESXi Fling (requires v.1.2.7 or greater) for enablement. If you need help creating a customized ESXi ISO that contains Community Networking Driver, please see this blog post for more details.
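For reference, a minimal PowerCLI Image Builder sketch for slipstreaming the Fling driver into an installer ISO might look like this. The depot file names, profile name, and vendor string are assumptions based on a typical setup; substitute the exact versions you downloaded:

```powershell
# Add the stock ESXi offline depot and the Community Networking Driver bundle
Add-EsxSoftwareDepot .\VMware-ESXi-7.0U3-depot.zip
Add-EsxSoftwareDepot .\Net-Community-Driver-offline_bundle.zip

# Clone the standard image profile and inject the "net-community" package
$ip = New-EsxImageProfile -CloneProfile "ESXi-7.0U3-standard" -Name "ESXi-7.0U3-NUC" -Vendor "homelab"
Add-EsxSoftwarePackage -ImageProfile $ip -SoftwarePackage "net-community"

# Export a bootable installer ISO
Export-EsxImageProfile -ImageProfile $ip -ExportToIso -FilePath .\ESXi-7.0U3-NUC.iso
```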
Additional networking can also be added using a number of different options including: 2 x PCIe slots, 2 x Thunderbolt 4 ports (see 10GbE Thunderbolt options for ESXi) and there are plenty of USB ports as well (see USB Networking options for ESXi).
The storage options are still plentiful with the latest Intel NUC 12 Extreme, especially for those interested in running vSAN or having additional VMFS datastores. Up to 3 x M.2 NVMe devices can be installed on the Intel NUC 12 Extreme supporting PCIe x4 Gen 4, two of which are installed inside of the NUC Compute Element right next to the CPU and Memory as shown below.
For those of you who are familiar with last year's Intel NUC 11 Extreme, you may recall it supports up to 4 x M.2 NVMe devices, which is fantastic for a VMware Homelab. With the Intel NUC 12 Extreme, there is a regression in this capability, most likely due to the increased size of the new CPU. With the Intel NUC 11 Extreme, 3 x M.2 could be installed within the NUC Compute Element, but as you can see with the Intel NUC 12 Extreme, we have lost one of the M.2 slots.
Additionally, the Intel NUC 12 Extreme has also consolidated where additional M.2 devices can be installed. With the Intel NUC 11 Extreme, there was an easy-to-access slot beneath the chassis that could support up to an M.2 22110 form factor. The Intel NUC 12 Extreme has removed that slot, or rather the M.2 connector; the slot itself still exists, but I am not sure what purpose it serves after opening it up.
The third M.2 in the Intel NUC 12 Extreme has been relocated directly on the back of the NUC Compute Element as shown in the picture below. To access the M.2 slot, you will need to remove the side panel and the single screw that holds both the M.2 and the cover in place.
Even with these changes, there are still plenty of storage expandability options with the Intel NUC 12 Extreme. You can use either the 2 x PCIe slots and/or the 2 x Thunderbolt 4 ports (See Thunderbolt storage options for ESXi) to add additional storage.
The Intel NUC 12 Extreme can support up to a 12" long discrete GPU and is dual-slot capable for those with additional graphics requirements, from VDI and rendering to playing with AI/ML on Kubernetes. For integrated graphics, the Intel NUC 12 Extreme includes an Intel UHD Graphics 770, which shows up as an Alder Lake GT1. I was really crossing my fingers that iGPU passthrough would function out of the box, unlike the previous generation Intel NUC 11.
UPDATE (11/17/22) - Please see this blog post here for updated details on how to use the iGPU in passthrough mode with an Ubuntu VM.
Using a fully patched Windows 10 VM, it automatically detected the iGPU and even prompted to install the Intel Graphics Control Center without having to manually load the device driver, which was quite nice. As you can see from the screenshot below, even the Windows Device Manager has properly detected the device.
Now, the real test: will this survive a VM reboot?
Sadly, it looks like we are still facing the same iGPU passthrough issue that we saw in the Intel NUC 11, but the behavior in the Intel NUC 12 Extreme is far more extreme (no pun intended). Instead of booting into Windows and seeing the typical Error Code 43 in Windows Device Manager as on the NUC 11 Extreme, the Windows VM now BSODs (Blue Screen of Death) on the Intel NUC 12 Extreme, displaying the message "SYSTEM THREAD EXCEPTION NOT HANDLED".
The chassis used in the Intel NUC 12 Extreme is the same as the Intel NUC 11 Extreme, coming in at 357 x 189 x 120 mm (8L). Check out this blog post for a more detailed look and size comparison to other Intel NUCs.
Let me start off by answering the question that I had posed at the beginning of this article on whether the ESXi CPU Scheduler understands the new Intel Alder Lake hybrid CPU architecture. The short answer is no: ESXi is currently not aware of this new architecture and expects all cores within a CPU package to have uniform characteristics.
It is recommended to disable the E-cores within the Intel NUC BIOS, following the instructions HERE, to prevent ESXi from PSOD'ing due to non-uniform CPU cores, which would result in the following error: "Fatal CPU mismatch on feature". If for some reason you prefer not to disable the E-cores, then you can add the ESXi kernel option cpuUniformityHardCheckPanic=FALSE to work around the issue; it needs to be appended to the existing kernel line by pressing SHIFT+O during boot up. Please see this video HERE for detailed instructions on applying the workaround.
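For reference, the workaround looks like this: press SHIFT+O at the ESXi boot loader and append the option to the kernel line. A sketch of making it persistent afterwards from the ESXi Shell, based on my setup (use at your own risk, as this bypasses a safety check):

```shell
# Appended to the boot options after pressing SHIFT+O:
cpuUniformityHardCheckPanic=FALSE

# To persist the setting across reboots once ESXi is up:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
```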
Below is a screenshot running the latest ESXi 7.0 Update 3 release on the Intel NUC 12 Extreme, which does require the Community Networking Driver for ESXi Fling (at least v.1.2.5) for proper networking functionality.
Although we can work around the PSOD, this is more of a hack since we really do not know what the behavior will be, as the ESXi CPU Scheduler was never designed to work with this new CPU architecture. From my very limited testing, running a Windows VM and other basic workloads, I have not seen any significant difference, but it may vary based on the type and number of workloads. One thing I did notice was that ESXi was using the P-core base frequency, which in my setup was 2.40GHz, whereas the E-core base frequency is 1.80GHz. With more workloads running, in theory you could see mixed performance if a single workload gets scheduled between the two different types of cores.
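If you are curious which base frequency ESXi has picked up on your system, a quick sketch of checking from the ESXi Shell (field names and output format may vary by release):

```shell
# Reported core speed per logical CPU (value shown in Hz)
esxcli hardware cpu list | grep -i "core speed"
```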
It is unclear whether this new type of CPU architecture will be adopted in the enterprise datacenter, but we can certainly expect to see this trend continue in the consumer space, which also includes Apple's recent Apple Silicon processors. I can definitely see this type of hybrid CPU architecture benefiting Edge deployments, and perhaps that is the next logical segment to see some form of Enterprise support?