The Intel NUC Enthusiast product line is typically geared towards content creators and gamers. These Intel NUCs, including Skull Canyon, Hades Canyon and Phantom Canyon, are all equipped with an onboard discrete GPU.
The Intel NUC 12 Enthusiast, codenamed Serpent Canyon, is the latest offering in this product line. It is also the first Intel NUC to pair an Intel CPU with an Intel discrete GPU, based on the latest Intel Arc graphics.
Within the VMware community, both the Skull Canyon and Hades Canyon were extremely popular due to their additional graphics and storage capabilities at the time. Combined with my recent updated findings on iGPU passthrough for recent Intel NUCs with ESXi, the compute and graphics in this latest offering can make the Intel NUC 12 Enthusiast a pretty powerful and capable VMware setup!
Let's take a closer look at the new Intel NUC 12 Enthusiast from a VMware point of view!
Compute
The Intel NUC 12 Enthusiast is a single SKU offering that includes the following CPU configuration:
- Intel 12th Generation Core i7-12700H
- 14 Processor Cores (6P+8E), 20 threads, 24MB Intel Smart Cache, 35W TDP
- P-Cores: 4.7GHz Turbo; E-Cores: 3.5GHz Turbo
You can also find the complete technical CPU specifications here.
Network
Unlike the previous Intel NUC Enthusiast kits, which included two onboard network adapters, the Intel NUC 12 Enthusiast has only a single network adapter, an Intel i225 (2.5GbE), which is similar to other Intel NUC 12th Gen models. The network adapter is automatically recognized when using the latest ESXi 8.0 release, or with the Community Networking Driver for ESXi Fling if you want to run earlier ESXi 7.0 versions.
If you need additional networking, you can take advantage of the two Thunderbolt 4 ports using these Thunderbolt 10GbE solutions for ESXi or look at USB-based networking by using the popular USB Network Native Driver for ESXi Fling, supporting over two dozen types of USB-based network adapters.
Storage
The storage options on the Intel NUC 12 Enthusiast are quite plentiful for the form factor, with the ability to add up to three NVMe devices: 2 x M.2 PCIe x4 Gen 4 (2280) and 1 x M.2 PCIe x4 Gen 3 (2280). This means you can easily set up vSAN and also have ESXi and the ESX-OSData volume residing on the third, dedicated NVMe device, future-proofing your investment for upcoming vSphere releases. You can also forgo vSAN and set up multiple local VMFS datastores, giving you flexibility based on your requirements.
If you need even more storage, you can also use the two Thunderbolt 4 ports and add these Thunderbolt M.2 NVMe solutions for ESXi providing you with even more storage capacity and configuration options.
Graphics
The discrete GPU in the Intel NUC 12 Enthusiast is an Intel Arc A770M, the mobile version of Intel's flagship Arc A770 GPU, which has 32 Xe-cores (512 Execution Units) and supports the DirectX 12 Ultimate, OpenGL 4.6 and OpenCL 3.0 graphics APIs.
The question that I am sure many of you are wondering about is whether this new discrete Intel GPU will work with ESXi ... and I am happy to share that it is indeed fully functional for GPU passthrough to a VM, specifically when using Ubuntu Linux. For further background, please take a look at my recent blog post, which provides some updated findings on iGPU passthrough for recent Intel NUCs with ESXi.
Arc A770M
To passthrough and consume the discrete GPU, here are the high level instructions for setting up the VM:
Step 1 - Create and install an Ubuntu Server 22.04 VM (recommend using 60GB of storage or more, as additional packages will need to be installed). Once the OS has been installed, go ahead and shut down the VM.
Step 2 - Enable passthrough of the discrete GPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the discrete GPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference.
Step 3 - Optionally, if you wish to disable the default virtual graphics driver (svga), edit the VM and under VM Options->Advanced->Configuration Parameters change the following setting from true to false:
svga.present
Step 4 - Power on the VM and then follow these instructions for installing the Intel Graphics Drivers for Ubuntu 22.04. Once completed, you will be able to successfully use the discrete GPU from within the Ubuntu VM.
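As an aside, Step 3's svga.present tweak ultimately lands as a single line in the VM's .vmx file. Here is a minimal sketch of setting that key idempotently; the file path and the set_vmx helper are purely illustrative (on a real host you would normally make this change via the Configuration Parameters UI as described above, with the VM powered off):

```shell
#!/bin/sh
# Hypothetical .vmx path for this demo; substitute your VM's actual file on the datastore.
VMX=/tmp/ubuntu-dgpu.vmx
: > "$VMX"   # start from an empty stand-in file

# set_vmx: idempotently set 'key = "value"' in a .vmx file
set_vmx() {
    key=$1; val=$2; file=$3
    grep -v "^${key} " "$file" > "${file}.tmp" || true   # drop any existing entry
    printf '%s = "%s"\n' "$key" "$val" >> "${file}.tmp"
    mv "${file}.tmp" "$file"
}

# Step 3: disable the default virtual graphics adapter
set_vmx svga.present FALSE "$VMX"
cat "$VMX"
```

Running the same helper twice leaves a single entry, which is why the existing key is filtered out before the new value is appended.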
Here is a screenshot of the Ubuntu 22.04 VM, which has the default virtual graphics disabled and is connected over a remote session utilizing the discrete GPU passed through from an Intel NUC 12 Enthusiast running ESXi 8.0 (this also works on the latest ESXi 7.0 Update 3 release).
iGPU
The Intel NUC 12 Enthusiast also includes an integrated graphics (iGPU) device, and I was curious whether I could pass through this device to a different VM. Attempting to power on the VM with iGPU passthrough, I immediately received the following error:
Module DevicePowerOn power on failed. Failed to start the virtual machine.
I started to look at the VM logs (vmware.log) to see if there were any hints on why this was failing and saw the following:
PCIPassthru: Selected device 0000:03:00.0 is outside of the NUMA configuration
PCIPassthru: Device 0000:03:00.0 barIndex 0 type 3 realaddr 0x6b000000 size 16777216 flags 4
PCIPassthru: Device 0000:03:00.0 barIndex 2 type 3 realaddr 0x6000000000 size 17179869184 flags 12
PCIPassthru: Device has PCI Express Cap Version 2(size 60)
PCIPassthru: Registered a PCI device for 0000:03:00.0 vIRQ 0xff, physical MSI = Enabled (vmmInt = Enabled), IntrPin = 0
PCIPassthru: total number of pages needed (4198400) exceeds limit (917504), failing
The very last line caught my attention, as it indicated some memory limit was being reached. While searching online, I came across VMware KB 2142307, which outlines the requirements for VMDirectPath I/O and Dynamic DirectPath I/O, and decided to try one of the VM advanced settings:
pciPassthru.use64bitMMIO = "TRUE"
and that fixed the issue and allowed me to successfully power up the VM with the iGPU!
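A quick back-of-the-envelope check shows why the power-on was failing: the page count in the log corresponds to the iGPU's large BAR (barIndex 2, size 17179869184 bytes, i.e. 16GiB), which simply does not fit within the default 32-bit MMIO window. At 4KiB per page:

```shell
#!/bin/sh
# Page counts taken verbatim from the vmware.log line above, at 4KiB per page
pages_needed=4198400   # "total number of pages needed"
pages_limit=917504     # "exceeds limit"

echo "MMIO needed: $(( pages_needed * 4 / 1024 / 1024 )) GiB"   # the 16GiB BAR plus the smaller BARs
echo "32-bit MMIO window: $(( pages_limit * 4 / 1024 )) MiB"
```

This prints 16 GiB needed against a 3584 MiB window. With pciPassthru.use64bitMMIO enabled, ESXi can map the large BAR above the 4GB boundary instead, which is consistent with the realaddr of 0x6000000000 seen in the logs.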
If we look at the VM logs again, we can see it has successfully created the IOMMU mapping for the iGPU:
PCIPassthru: Selected device 0000:03:00.0 is outside of the NUMA configuration
PCIPassthru: Device 0000:03:00.0 barIndex 0 type 3 realaddr 0x6b000000 size 16777216 flags 4
PCIPassthru: Device 0000:03:00.0 barIndex 2 type 3 realaddr 0x6000000000 size 17179869184 flags 12
PCIPassthru: Device has PCI Express Cap Version 2(size 60)
PCIPassthru: Registered a PCI device for 0000:03:00.0 vIRQ 0xff, physical MSI = Enabled (vmmInt = Enabled), IntrPin = 0
PCIPassthru: successfully created the IOMMU mappings
PCIPassthru: Attempted to program PCI cacheline size 32 not a power of 2 factor of original physical 64 for device 0000:03:00.0
To passthrough and consume the iGPU, here are the high level instructions for setting up the VM:
Step 1 - Create and install another Ubuntu Server 22.04 VM (recommend using 60GB of storage or more, as additional packages will need to be installed). Once the OS has been installed, go ahead and shut down the VM.
Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference.
Step 3 - Optionally, if you wish to disable the default virtual graphics driver (svga), edit the VM and under VM Options->Advanced->Configuration Parameters change the following setting from true to false:
svga.present
Step 4 - Edit the VM and under VM Options->Advanced->Configuration Parameters add the following:
pciPassthru.use64bitMMIO = "TRUE"
Step 5 - Power on the VM and then follow these instructions for installing the Intel Graphics Drivers for Ubuntu 22.04. Once completed, you will be able to successfully use the iGPU from within the Ubuntu VM.
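Putting Steps 3 and 4 together, the iGPU VM's advanced configuration ends up with these two entries in its .vmx (the svga line only if you chose to disable the default virtual graphics adapter):

```
svga.present = "FALSE"
pciPassthru.use64bitMMIO = "TRUE"
```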
Here is a screenshot of the Ubuntu 22.04 VM, which has the default virtual graphics disabled and is connected over a remote session utilizing the iGPU passed through from an Intel NUC 12 Enthusiast running ESXi 8.0 (this also works on the latest ESXi 7.0 Update 3 release).
Now, how cool is that!? Passthrough of two GPUs to two completely different Ubuntu VMs running on ESXi! I cannot wait to hear about the use cases folks have in mind for this additional graphics processing capability 😀
Here are some additional resources that I used for setting up and accessing my two Ubuntu VMs that might be useful:
Customization
The classic Intel NUC skull, or in the case of the Intel NUC 12 Enthusiast, the serpent, can be customized using the Intel Software Studio Service if you have Windows installed on the physical Intel NUC 12 Enthusiast system.
One nice thing about using the Intel Software Studio Service to customize the colors, which include not only the mask but also the power button and status lights, is that you get immediate real-time feedback on what it actually looks like. Alternatively, you can customize the color settings in the BIOS, but that requires a reboot to see the changes each time, which can be a bit time consuming.
Speaking of the mask, similar to previous Intel NUC Enthusiast kits, it is also user replaceable with your own custom designs.
Included with the Intel NUC 12 Enthusiast are the skull and serpent masks, but you can also create your own, as you can see from the screenshot above 😀
Huge shoutout to the Simply NUC team for creating a custom WilliamLam.com logo which looks fantastic on the top (or side) of the Intel NUC 12 Enthusiast, as you can stand it up vertically as well.
Something to be aware of if you are in the market for an Intel NUC 12 Enthusiast: if you purchase from Simply NUC, you can provide your own graphics and they will print them for you as part of the order! To build and design your ideal Intel NUC 12 Enthusiast, please visit https://simplynuc.com/serpent-canyon/ for more details.
Form Factor
For those familiar with the classic 4x4 Intel NUC, the Intel NUC 12 Enthusiast is roughly 1.5x the size of the classic Intel NUC, which is still pretty compact, though not as small as the classic Intel NUC and definitely much smaller than the recent NUC 9/11/12 Extreme. The Intel NUC 12 Enthusiast has a volume of ~2.5L and measures 230 x 180 x 60 mm, for those interested.
ESXi
Finally, no surprise here, but the screenshot above shows ESXi 8.0 successfully running on the Intel NUC 12 Enthusiast; it has also been configured with vSAN using two of the NVMe devices, with the third running ESXi and the ESX-OSData volume. For those interested in running ESXi 8.0, no additional drivers are required, as the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release. If you wish to use the latest ESXi 7.x releases, you will need the Community Networking Driver for ESXi Fling for the onboard network device to be recognized.
It is recommended to disable the E-cores within the Intel NUC BIOS, following the instructions HERE, to prevent ESXi from PSOD'ing due to non-uniform CPU cores, which results in the error "Fatal CPU mismatch on feature". If for some reason you prefer not to disable either the P-cores or E-cores, you can instead add the ESXi kernel option cpuUniformityHardCheckPanic=FALSE to work around the issue; it needs to be appended to the existing kernel line by pressing SHIFT+O during boot. Please see the video HERE for detailed instructions on applying the workaround.
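The workaround above can be sketched as follows; the base boot options shown are illustrative, and the esxcli form in the comment is how I would expect to persist the setting from the ESXi Shell, so verify it against your ESXi version before relying on it:

```shell
#!/bin/sh
# At the ESXi boot menu, press SHIFT+O and append the option to the
# existing kernel line, e.g. (base options here are illustrative only):
BOOT_LINE="cdromBoot runweasel"
BOOT_LINE="$BOOT_LINE cpuUniformityHardCheckPanic=FALSE"
echo "$BOOT_LINE"

# To persist the setting across reboots, run this on the ESXi host itself
# (requires the ESXi Shell; not executed here):
#   esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
```

Note that the SHIFT+O edit only applies to that single boot, which is why the persistent kernel setting is worth applying once the host is up.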
Thank you so much William as always! You guided us again with another cool instruction and honestly you don't know how I am thankful for that, because I recently bought a Lenovo ThinkCentre M90Q Gen 3 with the Core i9 and I could not boot the ESXi 8.0 just because of god damn stupid purple screen and after I read your article in the morning in the bathroom lol, I ran to my room and tried it and it did just work so flawlessly. 👌🙏
I am completely new to VM environments but would like to start. What are the practical uses for running a virtual machine if my boot operating system is Windows 11? I just got the NUC 12 Enthusiast and am going to set it up soon. Is it possible to use ESXi to run a Mac OS virtual machine with full graphics? I read that the limitations would be no FaceTime and iMessage, but that is fine. Thanks a lot for your informative post!
ESXi is a bare-metal Type-1 hypervisor; you would not run that on top of Windows. If you want to use Windows, then look at VMware Workstation. For macOS virtualization, you need Apple hardware; there are protections in place, and you would also be violating both the VMware and Apple EULAs by using any other systems
Thanks for the quick and detailed reply. I'll look into the uses of a virtual Linux environment for my needs.
Hey William - do you have any write-ups on how much overall CPU capacity is being left on the table by disable E cores?
No
Ok one more - do you know if 'cpuUniformityHardCheckPanic=FALSE' allows for predictable core numbers with regard to E/P cores, so you can use Scheduling Affinity on the E and P cores by specifying core #?
There's no predictable behaviors given the ESXi scheduler is not aware of the underlying topology/etc. YMMV based on your testing/observations
Thanks William!
I was unable to configure iGPU passthrough of my Intel Iris Xe on an Intel NUC 13 Pro i7 using this guide with Ubuntu on ESXi 8. I also received an error that I must reserve all memory configured for the system, which I did change. Upon startup, the system freezes during the loading screen. I attempted the passthrough setting indicated above as well as the hypervisor.cpuid.v0 = false setting. Worth noting that there is not a DirectPath IO option as you indicate above; I can only add a PCI device or Dynamic PCI device. I also don't have the option you mentioned in your other article to change GPU settings: "Configure->Hardware->Graphics->Host Graphics and change the default graphics type to 'Shared Direct'"
You most likely didn't follow the instructions, as I recently had to do this again and there were no issues. None of my instructions say to add any cpuid settings. Please carefully go through the articles outlining how to pass the iGPU to Ubuntu 22.x
I tried multiple iterations, all end up the same. I only mentioned the hypervisor cpuid as an option that others had success with on older systems, however I get the same result whether this setting is there or not. I have done it with no special settings and also done it using the pciPassthru.use64bitMMIO = "TRUE" setting, latest build of ubuntu 22.04 LTS, fresh install, steps followed exactly, as soon as I add the passthrough and power back up the system hangs at black screen, I can not remote desktop or see a console screen, system does not respond or get all the way into the OS.
Worth noting - if I use Linux Mint I am able to add the GPU, but am on an older kernel, 5.15. As soon as I upgrade to 5.19.5, like Ubuntu 22.04 is on, I immediately have the same problem there. I have reinstalled ESXi 8.0u1, blown out the VMs, tried 23.04 and 22.04 LTS, same result. This is on an Intel NUC 13 Pro with the i7-1360P processor and 64GB of RAM. I provisioned various amounts of RAM, and have tried various iterations of pciPassthru.64bitMMIOSizeGB = 64, 128, 32, etc. (as well as not having this there at all). The fact that I see it (but no driver loaded yet) in Mint leads me to believe it's not a host setting, but it doesn't really explain how others can have success with no changes to the guest OS or host other than the items you mentioned. Hmmm, any other ideas? I'm grateful for any insight.
Trying arbitrary settings (which aren't applicable) also won't yield better results and probably cause more issues/frustrations.
As I've said before, I've done this successfully on my Intel NUC 13 Pro as well as ASUS PN64-E1 (which is also 13th Gen) by following https://williamlam.com/2022/11/updated-findings-for-passthrough-of-intel-nuc-integrated-graphics-igpu.html and specifically you'll want to use "Intel NUC 12 Pro" section.
I'm going to give you a high level process (which is literally re-stating the exact, and I thought clear, instructions in the blog post): you'll want to do the Ubuntu 22.04 install first (don't attach the GPU, as I know that gives you strange behaviors, which it sounds like you might have hit based on what you're describing). Once the OS is installed, shut it down, attach the GPU and then go through the Intel driver setup. I know the original Intel docs have changed, but the process can still be followed, or you can simply access the newer page and follow those directions; IIRC I used GPU Flex (since it works and recognizes the iGPU)
Since publishing my blog post, Ubuntu 23.04 can also be used, which now comes with Intel drivers, so that is another route, but the Ubuntu 22.04 instructions still work and I recently found you can even install their xpu-smi utility. One word of caution: do NOT install the Intel drivers on top of the canned ones, including on Ubuntu 23.04, as I did run into some strange packaging errors. So with 23.04 there are fewer steps, but I typically use 22.04.
Hi William, I just ordered two of these and plan to do a 2 node vsan cluster with a separate witness host. With the Thunderbolt 4 ports, do you know if those can just be connected together for 10g ethernet? Or would i need adapters?
You’ll need to get TB to Ethernet for that to work, you can’t just “connect” as there’s no ESXi driver for doing networking over TB itself like you would see on an Apple system
hi there,
I bought the Intel NUC 12 Enthusiast two weeks ago. It has an old BIOS from 2022, so I tried to update to the latest, which, because of the Intel-ASUS handover, I could only find on the ASUS site.
The BIOS update went wrong: I cannot boot anymore from my Windows 11 SSD (I had installed Windows 11 beforehand), and I was also no longer able to get into the BIOS.
So I sent it back to where I bought it, and yesterday I got a new machine.
Now I am afraid to install the new BIOS again.
I then wanted to register my NUC serial number with ASUS to get some help, but it is not recognized there (maybe because it was sold as an Intel NUC).
I think the BIOS on the ASUS site for the Intel NUC 12 Enthusiast is faulty. Now my question: does anyone here have the old Intel BIOS update file(s) and can share them with me?