The ASUS NUC 14 Pro (codenamed Revel Canyon) is the first ASUS-built NUC since ASUS acquired the NUC Division from Intel last fall. I know many of my readers have been requesting a review of the new ASUS NUCs, but to be honest, it has been pretty difficult to get samples directly from ASUS, which was not the case when I worked with Intel.
In fact, it was thanks to the folks over at SimplyNUC, who were kind enough to provide access to the ASUS NUC 14 Pro so that I could do a proper review 🙏
In case you are reading this, ASUS: I hope this was simply due to the initial integration of the NUC Division and that, in the future, getting early samples will be possible to help support our shared community of users.
At first glance, the exterior chassis of the ASUS NUC 14 Pro has minimal changes; it is the same classic 4x4 design that we have all come to love. The only noticeable differences are the raised bevel on the top of the chassis and the removal of the etched Intel NUC branding that used to sit on top.
Compute
The ASUS NUC 14 Pro uses the new Intel 14th Generation (Meteor Lake) processors, which are part of the new Intel Core Ultra Processor (Series 1) brand and include both Core Ultra 5 & Ultra 7 processors (with and without Intel vPro):
- Intel Core Ultra 5 (125H & 135H)
- Intel Core Ultra 7 (155H & 165H)
- Intel Core Ultra 5 w/vPro (135H)
- Intel Core Ultra 7 w/vPro (155H)
Unlike previous generations of Intel hybrid processors such as the Intel 12th Gen (Alder Lake) or 13th Gen (Raptor Lake), which have two types of CPU cores, P-Cores (Performance) and E-Cores (Efficiency), Meteor Lake introduces a brand new architecture with a third type of CPU core called the Low-Power Efficiency Core (LPE-Core).
For example, the ASUS NUC 14 Pro kit that I am using has an Intel Core Ultra 7 155H, which includes 6 x P-Cores, 8 x E-Cores & 2 x LPE-Cores. By using CPU-Z to benchmark the individual cores, I have observed the following association across the P, E & LPE-Cores for those interested in applying specific CPU affinities:
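Once you have the core mapping, pinning a VM to a specific subset of cores is just a single advanced setting. Below is a minimal sketch; the core IDs are hypothetical placeholders, so substitute the P-Core (or E-Core) IDs you observe on your own system.

# Hypothetical example: pin a VM to logical CPUs 0-11 (e.g. the P-Cores) by adding
# the following advanced setting to the VM's .vmx file (or via Edit Settings > VM Options >
# Advanced > Configuration Parameters in the vSphere UI):
sched.cpu.affinity = "0,1,2,3,4,5,6,7,8,9,10,11"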
An important thing to note is that the LPE-Cores can NOT be disabled in the system BIOS. From an ESXi perspective, this means that regardless of whether you disable the P-Cores or E-Cores, you will always have two types of non-uniform processors, which will require the ESXi kernel option to boot a system with non-uniform CPU processors. Furthermore, in the past you could disable just the E-Cores so that only the P-Cores were visible to ESXi and you could still benefit from Hyperthreading.
With Meteor Lake processors, because the LPE-Cores can not be disabled, you will not be able to benefit from Hyperthreading, as ESXi will automatically disable it due to the non-uniform CPU cores. On a related note, Intel has announced that it will be removing Hyperthreading with the next generation of Intel processors based on the Lunar Lake architecture, so that is something to be aware of going forward.
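If you want to verify how ESXi ended up handling Hyperthreading on your host, you can check it from the ESXi Shell; this is just a generic verification step, not anything specific to the NUC 14 Pro.

# Show whether Hyperthreading is supported, enabled and active on the host
esxcli hardware cpu global get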
For memory, the ASUS NUC 14 Pro can support up to 96GB (2 x SODIMM) using the new 48GB DDR5 non-binary SO-DIMM memory modules; I have previously shared my experience using the Mushkin Redline modules.
Network
The classic 4x4 ASUS NUC 14 Pro comes with a single Intel i226 (2.5GbE) network adapter that is fully recognized by ESXi 8.x and later. Unlike previous generations, where the "Tall" SKU allowed you to add a secondary Intel 2.5GbE adapter via an expansion module, the new ASUS NUC 14 Pro no longer includes an M.2 B-Key slot, only an M.2 M-Key interface, and there are no compatible expansion modules as of 01/05/2025. With that said, if you would like to add a secondary built-in 2.5GbE network adapter, you can look at this 3rd party lid accessory designed for the ASUS NUC 14 from GoRite, which would work with either the 4x4 or Tall SKU.
For additional networking, you can also use the two Thunderbolt 4 ports with these Thunderbolt 10GbE solutions for ESXi, or use USB-based networking with the popular USB Network Native Driver for ESXi Fling, which supports over two dozen types of USB-based network adapters.
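For reference, installing the USB Network Native Driver for ESXi Fling boils down to a single esxcli command followed by a reboot; the bundle path below is a placeholder, so use the actual offline bundle you download for your ESXi release.

# Placeholder path - substitute the Fling offline bundle that matches your ESXi version
esxcli software component apply -d /vmfs/volumes/datastore1/ESXi-VMKUSB-NIC-FLING-offline_bundle.zip
# Reboot the host for the driver to take effect
reboot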
Storage
The ASUS NUC 14 Pro supports 1 x M.2 PCIe x4 Gen 4 (2280) and 1 x M.2 PCIe x4 Gen 4 (2242) using the M-Key interface, which is completely new compared to the previous Intel NUC 13 Pro, which only supported an M.2 SATA device using a B-Key interface. This means you can have two NVMe devices, which can then be used in various combinations, from vSAN OSA/ESA to local VMFS on one device and NVMe Tiering on the second device! If you go with the "Tall" SKU of the ASUS NUC 14 Pro, you will have an additional SATA interface supporting a 2.5" SSD or HDD, giving you even more storage deployment options.
Since the 2242 slot is now an M-Key interface, I needed to purchase a new NVMe device, as my existing 2242 device was SATA-only. I got lucky with the Corsair MP600 (1TB) M.2 (2242) NVMe, which was fully recognized by ESXi 8.x and later.
If you need additional storage, you can also use the two Thunderbolt 4 ports and add these Thunderbolt M.2 NVMe solutions for ESXi providing you with more storage capacity and configuration options.
In terms of physical memory and storage installation, the ASUS NUC 14 Pro has a nice enhancement: a quick-release toggle (highlighted in red above) that removes the need to manually unscrew the four captive screws, one in each corner, which is how you would normally remove the bottom chassis cover on the last few generations of Intel NUCs.
Graphics
The ASUS NUC 14 Pro includes Intel Arc integrated graphics (iGPU) with eight Xe-Cores, and it can be successfully passed through to an Ubuntu VM, which has support for the latest Intel Graphics Drivers. For setup instructions, please refer to this blog post HERE.
Note: iGPU passthrough to a Windows VM is not functional with the Intel Graphics Driver for Windows, which throws Error Code 43.
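Once the iGPU has been passed through to an Ubuntu VM, a quick sanity check from inside the guest confirms the device is visible and that a graphics driver (i915 or xe) has bound to it; these are generic Linux commands rather than anything specific to this setup.

# Confirm the passed-through iGPU is visible to the guest and which kernel driver is in use
lspci -nnk | grep -A3 -i vga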
AI Accelerator
An exciting new capability of the ASUS NUC 14 Pro is the integrated Neural Processing Unit (NPU) that is built right into the Meteor Lake SoC (system-on-chip) and is optimized for low-power AI inferencing. The great news is that the Intel NPU can also be used by ESXi; for more setup details and usage, check out my recent blog post HERE.
For those interested in an ASUS NUC 14 Pro for the purpose of exploring and learning about AI/ML, SimplyNUC has a nice promotion where you can request a $500 discount code off of their SimplyNUC AI PC Development Kit.
ESXi
The latest release of ESXi 8.0 Update 3 runs on the ASUS NUC 14 Pro without any issues; no additional drivers are required, as the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release. If you want to install ESXi 7.x, you will need to use the Community Networking Driver for ESXi Fling to recognize the onboard network device.
As mentioned earlier, due to the non-uniform CPU cores that will exist across the P, E and LPE-Cores, you will need to apply the required ESXi kernel option to boot a system with non-uniform CPU processors.
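For those who have not applied this workaround before, a minimal sketch is shown below; I am assuming the cpuUniformityHardCheckPanic kernel option covered in my earlier posts, so double-check those instructions against your specific ESXi release.

# During the ESXi installer (or first boot), press Shift+O and append:
cpuUniformityHardCheckPanic=FALSE
# After installation, persist the setting so subsequent reboots do not PSOD:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE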
Since the ASUS NUC 14 Pro can hold 2 x NVMe devices, I was able to use one of them to enable the new NVMe Tiering capability, and with the tiering ratio configured to 400%, it gave me ~478GB of memory for my workloads! 😁
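As a rough sketch of what that host-side configuration looks like (the NVMe device path is a placeholder and the full walkthrough is in my dedicated NVMe Tiering post), the setup is just a few esxcli commands followed by a reboot.

# Enable the NVMe (memory) tiering feature
esxcli system settings kernel set -s MemoryTiering -v TRUE
# Claim an unused NVMe device as a tier device (placeholder device path)
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____<your-nvme-device>
# Configure the tiering ratio (400% of DRAM in my case)
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400
# Reboot for the changes to take effect
reboot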
One thing I was curious about was whether the new CPU architecture introduced with Meteor Lake would impact running heavier workloads, such as VMware Cloud Foundation (VCF) using Nested ESXi on a single physical host, which is currently my go-to workload for pushing the boundaries of NVMe Tiering since it requires at least 384GB of memory to deploy.
In my testing, I have observed that the lower-end SKUs of Meteor Lake may struggle with deploying VCF. I experienced several PSODs while VCF was deploying vCenter Server, so the deployment never actually finished. I did experiment with CPU affinity for a bit, since the LPE-Cores on the lower-end SKUs only run at 700MHz compared to some of the higher-end SKUs that can go up to 1GHz, but the results were so inconsistent that I can not recommend doing this on just a single node. I think if you have several ASUS NUC 14 Pro systems, you could certainly have a much better experience.
While running VCF in a nested environment on a single physical ASUS NUC 14 Pro may not be possible, it still offers plenty of resources through the use of NVMe Tiering to run more than what the physical DRAM can support, which is always a welcome addition when looking at doing more with less! I will also be reviewing another ASUS-based NUC on which I have had success deploying VCF on a single host, so that may be of interest to my readers when that review is up. I also want to make it clear that you can definitely use NVMe Tiering to help deploy VCF directly on the physical ASUS NUC 14 Pro, rather than using Nested ESXi, and having several of these kits could be a nice alternative to larger systems.
Hi William.
Are you sure about the numbering of the three core types? I wrote a blog post about the NUC 14 some time ago, but my performance tests showed the cores in a different order.
https://vmoller.dk/index.php/2024/07/19/lab-problem-esxi-8-on-nuc-with-intel-gen-14-cpu-meteor-lake-cpu-overview/
/Christian
Thanks for the comment Christian. I just re-ran CPU-Z against my Core Ultra 7 (155H) and unfortunately, the core mapping is different from what I had initially published, BUT it also differs from what you've documented. I've got another NUC 14 kit, so I'm going to see if there are any similarities ...
Great - looking forward to hearing about your findings 🙂
/Christian
OK. The other NUC 14 kit is matching up with your original assessment, and I re-ran it on the NUC 14 Pro and it's now coming out the same 😉
I've updated the blog post to show the correct association! Thanks for chiming in
No problem 🙂
Hello William
Very interesting article.
Do you think that Kingston FURY Impact PnP 64GB (2x32GB) 5600MT/s DDR5 CL40 SODIMM - KF556S40IBK2-64 would work?
Thanks
Any SO-DIMM DDR5 will work, now that it’s more common than it was a year ago
thank you
Hi William.
Micron Crucial T500 2TB PCIe Gen4 NVMe M.2 SSD
Can it be recognized by ESXi 8.0 Update 3b?
Thanks.
Micron is typically fine, haven’t heard of issues
Hi William
Do you have a recommendation for any mini PC which supports 96GB right now? How about the HP Mini 600 G9?
Thanks in advance.
Chang
Any system that supports DDR5 will be capable of 96GB (using dual non-binary 48GB SODIMMs)
Hi, in the ESXi on NUC 14 Pro article you mentioned that the ASUS NUC 14 Pro "Tall" would allow adding a secondary Intel i226 (2.5GbE) network adapter using the expansion module; however, as this NUC now uses an M.2 M-Key expansion slot rather than B-Key, the previous expansion kit can't be installed (referring to NUCIOALUWS). Do you know any part # that would fit the NUC 14 Pro "Tall" version with an M-Key slot?
Hi Alex,
That's a good point, you could try reaching out to GoRite (one of the authorized vendors) to see if they plan to offer an M.2 M-Key option
I just took a look at their site; while they don't have an expansion port option, I did see they have this lid+NIC option https://www.gorite.com/asus-nuc-14-pro-single-2-5g-rj45-industrial-server-nic-lid
Hi William,
I’ve contacted them, and the lid option is indeed the only option for a 2nd NIC on the NUC 14. According to them, they don’t plan on coming up with a front panel version either.
Thanks for your site, as always a great source of information!
Cheers!
I just picked up my ASUS NUC 14 Pro and I am having a tough time nesting ESXi in VMware Workstation. All my BIOS settings are correct, but I'm getting the error "Failed to Start Virtual Machine". Do you have any recommendation on what to look for? Thanks for any input.
Have you looked at https://williamlam.com/2024/12/quick-tip-virtualized-intel-vt-x-ept-or-amd-v-rvi-is-not-supported-on-this-platform-for-vmware-workstation.html which might be applicable since you mentioned Workstation
Thanks man, it's amazing what a checkbox can do!
Maybe I can get some help here. I have been struggling with this for a while.
Hardware
ASUS NUC 14 Pro+ Kit - Ultra 5 125H
GPU Intel Meteor Lake Arc Graphics 7d55
Software
Ubuntu 24.04 kernel 6.12.11
ESXi 8.0U3
BIOS version: RVMTL357.0046.2024.1122.1109 (newest)
GPU passthrough in ESXi active
GPU drivers installed from: https://dgpu-docs.intel.com/driver/client/overview.html
Case:
I want to use the GPU for hardware acceleration
Problem:
The GPU is not properly initialized regardless of whether I load xe or i915 as the kernel driver. See logs below.
ffmpeg -hwaccel vaapi -i black_video.mp4 -vf "scale=1280:720,hwupload" -c:v h264_vaapi -pix_fmt yuv420p -b:v 1M -c:a aac output_hwaccel.mp4
[h264_vaapi @ 0x55d03dd185c0] Failed to map output buffers: 24 (internal encoding error).
[h264_vaapi @ 0x55d03dd185c0] Output failed: -5.
[vost#0:0/h264_vaapi @ 0x55d03dd18300] Error submitting video frame to the encoder
Error while filtering: Input/output error
And after trying to use the encoder with ffmpeg, this is added to dmesg | grep xe:
[ 75.509504] xe 0000:02:02.0: [drm] Xe device coredump has been created
[ 75.509505] xe 0000:02:02.0: [drm] Check your /sys/class/drm/card0/device/devcoredump/data
[ 75.509579] xe 0000:02:02.0: [drm] GT1: failed to get forcewake for coredump capture
[ 75.511612] xe 0000:02:02.0: [drm] GT1: Engine reset: engine_class=vcs, logical_mask: 0x3, guc_id=4
[ 75.511616] xe 0000:02:02.0: [drm] GT1: Timedout job: seqno=4294967169, lrc_seqno=4294967169, guc_id=4, flags=0x0 in ffmpeg
hwinfo --display:
12: PCI 202.0: 0300 VGA compatible controller (VGA)
[Created at pci.386]
Unique ID: LHB6.oQng9K+95x3
SysFS ID: /devices/pci0000:02/0000:02:02.0
SysFS BusID: 0000:02:02.0
Hardware Class: graphics card
Device Name: "pciPassthru0"
Model: "Intel VGA compatible controller"
Vendor: pci 0x8086 "Intel Corporation"
Device: pci 0x7d55
SubVendor: pci 0x1043 "ASUSTeK Computer Inc."
SubDevice: pci 0x88c8
Revision: 0x08
Driver: "xe"
Driver Modules: "xe"
Memory Range: 0xd0000000-0xd0ffffff (ro,non-prefetchable)
Memory Range: 0xc0000000-0xcfffffff (ro,non-prefetchable)
Memory Range: 0x000c0000-0x000dffff (rw,non-prefetchable,disabled)
IRQ: 34 (1434 events)
Module Alias: "pci:v00008086d00007D55sv00001043sd000088C8bc03sc00i00"
Driver Info #0:
Driver Status: i915 is active
Driver Activation Cmd: "modprobe i915"
Driver Info #1:
Driver Status: xe is active
Driver Activation Cmd: "modprobe xe"
Config Status: cfg=new, avail=yes, need=no, active=unknown
Primary display adapter: #12
Snip from clinfo:
Platform Numeric Version 0xc00000 (3.0.0)
Platform Extensions function suffix INTEL
Platform Host timer resolution 1ns
Platform External memory handle types DMA buffer
Platform Name Intel(R) OpenCL Graphics
Number of devices 1
Device Name Intel(R) Arc(TM) Graphics
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 3.0 NEO
Device UUID 8680557d-0800-0000-0202-000000000000
Driver UUID 32342e34-352e-3331-3734-300000000000
Valid Device LUID No
Device LUID 1025-b2efff7f0000
Device Node Mask 0
Device Numeric Version 0xc00000 (3.0.0)
Driver Version 24.45.31740
Device OpenCL C Version OpenCL C 1.2
Device OpenCL C all versions OpenCL C
dmesg | grep xe:
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.12.11-zabbly+ root=/dev/mapper/ubuntu--vg-ubuntu--lv ro quiet splash iommu=o xe.force_probe=7d55 and i915.force_probe=!7d55 vt.handoff=7
[ 0.000000] NX (Execute Disable) protection: active
[ 0.075140] MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
[ 0.087139] ACPI: Reserving FACS table memory at [mem 0xea00000-0xea0003f]
[ 0.087139] ACPI: Reserving FACS table memory at [mem 0xea00000-0xea0003f]
[ 0.096470] Kernel command line: BOOT_IMAGE=/vmlinuz-6.12.11-zabbly+ root=/dev/mapper/ubuntu--vg-ubuntu--lv ro quiet splash iommu=o xe.force_probe=7d55 and i915.force_probe=!7d55 vt.handoff=7
[ 0.180262] __cpuhp_setup_state_cpuslocked+0xe4/0x2c0
[ 0.185694] PCI: ECAM [mem 0xe0000000-0xe7ffffff] (base 0xe0000000) for domain 0000 [bus 00-7f]
[ 0.256903] system 00:05: [mem 0xe0000000-0xe7ffffff] has been reserved
[ 3.521896] systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
[ 3.547219] systemd[1]: netplan-ovs-cleanup.service - OpenVSwitch configuration for cleanup was skipped because of an unmet condition check (ConditionFileIsExecutable=/usr/bin/ovs-vsctl).
[ 3.879544] RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
[ 4.269340] xe 0000:02:02.0: vgaarb: deactivate vga console
[ 4.269578] xe 0000:02:02.0: [drm] Found METEORLAKE (device ID 7d55) display version 14.00 stepping C0
[ 4.271762] xe 0000:02:02.0: [drm] Using GuC firmware from i915/mtl_guc_70.bin version 70.29.2
[ 4.282597] xe 0000:02:02.0: [drm] Using GuC firmware from i915/mtl_guc_70.bin version 70.29.2
[ 4.285878] xe 0000:02:02.0: [drm] Using HuC firmware from i915/mtl_huc_gsc.bin version 8.5.4
[ 4.287695] xe 0000:02:02.0: [drm] Using GSC firmware from i915/mtl_gsc_1.bin version 102.0.10.1878
[ 4.301277] xe 0000:02:02.0: Invalid PCI ROM data signature: expecting 0x52494350, got 0xcb80aa55
[ 4.301280] xe 0000:02:02.0: [drm] Failed to find VBIOS tables (VBT)
[ 4.319549] xe 0000:02:02.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=none:owns=io+mem
[ 4.329221] xe 0000:02:02.0: [drm] Finished loading DMC firmware i915/mtl_dmc.bin (v2.21)
[ 6.803388] xe 0000:02:02.0: [drm] [ENCODER:240:DDI A/PHY A] failed to retrieve link info, disabling eDP
[ 7.023499] xe 0000:02:02.0: [drm] vcs1 fused off
[ 7.023502] xe 0000:02:02.0: [drm] vcs3 fused off
[ 7.023503] xe 0000:02:02.0: [drm] vcs4 fused off
[ 7.023503] xe 0000:02:02.0: [drm] vcs5 fused off
[ 7.023504] xe 0000:02:02.0: [drm] vcs6 fused off
[ 7.023504] xe 0000:02:02.0: [drm] vcs7 fused off
[ 7.023505] xe 0000:02:02.0: [drm] vecs1 fused off
[ 7.023505] xe 0000:02:02.0: [drm] vecs2 fused off
[ 7.023506] xe 0000:02:02.0: [drm] vecs3 fused off
[ 7.093766] [drm] Initialized xe 1.1.0 for 0000:02:02.0 on minor 0
[ 7.226405] xe 0000:02:02.0: [drm] GT1: found GSC cv102.1.0
[ 8.113257] xe 0000:02:02.0: [drm] Allocated fbdev into stolen
[ 8.120151] fbcon: xedrmfb (fb0) is primary device
[ 8.120155] xe 0000:02:02.0: [drm] fb0: xedrmfb frame buffer device
[ 28.119430] xe 0000:02:02.0: [drm] *ERROR* GT1: GSC proxy component not bound!
dmesg | grep i915:
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz-6.12.11-zabbly+ root=/dev/mapper/ubuntu--vg-ubuntu--lv ro quiet splash iommu=o xe.force_probe=!7d55 and i915.force_probe=7d55 vt.handoff=7
[ 0.036985] Kernel command line: BOOT_IMAGE=/vmlinuz-6.12.11-zabbly+ root=/dev/mapper/ubuntu--vg-ubuntu--lv ro quiet splash iommu=o xe.force_probe=!7d55 and i915.force_probe=7d55 vt.handoff=7
[ 3.862708] i915 0000:02:02.0: [drm] Found METEORLAKE (device ID 7d55) display version 14.00 stepping C0
[ 3.863833] i915 0000:02:02.0: [drm] VT-d active for gfx access
[ 3.863837] i915 0000:02:02.0: vgaarb: deactivate vga console
[ 3.863865] i915 0000:02:02.0: [drm] Using Transparent Hugepages
[ 3.865574] i915 0000:02:02.0: Invalid PCI ROM data signature: expecting 0x52494350, got 0xcb80aa55
[ 3.865576] i915 0000:02:02.0: [drm] Failed to find VBIOS tables (VBT)
[ 3.889749] i915 0000:02:02.0: vgaarb: VGA decodes changed: olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 3.906976] i915 0000:02:02.0: [drm] Finished loading DMC firmware i915/mtl_dmc.bin (v2.21)
[ 5.955821] i915 0000:02:02.0: [drm] [ENCODER:240:DDI A/PHY A] failed to retrieve link info, disabling eDP
[ 5.970251] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.29.2
[ 5.980614] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 5.980620] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
[ 5.980838] i915 0000:02:02.0: [drm] GT0: GUC: RC enabled
[ 21.029403] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 21.029656] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
[ 21.029855] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.29.2
[ 21.041055] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 21.041060] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
[ 35.881480] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 35.881797] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
[ 35.882019] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.29.2
[ 35.894378] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 35.894388] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
[ 50.812685] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 50.812865] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
[ 50.813081] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.29.2
[ 50.823229] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 50.823238] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
[ 65.885545] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 65.885667] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
[ 65.885865] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.29.2
[ 65.897132] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 65.897137] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
[ 80.728569] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 80.728693] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
[ 80.728893] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.29.2
[ 80.740646] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 80.740650] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
[ 95.830329] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 95.830548] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
[ 95.830747] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.29.2
[ 95.841532] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 95.841534] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
[ 96.043062] i915 0000:02:02.0: [drm] CI tainted: 0x9 by intel_gt_set_wedged_on_init+0x34/0x50 [i915]
[ 96.097484] [drm] Initialized i915 1.6.0 for 0000:02:02.0 on minor 0
[ 97.870724] fbcon: i915drmfb (fb0) is primary device
[ 97.870728] i915 0000:02:02.0: [drm] fb0: i915drmfb frame buffer device
Instructions on how to consume the graphics are already outlined in the blog post ... I'm not sure why you're attempting to use the ESXi SR-IOV drivers, which are NOT applicable for Intel consumer GPUs (see https://williamlam.com/2023/11/esxi-support-for-intel-igpu-with-sr-iov.html for details)
Thanks for your answer. It makes sense it doesn't work if I use the wrong drivers. I have read this blog post again, but still cannot see where I can find the information about where the consumer drivers are located. Can you point me to exactly where I can find them?
As of Ubuntu 24.04, the drivers are simply "baked" in and it just works OOTB, sorry for that not being clear. I've done this a few times with previous versions of Ubuntu, which required drivers (see https://williamlam.com/2022/11/updated-findings-for-passthrough-of-intel-nuc-integrated-graphics-igpu.html for high-level steps), but with newer releases those steps are no longer needed. I've not tried other Linux-based systems, so YMMV based on how well they're supported by Intel
Well, both a fresh Ubuntu 24.04 with kernel 6.8.0-52 generic and 24.10 with kernel 6.11 using the i915 kernel driver give me:
64.789019] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 64.789135] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
[ 64.789742] i915 0000:02:02.0: [drm] GT0: GuC firmware i915/mtl_guc_70.bin version 70.20.0
[ 64.799999] i915 0000:02:02.0: [drm] GT0: GUC: submission enabled
[ 64.800003] i915 0000:02:02.0: [drm] GT0: GUC: SLPC enabled
but I guess that's not related to ESXi
I have now installed Ubuntu 24.04 on an HP Elite Mini 600 G9, and got HW transcoding with the GPU to work without any problems. dmesg | grep i915 shows no errors. If I install ESXi 8.0U3 on the same PC, do passthrough on the GPU, install a VM with Ubuntu 24.04 and assign the GPU to the VM, I get the exact same errors in dmesg | grep i915 as I get on my Intel NUC 14 Pro.
I get the same error on both machines: "failed to find VBIOS tables" and "Invalid ROM got 0x00000, expected 0x0000"
There must be something I'm missing that I can't figure out, since I get exactly the same error on two different computers with different hardware
It looks like there's updated i915 driver instructions for Meteor Lake https://dgpu-docs.intel.com/driver/client/overview.html#installing-client-gpus-on-ubuntu-desktop-24-10
Have you given these a try? You may want to do a clean install to ensure the older drivers aren't being loaded or conflicting, and see if that helps
I have now tried the suggested drivers. Unfortunately, it didn't change anything. The common errors are still:
3.338005] i915 0000:02:02.0: Invalid PCI ROM data signature: expecting 0x52494350, got 0xcb80aa55
[ 3.338007] i915 0000:02:02.0: [drm] Failed to find VBIOS tables (VBT)
and:
64.797714] i915 0000:02:02.0: [drm] GPU HANG: ecode 12:0:00000000
[ 64.797903] i915 0000:02:02.0: [drm] GT0: Resetting chip for stopped heartbeat on bcs'0
But as previously written, I get exactly the same errors on an HP Elite Mini 600 G9, which has an Intel Alder Lake-S GT1 (UHD Graphics 770) and is a few generations older. So I believe it is more likely something in ESXi that is not configured correctly, I just don't know what
Hi, I am new to the NUC-ESXi family. Running ESXi 8.0 U3 on a NUC 14 Pro Intel Core Ultra 7 155H with nearly the same config as described.
I am confused by the USB Fling drivers. I have two Sodola (2.5GbE) USB adapters with the Realtek 8156(B) chipset and only 100Mbit is shown. The Fling is installed with the latest version 76444229. 0x0bda and 0x8156 are confirmed.
Any hint is welcome.
OMG.... after further investigation, I suspected the cables. And yep, the two cables I used for the USB adapters were poorly made. I tried with proper Cat6 cables and guess what.... the full 2500 Mbit/s is visible. Definitely something to have a look at from the beginning.
Oh wow ... a physical issue ... yeah, it's something we don't always assume. I've had another user who had a bad SATA cable connector on one of the taller models, and that was the last thing they replaced, including the board. Glad to hear everything is working!
Up and running. NVMe tiering in place. 477 GB RAM 😎 next is vSAN. Many thanks for all the articles you wrote and all the instructions given. Very helpful for me as a homelab newcomer.
With the change in behavior from Broadcom, I'm considering this NUC with Proxmox 🙂 (not a troll)
Wow. This has been super helpful. I wish I’d come across your post sooner. I just purchased the new ASUS NUC 14 Pro+ NUC14RVSU9 Mini PC: Core Ultra 9 185H, 96GB RAM + 4TB NVMe, 16 cores up to 5.1GHz, Arc Graphics, NPU, dual Thunderbolt 4 ports, WiFi 6E + BT 5.3, Win 11 Pro
My core use case is to run VMware vCenter (I’m licensed for 7.0) to run my vendor's NAC solutions and to test interoperability, etc. I’m a noob at managing and implementing VMware. So, will this new NUC seamlessly support my use case? TIA
Not sure what’s needed for your NAC solution, but conceptually, yes, it is more than sufficient for your use case (pretty common for testing/development)
Much appreciated William. Question, apologies if this isn't the platform for this easy one. When installing ESXi 8.0 on my NUC 14 i9 with a 4TB NVMe SSD via Workstation 17 Pro, does it matter whether I choose NVMe over SCSI for HDD storage? I’m getting a weird error:
type: ">" not supported between none type" and "float"
I’m confused … what is running on bare-metal? ESXi, which is a Type-1 hypervisor, or VMware Workstation, which is a Type-2?
Hi William, apologies for the confusion. I was having difficulty loading ESXi 8.0.3 [Releasebuild-24022510 x86_64] from USB boot. It would load up and start extracting, then I would get "HW incompatibility detected cannot start cr0=0x8001003d cr2=0x0 cr3=0x60d000 cr4=0x14012c FMS=06/aa/4 uCode=0x20"
*PCPU. 0: SIIIIIIIII
ultimately outputting a pink screen - I can upload it if possible.
PanicvPanickIn@vmkernel and 2660: Fatal CPU mismatch on feature "Cores per tile"; cpu2 value = 0x16, but cpu0 value = 0xb same thing again mentioning Hyperthreads per core"
and more - Unsure how to upload the image to share.
I updated the BIOS on the NUC 14 Pro+ to the latest as recommended by ASUS. I disabled Hyper-V from settings and via PowerShell to stop services from running and prevent a conflict.
Workstation 17.6.2 was still indicating an error: "Hardware Virtualization is not a feature of the CPU or is not enabled in the BIOS". I went back into the BIOS and validated that under Advanced/Security, Intel Virtualization Technology was enabled.
Ultimately, I was able to use Workstation 17.5.2 and get my VM imported, but had to disable the Intel VT option in WS 17.5.
Ideally, I want to run ESXi 8.0 from the boot option, considering I have a 4TB NVMe SSD (Win 11 Pro OS), and not have to leverage Workstation. This NUC 14 Pro+ is more than capable, but I'm hitting some snags. TIA
Please read the blog post on which you’re commenting, as this topic has been covered extensively, including why the PSOD happens and how to mitigate it
Understood, much appreciated. Will do.
Hi William, I found the thread here: https://williamlam.com/2023/01/video-of-esxi-install-workaround-for-fatal-cpu-mismatch-on-feature-for-intel-12th-gen-cpus-and-newer.html#google_vignette
Super grateful, coffee coming your way.. Much appreciated!
Hey!
I can't get the iGPU passthrough to work on ESXi 8.0.3 on an Intel NUC 14 Pro+ with the 185H CPU. I don't have ReBAR support from what I can tell, could that be the issue?
sudo dmesg | grep i915
use xe.force_probe='7d55' and i915.force_probe='!7d55'
[ 3.299121] i915 0000:02:03.0: Using 8 cores (0-7) for kthreads
[ 3.299674] i915 0000:02:03.0: VT-d active for gfx access
[ 3.299677] i915 0000:02:03.0: vgaarb: deactivate vga console
[ 3.299698] i915 0000:02:03.0: Using Transparent Hugepages
[ 3.300078] i915 0000:02:03.0: Invalid PCI ROM data signature: expecting 0x52494350, got 0xcb80aa55
[ 3.300079] i915 0000:02:03.0: [drm] Failed to find VBIOS tables (VBT)
[ 3.322755] i915 0000:02:03.0: [drm] Finished loading DMC firmware i915/mtl_dmc_ver2_12.bin (v2.12)
[ 4.811580] i915 0000:02:03.0: [drm] [ENCODER:236:DDI A/PHY A] failed to retrieve link info, disabling eDP
[ 4.811736] i915 0000:02:03.0: [drm] Unknown port:B
[ 4.811754] WARNING: CPU: 2 PID: 604 at /var/lib/dkms/intel-i915-dkms/1.24.8.5.241129.8/build/drivers/gpu/drm/i915/display/intel_hdmi.c:2076 intel_hdmi_init_connector+0x3d5/0x430 [i915]
[ 4.811861] Modules linked in: overlay qrtr vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vsock cfg80211 binfmt_misc nls_iso8859_1 i915(OE+) intel_rapl_msr intel_rapl_common i915_compat(OE) xe drm_gpuvm drm_exec gpu_sched drm_buddy drm_suballoc_helper intel_uncore_frequency_common drm_ttm_helper ttm drm_display_helper intel_vsec(OE) pmt_telemetry(OE) pmt_class(OE) cec rc_core i2c_algo_bit rapl vmw_balloon video wmi joydev i2c_piix4 input_leds i2c_smbus vmw_vmci mac_hid serio_raw sch_fq_codel dm_multipath msr efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 crct10dif_pclmul crc32_pclmul polyval_clmulni polyval_generic ghash_clmulni_intel sha256_ssse3 sha1_ssse3 psmouse ahci vmxnet3 libahci pata_acpi vmw_pvscsi aesni_intel crypto_simd cryptd
[ 4.811911] RIP: 0010:intel_hdmi_init_connector+0x3d5/0x430 [i915]
[ 4.812025] ? intel_hdmi_init_connector+0x3d5/0x430 [i915]
[ 4.812110] ? intel_hdmi_init_connector+0x3d5/0x430 [i915]
[ 4.812180] ? intel_hdmi_init_connector+0x3d5/0x430 [i915]
[ 4.812246] intel_ddi_init+0x7db/0x8c0 [i915]
[ 4.812318] intel_modeset_init_nogem+0x64c/0x7d0 [i915]
[ 4.812397] ? intel_irq_postinstall+0x309/0x480 [i915]
[ 4.812458] ? intel_irq_install+0xc4/0x250 [i915]
[ 4.812519] i915_driver_probe+0x153b/0x1c60 [i915]
[ 4.812579] i915_pci_probe+0xc5/0x3a0 [i915]
[ 4.812657] i915_pci_register_driver+0x23/0x30 [i915]
[ 4.812715] __init_backport+0x4e/0x140 [i915]
[ 4.812769] ? __pfx___init_backport+0x10/0x10 [i915]
[ 4.823927] i915 0000:02:03.0: GT0: GUC: load failed: status = 0x400000A0, time = 5ms, freq = 2150MHz, ret = 0
[ 4.823934] i915 0000:02:03.0: GT0: GUC: load failed: status: Reset = 0, BootROM = 0x50, UKernel = 0x00, MIA = 0x00, Auth = 0x01
[ 4.823937] i915 0000:02:03.0: GT0: GUC: firmware signature verification failed
[ 4.834911] i915 0000:02:03.0: GT0: GUC: load failed: status = 0x80007134, time = 10ms, freq = 2350MHz, ret = 0
[ 4.834914] i915 0000:02:03.0: GT0: GUC: load failed: status: Reset = 0, BootROM = 0x1A, UKernel = 0x71, MIA = 0x00, Auth = 0x02
[ 4.834983] i915 0000:02:03.0: GT0: GuC initialization failed -ENXIO
[ 4.834986] i915 0000:02:03.0: GT0: Enabling uc failed (-5)
[ 4.834988] i915 0000:02:03.0: GT0: Failed to initialize GPU, declaring it wedged!
[ 4.912195] [drm] Initialized i915 1.6.0 for 0000:02:03.0 on minor 0
[ 18.827945] i915 0000:02:03.0: GT0: GUC: mmio request 0x4100: no reply 4100
[ 19.770931] i915 0000:02:03.0: GT0: GUC: mmio request 0x4100: no reply 4100
[ 22.334911] i915 0000:02:03.0: GT0: GUC: mmio request 0x4100: no reply 4100
jhagman-adm@odbdock01:~$ hwinfo --display
08: PCI 203.0: 0300 VGA compatible controller (VGA)
[Created at pci.386]
Unique ID: QOEa.oQng9K+95x3
SysFS ID: /devices/pci0000:02/0000:02:03.0
SysFS BusID: 0000:02:03.0
Hardware Class: graphics card
Device Name: "pciPassthru0"
Model: "Intel VGA compatible controller"
Vendor: pci 0x8086 "Intel Corporation"
Device: pci 0x7d55
SubVendor: pci 0x1043 "ASUSTeK Computer Inc."
SubDevice: pci 0x88c8
Revision: 0x08
Driver: "i915"
Driver Modules: "i915"
Memory Range: 0xd0000000-0xd0ffffff (ro,non-prefetchable)
Memory Range: 0xc0000000-0xcfffffff (ro,non-prefetchable)
Memory Range: 0x000c0000-0x000dffff (rw,non-prefetchable,disabled)
IRQ: 38 (161 events)
Module Alias: "pci:v00008086d00007D55sv00001043sd000088C8bc03sc00i00"
Driver Info #0:
Driver Status: xe is active
Driver Activation Cmd: "modprobe xe"
Driver Info #1:
Driver Status: i915 is active
Driver Activation Cmd: "modprobe i915"
Config Status: cfg=new, avail=yes, need=no, active=unknown
Primary display adapter: #8
jhagman-adm@odbdock01:~$ vainfo
Trying display: wayland
Trying display: x11
error: can't connect to X server!
Trying display: drm
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva error: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
libva info: va_openDriver() returns 1
libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_1_20
libva error: /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so init failed
libva info: va_openDriver() returns -1
vaInitialize failed with error code -1 (unknown libva error),exit
What version of Ubuntu are you using? I think I saw this with earlier releases; you could try the latest 24.10 to see if that helps