While searching for drivers for another Intel NUC platform, I saw that Intel had recently published new graphics drivers for Linux, including support for their new Intel Arc GPUs. This of course got me wondering whether these drivers would help at all with the issues regarding passthrough of the integrated graphics (iGPU) on recent Intel NUCs 🤔
As a refresher, starting with the 11th Gen Intel NUCs, passthrough of the iGPU to Windows had stopped working and would result in the infamous Windows Error Code 43; even worse, on the 12th Gen Intel NUCs, Windows would simply BSOD after the initial reboot. The behavior is similar for Linux guests: while Linux handles the issue more gracefully and does not crash the OS, iGPU passthrough is not functional there either.
To be honest, I had low expectations that these new Linux graphics drivers would behave any differently, but for the sake of persistence I decided to give it one more go. I had access to both an Intel NUC 12 Extreme (Dragon Canyon) and an Intel NUC 12 Pro (Wall Street Canyon), both of which include recent Intel iGPUs.
🤯🤯🤯 is the only way I can describe what I discovered after my testing!
I am super excited to share that I was able to successfully pass through the iGPU from both an Intel NUC 12 Extreme and an Intel NUC 12 Pro to an Ubuntu 22.04 VM running on ESXi 7.0 Update 3 and ESXi 8.0 without any issues!
Furthermore, I unexpectedly discovered that the console output from the Ubuntu 22.04 VM was also able to output video directly to the physical monitor plugged into the Intel NUC 12 Extreme! 😲 I believe this might actually be the first time this has ever been documented to work with iGPU passthrough from ESXi. I had to blink a few times to make sure I was not dreaming; I was not expecting anything to show up on the physical monitor, so it took me by complete surprise.
Not only does this prove that iGPU passthrough for recent Intel NUCs can function correctly with ESXi, it also suggests that the current Windows Error Code 43 issue is caused by the Intel graphics drivers for Windows. Hopefully someone from the Intel graphics team will consider looking at this issue again, as it should be possible to get this working on a Windows operating system as well, but it would require some support from Intel.
I also ran additional experiments using both an Intel NUC 11 Extreme (Beast Canyon) and an Intel NUC 11 Pro (Tiger Canyon), but I was not successful in using the iGPU from these platforms, so it looks like something may have changed in the drivers and/or hardware that makes this viable again starting with the Intel NUC 12th Gen platforms. I should also mention that video output to the physical monitor only worked when using the Intel NUC 12 Extreme and did not work with the Intel NUC 12 Pro, perhaps because of the Intel DG1 versus UHD iGPU difference; just something to be aware of if you are looking for that functionality.
Intel NUC 12 Extreme
Here is a screenshot of the Ubuntu 22.04 VM, which has the default virtual graphics disabled and is connected via a remote session, utilizing the iGPU passthrough from an Intel NUC 12 Extreme running ESXi 8.0 (this also works on the latest ESXi 7.0 Update 3 release).
Here are the high-level instructions for setting this up:
Step 1 - Create and install an Ubuntu Server 22.04 VM (I recommend 60GB of storage or more, as additional packages will need to be installed). Once the OS has been installed, shut down the VM.
Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference. (A command-line sketch of the passthrough toggle follows below.)
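For reference, the same passthrough toggle can also be done from the ESXi Shell on recent releases. This is only a minimal sketch; the device address below is an example, and availability of this esxcli namespace depends on your ESXi version:
esxcli hardware pci pcipassthru list                           # find the iGPU's address, e.g. 0000:00:02.0
esxcli hardware pci pcipassthru set --device-id=0000:00:02.0 --enable=true
# a host reboot may still be required before the device can be assigned to the VM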
Step 3 - Disable the default virtual graphics device (svga), as leaving it enabled can lead to strange behavior. Edit the VM and, under VM Options->Advanced->Configuration Parameters, change the following setting from true to false:
svga.present
Note: This setting is also required if you wish to output the display from the Ubuntu VM to the physical monitor connected to the Intel NUC 12 Extreme. A command-line sketch of applying the setting is included below.
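If you prefer doing this from the ESXi Shell instead of the UI, here is a minimal sketch, assuming the VM is powered off; the datastore path and VM name are placeholders for your own environment:
VMX=/vmfs/volumes/datastore1/ubuntu-igpu/ubuntu-igpu.vmx       # placeholder path to the VM's .vmx
grep -q '^svga.present' "$VMX" \
  && sed -i 's/^svga.present.*/svga.present = "FALSE"/' "$VMX" \
  || echo 'svga.present = "FALSE"' >> "$VMX"
vim-cmd vmsvc/getallvms                                        # look up the VM ID
vim-cmd vmsvc/reload <vmid>                                    # reload the VM config so the change is picked up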
Step 4 - Power on the VM and then follow these instructions for installing the Intel graphics drivers for Ubuntu 22.04. Once completed, you will be able to use the iGPU from within the Ubuntu VM; a few optional sanity checks are sketched below.
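Once the drivers are in place, here are a few optional sanity checks from within the Ubuntu VM (vainfo and intel-gpu-tools are stock Ubuntu 22.04 packages; exact output will vary by platform):
lspci -nnk | grep -iA3 'vga\|display'     # the passed-through iGPU should report "Kernel driver in use: i915"
sudo apt-get install -y vainfo intel-gpu-tools
vainfo                                    # should list the iHD media driver and supported codec profiles
sudo intel_gpu_top                        # live GPU engine utilization, handy while testing transcoding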
With the ability to output the display from the Ubuntu VM to the physical monitor connected to the Intel NUC 12 Extreme, you may also be interested in passing through additional devices like a keyboard and mouse, as shown in the screenshot below, so that you can interact with the system. To do so, you can follow the instructions found in this blog post for more details.
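As a pointer (the linked blog post has the full walkthrough), the advanced setting commonly used to allow HID devices such as a keyboard and mouse to be offered as USB passthrough devices is the vmx entry below; treat it as an assumption to verify against that post:
usb.generic.allowHID = "TRUE"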
Intel NUC 12 Pro
Here is a screenshot of the Ubuntu 22.04 VM, which has the default virtual graphics disabled and is connected via a remote session, utilizing the iGPU passthrough from an Intel NUC 12 Pro running ESXi 8.0 (this also works on the latest ESXi 7.0 Update 3 release).
Here are the high-level instructions for setting this up:
Step 1 - Create and install an Ubuntu Server 22.04 VM (I recommend 60GB of storage or more, as additional packages will need to be installed). Once the OS has been installed, shut down the VM.
Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference.
Step 3 - Disable the default virtual graphics device (svga), as leaving it enabled can lead to strange behavior. Edit the VM and, under VM Options->Advanced->Configuration Parameters, change the following setting from true to false:
svga.present
Step 4 - Power on the VM and then follow these instructions for installing the Intel graphics drivers for Ubuntu 22.04. Once completed, you will be able to use the iGPU from within the Ubuntu VM.
Intel NUC 12 Enthusiast
UPDATE (11/18/22) - Both the discrete Intel Arc A770M and the Intel iGPU can be successfully passed through and consumed by an Ubuntu VM. Please see this recent blog post for more details.
Additional Notes
- I also ran additional experiments using both an Intel NUC 11 Extreme (Beast Canyon) and an Intel NUC 11 Pro (Tiger Canyon) with the exact same instructions, but I was not successful in using the iGPU from these platforms, so it looks like something may have changed in the drivers and/or hardware that makes this viable again starting with the Intel NUC 12th Gen platforms and newer
- The VM console output to the physical monitor only worked when using an Intel NUC 12 Extreme and did not work when using an Intel NUC 12 Pro, perhaps because of the Intel DG1 versus UHD iGPU difference; something to be aware of if you are looking for this additional functionality
- Since the graphics drivers were built specifically for Ubuntu, they may or may not work on other Linux distributions
- Here are some additional resources that I used for setting up and accessing my Ubuntu VMs that might be useful
Ricardo Matos says
I've been able to do this with the Hades Canyon for years. I know it's not really an iGPU since it's a PCIe-connected Radeon GPU... but I've been running a terminal VM on my Hades connected to a screen for years.
B says
Is it possible to split the Intel GPU to support multiple VMs?
William Lam says
No. vSGA is not possible
maxdxs says
How has the performance been? I've tried to use my 12700 but I got hundreds of purple screens, then I had to move to Proxmox :-/
Jacky says
Hi, how do I resolve the PCIPassthruLate issue when adding a PCI device in ESXi 8.0? Thanks a lot.
Jayson says
Also suffering from the PCIPassthruLate error on ESXi 8. I've tried everything I can think of, including disabling the console with esxcli system settings kernel set -s vga -v FALSE. Any ideas?
thedavix says
Hi,
I just want to confirm that the passthrough of the iGPU on the Intel NUC 11 Extreme (Beast Canyon) NUC11BTMi9 is working now with, at least, BIOS DBTGL579 (0064). It works out of the box with a brand new install of Ubuntu 22.04 (server or desktop, it doesn't matter).
I have been using the Beast Canyon for more than a month now to transcode on Plex via Docker successfully.
Here is a summary of what I did on a fresh Ubuntu 22.04 install (based on the steps above plus another post of William's):
0) Install Ubuntu, finish the installation, and shut the VM down
1) Add new PCI iGPU device to VM
2) Change the advanced parameter on the VM: svga.present = false
3) Start the VM and configure the iGPU
3.1) Configuring permissions
stat -c "%G" /dev/dri/render*    # check which group owns the render nodes (typically "render")
groups ${USER}                   # check whether your user is already a member of that group
sudo gpasswd -a ${USER} render   # add your user to the render group
newgrp render                    # pick up the new group membership without logging out
3.2) (Optional) Verify Media drivers installation
sudo apt-get install vainfo
export DISPLAY=:0.0
vainfo
3.3) reboot just to be sure
4) To prevent having to re-toggle the passthrough of the iGPU after an ESXi reboot, run the following command on ESXi via SSH (a quick verification is sketched below):
esxcli system settings kernel set -s vga -v FALSE
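A quick way to confirm the setting took effect (ESXi 7.x/8.x assumed):
esxcli system settings kernel list -o vga    # the Configured value should now show FALSE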
Thanks to William for all the help on this!
Note: As said, I didn't need to install the latest Intel drivers (step 4 of William's tutorial above) to get the iGPU working.
However, I did test and install the latest Intel drivers, but I encountered issues with HDR tone mapping when doing HW transcoding; it wasn't working properly. That is why I stayed with the original Ubuntu drivers.
Daan van Doodewaard says
Cool. Can we also get this working in Windows 10/11?
thedavix says
Just installed a fresh Windows 10 to try it out for you and unfortunately I do have the Code 43 issue (https://williamlam.com/2021/07/passthrough-of-intel-iris-xe-integrated-gpu-on-11th-gen-nuc-results-in-error-code-43.html)
With the original or updated graphics drivers it didn't work.
I did try hypervisor.cpuid.v0 = false and svga.present = false as well,
but nothing changed unfortunately.
I saw some posts from people who got it working, but it would be good to know what they did.
Tibi says
I was searching for a mini PC for homelab purposes and your guide on ESXi for Gen 12 helped me make the decision. I'm having a small issue with the iGPU passthrough. After I enable the passthrough I can't find where I need to select DirectPath IO or Dynamic DirectPath IO. I add the iGPU to the VM (Ubuntu v24) and the VM is not starting...
I'm new to ESXi, but it beats me where to find the DirectPath IO or Dynamic DirectPath IO option; maybe the problem is due to not configuring this.
Any advice would help...
Tibi says
v22 ... there's a typo :))
William Lam says
I'm confused by your statements as they seem contradictory ... you just stated you "add the iGPU to the VM", which is where you'd see the option of specifying whether to use DirectPath IO or Dynamic DirectPath IO. If you're new to ESXi, I'd recommend reviewing the ESXi documentation on these topics for more information.
Tibi says
So, updates: I managed to install the Intel driver via SSH and all looks good, but now I can't even get a console from ESXi. SSH is working but no video driver...
tgrigorescui says
I mean it is working via remote desktop; I have checked the driver via the lspci and hwinfo commands and it seems to be active, but the console window from ESXi is not displaying anything...
William Lam says
Folks - If you set svga.present = FALSE, it'll disable the default VMware graphics driver, which will also disable the VM console (e.g. when you click on the VM thumbnail to access the console). This is expected; that is exactly what the setting does. Some folks may NOT want the virtual graphics enabled, and it's marked as an optional step for this very reason.
Tibi says
First, I would like to thank you for replying; I can imagine there is quite a considerable amount of comments.
So, if I leave svga set to true, I will have the benefit of the iGPU passed to the VM and also the console view, right?
William Lam says
The iGPU has nothing to do with that setting. It controls whether the virtual graphics is enabled for the VM or not. If you still want the VM console, then leave the default settings, but as I mentioned, some folks don't want it enabled after passing through additional graphics.
Tibi says
Thanks! I just enabled it, but with both the iGPU and svga set to true, the system hangs while booting Ubuntu.
Andriy Sharandakov says
It doesn't hang; it just stops video output to the SVGA once the iGPU driver is loaded by the kernel. Check if you can access your VM via SSH, and you will be surprised.
Anyway, I haven't found an elegant way to keep output to the SVGA after the iGPU driver is loaded. There is a "nomodeset" kernel boot option that prevents switching the output to the iGPU, so the VM console stays available. However, in that case the iGPU won't be initialized on boot, and I haven't managed to get Plex to use hardware transcoding. I believe it should be possible but requires more tinkering.
On the other side, with the SVGA device disabled (svga.present = false), hardware transcoding works as expected, since there is only a single video adapter and it is used by default.
BTW, here are the official driver installation guides from Intel: https://dgpu-docs.intel.com/installation-guides/index.html#intel-arc-gpus
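For reference, a minimal sketch of enabling nomodeset on Ubuntu, assuming a default GRUB configuration:
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&nomodeset /' /etc/default/grub   # prepend nomodeset to the kernel command line
sudo update-grub && sudo reboot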
Sebigeli says
Hello, I'm on the latest version of Ubuntu and I've followed the tutorial on Intel's site, but Plex doesn't seem to want to do hardware transcoding even though everything is set up (it works with my Nvidia GPU on Windows). I'm running ESXi 7.0 U3 with an i5-12400. Could someone help me?
hwinfo --display
20: PCI 1300.0: 0380 Display controller
[Created at pci.386]
Unique ID: jaVc.1H4YcO93p6C
Parent ID: abAj.11lousfpsJ2
SysFS ID: /devices/pci0000:00/0000:00:17.0/0000:13:00.0
SysFS BusID: 0000:13:00.0
Hardware Class: graphics card
Device Name: "pciPassthru0"
Model: "Intel Display controller"
Vendor: pci 0x8086 "Intel Corporation"
Device: pci 0x4692
SubVendor: pci 0x1462 "Micro-Star International Co., Ltd. [MSI]"
SubDevice: pci 0x7d25
Revision: 0x0c
Driver: "i915"
Driver Modules: "i915"
Memory Range: 0xfc000000-0xfcffffff (rw,non-prefetchable)
Memory Range: 0xd0000000-0xdfffffff (ro,non-prefetchable)
I/O Ports: 0x5000-0x503f (rw)
IRQ: 69 (2621 events)
Module Alias: "pci:v00008086d00004692sv00001462sd00007D25bc03sc80i00"
Driver Info #0:
Driver Status: i915 is active
Driver Activation Cmd: "modprobe i915"
Config Status: cfg=new, avail=yes, need=no, active=unknown
Attached to: #7 (PCI bridge)
27: PCI 0f.0: 0300 VGA compatible controller (VGA)
[Created at pci.386]
Unique ID: _+Pw.jBKePf3JQB5
SysFS ID: /devices/pci0000:00/0000:00:0f.0
SysFS BusID: 0000:00:0f.0
Hardware Class: graphics card
Model: "VMware VMWARE0405"
Vendor: pci 0x15ad "VMware, Inc."
Device: pci 0x0405 "VMWARE0405"
SubVendor: pci 0x15ad "VMware, Inc."
SubDevice: pci 0x0405
Driver: "vmwgfx"
Driver Modules: "vmwgfx"
I/O Ports: 0x1070-0x107f (rw)
Memory Range: 0xe8000000-0xefffffff (ro,non-prefetchable)
Memory Range: 0xfe000000-0xfe7fffff (rw,non-prefetchable)
Memory Range: 0x000c0000-0x000dffff (rw,non-prefetchable,disabled)
IRQ: 16 (330 events)
I/O Port: 0x00 (rw)
Module Alias: "pci:v000015ADd00000405sv000015ADsd00000405bc03sc00i00"
Driver Info #0:
XFree86 v4 Server Module: vmware
Config Status: cfg=new, avail=yes, need=no, active=unknown
Primary display adapter: #27
William Lam says
I recall reading on Reddit that someone had to disable the VMware graphics for it to attempt rendering on the iGPU, as it defaults to the first device it sees.
Since this is more Plex-specific, you may also want to post in the Plex support channels in case someone has come across this, but from the VM's point of view, the iGPU is enabled and ready.
Maikel says
Can confirm: disable the VMware svga and it works for HW transcoding.
Tibi says
Haha, I had that issue. As long as you don't set svga.present to false, it will not HW transcode.
Sebigeli says
That's working, thanks!
Tibi says
Great! Happy it's OK now.
Sebigeli says
I just did more tests and I feel like it doesn't always work; the HW transcoding doesn't activate all the time. Any idea? My movies are played from a SharePoint mounted via rclone. I have a good fiber internet line. My movies are between 1 and 100 GB.
Tibi says
Well, that means the problem is somewhere else.
The first requirement for HW transcode is to have svga.present=false.
Probably you have some other setting that is affecting it, or go back and check that the driver is active.
In my case it's playing from a NAS with the folders mounted via an NFS share.
Sebigeli says
Hello,
On my Plex VM, the svga.present=false parameter is present, so that's OK.
I did not install VMware Tools; does that matter?
I use xrdp in case of problems; my Linux has a graphical interface, could that be a problem?
I tried copying a movie locally (from my SharePoint to the home folder) and the HW transcoding doesn't work any better.
My user is called Plex and it's a member of these groups:
plex adm cdrom sudo dip video plugdev render lpadmin lxd sambashare
Is it normal that I get an error when I run the "intel_gpu_top" command with my regular user?
From what I've seen, some movies transcode well and others don't, but I don't know why...
Are there any other settings to enable in the BIOS?
The films have the same format; it's strange...
Do you have Discord to help me? Thanks!
Sebigeli says
I just found it: it's HDR tone mapping that was breaking the HW transcoding on some movies. Do you need to install something extra to be able to use it? Thanks!
jasonchurchward says
For anyone running Plex in Docker who was having issues with hardware transcoding working reliably, this is the fix.
I run DietPi and Docker, but when I set svga.present=false under the VMware advanced settings, the VM just wouldn't boot even though I had PCI passthrough for my onboard iGPU configured. I instead left that alone and configured Docker to use my iGPU PCI passthrough as the primary device.
To do this, set it in Docker so that /dev/dri/renderD129 (the iGPU) is used as /dev/dri/renderD128 (the VMware video device), which effectively makes it the primary, and hardware transcoding then works every time, even with HDR tone mapping enabled. You configure this under "advanced container settings" in Portainer or in a Docker compose/stack file with the entry below.
devices:
  - /dev/dri/renderD129:/dev/dri/renderD128
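For context, a minimal compose sketch of that mapping; the image name and render-node numbering are taken from the comment above as assumptions, so check /dev/dri inside your VM first:
services:
  plex:
    image: plexinc/pms-docker:latest
    devices:
      # present the passed-through iGPU's render node as the default node inside the container
      - /dev/dri/renderD129:/dev/dri/renderD128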
Maikel says
William,
I have a question you might be able to answer regarding passthrough of the iGPU.
I'm running a vSAN cluster with 3 identical hosts.
Based on the information on Dynamic DirectPath I/O, I should be able to vMotion and DRS the VM that's using the iGPU, for example from host-1 to host-2, and utilize the iGPU on host-2.
For whatever reason I can't migrate the VM to any other host while it is powered on, even though the iGPU and all other resources are available and sufficient for that VM to run there. When it is powered off I have no issues, but this defeats the information regarding Dynamic DirectPath I/O.
Have you tried it or know the answer?
William Lam says
You've actually misunderstood what Dynamic DirectPath I/O provides; it is not vMotion. Please see https://blogs.vmware.com/vsphere/2020/03/vsphere-7-assignable-hardware.html for more information. Hint: DRS "initial placement".
Maikel says
Hi William,
Thank you for taking the time to respond.
I totally missed that and was convinced that DRS and vMotion should have worked.
My bad! It does work as intended on my side for initial placement.
Thanks again!
William Lam says
Yeah, unless the device is virtualized (e.g. vGPU w/ NVIDIA), the passthrough device is associated with the VM, which prevents vMotion.
surivlee115 says
For those who get "Module PCIPassthruLate power on failed" on ESXi 8 / vSphere 8, all you need to do is:
* Disable BOTH "Hardware virtualization" and "I/O MMU" for the CPU.
William Lam says
You do NOT need to disable those features in the physical system. Please see https://williamlam.com/2022/11/esxi-on-intel-nuc-12-enthusiast-serpent-canyon.html; halfway down there is a workaround.
Shane Malden says
I understand this is about the GPU, but I was wondering if there is a post or any thoughts on passing through the WiFi? Would it work with an Ubuntu VM, as I've had issues with USB adapters not installing too?
Ricardo Matos says
Try IPFire. Worked great for me
William Lam says
I had tried recently given some of the reports, but no luck on 20.04 or 22.04 (which required installing backport drivers). With 23.04, the drivers seem to be there but it fails with UCODE -110, which seems to indicate firmware issues. It probably needs to be reported to Intel, but it's more than likely a driver + firmware combo. This was attempted on 13th Gen.
Ramon says
Not ESX, but this post might be relevant to the subject.
https://www.derekseaman.com/2023/06/proxmox-ve-8-windows-11-vgpu-vt-d-passthrough-with-intel-alder-lake.html
I tested this on my NUC 13 and I was able to run a Windows 10 VM with passthrough of the Iris Xe graphics using Intel's provided driver. Not sure if this means the Intel driver is not the core of the issue at hand? Without following the above-mentioned approach I got the same Error 43 as I got on ESXi 8.
William Lam says
This works because Proxmox IS using Linux, hence the drivers are functional. This is still an Intel (Windows) driver issue ...
Nick S. says
I'd like to share some knowledge here since this post gave me so much information already. When using ESXi 8.0 and an Intel NUC 11 Extreme, passing through the iGPU WITHOUT setting `svga.present = false` causes a bizarre, slow memory leak on any VM you connect the GPU to.
This is obviously a bug, but it should be noted in the original post that the setting called out above is not optional, unless you want to troubleshoot memory leak issues on any VM you attempt to use the iGPU with. Details:
https://communities.vmware.com/t5/ESXi-Discussions/Memory-Leak-on-VMware-ESXi-8-0-1-21495797-when-using-iGPU/td-p/2986286
https://github.com/k3s-io/k3s/discussions/8315
Original credit for the solution from here:
https://www.reddit.com/r/Ubuntu/comments/kwyhr0/comment/jcyjj11/?utm_source=share&utm_medium=web2x&context=3
William Lam says
Hi Nick,
I'm curious whether you've attempted a newer version of Ubuntu (23.04) and whether you're still seeing the same behavior? Just trying to rule out older versions, as I see mention of 20.04 being used. Also, what version of ESXi are you observing this with, and have you tried the latest version to rule out any potential fixes?
In addition, when this occurs, can you also provide a vm-support bundle from the ESXi host, which would help us?
Flo says
Does anybody know if Intel is working on a driver update to fix the GPU passthrough issue?
William Lam says
No, they're not
Bruce.Z says
Appreciate the guide! I've successfully passed through the iGPU of an Intel N100 CPU to an Ubuntu 22.04 VM, and got RDP working too.
Just a suggestion: don't use the Ubuntu desktop version; install the server version and then GNOME + xrdp manually. I began with the desktop version and failed to make it work in many ways, but the server version just follows what is described in the guides here.
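A minimal sketch of that server-plus-xrdp route (package names are the stock Ubuntu 22.04 ones; GNOME is pulled in by ubuntu-desktop-minimal):
sudo apt-get update
sudo apt-get install -y ubuntu-desktop-minimal xrdp
sudo systemctl enable --now xrdp            # then connect over RDP with a local, non-root user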
Tom says
This is an older blog post, but still, you might be interested to know that the console output to the physical monitor also seems to work with Windows VMs. Or better said, with "a" Windows VM I have, since I haven't tried any others. I'm on 8.0 U2, but it worked before the upgrade (so on 8.0) as well. I likewise couldn't believe my eyes when that Windows desktop first popped up on my second monitor, because as far as I knew up until then, that should be impossible. I use DisplayPort switches to switch my monitors between my private and work desktops, so at first I was sure that my work desktop had booted up and my DisplayPort switch was on input 2 instead of 1, but then I noticed the Windows Server Manager pop up, and neither of my desktops runs Windows Server. That was a head-scratcher. Anyway, thank you for all the work you do on here; you have no idea how much your blog posts have helped both myself and all my (VMware-admin and VMware-engineer) coworkers.