Earlier this week I found out that it is possible to passthrough the Integrated GPU (iGPU) of a standard Intel NUC, which was motivated by a question I saw on the VMware Subreddit. I have written about iGPU passthrough for Intel NUCs before, but only for the higher-end models, which at the time meant the Hades Canyon NUC.
Neat! Just found out you can actually passthrough the iGPU of standard Intel NUC. The trick looks to be enabling passthrough using ESXi Embedded Host Client UI & then you can assign it using vSphere UI #Homelab pic.twitter.com/NwuxbXwUMj
— William Lam (@lamw) June 15, 2020
To be honest, I never thought about trying this out with a standard NUC, as I figured the iGPU might not be powerful enough or warrant any interest. After sharing the news on Twitter, I came to learn from the community that not only is this desirable for various use cases, but some folks have also been doing it for some time now and have shared some of the benefits it brings for certain types of workloads.
Can't take credit. It was one of our colleagues that pointed me to it. HW transcoding went up by a factor of almost 20x. So for specific workloads the NUC is suddenly a lot more capable than before.
— Robert Jensen (@rhjensen) June 15, 2020
I've been doing this forever, when I need to crack passwords but don't need the full 7 GPU rig - all Supermicro and 1080ti GPUs these days https://t.co/GJGRV5eu8f
— Rob VandenBrink (@rvandenbrink) June 15, 2020
seems like this would be great for ESXi + Plex hardware transcoding
— Will Beers (@willbeers) June 15, 2020
Below are the instructions I used to enable iGPU passthrough on an Intel NUC 10 (Frost Canyon) with vSphere 7.0. These instructions should also be applicable to other NUC models and earlier versions of vSphere, including details around passthrough configuration persistence, an issue I know some folks have run into and which I was able to figure out as part of this experiment.
Step 1 - Enable passthrough of the iGPU. When I initially attempted this using the vSphere UI within vCenter Server, I was not able to toggle it; I had to log in to the ESXi Embedded Host Client instead.
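If you prefer the command line, the same toggle can also be done with ESXCLI. A minimal sketch, assuming ESXi 7.0 or later and that the iGPU sits at the usual 0000:00:02.0 address (verify with the list command first):

esxcli hardware pci pcipassthru list
esxcli hardware pci pcipassthru set -d 0000:00:02.0 -e true -a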
After it was enabled, I then logged into vCenter Server to enable the iGPU for passthrough there as well, since the change was not picked up automatically. If you are using vSphere 7, you can now take advantage of the new "Hardware Label" feature, which is available as part of the new Assignable Hardware capability.
Step 2 - Navigate to Configure->Hardware->Graphics->Host Graphics and change the default graphics type to "Shared Direct"
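If you do not have vCenter Server handy, this host graphics setting can also be changed with ESXCLI; note that "Shared Direct" in the UI corresponds to "SharedPassthru" on the command line. A sketch, assuming ESXi 6.5 or later:

esxcli graphics host get
esxcli graphics host set --default-type SharedPassthru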
Step 3 - Create a new VM; I used Windows 10 64-bit as the guest OS. Ensure that the VM is configured with vSphere 7 Compatibility (aka vHW 17), which is required to use the new Dynamic DirectPath I/O feature. If you are using an older version of vSphere or an earlier VM Compatibility, the legacy DirectPath I/O should still work.
In addition, I also added hypervisor.cpuid.v0=FALSE to the VM Advanced Settings. I noticed this is generally recommended when using NVIDIA GPUs, and while I was not 100% sure whether it is needed in this case, it did not seem to hurt to add it.
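For reference, this advanced setting ends up as a plain key/value entry in the VM's .vmx file, and can be added via Edit Settings->VM Options->Advanced->Edit Configuration:

hypervisor.cpuid.v0 = "FALSE"

This hides the hypervisor CPUID bit from the guest, which is the same trick commonly used to keep NVIDIA drivers from refusing to load inside a VM.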
After Windows was set up, I noticed that it detected the iGPU and automatically installed the drivers along with the Intel Graphics Command Center tool, which was pretty useful.
One issue that I noticed while looking into iGPU passthrough was that the ESXi passthrough configuration would not persist across a reboot, and folks have simply been dealing with it over the years by manually re-toggling passthrough. I too ran into this behavior; it certainly was not ideal, and I wanted to dig deeper and at least file a bug internally.
After a bit of debugging with one of the engineers, we found the real root cause, and interestingly, it had nothing to do with persistence: the configuration was being saved properly. The issue is that by default the VMkernel will automatically claim the VGA driver, and this becomes a problem because the passthrough configuration is processed much later in the boot process, causing the behavior that has been observed.
The good news is that there is an easy workaround that allows us to tell the VMkernel not to claim the VGA driver, which is passed as an ESXi kernel setting. I do want to mention one side effect: you will no longer be able to access the DCUI if you are using a monitor connected to your NUC. Once the VMkernel is starting, you will see a screen like the following, as the VGA driver is no longer being claimed.
To disable the claiming of the VGA driver, run the following ESXCLI command:
esxcli system settings kernel set -s vga -v FALSE
You can always re-enable this as long as you have access to the ESXi host. At this point, you do not have to reboot the ESXi host, but the next time it goes through a reboot, the iGPU passthrough settings will persist.
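To verify the change, or to revert it later once you have SSH or Host Client access, the same kernel option can be listed and set back to TRUE:

esxcli system settings kernel list -o vga
esxcli system settings kernel set -s vga -v TRUE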
Lastly, I do want to mention that it is still possible to access the DCUI over SSH, which may not be a very well-known capability. Simply SSH to your ESXi host and run the following two commands, which will launch a fully functional DCUI:
TERM=xterm
dcui
Here are a few more success stories of iGPU passthrough on other NUCs, like the Skull Canyon and 8th-gen models. Feel free to share your stories and configurations by leaving a comment.
Skull Canyon NUC tested and approved! After the iGPU is assigned to a VM via vSphere UI it seems reboot capable
— Thomas D. (@Oeppelman) June 16, 2020
@lamw this is getting exciting! I am using an NUC8i7BEH Iris Plus Graphics 655 GPU on ESXi 6.7. The GPU has 47 Execution Units! pic.twitter.com/aGiFAUDyKi
— vdoppler (@vdoppler) June 15, 2020
Can we also 'disable the claiming of the VGA driver' on a standard PC homelab, in order to get the onboard video available for passthrough?
William,
thanks for sharing. My question is whether this iGPU passthrough means the GPU is driven in the Windows VM for transcoding only, or whether it can also output the desktop to a monitor via HDMI/DP.
The former
Hey there, any idea how to get the iGPU on this to pass through to an external monitor, just like you can with any normal GPU and passthrough?
I can pass through multitudes of Nvidia cards and then HDMI out to a monitor just fine, but I can't seem to do it with this. Struggling majorly. I have tried every technique that works with normal GPUs to no avail, including passthru.map editing (e.g., 8086 9bca d3d0 false)
Any idea? Many thanks
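For context on the passthru.map technique mentioned above: the file lives at /etc/vmware/passthru.map on the ESXi host, and each entry maps a PCI vendor/device ID pair to a reset method and an fptShareable flag. A sketch of the format, using the commenter's example IDs:

# vendor-id device-id reset-method fptShareable
8086 9bca d3d0 false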
I have done this in PVE 6.1-7 with HDMI/DP output to a monitor successfully, but it's hard to achieve in ESXi. So far there is no way to set the iGPU to PCI address 0x18; when I set pciPassthru0.pciSlotNumber = "24" in the *.vmx, it always gets changed back silently. Going by the PVE setting, we have to set the iGPU to this virtual PCI address.
Hello, do you mean you are able to output to a monitor but you need to re-do the setup? Can you please help me achieve this? I passed the device through successfully, the driver is installed, SVGA is disabled, etc., but I'm not getting output from DP. I'm trying this with an iGPU, and the slot number is 2 in my case.
Is there any way to remove the VMware SVGA device from a virtual machine with iGPU passthrough?
You can add svga.present=false to the VM Advanced Configuration which will disable the SVGA driver
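For reference, it ends up in the .vmx as a one-line entry:

svga.present = "FALSE"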
Ok. Thanks for your answer!
Hi William,
Thanks for the details. I am getting a VM shutdown with the following error when I enable iGPU passthrough:
NUC6I5
Intel Graphics 540
Client version: 1.34.0
Client build number: 15603211
ESXi version: 7.0.0
ESXi build number: 16324942
Error:
PCI passthru device 0000:00:02.0 caused an IOMMU fault type 6 at address 0x90ffeb000. Powering off the virtual machine. If the problem persists please contact the device's vendor
Can you please advise on the problem here?
I saw this was asked some time ago, but I want to know if there is any news. Is it possible to do the passthrough and have a monitor connected to the passed-through GPU? I'm trying to do this with an iGPU but without success.
Hi, is there any possibility to change the Host Graphics settings and Dynamic DirectPath I/O without the vSphere UI? I tried to install the vSphere UI and keep getting a Postgres error.
You can configure passthrough using the vSphere UI in vCenter Server or the ESXi Embedded Host Client UI (simply open browser to your ESXi hostname/IP)
Is it possible to edit these settings using just the Embedded Host Client? There doesn't seem to be any way to find these settings.
Yes, PCIe passthrough can be configured in the ESXi Embedded Host Client. Not in front of a computer right now, but it's definitely there.
PCI passthrough can be toggled, and the VM sees the iGPU, but it throws error code 43 and won't start the driver for it. There is no obvious way to change the host graphics settings. Maybe you know of a way.
Hello,
After successfully configuring iGPU passthrough for a Windows 10 VM on a Skull Canyon, the connected monitor shows No Signal.
I tried different ESXi versions (6.5, 6.7 U2 and 7.0 U1c), as well as different monitors and a TV, connected to the HDMI or Thunderbolt port, but nothing comes out when the Windows 10 guest box starts (automatically).
The Windows box has the Intel Iris Pro 580 driver installed with no errors. The VMware SVGA device is disabled, and I can access the VM using RDP to verify that all looks good. Using a BIOS/VMXNET3 config for the VM.
Is anyone else having issues with this configuration? Any ideas?
Intel NUC Skull Canyon (NUC6i7KYK)
Bios Version: KYSKLi70.86A.0071.2020.0909.1612
Currently running ESXi-7.0U1c
Thanks!
This method of passthrough only enables the VM to use the GPU for processing but does NOT actually output video to the physical monitor
Thank you,
Is there a way the video can actually be seen on the monitor using this configuration?
No, you would need a dedicated GPU and then this would work as expected
OK. That explains a lot.
Thank you!
On a Lenovo M900t, which uses an HD 530, it does actually display on the attached monitor. That is a different story with the UHD 630 on a P340 SFF, which behaved like what you described (drivers installed fine, only no output on the monitor). Any idea what causes the monitor not to display anything with some iGPUs?
TIL - I didn't know about accessing the DCUI through an SSH session, wow thanks so much!
Hi - I'm trying this out - and am getting an error when powering up the VM:
Failed to register the device pciPassthru0 for 0:2.0 due to unavailable hardware or software support.
This was on a brand new VM, running ESXi 7.0 Update 1.
I don't see an option to set Shared Direct - wonder if that's why?
Do I understand the above discussion correctly that if I want ESXi running on an Intel NUC (in my case a 10i5FNH) to display the output from a running virtual machine (e.g. Windows 10 or LibreELEC) via the internal graphics card (on the HDMI port), it is not possible?
Passthrough can indeed be activated, but the HDMI port only displays a black screen.
And for this setup to work, do I need to have another external video card? (which of course cannot be connected to this Intel NUC in any way) 🙁
So is there any meaningful use for the internal graphics card in passthrough mode when running a virtual machine? Maybe to speed up some operations? Displaying the output, however, is probably not possible... 🙁
Same with ESXi 7.0.2 build 17867351,
model NUC11TNHv5 i5-1145G7,
and a VM with Win10 21H1.
To get around the Error Code 43, I installed a new VM with ESXi Compatibility Version 6.5 and the driver was found perfectly. Version 7 didn't work for me. Lenovo M920X
Just gave this a shot, and after rebooting the issue is the same. I'm on vSphere 7.0 Update 2c and using vHW13 (ESXi 6.5 Compatibility)
Has anyone tried to passthrough an AMD APU 5700G?
Hi.
I am using ESXi 6.5 and my Intel UHD Graphics 630 is not working - error code 43.
I am using ESXi 8 and have configured the vga kernel setting to FALSE. What I am now seeing is that the Embedded Host Client no longer works at all. Is this normal? Second, I am using a Hades Canyon NUC with two GPUs. I am passing through the AMD Vega GPU to my Linux VM, but I still have the embedded Intel GPU. Can this be used for normal VGA graphics?
Changing a VM-level setting should have no implications for the ESXi Host Client UI ... so definitely not normal, nor is this something I've seen, and I've done this many times 🙂 So it's probably something else that you might be changing, or environmental. Not sure what you mean by "normal VGA graphics"? If you're referring to the ESXi screen, yes, it'll default to that, with no change in behavior.
Well, more info... it seems that my Linux VM, upon reading the vBIOS from the AMD GPU, blows up the whole hypervisor. Everything hangs and the disk light stays lit. I rebooted the hypervisor and tried again and got the same issue. Not sure if this is a problem with the VM or the hypervisor. I am going to do some more testing, but it seems that on ESXi 8 this is the issue.
I am going to take a step back and try ESXi 7.0u3g also.
Thanks William, I had been having this issue for such a long time; in case of a power failure or unexpected restart, my VM with GPU passthrough was not able to boot.
Your fix did the trick on my old NUC6i5SYH on ESXi 7