
Passthrough of Integrated GPU (iGPU) for standard Intel NUC

06.18.2020 by William Lam // 35 Comments

Earlier this week I found out that it is possible to passthrough the Integrated GPU (iGPU) of a standard Intel NUC, which was motivated by a question I saw on the VMware Subreddit. I have written about iGPU passthrough for Intel NUCs before, but only for the higher-end models, which at the time meant the Hades Canyon NUC.

Neat! Just found out you can actually passthrough the iGPU of standard Intel NUC. The trick looks to be enabling passthrough using ESXi Embedded Host Client UI & then you can assign it using vSphere UI #Homelab pic.twitter.com/NwuxbXwUMj

— William Lam (@lamw) June 15, 2020

To be honest, I never thought about trying this out with a standard NUC as I figured the iGPU may not be powerful enough or warrant any interest. After sharing the news on Twitter, I came to learn from the community that not only is this desirable for various use cases, but some folks have also been doing this for some time now and have shared some of the benefits it brings for certain types of workloads.

Can't take credit. It was one of our colleagues that pointed me to it. HW transcoding went up by a factor of almost 20x. So for specific workloads the NUC is suddenly a lot more capable than before.

— Robert Jensen (@rhjensen) June 15, 2020

I've been doing this forever, when I need to crack passwords but don't need the full 7 GPU rig - all Supermicro and 1080ti GPUs these days https://t.co/GJGRV5eu8f

— Rob VandenBrink (@rvandenbrink) June 15, 2020

seems like this would be great for ESXi + Plex hardware transcoding

— Will Beers (@willbeers) June 15, 2020

Below are the instructions I used to enable iGPU passthrough on an Intel NUC 10 (Frost Canyon) with vSphere 7.0. These instructions should also be applicable to other NUC models and earlier versions of vSphere, including details on the passthrough configuration persistence issue that I know some folks have run into, which I was able to figure out as part of this experiment.

Step 1 - Enable passthrough of the iGPU. When I initially attempted this using the vSphere UI within vCenter Server, I was not able to toggle it; I had to log in to the ESXi Embedded Host Client instead.
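On ESXi 7.0 this toggle should also be available from the command line via the esxcli hardware pci pcipassthru namespace. The following is only a sketch; the device address is typically 0000:00:02.0 for the iGPU on these NUCs, but confirm it with the list command first:

# List PCI devices and their current passthrough state to find the iGPU address
esxcli hardware pci pcipassthru list

# Enable passthrough for the iGPU (commonly 0000:00:02.0) and apply it immediately
esxcli hardware pci pcipassthru set -d 0000:00:02.0 -e true -a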


After it was enabled, I then logged into vCenter Server to enable the iGPU for passthrough there as well, since the change was not picked up automatically. If you are using vSphere 7, you can now take advantage of the new "Hardware Label" feature, which is available as part of the new Assignable Hardware capability.


Step 2 - Navigate to Configure->Hardware->Graphics->Host Graphics and change the default graphics type to "Shared Direct"
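If you prefer to script this host-level setting, ESXCLI should be able to make the same change. This is a sketch which assumes the "Shared Direct" option in the UI corresponds to the SharedPassthru graphics type (a host reboot or restart of the graphics services may be needed for it to take effect):

# Show the current host graphics configuration
esxcli graphics host get

# Change the default graphics type to Shared Direct
esxcli graphics host set --default-type SharedPassthru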


Step 3 - Create a new VM; I used Windows 10 64-bit as my OS. Ensure that the VM is configured with vSphere 7 Compatibility (aka vHW 17), which is required to use the new Dynamic DirectPath I/O feature. If you are using an older version of vSphere or an earlier VM Compatibility, the legacy DirectPath I/O should still work.
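If you want to sanity-check the VM Compatibility from the ESXi shell, the VM's configuration dump includes the virtual hardware version. This is only a sketch; the <vmid> placeholder and the grep pattern are illustrative:

# Note the Vmid of the newly created VM
vim-cmd vmsvc/getallvms

# Dump the VM configuration and confirm it reports vmx-17 (vSphere 7 Compatibility)
vim-cmd vmsvc/get.config <vmid> | grep "vmx-"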


In addition, I also added hypervisor.cpuid.v0=FALSE to the VM Advanced Settings. I noticed this is generally recommended when using NVIDIA GPUs; I was not 100% sure whether it is needed in this case, but it did not seem to hurt to add it.
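For reference, this is roughly how the entry looks if you add it directly to the VM's .vmx file instead of through the Edit Settings UI (make the change while the VM is powered off):

hypervisor.cpuid.v0 = "FALSE"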

After Windows was set up, I noticed that it detected the iGPU and automatically installed the drivers along with the Intel Graphics Center tool, which was pretty useful.


One issue that I noticed while looking into iGPU passthrough was that the ESXi passthrough configuration would not persist across a reboot, and folks have simply been dealing with it over the years by manually re-toggling passthrough. I ran into this behavior too; it certainly was not ideal, and I wanted to dig deeper and at least file a bug internally.

After a bit of debugging with one of the engineers, we found the real root cause and, interestingly, it had nothing to do with persistence of the configuration, which was being saved properly. The issue is that by default the VMkernel will automatically claim the VGA driver, and this becomes a problem because the passthrough configuration is processed much later in the boot process, causing the behavior that has been observed.

The good news is that there is an easy workaround: we can tell the VMkernel not to claim the VGA driver, which is passed as an ESXi kernel setting. I do want to mention one side effect: you will no longer be able to access the DCUI if you are using a monitor connected to your NUC. Once the VMkernel starts, you will see a screen like the following, as the VGA driver is no longer being claimed.


To disable the claiming of the VGA driver, run the following ESXCLI command:

esxcli system settings kernel set -s vga -v FALSE

You can always re-enable this as long as you have access to the ESXi host. At this point, you do not have to reboot the ESXi host, but the next time it goes through a reboot, the iGPU passthrough settings will persist.
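If you later want to check the current value or restore the default behavior, something along these lines should work (a sketch using the same vga kernel option; the change takes effect on the next reboot):

# Show the current value of the vga kernel option
esxcli system settings kernel list -o vga

# Restore the default so the VMkernel claims the VGA device again
esxcli system settings kernel set -s vga -v TRUE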

Lastly, I do want to mention that it is still possible to access the DCUI over SSH, which may not be a well-known capability. Simply SSH to your ESXi host and run the following two commands, which will launch a fully functional DCUI:

TERM=xterm
dcui


Here are a few more success stories of iGPU passthrough on other NUCs, like the Skull Canyon and 8th Gen; feel free to share your stories and configurations by leaving a comment.

Skull Canyon NUC tested and approved! After the iGPU is assigned to a VM via vSphere UI it seems reboot capable

— Thomas D. (@Oeppelman) June 16, 2020

@lamw this is getting exciting! I am using an NUC8i7BEH Iris Plus Graphics 655 GPU on ESXi 6.7. The GPU has 47 Execution Units! pic.twitter.com/aGiFAUDyKi

— vdoppler (@vdoppler) June 15, 2020


Categories: ESXi // Tags: ESXi 7.0, GPU, Intel NUC, Passthrough

Comments

  1. Jeff says

    06/18/2020 at 2:41 pm

    Can we also 'disable the claiming of the VGA driver' on a standard PC homelab, in order to get the onboard video available for passthrough?

  2. Benny Yu says

    07/26/2020 at 8:32 am

    William,
    thanks for sharing. My question is
    whether this iGPU passthrough means it is only used inside the Windows VM for transcoding,
    or whether it can also output the desktop to a monitor via HDMI/DP.

    • William Lam says

      07/26/2020 at 11:55 am

      The former

  3. Needhelp says

    07/28/2020 at 11:27 pm

    Hey there, any idea how to get the GPU to passthru on this to an external monitor, just like how you can by using any normal GPU and using passthru?

    I can passthru multitudes of Nvidia cards and then HDMI out to a monitor just fine, but I can't seem to do it with this. Struggling major. Have tried every technique that works with normal GPUs to no avail, including passthru.map editing (eg, 8086 9bca d3d0 false)

    Any idea? Many thanks

  4. Benny Yu says

    08/01/2020 at 4:28 am

    I have done this in PVE 6.1-7 with HDMI/DP output to a monitor successfully, but it's hard to achieve in ESXi. So far there is no way to set the iGPU to PCI address 0x18; when I set pciPassthru0.pciSlotNumber = "24" in the *.vmx, it always gets changed back silently. Going by the PVE setup, the iGPU has to be set to this virtual PCI address.

    • Victor says

      12/26/2020 at 8:30 pm

      Hello, do you mean you are able to output to a monitor but you need to re-do the setup? Can you please help me achieve this? I passed it through successfully, the driver installed, disabled SVGA, etc., but I'm not getting output from DP. I'm trying to do this with an iGPU, and the slot number is 2 in my case.

  5. Ville says

    10/09/2020 at 2:27 am

    Is there any way to remove the VMware SVGA device from a virtual machine with iGPU passthrough?

    • William Lam says

      10/10/2020 at 3:46 am

      You can add svga.present=false to the VM Advanced Configuration, which will disable the SVGA driver

      • Ville says

        10/10/2020 at 4:31 am

        OK. Thanks for your answer!

  6. pschakravarthi says

    11/15/2020 at 5:06 am

    Hi William,
    Thanks for the details. I am getting a VM shutdown with the following error when I enable iGPU passthrough:

    NUC6i5
    Intel Graphics 540

    Client version:
    1.34.0

    Client build number:
    15603211

    ESXi version:
    7.0.0

    ESXi build number:
    16324942

    Error:
    PCI passthru device 0000:00:02.0 caused an IOMMU fault type 6 at address 0x90ffeb000. Powering off the virtual machine. If the problem persists please contact the device's vendor

    Can you please advise the problem here?

  7. Victor says

    12/26/2020 at 8:20 pm

    I saw this was asked some time ago, but I want to know if there is any news. Is it possible to do the passthrough and have a monitor connected to the GPU that was passed through? I'm trying to do this for an iGPU but with no success.

  8. farel says

    01/04/2021 at 9:11 am

    Hi, is there any way to change the Host Graphics settings and Dynamic DirectPath I/O without the vSphere UI? I tried to install the vSphere UI and keep getting a Postgres error.

    • William Lam says

      01/04/2021 at 9:39 am

      You can configure passthrough using the vSphere UI in vCenter Server or the ESXi Embedded Host Client UI (simply open browser to your ESXi hostname/IP)

      • Timir Datta says

        05/19/2021 at 11:28 am

        Is it possible to edit these settings using just the Embedded Host Client? There doesn't seem to be any way to find these settings.

        • William Lam says

          05/19/2021 at 11:55 am

          Yes, PCIe passthrough can be configured in the ESXi Embedded Host Client. I'm not in front of a computer, but it's definitely there

          • Timir Datta says

            05/19/2021 at 3:07 pm

            PCI passthrough can be toggled, and the VM sees the iGPU, but it throws error code 43 and won't start the driver for it. There is no obvious way to change the host graphics settings. Maybe you know of a way.

  9. Plam Ki says

    01/14/2021 at 10:47 am

    Hello,

    After successfully configuring the iGPU passthrough for a Windows 10 VM on the Skull Canyon, the connected monitor shows No Signal.
    I tried different ESXi versions (6.5, 6.7 U2 and 7.0 U1c), as well as different monitors and a TV, connected to the HDMI or Thunderbolt port, but nothing comes out when the Windows 10 guest box starts (automatically).
    The Windows box has the Intel Iris Pro 580 driver installed with no errors. The VMware SVGA is disabled, and I can access it using RDP to verify that all looks good. Using BIOS/VMXNET3 config for the VM.

    Anyone having issues with this configuration? Any ideas?

    Intel NUC Skull Canyon (NUC6i7KYK)
    Bios Version: KYSKLi70.86A.0071.2020.0909.1612
    Currently running ESXi-7.0U1c

    Thanks!

    • William Lam says

      01/14/2021 at 10:52 am

      This method of passthrough only enables the VM to use the GPU for processing but does NOT actually output video to the physical monitor

      • Plam Ki says

        01/14/2021 at 11:32 am

        Thank you,
        Is there a way the video can actually be seen on the monitor using this configuration?

        • William Lam says

          01/14/2021 at 11:47 am

          No, you would need a dedicated GPU and then this would work as expected

          • Plam Ki says

            01/14/2021 at 12:44 pm

            OK. That explains a lot.

            Thank you!

      • Snowy says

        01/30/2022 at 8:04 am

        On a Lenovo M900t, which uses an HD 530, it actually does display on the attached monitor. It's a different story with the UHD 630 on a P340 SFF, which behaved like what you described (drivers installed fine, just no output on the monitor). Any idea what causes the monitor not to display anything with some iGPUs?

  10. Juan says

    01/26/2021 at 4:34 pm

    TIL - I didn't know about accessing the DCUI through an SSH session, wow thanks so much!

  11. DU says

    03/02/2021 at 9:08 am

    Hi - I'm trying this out and am getting an error when powering up the VM:

    Failed to register the device pciPassthru0 for 0:2.0 due to unavailable hardware or software support.

    This was on a brand new VM running ESXi 7.0 Update 1.

    I don't see an option to set Shared Direct - wonder if that's why?

  12. KiwiiCZ says

    06/03/2021 at 4:29 am

    Do I understand the above discussion correctly that if I want ESXi running on an Intel NUC (in my case a 10i5FNH) to display the output from a running virtual machine (e.g. Windows 10 or LibreELEC) via the internal graphics card (on the HDMI port), it is not possible?
    Passthrough can indeed be activated, but the HDMI port only displays a black screen.
    And for this setup to work, would I need another external video card? (which of course cannot be connected to this Intel NUC in any way) 🙁
    So is there any meaningful use for an internal graphics card in passthrough mode for running a virtual machine? Maybe to speed up some operations? Displaying the output is apparently not possible... 🙁

  13. Grzegorz Kulikowski says

    08/11/2021 at 7:33 am

    Same with ESXi 7.0.2, 17867351
    Model: NUC11TNHv5 i5-1145G7
    and a VM with Win10 21H1.

  14. bulli says

    08/26/2021 at 2:13 pm

    To get around the Error Code 43, I installed a new VM with ESXi Compatibility Version 6.5 and the driver was found perfectly. Version 7 didn't work for me. Lenovo M920X

    • William Lam says

      08/27/2021 at 10:41 am

      Just gave this a shot and after you reboot, the issue is the same. I'm on vSphere 7.0 Update 2c and using vHW13 (ESXi 6.5 Compatibility)

  15. L says

    12/20/2021 at 9:14 am

    Has anyone tried to passthrough an AMD APU 5700G?

  16. Khandaker Shahriar Amin says

    07/23/2022 at 6:44 am

    Hi.
    I am using ESXi 6.5 and my Intel UHD Graphics 630 is not working - error code 43.

  17. Steven Petrillo says

    03/06/2023 at 8:39 am

    I am using ESXi 8 and have set the vga kernel option to false. What I am now seeing is that the Embedded Host Client does not work at all any longer. Is this normal? Second, I am using a Hades Canyon NUC with two GPUs. I am passing through the AMD Vega GPU to my Linux VM, but I still have the embedded Intel GPU. Can this be used for normal VGA graphics?

    • William Lam says

      03/06/2023 at 9:28 am

      Changing VM level setting should have no implications on the ESXi Host Client UI ... so definitely not normal nor is this something I've seen and I've done this many times 🙂 So its probably something else that you might be changing or environmental. Not sure what you mean by "normal VGA graphics"? If you're referring to ESXi screen, yes, it'll default to that and no change in behavior

      • Steven Petrillo says

        03/06/2023 at 10:10 am

        Well, more info... it seems that my Linux VM, upon reading the vBIOS from the AMD GPU, blows up the whole hypervisor. Everything hangs and the disk light is lit. I rebooted the hypervisor and tried again and got the same issue. Not sure if this is a problem with the VM or the hypervisor. I am going to do some more testing, but it seems on ESXi 8 this is the issue.

        • Steven Petrillo says

          03/06/2023 at 10:34 am

          I am going to take a step back and try ESXi 7.0u3g also.

  18. raphael says

    02/02/2025 at 9:35 am

    Thanks William, I had been having this issue for such a long time; in case of a power failure or unexpected restart, my VM with GPU passthrough was not able to boot.
    Your fix did the trick on my old NUC6i5SYH on ESXi 7.

