
Updated findings for passthrough of Intel NUC Integrated Graphics (iGPU) with ESXi

11.17.2022 by William Lam // 29 Comments

While searching for drivers for another Intel NUC platform, I saw that Intel had recently published new graphics drivers for Linux, including support for their new Intel Arc GPUs. This of course got me curious: could these new drivers help at all with the issues around passthrough of the integrated graphics (iGPU) on recent Intel NUCs? 🤔

As a refresher, starting with the 11th Gen Intel NUCs, passthrough of the iGPU to Windows had stopped working and would result in the infamous Windows Error Code 43; even worse, on the 12th Gen Intel NUCs, Windows would simply BSOD after the initial reboot. The behavior is similar for Linux operating systems: while Linux handles the issue more gracefully by not crashing the OS, iGPU passthrough is not functional there either.

To be honest, I had low expectations that these new Linux graphics drivers would behave any differently, but for the sake of persistence I decided to give it one more go. I had access to both an Intel NUC 12 Extreme (Dragon Canyon) and an Intel NUC 12 Pro (Wall Street Canyon), both of which include recent Intel iGPUs.

🤯🤯🤯 is the only way I could describe what I had discovered after my testing!

I am super excited to share that I was able to successfully pass through the iGPU from both an Intel NUC 12 Extreme and an Intel NUC 12 Pro to an Ubuntu 22.04 VM, on both ESXi 7.0 Update 3 and ESXi 8.0, without any issues!

Furthermore, I unexpectedly discovered that the console output from the Ubuntu 22.04 VM was also able to output video directly to the physical monitor plugged into the Intel NUC 12 Extreme! 😲 I believe this might actually be the first time this has ever been documented to work with an iGPU passthrough from ESXi. I had to blink a few times to make sure I was not dreaming, since I was not expecting anything to show up on the physical monitor; it took me by complete surprise.

Not only does this prove that iGPU passthrough for recent Intel NUCs can function correctly with ESXi, it also indicates that the current Windows Error Code 43 issue lies in the Intel graphics drivers for Windows. Hopefully someone from the Intel graphics team will consider looking at this issue again, as it should be possible to get this working on a Windows operating system as well, but it would require some support from Intel.

Intel NUC 12 Extreme

Here is a screenshot of an Ubuntu 22.04 VM with the default virtual graphics disabled, connected via a remote session and using the iGPU passed through from an Intel NUC 12 Extreme running ESXi 8.0 (this also works on the latest ESXi 7.0 Update 3 release).


Here are the high-level instructions for setting this up:

Step 1 - Create and install an Ubuntu Server 22.04 VM (60GB of storage or more is recommended, as additional packages will need to be installed). Once the OS has been installed, shut down the VM.
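
If you prefer to script this step, here is a minimal sketch using the govc CLI (assuming govc is installed and the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables point at your host; the datastore, network and VM names are examples, and the flags reflect govc's vm.create help at the time of writing):

# Create a 2 vCPU / 4GB VM with a 60GB disk, left powered off for the OS install
govc vm.create -c 2 -m 4096 -g ubuntu64Guest -disk 60GB -net "VM Network" -ds datastore1 -on=false ubuntu-2204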

Step 2 - Enable passthrough of the iGPU under the ESXi Configure->Hardware->PCI Devices settings, then add a new PCI Device to the VM and select the iGPU. You can use either DirectPath IO or Dynamic DirectPath IO; it does not make a difference.
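
If you prefer the command-line, the same toggle can be performed from an SSH session on the ESXi host. This is a sketch assuming ESXi 7.0 Update 1 or later, where the pcipassthru esxcli namespace is available; 0000:00:02.0 is the typical PCI address of an Intel iGPU, but verify it on your system first:

# Identify the PCI address of the iGPU
lspci | grep -i display
# Enable passthrough for that device, then confirm
esxcli hardware pci pcipassthru set --device-id=0000:00:02.0 --enable=true
esxcli hardware pci pcipassthru list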


Step 3 - Optionally, if you wish to disable the default virtual graphics device (svga), edit the VM and, under VM Options->Advanced->Configuration Parameters, change the following setting from true to false:

svga.present

Note: This setting is also required if you wish to output the display from the Ubuntu VM to the physical monitor that is connected to the Intel NUC 12 Extreme.
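
For reference, the same change can be made by appending the setting to the VM's VMX file while the VM is powered off. This is a sketch assuming SSH access to the ESXi host; the datastore path, VM name and <vmid> placeholder are examples to adapt:

# Append the setting to the VMX file (VM must be powered off)
echo 'svga.present = "FALSE"' >> /vmfs/volumes/datastore1/ubuntu-2204/ubuntu-2204.vmx
# Look up the VM ID, then reload the edited configuration
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/reload <vmid>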

Step 4 - Power on the VM and follow these instructions for installing the Intel graphics drivers for Ubuntu 22.04. Once completed, you will be able to successfully use the iGPU from within the Ubuntu VM.
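
For convenience, here is a condensed sketch of the repository setup from the linked Intel guide as it was published for Ubuntu 22.04 (jammy). Intel's repository layout and package names have changed over time, so treat the URLs and packages below as assumptions and follow the linked instructions for the authoritative steps:

# Add Intel's graphics repository signing key and package feed
wget -qO - https://repositories.intel.com/graphics/intel-graphics.key | sudo gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/graphics/ubuntu jammy arc' | sudo tee /etc/apt/sources.list.d/intel-graphics.list
# Install the media driver stack and verify it binds to the passed-through iGPU
sudo apt-get update && sudo apt-get install -y intel-media-va-driver-non-free intel-opencl-icd vainfo
vainfo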

With the ability to output the display from the Ubuntu VM to the physical monitor connected to the Intel NUC 12 Extreme, you may also be interested in passing through additional devices like a keyboard and mouse, as shown in the screenshot below, so that you can interact with the system directly. To do so, you can follow the instructions found in this blog post for more details.
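
In short, USB HID devices such as keyboards and mice are filtered out of the passthrough list by default, and the VM advanced setting below lifts that restriction so they can be added to the VM as USB devices (this is my summary; confirm the exact steps against the linked post):

usb.generic.allowHID = "TRUE"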

Intel NUC 12 Pro

Here is a screenshot of an Ubuntu 22.04 VM with the default virtual graphics disabled, connected via a remote session and using the iGPU passed through from an Intel NUC 12 Pro running ESXi 8.0 (this also works on the latest ESXi 7.0 Update 3 release).


The high-level setup instructions are identical to those outlined above for the Intel NUC 12 Extreme: create and install an Ubuntu Server 22.04 VM, enable passthrough of the iGPU and add it to the VM, optionally set svga.present to false, and finally power on the VM and install the Intel graphics drivers for Ubuntu 22.04.

Intel NUC 12 Enthusiast 

UPDATE (11/18/22) - Both the discrete Intel Arc 770M and the Intel iGPU can be successfully passed through and consumed by an Ubuntu VM. Please see this recent blog post for more details.

Additional Notes

  • I also ran additional experiments using both an Intel NUC 11 Extreme (Beast Canyon) and an Intel NUC 11 Pro (Tiger Canyon) with the exact same instructions, however I was not successful in using the iGPU from these platforms, so it looks like something may have changed in the drivers and/or hardware that makes this viable again starting with the Intel NUC 12th Gen platforms and newer
  • The VM console output to the physical monitor only worked when using an Intel NUC 12 Extreme and did not work when using an Intel NUC 12 Pro; perhaps this is due to the Intel DG1 vs. UHD iGPU difference. Just something to be aware of if you are looking for this additional functionality
  • Since the graphic drivers were built specifically for Ubuntu, it may or may not work for other Linux distributions
  • Here are some additional resources that I used for setting up and accessing my Ubuntu VMs that might be useful
    • Enabling remote desktop for Ubuntu
    • Graphic benchmarking tools (glmark2) for Linux (see the example below)
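
As a quick sanity check that rendering is actually hitting the passed-through iGPU, glmark2 can be installed from the standard Ubuntu repositories and run inside the VM (a sketch; a graphical session needs to be running for it to open its window):

sudo apt-get install -y glmark2
# The GL_RENDERER line printed at startup should name the Intel GPU rather than llvmpipe
glmark2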

Categories // ESXi, Home Lab, vSphere 7.0, vSphere 8.0 Tags // Dragon Canyon, ESXi 7.0 Update 3, ESXi 8.0, GPU, Intel Arc, Iris Xe, Wall Street Canyon

Comments

  1. Ricardo Matos says

    11/17/2022 at 1:18 pm

    I've been able to do this with the Hades Canyon for years 🙂 I know it's not really an iGPU since it's using a PCIe-connected Radeon GPU... but I've been running a terminal VM on my Hades connected to a screen for years

    Reply
  2. B says

    11/17/2022 at 2:40 pm

    Is it possible to split the Intel GPU to support multiple VMs?

    Reply
    • William Lam says

      11/17/2022 at 4:20 pm

      No. vSGA is not possible

      Reply
  3. maxdxs says

    11/25/2022 at 8:30 am

    How has the performance been? I tried to use my 12700 but I got hundreds of purple screens, so I had to move to Proxmox :-/

    Reply
  4. Jacky says

    12/23/2022 at 8:27 am

    Hi, how do I resolve the PCIPassthrLate issue when adding a PCI device in ESXi 8.0? Thanks a lot.

    Reply
  5. Jayson says

    01/14/2023 at 11:45 am

    Also suffering from the PCIPassthruLate error on ESXi 8. I've tried everything I can think of, including disabling the console with esxcli system settings kernel set -s vga -v FALSE. Any ideas?

    Reply
  6. thedavix says

    01/17/2023 at 1:33 am

    Hi,

    I just want to confirm that the passthrough of the iGPU of the Intel NUC 11 Extreme (Beast Canyon) NUC11BTMi9 is working now with, at least, BIOS DBTGL579 (0064). It works out of the box with a brand new install of Ubuntu 22.04 (server or desktop, it doesn't matter).
    I have been using the Beast Canyon for more than a month now to successfully transcode on Plex via Docker.

    Here is a summary of what I did on a fresh Ubuntu 22.04 install (based on the steps above plus another post of William's):

    0) Install Ubuntu, finish the installation, and shut the VM down
    1) Add the new iGPU PCI device to the VM
    2) Change the advanced parameter on the VM: svga.present = false
    3) Start the VM and configure the iGPU

    3.1) Configuring permissions
    stat -c "%G" /dev/dri/render*
    groups ${USER}

    sudo gpasswd -a ${USER} render
    newgrp render

    3.2) (Optional) Verify Media drivers installation
    sudo apt-get install vainfo
    export DISPLAY=:0.0
    vainfo

    3.3) reboot just to be sure

    4) To prevent having to toggle the passthrough of the iGPU upon ESXi reboot, run the following command on ESXi via SSH:
    esxcli system settings kernel set -s vga -v FALSE

    Thanks to William for all the help on this!

    Note: As said, I didn't need to install the latest Intel drivers (step 4 of William's tutorial above) to have the iGPU work.
    However, I did test and install the latest Intel drivers, but I encountered issues with HDR tone mapping when doing HW transcoding; it wasn't working properly. That is why I stayed with the original Ubuntu drivers.

    Reply
    • Daan van Doodewaard says

      01/20/2023 at 6:43 am

      Cool. Can we also get this working in Windows 10/11 ? 🙂

      Reply
      • thedavix says

        01/25/2023 at 1:10 am

        Just installed a fresh windows 10 to try it out for you and unfortunately I do have the Code 43 issue (https://williamlam.com/2021/07/passthrough-of-intel-iris-xe-integrated-gpu-on-11th-gen-nuc-results-in-error-code-43.html)
        With original or updated graphic drivers it didn't work
        did try as well hypvisor.cpuid.v0 = false and svga.present = false
        but nothing changed unfortunately.

        I saw some post of people that got it working but would be good to know what they did 🙂

        Reply
  7. Tibi says

    01/27/2023 at 3:46 pm

    I was searching for a mini PC for homelab purposes and your guide on ESXi for gen12 helped me make the decision. I'm having a small issue with the iGPU passthrough. After I enable the passthrough I can't find where I need to select DirectPath IO or Dynamic DirectPath IO. I add the iGPU to the VM (ubuntu v24) and the VM is not starting ....

    I'm new to ESXi but it beats me where to find the option DirectPath IO or Dynamic DirectPath IO; maybe it is due to not configuring this.

    Any advice would help ...

    Reply
    • Tibi says

      01/27/2023 at 3:48 pm

      v22 ... there's a typo :))

      Reply
    • William Lam says

      01/28/2023 at 7:40 am

      I'm confused by your statements as they seem contradictory ... you just stated you "add the iGPU to the VM", which is where you'd see the option of specifying whether to use DirectPath IO or Dynamic DirectPath IO. If you're new to ESXi, I'd recommend reviewing the ESXi documentation on these topics for more information

      Reply
  8. Tibi says

    01/28/2023 at 4:32 am

    So, updates: I managed to install the Intel driver via SSH and all looks good, but now I can't even get a console from ESXi. SSH is working but no video driver ...

    Reply
    • tgrigorescui says

      01/28/2023 at 4:46 am

      I mean via remote desktop it is working; I have checked the driver via the lspci and hwinfo commands and it seems to be active, but the console window from ESXi is not displaying anything ....

      Reply
      • William Lam says

        01/28/2023 at 7:42 am

        Folks - If you set svga.present = FALSE, it'll disable the default VMware graphics driver, which will also disable the VM Console (e.g. when you click on the VM thumbnail to access the Console). This is expected, as that is exactly what the setting does. Some folks may NOT want the virtual graphics enabled, and it's marked as an optional step for this very reason

        Reply
        • Tibi says

          01/28/2023 at 7:48 am

          First, I would like to thank you for replying... I can imagine there is quite a considerable amount of comments.

          So, if I leave svga set to true, I will have the benefit of the iGPU passed to the VM and also the console view, right?

          Reply
          • William Lam says

            01/28/2023 at 8:27 am

            The iGPU has nothing to do with the setting mentioned. It controls whether the virtual graphics is enabled for the VM or not. If you still want the VM console, then leave the default settings, but as I mentioned, some folks don't want it enabled after passing through additional graphics

          • Tibi says

            01/28/2023 at 9:34 am

            Thx! I just enabled it, but with both the iGPU and svga set to true, the system hangs while booting Ubuntu

  9. Andriy Sharandakov says

    01/30/2023 at 4:42 pm

    It doesn't hang, it just stops video output to the SVGA once the iGPU driver is loaded by the kernel. Check if you can access your VM via SSH, and you will be surprised.
    Anyway, I haven't found an elegant way to keep output on the SVGA after the iGPU driver is loaded. There is a "nomodeset" kernel boot option that will prevent switching the output to the iGPU, and the VM console will still be available. However, in that case the iGPU won't be initialized on boot, and I haven't managed to get Plex to use hardware transcoding. I believe it should be possible but requires more tinkering.
    On the other hand, with the SVGA device disabled (svga.present = false), hardware transcoding works as expected, as there is then only a single video adapter, which is used by default.

    BTW, here are the official driver installation guides from Intel: https://dgpu-docs.intel.com/installation-guides/index.html#intel-arc-gpus

    Reply
  10. Sebigeli says

    02/14/2023 at 5:45 am

    Hello, I'm on the latest version of Ubuntu and I've followed the tutorial on Intel's site, but Plex doesn't seem to want to do hardware transcoding even though everything is set up (it works with my Nvidia GPU on Windows). I'm running ESXi 7.0 U3 with an i5 12400. Could someone help me?

    hwinfo --display
    20: PCI 1300.0: 0380 Display controller
    [Created at pci.386]
    Unique ID: jaVc.1H4YcO93p6C
    Parent ID: abAj.11lousfpsJ2
    SysFS ID: /devices/pci0000:00/0000:00:17.0/0000:13:00.0
    SysFS BusID: 0000:13:00.0
    Hardware Class: graphics card
    Device Name: "pciPassthru0"
    Model: "Intel Display controller"
    Vendor: pci 0x8086 "Intel Corporation"
    Device: pci 0x4692
    SubVendor: pci 0x1462 "Micro-Star International Co., Ltd. [MSI]"
    SubDevice: pci 0x7d25
    Revision: 0x0c
    Driver: "i915"
    Driver Modules: "i915"
    Memory Range: 0xfc000000-0xfcffffff (rw,non-prefetchable)
    Memory Range: 0xd0000000-0xdfffffff (ro,non-prefetchable)
    I/O Ports: 0x5000-0x503f (rw)
    IRQ: 69 (2621 events)
    Module Alias: "pci:v00008086d00004692sv00001462sd00007D25bc03sc80i00"
    Driver Info #0:
    Driver Status: i915 is active
    Driver Activation Cmd: "modprobe i915"
    Config Status: cfg=new, avail=yes, need=no, active=unknown
    Attached to: #7 (PCI bridge)

    27: PCI 0f.0: 0300 VGA compatible controller (VGA)
    [Created at pci.386]
    Unique ID: _+Pw.jBKePf3JQB5
    SysFS ID: /devices/pci0000:00/0000:00:0f.0
    SysFS BusID: 0000:00:0f.0
    Hardware Class: graphics card
    Model: "VMware VMWARE0405"
    Vendor: pci 0x15ad "VMware, Inc."
    Device: pci 0x0405 "VMWARE0405"
    SubVendor: pci 0x15ad "VMware, Inc."
    SubDevice: pci 0x0405
    Driver: "vmwgfx"
    Driver Modules: "vmwgfx"
    I/O Ports: 0x1070-0x107f (rw)
    Memory Range: 0xe8000000-0xefffffff (ro,non-prefetchable)
    Memory Range: 0xfe000000-0xfe7fffff (rw,non-prefetchable)
    Memory Range: 0x000c0000-0x000dffff (rw,non-prefetchable,disabled)
    IRQ: 16 (330 events)
    I/O Port: 0x00 (rw)
    Module Alias: "pci:v000015ADd00000405sv000015ADsd00000405bc03sc00i00"
    Driver Info #0:
    XFree86 v4 Server Module: vmware
    Config Status: cfg=new, avail=yes, need=no, active=unknown

    Primary display adapter: #27

    Reply
    • William Lam says

      02/14/2023 at 5:51 am

      I recall reading on Reddit that someone had to disable the VMware graphics for it to attempt rendering on the iGPU, as it defaults to the first device it sees.

      Since this is more Plex-specific, you may also want to post in the Plex support channels in case someone has come across this, but from the VM's POV, the iGPU is enabled and ready 🙂

      Reply
    • Tibi says

      02/14/2023 at 6:33 am

      Haha, I had the same issue. As long as you don't set svga.present to false, it will not HW transcode

      Reply
      • Sebigeli says

        02/15/2023 at 8:46 am

        That's working, thanks !

        Reply
        • Tibi says

          02/15/2023 at 10:21 am

          Great ! Happy it’s ok now

          Reply
          • Sebigeli says

            02/16/2023 at 1:09 pm

            I just did more tests and I feel like it doesn't always work; the HW transcoding doesn't activate all the time and doesn't work on everything. Any idea? My movies are played from a SharePoint mounted via rclone. I have a good fiber internet line. My movies are between 1 and 100 GB

          • Tibi says

            02/16/2023 at 1:11 pm

            Well, then the problem is somewhere else.
            The first requirement for HW transcoding is to have svga.present=false

            Probably you have some other setting that is affecting it, or go back and check whether the driver is active.

            In my case it's playing from a NAS with the folders mounted via an NFS share.

  11. Sebigeli says

    02/17/2023 at 1:24 am

    Hello,
    On my Plex VM, the svga.present=false parameter is present, so that's OK.
    I did not install VMware Tools; does that have an effect?

    I use xrdp in case of problems; my Linux has a graphical interface, wouldn't that be a problem?

    I tried copying a movie locally (from my SharePoint to the home folder) and the HW transcoding doesn't work any better.

    My user is called Plex and it's a member of these groups:

    plex adm cdrom sudo dip video plugdev render lpadmin lxd sambashare

    It's normal that when I run the "intel_gpu_top" command with my regular user I have a problem, right?

    From what I've seen, some movies transcode well and others don't, but I don't know why...

    Are there any other settings to enable in the BIOS?

    The films have the same format, it's strange...

    Do you have Discord to help me? Thanks!

    Reply
    • Sebigeli says

      02/17/2023 at 1:50 am

      I just found it: it's the HDR tone mapping that was breaking the HW transcoding on some movies. Do you need to install something extra to be able to use it? THANKS

      Reply
  12. jasonchurchward says

    03/04/2023 at 11:44 am

    For anyone running Plex in Docker who was having issues with hardware transcoding working reliably, this is the fix.

    I run DietPi and Docker, but when I set svga.present=false under the VMware advanced settings, the VM just wouldn't boot even though I had PCI passthrough for my onboard iGPU configured. I instead left that alone and configured Docker to use my iGPU PCI passthrough as the primary.

    To complete this, set it up in Docker so that /dev/dri/renderD129 (the iGPU) is used as /dev/dri/renderD128 (the VMware video device), which effectively makes it the primary; hardware transcoding then works every time, even with HDR tone mapping enabled. You configure this under "advanced container settings" in Portainer or in a Docker compose/stack file with the entry below.

    devices:
      - /dev/dri/renderD129:/dev/dri/renderD128

    Reply
