With the latest Intel Hades Canyon now being able to run ESXi, a number of folks have been interested in taking advantage of the integrated GPU that is included in the system. There are two models of the Hades Canyon: the NUC8i7HNK, which is the lower end system with the Radeon RX Vega M GL, and the NUC8i7HVK, which is the higher end system with the Radeon RX Vega M GH. One of the first things I had attempted after getting ESXi working on the Hades Canyon was to enable passthrough of the iGPU to a Windows GuestOS, but in all my attempts, it resulted in the ESXi host PSOD'ing once you start installing the AMD drivers from Intel.
A few days ago, one of my readers, Chris78, shared an update where he was able to prevent the ESXi host from PSOD'ing by adding a VM Advanced Setting, but he was still having issues where the Windows GuestOS would now BSOD. This sounded promising, and I figured it would not hurt to give it a try. To my surprise, I was able to successfully pass through the iGPU to Windows 10, Windows Server 2016 and Windows Server 2019 systems in my limited testing. After reporting the success back to Chris78, who was still having issues even after using the settings I had used, his conclusion was that there may be a difference between the HNK and HVK models, with the latter having the BSOD issues. For now, it seems the iGPU can only be passed through if you have the NUC8i7HNK model.
Step 1 - Create either a Windows 10 or Windows Server 2016/2019 VM using the vSphere UI (H5 or Embedded Host Client); I used all the defaults. You can definitely change the vCPU, memory and disk capacity, but you will need to use BIOS firmware and an E1000E network adapter. If you switch to EFI or VMXNET3, it seems to crash the VM when powering it on after attaching the iGPU. The configurations I used are listed below (see the sample .vmx entries after the note).
Windows 10
- vHW 13 (ESXi 6.5 and later)
- BIOS Firmware
- 2 vCPU
- 6GB Memory
- 32GB Disk
- E1000E Network Adapter
Windows 2016/2019
- vHW 13 (ESXi 6.5 and later)
- BIOS Firmware
- 2 vCPU
- 8GB Memory
- 40GB Disk
- E1000E Network Adapter
Note: I am using vHW 13 because this system is running ESXi 6.5 Update 2, which I need to keep around for some other testing, but this should also work on the latest ESXi 6.7 Update 1.
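For reference, here is roughly what the relevant entries look like in the VM's .vmx file once the VM is created with the settings above. The key names are standard .vmx keys, but treat the values as an example from my setup rather than a requirement:

virtualHW.version = "13"
firmware = "bios"
ethernet0.virtualDev = "e1000e"
numvcpus = "2"
memSize = "6144"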
Step 2 - Add the VM Advanced Setting hypervisor.cpuid.v0 = FALSE by navigating to VM Options->Advanced->Edit Configuration in the vSphere UI
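If you prefer, the same setting can also be added by editing the VM's .vmx file directly while the VM is powered off; it ends up as a single line:

hypervisor.cpuid.v0 = "FALSE"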
Step 3 - Install Windows 10 or Windows Server 2016/2019, including VMware Tools, and apply the latest Microsoft updates
Step 4 - Download and install the Radeon RX Vega M graphics driver for your Windows OS
Step 5 - If you require the use of Intel Quick Sync transcoding, you will also need to install the Intel HD Graphics driver for your Windows OS
Here is a screenshot of iGPU passthrough to a Windows 10 system:
Here is a screenshot of iGPU passthrough to a Windows Server 2019 system:
Tvzada says
Thank you for sharing your success, I have been trying to solve this for weeks. I too have a NUC8i7HVK and I too have issues with driver installation inside the guest VM. The NUC8i7HVK is the overclockable version. I am using Fedora Server 29 and I can pass through everything with success (USB controllers, SD card, or network adapters) and play around with IOMMU groups using an ACS patched kernel, etc., except for the GPU. I can use the ACS patch to isolate GPU + Audio, but I get a BSOD on driver installation, fault module atikmdag.sys. @Chris78 I am with you on this, hope to find a solution soon! Hades is VT-d and VT-x compatible after all, right Intel?!
Chris78 says
In the Windows dump file I see the error THREAD_STUCK_IN_DEVICE_DRIVER_M (100000ea); it seems to be related to dxgkrnl.sys and atikmdag.sys.
STACK_TEXT:
ffff8208`917e9858 fffff804`2311a790 : 00000000`000000ea ffffc78b`0dd94080 00000000`00000000 00000000`00000000 : nt!KeBugCheckEx
ffff8208`917e9860 fffff804`2311a86a : ffff8208`917e9938 fffff804`28a48854 ffff8208`917e9938 00000000`00000000 : dxgkrnl!TdrTimedOperationBugcheckOnTimeout+0x40
ffff8208`917e98d0 fffff804`289c65f0 : ffffc78b`0dcf9000 00000000`00000000 ffff8208`917e9a20 fffff804`28a48840 : dxgkrnl!TdrTimedOperationDelay+0xca
ffff8208`917e9910 ffffc78b`0dcf9000 : 00000000`00000000 ffff8208`917e9a20 fffff804`28a48840 00000000`00002710 : atikmdag+0x665f0
ffff8208`917e9918 00000000`00000000 : ffff8208`917e9a20 fffff804`28a48840 00000000`00002710 ffffc78b`10920028 : 0xffffc78b`0dcf9000
Hope there will be a fix some day.
Chris78 says
(Also posted on Intel forum)
I (sort of) managed to install the AMD Radeon RX Vega M GH card on Windows 10 and Windows Server 2016/2019 VMs by disabling Intel iGD in the BIOS of the NUC (used the Intel driver for Windows 10 and a modified AMD Adrenalin 18.12.3 driver for Windows Server 201x). But AMD Radeon Settings, OpenCL and OpenGL are not working. I guess the Intel HD Graphics 630 is really needed for that, but if I enable Intel iGD in the BIOS I get a BSOD "THREAD_STUCK_IN_DEVICE_DRIVER_M (100000ea)", even without passing through the Intel HD Graphics 630 (Display Controller) to the VM.
Tvzada says
Thanks, I was able to track down your posts on the Intel forum where you describe how to modify the INF file. I could not, however, install the driver successfully with the iGD disabled; it still hangs the guest, either W10 or W2K19, also with driver signature enforcement turned off. It is good to know that there is someone very capable paying attention to this, as this needs to be fixed badly. If one can pass through the GPU on the lower end NUC8i7HNK, the higher end NUC8i7HVK should also provide this ability. I don't overclock, so I don't need the overclock settings; maybe some BIOS setting could disable all overclocking and turn a NUC8i7HVK into a sort of NUC8i7HNK on demand. Maybe it could be easier than that...
Chris78 says
I doubt the stability and usefulness of the AMD Radeon RX Vega M GH card if the iGD is disabled (no Intel HD Graphics 630 in ESXi to pass through, and no OpenGL or OpenCL on the Vega). You would have a half bricked graphics card in your VM. Better to only pass through the 'Display Controller' to the VM and update it with the latest driver. I got that working even with EFI bios, VMXNET3 and the Paravirtual driver on Windows 2016 and 2019.
Tvzada says
Hello!! Thank you for all the input Chris78. I have great news! I was finally able to pull this off today, on a NUC8i7HVK. With the current Fedora kvm/libvirt/CPU setup I am using, the GPU passes through and all drivers install without errors or crashing the guest. I also tried with a fresh Windows 10 installation and really had no need to modify the Vega M drivers, and thus no need to disable driver signing at the OS level. But yes, iGD and Vega M must go hand in hand (FYI, I used with success dch_win64_25.20.100.6471.exe for the iGD and GFX_Radeon_Win10_64_18.12.2.exe for the RX Vega M GH). I feel I could use any driver, because using these drivers alone is not sufficient; as I said, I had to tweak KVM and CPU settings using libvirt (on an ACS patched kernel) almost on a trial and error basis. I will still have to test whether disabling the ACS overrides continues to allow correct DMA communication with the GPU. But, with this setup, it works.
Some additional findings: Radeon ReLive and all AMD tweaks and settings appear to be available, and the GPU output is indeed routed to the external physical monitor, which is the cherry on top of the cake. The Radeon console also says that the GPU is connected to a LCD2690WUXI2, which is my computer monitor model (so it had to communicate properly to fetch its model), and that is great news!
Further tests and perhaps tweaks are still needed, as my main goal is similar to yours: to use Steam to stream games to a 4K TV inside this VM (the setup I was using with success, but not inside a VM). I have installed the Dolphin emulator and it works very well up to 60fps, and although FishGL renders very well in RDP, it may have some stability issues, like the VM may stop responding after a few minutes of OpenGL rendering. I am not sure exactly what is causing this; it is not overheating, as the host continues to work normally. I will make further tests now but feel I am very close to the final goal. Gone from thinking it was not possible to proving it is! We are almost there...
Christian Perret says
Good news Tvzada, but am I right that you're using KVM and not ESXi, or did I misinterpret your post?
Graehme Paulson says
Hello! I have stumbled upon this looking for the solution to passing my VEGA M GH on my NUC8i7HVK. Did you ever end up making a video on how to do this?
Tvzada says
Yes, correct, I used KVM (3 week learning curve). I started with ESXi but gave up as I could not get the correct control of IOMMU groups. But, whatever the host OS, the point here is that the hardware IS capable. Using an ACS patched kernel allows you to pass through only the GPU + Audio, which are in one IOMMU group, and the iGD, which is in another IOMMU group, into the same Windows VM, without having to pass through the SD card and USB controllers (which in the case of the NUC8i7HVK you would have to if you weren't using a patched kernel). These configurations, to the best of my knowledge, are not currently available to the public in ESXi.
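For anyone trying to reproduce this on the KVM side: on the ACS patched kernels I have seen, the override is enabled with a kernel boot parameter along these lines (it only has an effect on a kernel that actually carries the ACS override patch):

pcie_acs_override=downstream,multifunction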
William Lam says
Not sure if this will help or not, but ESXi does have a boot option to completely disable ACS checking (which could be dangerous), but given this is for home lab use, it's worth a shot since there's been mention of ACS ...
To Enable, run the following command: esxcli system settings kernel set -s disableACSCheck -v true
To Verify, run the following command: esxcli system settings kernel list -o disableACSCheck
You'll need to reboot for the changes to take effect
Good Luck!
Chris78 says
I bought myself a second NUC, this time the NUC8I7HNK. I hoped to get it to install like William did. Tried it with the boot option VMkernel.Boot.disableACSCheckBelow set to FALSE and to TRUE (reboot after each change). Below are my results:
- Installing Windows 10 64-bit with default VM template + hypervisor.cpuid.v0 = FALSE and installed latest VMware tools and Windows Updates successful
- Shut down VM
- Passthrough 1 PCIE device (Display controller) successful boot,
- Passthrough 2 PCIE devices (Display controller + Polaris 22XL) results in VM panic:
2019-01-23T16:14:43.593Z| vcpu-0| E105: PANIC: PCIPassthruChangeIntrSettings: 0000:01:00.0 failed to register interrupt (error code 195887105)
2019-01-23T16:14:44.058Z| vcpu-0| W115: A core file is available in "/vmfs/volumes/5c41eeb7-51e98bba-93de-54b2030a1dc5/vapp02/vmx-zdump.000"
2019-01-23T16:14:44.061Z| vcpu-0| I125: Writing monitor file `vmmcores.gz`
2019-01-23T16:14:44.063Z| vcpu-0| W115: Dumping core for vcpu-0
2019-01-23T16:14:44.063Z| vcpu-0| I125: CoreDump: dumping core with superuser privileges
2019-01-23T16:14:44.063Z| vcpu-0| I125: VMK Stack for vcpu 0 is at 0x439150813000
2019-01-23T16:14:44.063Z| vcpu-0| I125: Beginning monitor coredump
2019-01-23T16:14:44.067Z| mks| W115: Panic in progress... ungrabbing
2019-01-23T16:14:44.067Z| mks| I125: MKS: Release starting (Panic)
2019-01-23T16:14:44.067Z| mks| I125: MKS: Release finished (Panic)
2019-01-23T16:14:44.421Z| vcpu-0| I125: End monitor coredump
2019-01-23T16:14:44.421Z| vcpu-0| W115: Dumping core for vcpu-1
2019-01-23T16:14:44.421Z| vcpu-0| I125: CoreDump: dumping core with superuser privileges
2019-01-23T16:14:44.421Z| vcpu-0| I125: VMK Stack for vcpu 1 is at 0x439150993000
2019-01-23T16:14:44.421Z| vcpu-0| I125: Beginning monitor coredump
2019-01-23T16:14:44.768Z| vcpu-0| I125: End monitor coredump
2019-01-23T16:14:45.059Z| mks| W115: Panic in progress... ungrabbing
2019-01-23T16:14:45.059Z| mks| I125: MKS: Release starting (Panic)
2019-01-23T16:14:45.059Z| mks| I125: MKS: Release finished (Panic)
2019-01-23T16:14:45.425Z| vcpu-0| I125: Printing loaded objects
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D51FE1000-0x4D530AE914): /bin/vmx
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D936A8000-0x4D936BF1CC): /lib64/libpthread.so.0
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D938C5000-0x4D938C6F00): /lib64/libdl.so.2
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D93AC9000-0x4D93AD1D08): /lib64/librt.so.1
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D93CE4000-0x4D93F7C4E4): /lib64/libcrypto.so.1.0.2
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D941AD000-0x4D942164AC): /lib64/libssl.so.1.0.2
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D94421000-0x4D9453537C): /lib64/libX11.so.6
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D9473C000-0x4D9474B01C): /lib64/libXext.so.6
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D9494C000-0x4D94A30341): /lib64/libstdc++.so.6
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D94C4F000-0x4D94CCF21C): /lib64/libm.so.6
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D94ED2000-0x4D94EE6BC4): /lib64/libgcc_s.so.1
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D950E8000-0x4D95248C74): /lib64/libc.so.6
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D53487000-0x4D534A47D8): /lib64/ld-linux-x86-64.so.2
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D95453000-0x4D9546D634): /lib64/libxcb.so.1
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D9566F000-0x4D9567095C): /lib64/libXau.so.6
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D95C3F000-0x4D95CD4534): /usr/lib64/vmware/plugin/objLib/upitObjBE.so
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D95EED000-0x4D96043994): /usr/lib64/vmware/plugin/objLib/vsanObjBE.so
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D962DA000-0x4D962EEF94): /lib64/libz.so.1
2019-01-23T16:14:45.425Z| vcpu-0| I125: [0x4D96739000-0x4D967441D0): /lib64/libnss_files.so.2
2019-01-23T16:14:45.425Z| vcpu-0| I125: End printing loaded objects
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace:
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[0] 0000004d97e7d2e0 rip=0000004d526615e7 rbx=0000004d526610e0 rbp=0000004d97e7d300 r12=0000000000000000 r13=0000000000000001 r14=0000000000000001 r15=0000004d53a80b80
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[1] 0000004d97e7d310 rip=0000004d521a166c rbx=0000004d97e7d330 rbp=0000004d97e7d810 r12=0000004d533062b0 r13=0000000000000001 r14=0000000000000001 r15=0000004d53a80b80
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[2] 0000004d97e7d820 rip=0000004d522a0546 rbx=0000000000000100 rbp=0000004d97e7da80 r12=0000000000000001 r13=0000000000000001 r14=0000000000000001 r15=0000004d53a80b80
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[3] 0000004d97e7da90 rip=0000004d522a14e0 rbx=0000004d53a80b80 rbp=0000004d97e7dab0 r12=0000000000000001 r13=0000000000000001 r14=0000004d97e7db3c r15=0000004d53a80c28
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[4] 0000004d97e7dac0 rip=0000004d522a1b0b rbx=000000000000000f rbp=0000004d97e7db10 r12=0000004d53a80b80 r13=000000000000000f r14=0000004d97e7db3c r15=0000004d53a80c28
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[5] 0000004d97e7db20 rip=0000004d522f7530 rbx=0000000000000011 rbp=0000004d97e7db70 r12=0000000001b0003c r13=0000004d969ba020 r14=0000000000000000 r15=0000004d53a7e600
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[6] 0000004d97e7db80 rip=0000004d525af431 rbx=0000004d5341a4a8 rbp=0000004d97e7dbb0 r12=0000004d5316d580 r13=0000000000000168 r14=0000004d53860e80 r15=0000000000000000
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[7] 0000004d97e7dbc0 rip=0000004d525d7176 rbx=000000000000012d rbp=0000004d97e7dc00 r12=0000004d533062b0 r13=0000004d53411d80 r14=0000004d532ec1a0 r15=0000000000000000
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[8] 0000004d97e7dc10 rip=0000004d525af518 rbx=0000000000000000 rbp=0000004d97e7dc20 r12=0000004d97e7dc40 r13=0000004d53864d70 r14=0000004d536a5040 r15=0000000000000003
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[9] 0000004d97e7dc30 rip=0000004d5264cb05 rbx=0000004d5341baa0 rbp=0000004d97e7dd80 r12=0000004d97e7dc40 r13=0000004d53864d70 r14=0000004d536a5040 r15=0000000000000003
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[10] 0000004d97e7dd90 rip=0000004d936afcfc rbx=0000000000000000 rbp=0000000000000000 r12=00000331951bfa40 r13=0000004d97e7e9c0 r14=0000004d536a5040 r15=0000000000000003
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[11] 0000004d97e7dea0 rip=0000004d951b9ead rbx=0000000000000000 rbp=0000000000000000 r12=00000331951bfa40 r13=0000004d97e7e9c0 r14=0000004d536a5040 r15=0000000000000003
2019-01-23T16:14:45.425Z| vcpu-0| I125: Backtrace[12] 0000004d97e7dea8 rip=0000000000000000 rbx=0000000000000000 rbp=0000000000000000 r12=00000331951bfa40 r13=0000004d97e7e9c0 r14=0000004d536a5040 r15=0000000000000003
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[0] 0000004d97e7d2e0 rip=0000004d526615e7 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[1] 0000004d97e7d310 rip=0000004d521a166c in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[2] 0000004d97e7d820 rip=0000004d522a0546 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[3] 0000004d97e7da90 rip=0000004d522a14e0 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[4] 0000004d97e7dac0 rip=0000004d522a1b0b in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[5] 0000004d97e7db20 rip=0000004d522f7530 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[6] 0000004d97e7db80 rip=0000004d525af431 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[7] 0000004d97e7dbc0 rip=0000004d525d7176 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[8] 0000004d97e7dc10 rip=0000004d525af518 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[9] 0000004d97e7dc30 rip=0000004d5264cb05 in function (null) in object /bin/vmx loaded at 0000004d51fe1000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[10] 0000004d97e7dd90 rip=0000004d936afcfc in function (null) in object /lib64/libpthread.so.0 loaded at 0000004d936a8000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[11] 0000004d97e7dea0 rip=0000004d951b9ead in function clone in object /lib64/libc.so.6 loaded at 0000004d950e8000
2019-01-23T16:14:45.425Z| vcpu-0| I125: SymBacktrace[12] 0000004d97e7dea8 rip=0000000000000000
2019-01-23T16:14:45.425Z| vcpu-0| I125: Msg_Post: Error
2019-01-23T16:14:45.425Z| vcpu-0| I125: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-0)
2019-01-23T16:14:45.425Z| vcpu-0| I125+ PCIPassthruChangeIntrSettings: 0000:01:00.0 failed to register interrupt (error code 195887105)
2019-01-23T16:14:45.425Z| vcpu-0| I125: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5c41eeb7-51e98bba-93de-54b2030a1dc5/vapp02/vmware.log".
2019-01-23T16:14:45.425Z| vcpu-0| I125: [msg.panic.requestSupport.withoutLog] You can request support.
2019-01-23T16:14:45.425Z| vcpu-0| I125: [msg.panic.requestSupport.vmSupport.vmx86]
2019-01-23T16:14:45.425Z| vcpu-0| I125+ To collect data to submit to VMware technical support, run "vm-support".
2019-01-23T16:14:45.425Z| vcpu-0| I125: [msg.panic.response] We will respond on the basis of your support entitlement.
2019-01-23T16:14:45.425Z| vcpu-0| I125: ----------------------------------------
2019-01-23T16:14:45.426Z| vcpu-0| I125: Exiting
- Passthrough 1 PCIE device (Polaris 22 XL) results in the same VM panic.
Installed Windows 10 again with the same config but with passthrough of both PCIE devices. As soon as I installed the VMware Tools, the VM went haywire. Can't get the same results as you got, William.
So I installed a second VM with Windows 10 64-bit, this time with firmware EFI, SCSI Controller = VMware Paravirtual and LAN = VMXnet3, also hypervisor.cpuid.v0 = FALSE. Installed latest VMware tools and Windows update.
- Shut down VM
- Passthrough 1 PCIE device (Display controller) successful boot,
- Passthrough 2 PCIE devices (Display controller + Polaris 22XL) successful boot
- Passthrough 3 PCIE devices (Display controller, Polaris 22 XL and Audio device) successful boot
- Install Win10_64_18.12.2.exe (from Intel website) results in VM crash and BSOD after reboot.
Exact same results as with the HVK. Any other ideas to try?
Chris78 says
This is from the last VM with EFI, VMXNet 3 etc. Seems to be the same panic.
2019-01-23T17:11:06.369Z| vcpu-0| E105: PANIC: PCIPassthruChangeIntrSettings: 0000:01:00.0 failed to register interrupt (error code 195887105)
2019-01-23T17:11:07.859Z| vcpu-0| W115: A core file is available in "/vmfs/volumes/5c41eeb7-51e98bba-93de-54b2030a1dc5/vapp03/vmx-zdump.000"
2019-01-23T17:11:07.859Z| mks| W115: Panic in progress... ungrabbing
2019-01-23T17:11:07.859Z| mks| I125: MKS: Release starting (Panic)
2019-01-23T17:11:07.859Z| mks| I125: MKS: Release finished (Panic)
2019-01-23T17:11:07.861Z| vcpu-0| I125: Writing monitor file `vmmcores.gz`
2019-01-23T17:11:07.862Z| vcpu-0| W115: Dumping core for vcpu-0
2019-01-23T17:11:07.862Z| vcpu-0| I125: CoreDump: dumping core with superuser privileges
2019-01-23T17:11:07.862Z| vcpu-0| I125: VMK Stack for vcpu 0 is at 0x43916a893000
2019-01-23T17:11:07.862Z| vcpu-0| I125: Beginning monitor coredump
2019-01-23T17:11:08.235Z| vcpu-0| I125: End monitor coredump
2019-01-23T17:11:08.235Z| vcpu-0| W115: Dumping core for vcpu-1
2019-01-23T17:11:08.235Z| vcpu-0| I125: CoreDump: dumping core with superuser privileges
2019-01-23T17:11:08.235Z| vcpu-0| I125: VMK Stack for vcpu 1 is at 0x43916aa13000
2019-01-23T17:11:08.235Z| vcpu-0| I125: Beginning monitor coredump
2019-01-23T17:11:08.607Z| vcpu-0| I125: End monitor coredump
2019-01-23T17:11:08.861Z| mks| W115: Panic in progress... ungrabbing
2019-01-23T17:11:08.861Z| mks| I125: MKS: Release starting (Panic)
2019-01-23T17:11:08.861Z| mks| I125: MKS: Release finished (Panic)
2019-01-23T17:11:09.308Z| vcpu-0| I125: Printing loaded objects
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8AD8412000-0x8AD94DF914): /bin/vmx
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B19AD9000-0x8B19AF01CC): /lib64/libpthread.so.0
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B19CF6000-0x8B19CF7F00): /lib64/libdl.so.2
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B19EFA000-0x8B19F02D08): /lib64/librt.so.1
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1A115000-0x8B1A3AD4E4): /lib64/libcrypto.so.1.0.2
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1A5DE000-0x8B1A6474AC): /lib64/libssl.so.1.0.2
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1A852000-0x8B1A96637C): /lib64/libX11.so.6
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1AB6D000-0x8B1AB7C01C): /lib64/libXext.so.6
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1AD7D000-0x8B1AE61341): /lib64/libstdc++.so.6
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1B080000-0x8B1B10021C): /lib64/libm.so.6
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1B303000-0x8B1B317BC4): /lib64/libgcc_s.so.1
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1B519000-0x8B1B679C74): /lib64/libc.so.6
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8AD98B8000-0x8AD98D57D8): /lib64/ld-linux-x86-64.so.2
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1B884000-0x8B1B89E634): /lib64/libxcb.so.1
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1BAA0000-0x8B1BAA195C): /lib64/libXau.so.6
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1C070000-0x8B1C105534): /usr/lib64/vmware/plugin/objLib/upitObjBE.so
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1C31E000-0x8B1C474994): /usr/lib64/vmware/plugin/objLib/vsanObjBE.so
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1C70B000-0x8B1C71FF94): /lib64/libz.so.1
2019-01-23T17:11:09.308Z| vcpu-0| I125: [0x8B1CB6A000-0x8B1CB751D0): /lib64/libnss_files.so.2
2019-01-23T17:11:09.308Z| vcpu-0| I125: End printing loaded objects
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace:
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[0] 0000008b1e3092e0 rip=0000008ad8a925e7 rbx=0000008ad8a920e0 rbp=0000008b1e309300 r12=0000000000000000 r13=0000000000000001 r14=0000000000000001 r15=0000008ad9efcac0
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[1] 0000008b1e309310 rip=0000008ad85d266c rbx=0000008b1e309330 rbp=0000008b1e309810 r12=0000008ad97372b0 r13=0000000000000001 r14=0000000000000001 r15=0000008ad9efcac0
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[2] 0000008b1e309820 rip=0000008ad86d1546 rbx=0000000000000100 rbp=0000008b1e309a80 r12=0000000000000001 r13=0000000000000001 r14=0000000000000001 r15=0000008ad9efcac0
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[3] 0000008b1e309a90 rip=0000008ad86d24e0 rbx=0000008ad9efcac0 rbp=0000008b1e309ab0 r12=0000000000000001 r13=0000000000000001 r14=0000008b1e309b3c r15=0000008ad9efcb68
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[4] 0000008b1e309ac0 rip=0000008ad86d2b0b rbx=000000000000000f rbp=0000008b1e309b10 r12=0000008ad9efcac0 r13=000000000000000f r14=0000008b1e309b3c r15=0000008ad9efcb68
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[5] 0000008b1e309b20 rip=0000008ad8728530 rbx=0000000000000011 rbp=0000008b1e309b70 r12=0000000001b0003c r13=0000008b1cdce020 r14=0000000000000000 r15=0000008ad9efd360
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[6] 0000008b1e309b80 rip=0000008ad89e0431 rbx=0000008ad984b4a8 rbp=0000008b1e309bb0 r12=0000008ad959e580 r13=0000000000000168 r14=0000008ad9c940e0 r15=0000000000000000
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[7] 0000008b1e309bc0 rip=0000008ad8a08176 rbx=000000000000012d rbp=0000008b1e309c00 r12=0000008ad97372b0 r13=0000008ad9842d80 r14=0000008ad971d1a0 r15=0000000000000000
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[8] 0000008b1e309c10 rip=0000008ad89e0518 rbx=0000000000000000 rbp=0000008b1e309c20 r12=0000008b1e309c40 r13=0000008ad9bb2250 r14=0000008ad9ad6040 r15=0000000000000003
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[9] 0000008b1e309c30 rip=0000008ad8a7db05 rbx=0000008ad984caa0 rbp=0000008b1e309d80 r12=0000008b1e309c40 r13=0000008ad9bb2250 r14=0000008ad9ad6040 r15=0000000000000003
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[10] 0000008b1e309d90 rip=0000008b19ae0cfc rbx=0000000000000000 rbp=0000000000000000 r12=0000036b3394ea40 r13=0000008b1e30a9c0 r14=0000008ad9ad6040 r15=0000000000000003
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[11] 0000008b1e309ea0 rip=0000008b1b5eaead rbx=0000000000000000 rbp=0000000000000000 r12=0000036b3394ea40 r13=0000008b1e30a9c0 r14=0000008ad9ad6040 r15=0000000000000003
2019-01-23T17:11:09.308Z| vcpu-0| I125: Backtrace[12] 0000008b1e309ea8 rip=0000000000000000 rbx=0000000000000000 rbp=0000000000000000 r12=0000036b3394ea40 r13=0000008b1e30a9c0 r14=0000008ad9ad6040 r15=0000000000000003
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[0] 0000008b1e3092e0 rip=0000008ad8a925e7 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[1] 0000008b1e309310 rip=0000008ad85d266c in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[2] 0000008b1e309820 rip=0000008ad86d1546 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[3] 0000008b1e309a90 rip=0000008ad86d24e0 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[4] 0000008b1e309ac0 rip=0000008ad86d2b0b in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[5] 0000008b1e309b20 rip=0000008ad8728530 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[6] 0000008b1e309b80 rip=0000008ad89e0431 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[7] 0000008b1e309bc0 rip=0000008ad8a08176 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[8] 0000008b1e309c10 rip=0000008ad89e0518 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[9] 0000008b1e309c30 rip=0000008ad8a7db05 in function (null) in object /bin/vmx loaded at 0000008ad8412000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[10] 0000008b1e309d90 rip=0000008b19ae0cfc in function (null) in object /lib64/libpthread.so.0 loaded at 0000008b19ad9000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[11] 0000008b1e309ea0 rip=0000008b1b5eaead in function clone in object /lib64/libc.so.6 loaded at 0000008b1b519000
2019-01-23T17:11:09.308Z| vcpu-0| I125: SymBacktrace[12] 0000008b1e309ea8 rip=0000000000000000
2019-01-23T17:11:09.308Z| vcpu-0| I125: Msg_Post: Error
2019-01-23T17:11:09.308Z| vcpu-0| I125: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-0)
2019-01-23T17:11:09.308Z| vcpu-0| I125+ PCIPassthruChangeIntrSettings: 0000:01:00.0 failed to register interrupt (error code 195887105)
2019-01-23T17:11:09.308Z| vcpu-0| I125: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5c41eeb7-51e98bba-93de-54b2030a1dc5/vapp03/vmware.log".
2019-01-23T17:11:09.308Z| vcpu-0| I125: [msg.panic.requestSupport.withoutLog] You can request support.
2019-01-23T17:11:09.308Z| vcpu-0| I125: [msg.panic.requestSupport.vmSupport.vmx86]
2019-01-23T17:11:09.308Z| vcpu-0| I125+ To collect data to submit to VMware technical support, run "vm-support".
2019-01-23T17:11:09.308Z| vcpu-0| I125: [msg.panic.response] We will respond on the basis of your support entitlement.
2019-01-23T17:11:09.308Z| vcpu-0| I125: ----------------------------------------
2019-01-23T17:11:09.309Z| vcpu-0| I125: Exiting
William Lam says
Please re-read the article; as mentioned in the article, if you use EFI or VMXNET3, you'll get a crash. You need to use BIOS and E1000E.
Chris78 says
Please re-read my comment (keywords are 'default VM template'):
– Installing Windows 10 64-bit with default VM template + hypervisor.cpuid.v0 = FALSE and installed latest VMware tools and Windows Updates successful.
Frank Anderson says
Confirmed adding "hypervisor.cpuid.v0=FALSE" stabilizes the VM with vSphere 6.7 U1. Same behavior and crash issues.
Chris78 says
William, did you maybe change any settings in the BIOS of the NUC (especially the Security Features)? Or is your ESXi version modified (I know you were working on a fix for running ESXi on the HNK before, so maybe some leftovers)? Still trying to find out why you succeeded while I can't get it to work.
William Lam says
I didn't make any specific changes in the BIOS; in fact I had reset the system a while back after playing around with some initial settings. I'll try to boot up my system this weekend and do some screenshots of all pages to see if there are any differences, but I'm honestly using the default setup running the latest BIOS.
Chris78 says
Thank you, last one: are you still on BIOS 51 or also on 53?
Chris78 says
I got it working on the NUC8I7HVK! It seems it is only possible to pass through one of the graphics cards (Intel or AMD). It seems impossible to pass through both.
AMD:
If you want fully working Radeon RX Vega M GH graphics, go into the NUC BIOS (I used version 51), go to Performance - Graphics and set Intel iGD to Auto (this has to be done!)
Create a Windows 10 64-bit VM (used vHW 13) with any setting you like (it also works with EFI, VMXnet3, VMware Paravirtual controller etc.) and set hypervisor.cpuid.v0 = FALSE
Install Windows 10, install VMware Tools and update Windows. Install your Radeon drivers and you're done!
Right now I have Radeon RX Vega M GH Graphics and VMware SVGA 3D graphics under Device Manager. AMD Settings are working, GPU-Z shows OpenCL and DirectCompute 5.0 active on Radeon RX Vega M GH.
Intel:
This one is easier: just enable the iGD in the BIOS and pass through only the Display Controller. Create the VM as explained earlier, install Windows, VMware Tools and Windows Updates, and install the Intel driver (it will be automatically detected by Windows anyway, but there are newer drivers available). I don't think the hypervisor.cpuid.v0 setting is needed for only the Intel graphics card.
Will test the same procedure on NUC8I7HNK with BIOS 53 and Windows Server 2019.
Chris78 says
Also got the Radeon RX Vega M GL graphics working on a Windows Server 2019 VM, running on a NUC8I7HNK with BIOS 53. The Windows Server 2016 drivers mentioned by William work out of the box, but it is also possible to use the latest Adrenalin drivers (how to easily modify and install these: https://forums.intel.com/s/question/0D70P000006BEvNSAW).
William Lam says
Glad to hear you're able to get the iGPU passthrough successfully now and thanks for sharing all your findings.
I just took a look at my system, using BIOS 53, and it looks like the iGD setting is set to "automatic" (default), so I never had to touch anything and, as I mentioned, I had even reset my BIOS. The only thing that I know was disabled was Secure Boot.
Chris78 says
Thanks for the confirmation of the Bios setting for the iGD. So am I correct that you don't have an Intel "display controller" visible to PassThrough in your ESXi Host Hardware settings? When the Bios setting is set to "enabled", it shows an additional display controller (Intel HD Graphics 630) in the host hardware list. You could also PassThrough that controller instead of the Radeon. Just want to know if you only did a PassThrough on the Radeon video and graphics and not on the Intel.
William Lam says
That's correct, it didn't show the Intel "Display Controller", just the Radeon
Chris78 says
Radeon video and audio controller*
Tvzada says
Good to know you finally made it through using ESXi on the NUC8I7HVK, Chris. You mention it is impossible to pass both the iGD and the Vega, but I do this using my Fedora host on the NUC8I7HVK, so it is not a hardware limitation. Also, I finally have a stable setup: 0 VM crashes and 0 frame drops, testing with FishGL and Dolphin Emulator ROMs. Latest Adrenalin 18.12.3 installed as well. The more I get into Fedora Server, the less I want to go back to ESXi! I am planning on writing a GPU how-to on how to accomplish this using Fedora, KVM and libvirt with the current OVMF BIOS + Q35, but only when I finally have my final setup. Meanwhile, if anyone has any questions, feel free.
Chris78 says
I'm really interested in your how-to. Please write it n00b-proof as I have zero knowledge of Linux. If possible, start with installing Fedora 🙂.
Юлий Афанасьев says
Hello! I have a NUC8I7HVK with the latest BIOS and ESXi 6.7 U1. I tried to pass through the Vega and my system (ESXi) can't boot at all. I tried iGD set to Enabled, Auto and Disabled.
With Intel graphics passthrough I have no problems. What did I do wrong? Thanks in advance!
Chris78 says
For a stable and working passthrough of the Radeon you have to use Intel iGD = Auto. This way the 'Intel(R) Display Controller' is not visible under your host hardware settings.
- What do you mean with system? Is your ESXi host not booting or the guest VM (assume a Windows VM)?
- If it is the VM, which settings did you use? Did you add the hypervisor.cpuid.v0 = FALSE under the Advanced Settings of the VM?
- Were you able to install Windows with Radeon passthrough, and did it fail to boot up after installing the Radeon drivers, or does the problem occur right after creating the VM?
- Do you receive any error messages when starting up the VM?
Юлий Афанасьев says
Thank you for answer!
If I go to Host -> Manage -> Hardware and toggle AMD Radeon passthrough, the ESXi host does not boot. It starts booting and freezes with no errors.
The only way to get back to the console is to reinstall ESXi!
Chris78 says
Which BIOS version did you use? I don't have problems with toggling passthrough of the Radeon on BIOS versions 51, 52 or 53 on the HVK or HNK. Didn't try older versions.
Юлий Афанасьев says
As I remember, BIOS 53. I'll try to reset the BIOS to default settings and come back with, I hope, good news :))
Chris78 says
How did you detect that your ESXi host freezes? Did you connect a screen to the HDMI port, and does it 'freeze' on that screen? Can you still 'ping' the ESXi host? If you pass through the Radeon, the monitor screen will indeed 'freeze', so you can only connect to the host through the web GUI. That is normal behavior.
Юлий Афанасьев says
Yes. I connected a monitor and keyboard. ESXi freezes at the very beginning of booting. I am not near the PC and can't remember which stage is the last.
Юлий Афанасьев says
I have already done:
1. Reset my bios to defaults
2. Set iGD = auto
3. Made clean install of ESXi 6.7U1 (even tried 6.7 without update)
4. No guest system are installed.
5. I tried to pass through the Vega and my system can't boot. It freezes on "dma_mapper_iommu loaded successfully"
I started to think I have problems with the hardware. Tomorrow I am going to install Windows 10 directly on the NUC to check it.
Chris78 says
Can you ping your host? If you pass through the Radeon, your screen won't go to the ESXi login. That is normal behavior. You should use the web GUI from another computer to connect to ESXi.
Юлий Афанасьев says
I understand how it works :). I have two guest systems on my ESXi: a webserver on Linux and a workstation on Windows. I don't have any problems if I don't touch the Radeon Vega. I use Intel graphics with passthrough for Windows and VMware SVGA for Linux.
I would provide you with a photo of my monitor with the issue, but this site doesn't accept any links in comments
Ely501 says
Hey, I realize this might be a silly question but here goes: Chris78 says that you can only pass through one graphics controller, Radeon or iGD, but it seems to be in the context of passing both through to one VM? Would passing the Radeon to one VM and the iGD to a different VM be possible, or would it result in the same PSOD, thread stuck error etc.? I realize that could be uglier than just all into one but thought I'd ask anywayz.
Chris78 says
There are no silly questions and it's a good thought. However, as soon as the iGD is enabled in the BIOS and available in the ESXi host hardware list, installation of the Radeon in a VM will fail.
I could only get the Radeon installed with Intel iGD on Auto in the BIOS; I assume Disabled will also work. With both options, the iGD will not be presented to the host.
Mark says
Ok thanks Chris78. I got mine up and running. I was trying to pass through USB for keyboard and mouse but wasn't having any luck initially. Any recommendations for which controller to pass through and which ports it affects?
Юлий Афанасьев says
I really don't understand what I am doing wrong. I still have a hung ESXi after trying to switch on passthrough in the host manage settings. I set up a clean install of Windows 10 on bare hardware and got a perfectly working system, so my guess about trouble with the Vega hardware didn't live up to my expectations. I went back to passing through only the Intel integrated video, which works perfectly and doesn't have any issues 🙁
http://pichost.org/images/2019/02/02/IMG_20190130_205053_1.jpg
Mark says
Your screenshot is normal. The system is not necessarily hung; that is the point at which ESXi does the passthrough of the Vega device. You have to wait a while for ESXi to continue booting up to access it via the browser. From there you can add the passthrough device to a Win10 VM and then boot. When the VM initializes the Vega device it will clear the ESXi screen. I think you have to install the Intel Vega driver in order for it to be fully recognized.
Tvzada says
Hello all! iGD configured as Auto will actually disable it in certain scenarios. I am able to pass the Vega with the iGD on or off; I have it configured as Enabled now, so this way it is always enabled. Also, I have been working on this and fine tuning it. I am now very pleased with this setup; it is a stable setup so far. Given the NUC8i7HVK has two NICs, I used one for the host and the other for one VM. But I have two VMs, and on the other one I had to use a virtual bridge that was giving me problems; after I used virtio drivers, the NIC problems were fixed. One of my last issues was audio, apparently fixed now as well.
I have two working VMs. I worked out CPU affinity so both VMs do not step on each other (4 CPUs for the games VM, 3 CPUs for the audio VM, 1 CPU for the host). The second VM is for running the audio DAW Reaper connected to my e-drum. It is working beautifully. It has a full USB controller passed through, so I am able to connect both my TD30 and Mackie studio box and use ASIO drivers (Fender + oDEUS); the output audio is CLEAN and crack free, which is making me very happy!
As I said, the more I use Fedora the more I am sticking to it. I still have to work on virsh hooks on my Fedora host to automate the start/stop of the VMs, but I am very close to the finish line now and will plan a how-to video, I think during this month. Cheers
Snakeh says
I have been fighting with this exact thing for several days. Using Ubuntu and the NUC8i7HNK. Have an ACS patched kernel, can get the iGD passed through successfully but not the Vega (entire screen goes black whenever I try). What are the kvm and cpu settings you had to tweak using libvirt?
Please do make that tutorial. Are you going to put it on YouTube, or where? Is your YouTube username the same as your username here (Tvzada)?
Thanks very much, all the stuff you've posted here and on Reddit has been super helpful.
Tvzada says
Hello Snakeh, I don't have a YouTube channel yet; I am going to create one and then post it here. I have been very busy lately, sorry for not replying to you earlier. I used the following KVM configurations:
options kvm_amd avic=1
options kvm ignore_msrs=1
options kvm report_ignored_msrs=0
vfio configurations like vfio_iommu_type1 allow_unsafe_interrupts=1 don't seem to be necessary. The other thing you have to do is CPU pin your hyper-threaded cores in order to boost guest performance, especially if you are running more than one guest VM like I am.
I don't have the BIOS updated, so that is not a requirement; the only BIOS requirement is that you have to disable Secure Boot.
I plan to work on that video in the next couple of weeks, maybe earlier. Meanwhile, I hope this helps you.
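In case it helps, module options like the ones above normally go in a file under /etc/modprobe.d/ (any name ending in .conf), for example:

# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
options kvm report_ignored_msrs=0

If the modules are loaded from the initramfs, rebuild it afterwards (on Fedora: dracut -f) so the options are picked up at boot.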
Neil says
Did you ever get a 'HOWTO' instruction video made? It's just that I've *tried* to follow the instructions, but the graphics card has never appeared in my virtual Windows 10 machine, so the Windows driver doesn't install as there's no graphics card. I'm clearly missing a step somewhere but I can't see what step that is.
Any help appreciated
Mark says
Ok, here's another one. Anyone tried to get Ubuntu 18.04/18.10 working on ESXi 6.7 with the Vega M passthrough? I tried some steps I found online but it either doesn't recognize it or fails on boot. Would be nice to have either Windows or Ubuntu VMs able to use Vega M graphics.
nova853 says
Can GPU passthrough run on multiple VMs at the same time?
nova853 says
I mean ONE GPU display card for multiple VMs at the same time~
Chris78 says
One physical GPU can only be passed through to one VM.
Chris Lemmer says
Thanks for the iGD setting! After enabling that on my NUC8i7HVK, I got the Radeon to work as well.
I would love to use the Radeon to mine ETH. Has anyone attempted such a configuration? The mining software doesn't seem to pick up the Radeon card. Does one need to switch the VM to it somehow?
Thanks!
Chris Lemmer says
Hey Guys,
How does this work precisely? My Windows 10 VM actually picks up the graphics card; however, I'm unable to use it for anything. Windows seems to keep on using the VMware one.
Can you only use it with apps that allow you to specify a graphics card?
Thanks
Chris
Chris78 says
It should work without problems; otherwise, disable the VMware video card in Windows Device Manager to force it to use the Radeon.
Thiago says
Would it be possible to pass through the Intel GPU to one VM and the AMD to another and have both VMs running at the same time? By the way, how does it work on Intel NUCs? Does it separate some I/O ports for one graphics card and other I/O ports for the other?
Kevin Greenway says
Thanks very much for this very detailed guide and assistance. With this help I'm almost there, but from my side something isn't quite working right which I'm still investigating. If anybody has any pointers I'd be most grateful!
I am running ESXi 6.7 Update 2, with vCenter 6.7. Using the supplied guides above I am able to pass through either the AMD Radeon or the Intel HD 630 Graphics (dependent on the BIOS graphics setting). I am also able to install the required drivers for either adapter into Windows. Both appear in GPU-Z and Device Manager as if they are working.
However whenever I leverage any DirectX/OpenGL application, which I'd normally expect to see shift towards the GPU it appears to still leverage the CPU and SVGA (Not the GPU passthrough). I am monitoring the GPU load either under GPU-Z or Windows Task Manager. The GPU load is 0% whilst the CPU load is very high. This is using applications such as VLC media player or Google Chrome. This occurs if connecting via RDP Client / VMware Horizon Client or Citrix VDA. All of which should detect the GPU inside the VM and push the rendering load towards the GPU.
I'm seeing the same results with Server 2019 or Windows 10 using hardware version 14. I've also tried a variety of different video settings in the VMX file, but I've never yet seen the GPU actually utilized for any tasks. Similarly, I have disabled the VMware SVGA driver installed by VMware Tools.
I'm curious if anybody else can actually confirm the GPU is utilized, since all the above examples and screenshots demonstrate the driver is loaded but not actually utilized by the VM.
One other anomaly is that if I run dxdiag and look under display it's showing errors.
I'm going to continue to investigate, but as mentioned, if anybody can give any pointers or has verified utilization past the point of the driver loading, I'd appreciate it! Oh, one more thing: BIOS is v58.
Thanks!
Kevin Greenway says
I figured it out. Both Horizon and Citrix leverage the RDSH role on 2016/2019. As soon as the RDSH role is installed, it switches remote connections to use just the basic display adapter for graphics rendering and not the GPU. This is the case whether connecting to the VM via RDP/Horizon/Citrix, since they each use the underlying RDSH role.
The following GPO can be edited which switches back to GPU. This now works for both the Intel HD 630 and AMD Radeon.
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment
Then enable “Use the hardware default graphics adapter for all Remote Desktop Services sessions”
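If you would rather script this than click through the GPO editor, the policy can also be pushed through the registry; the value below is the one this GPO maps to on the builds I have checked, so verify it in your own environment before relying on it:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v bEnumerateHWBeforeSW /t REG_DWORD /d 1 /f

A reboot of the VM is the safest way to make sure the session host picks it up.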
Hope this provides some help to anybody else requiring the same setup. Thanks again to all involved in creating this really helpful guide!
Ryan says
I just wanted to say thank you, this was exactly what my issue was.
Siegfried KIRCHEIS says
Hello Guys,
I'm looking for feedback about eGPU passthrough to a Windows 10 VM using a Hades Canyon.
Regards
Haldun says
Hello William & everybody sharing comments here,
With your help I decided to buy a NUC8i7HNK and install ESXi on it as a home lab. I just received and installed my 64GB RAM and 2x1TB SSDs, and it is the most versatile and portable server I have ever had.
I tried eGPU passthrough with this wonderful guide. My Windows VM can now output my two monitor setup (a DP monitor & an HDMI monitor).
I also set up USB & audio passthrough, but it seems that there are some caveats about this.
1. Only the front blue USB port is available to the VM. The other ports (front yellow & rear ones) are still assigned to ESXi.
2. I couldn't get audio output from the front jack, only from the HDMI monitor. However, the audio quality there is not that good.
Does anybody else experience the same problems? If so, did you find any solutions?
Thanks in advance.
BR,
Haldun
S.Jun says
Hello Haldun, eGPU passthrough? You mean an eGPU with a Thunderbolt 3 eGPU enclosure?
haldunalimli says
Hello S.Jun,
Sorry for the wrong wording. No eGPUs, just talking about passing through the internal AMD GPU.
jimmy says
Has anyone tried ESXi 7 on the Hades Canyon NUCs yet? I installed it the other day and am having an issue with the passthrough graphics. When I go in to configure the Vega graphics as a passthrough PCI device there are two options: one is the HDMI audio and the other is the Vega graphics. I can enable both, but only the HDMI audio seems to remain enabled through a reboot. Anyone seen this or have any suggestions?
Chris78 says
I see the same. On the hardware tab, the Radeon graphics card shows as enabled, but the host needs a reboot. A reboot does not activate the passthrough of the Radeon graphics card.
But it seems there are more small bugs, as I'm also unable to activate the NTP service (it always reverts back to manually entered time).
Chris78 says
After a reboot, toggle passthru of the Advanced Micro Devices devices twice: the first toggle disables the graphics and HDMI audio, the second one enables and activates the passthru. No need for a reboot. Now you will be able to use the graphics card in your VM.
However, it still does not survive a reboot.
The syslog.log shows an error during boot that the module pciPassthru can't be loaded as it seems busy; don't know with what. Other PCI devices seem to keep working (like the Intel HD 630 graphics card if you enable it in the BIOS; however, don't do that if you want to use the Radeon, as it will throw you back to the original problem mentioned in this post, like the BSOD of the VM).
Maybe William has an idea why the passthru of the Radeon does not survive a reboot in ESXi 7?
jimi says
I was talking to Paul over at tinkertry.com and he's having the same issue, but he's using a Supermicro board. So it doesn't look like the issue is with the NUCs but maybe a bug in the OS.
BR0KK says
Tried it but I cannot figure out how to:
1. get rid of the VMware GPU
2. fix the Error 43 I get from the Vega
I'm on ESXi 6.7 U3B with the latest patches.
Windows installs fine and the Vega driver does find the GPU (and installs it).
Any tips?
BR0KK says
got it working after fiddeling with the nucs bios. Now it installs fine and there is no error 43.
but how do i get rid of the vmWARE GPU?
I deinstalled it and it shows up as a MS Basic Display Adapter
CPU and GPU-Z wont show the Vega but rather the MS adapter
Thank you
Mark says
If you go into the advanced settings for the VM, search for the parameter "svga.present" and change it to "false", that will remove the software VGA adapter for that machine. If you ever muck up the passthrough, you have to go back in and re-enable it.
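For completeness, the same thing can be done by adding one line to the .vmx file while the VM is powered off, which is the entry that advanced setting corresponds to:

svga.present = "FALSE"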
Samuel says
Hi Mate,
Do you remember what you did in the NUC BIOS?
I am on 6.7 U3 and getting code 43. Was fine on 6.7
Amanda says
Hi William,
Great explanation. Totally appreciated!
Did anyone try passing the Radeon GPU to a Linux VM (Ubuntu, Fedora, CentOS)?
The above steps work like a charm on a Windows VM but not for any Linux distro. I am also noticing that when the VM restarts, it hangs the host and I have to manually restart the host.
I am running ESXi 6.5, the virtual machine version is 13, and I made sure the VM's boot option is set to BIOS and that the network adapter is E1000E.
Would appreciate any pointers 🙂
Chris78 says
It's been a while since I have been playing around with the Hades Canyon, ESXi and GPU passthrough, but I wanted to rebuild my lab with ESXi 7.0, Windows 11 and Windows Server 2022, so here I am again.
With info from a later blog post from William (https://williamlam.com/2020/06/passthrough-of-integrated-gpu-igpu-for-standard-intel-nuc.html) I prevented ESXi from claiming the iGPU (esxcli system settings kernel set -s vga -v FALSE), so I am able to pass through both GPUs to the same VM (with hypervisor.cpuid.v0=FALSE under the Advanced Settings of the VM). The Intel GPU installs correctly, but the Radeon RX Vega M GL errors out with an error 43, which indicates that the kernel is now using this GPU instead of the Intel GPU.
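(Side note for anyone following along: you can verify whether that kernel setting stuck the same way as the disableACSCheck option earlier in this thread, i.e. esxcli system settings kernel list -o vga.)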
@William, any idea if the second GPU can also be prevented from being claimed by the kernel? Or should the above command prevent both GPUs from being used by the kernel?
Steven Petrillo says
I am using ESXi 8.0b and trying to pass through the AMD GPU to a Linux VM. When it reads the GPU BIOS, it hangs the ESXi server completely. I am going to try passing through the Intel GPU on the next pass.