After successfully enabling and persisting iGPU passthrough on the latest Intel NUC 10 (Frost Canyon), I thought it was worth experimenting with the Apple Mac Mini 2018 to see if the same could be accomplished with its iGPU, which is an Intel UHD 630. The biggest benefit, in addition to unlocking the iGPU for general use, is support for Apple's Metal API, which gives developers access to the underlying GPU when building and testing MacOS and iOS applications. This is also quite timely, as the Apple Mac Mini 2018 was just added to the VMware HCL!
My initial attempt failed when using the latest ESXi 6.7 Update 3 release. After enabling passthrough of the iGPU and rebooting the ESXi host for the change to take effect, the system would get stuck during boot while loading the dma_iommu_mapper module. After speaking with Engineering, the issue is probably not related to the dma_iommu_mapper module itself but to some other module loaded shortly after; without serial console output or the ability to see the terminal screen, it would be very difficult to debug the issue.
About to give up, my last attempt was to try ESXi 7.0, and to my surprise the ESXi host fully booted up after enabling passthrough of the iGPU. It is still not clear what might be causing the problem on 6.7, but at least 7.0 works!
Note: To be able to successfully power on a MacOS VM running on ESXi 7.0, ensure you have applied the recent ESXi 7.0b patch. You will need to go to the VMware Patch Portal site to download and apply the update.
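You can confirm the version and build you are running from the ESXi Shell with:
vmware -vl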
Step 1 - Enable passthrough of the iGPU using the vSphere UI and then reboot for the changes to take effect.
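If you prefer the command-line, ESXi 7.0 can also toggle passthrough via ESXCLI. The device address below is just an example; the Intel iGPU typically shows up at 0000:00:02.0, but verify it with the list command first:
esxcli hardware pci pcipassthru list
esxcli hardware pci pcipassthru set -d 0000:00:02.0 -e true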
Step 2 - Navigate to Configure->Hardware->Graphics->Host Graphics and change the default graphics type to "Shared Direct".
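The same change can also be made from the ESXi Shell and then verified:
esxcli graphics host set --default-type SharedPassthru
esxcli graphics host get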
Step 3 - Create a new MacOS VM and install MacOS; I used MacOS 10.15 Catalina. Ensure that the VM is configured with vSphere 7 Compatibility (aka vHW 17), which is required to use the new Dynamic DirectPath I/O feature. If you are using an older version of vSphere or an earlier VM Compatibility, the legacy DirectPath I/O should still work.
Note: One important thing to be aware of _before_ you attach the iGPU to the MacOS VM is to enable Apple Remote Desktop and/or SSH. The reason is that I found that after powering on the VM with the iGPU attached, the VM Console no longer functions and simply goes "black". The VM is fully functional, but you will not be able to use the VM Console for any type of access; this looks to be a limitation for now.
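Both can be enabled ahead of time from the Terminal inside the guest. These are standard macOS commands (the kickstart utility lives at the stock path shown below):
sudo systemsetup -setremotelogin on
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -access -on -restart -agent -privs -all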
Step 4 - After you have installed MacOS and enabled remote management, you can then attach the iGPU using the new Dynamic DirectPath I/O feature introduced in vSphere 7, as shown in the screenshot below.
Step 5 - To ensure the passthrough configuration persists, we must disable the claiming of the VGA driver by the VMkernel, as explained in the previous article. Run the following ESXCLI command:
esxcli system settings kernel set -s vga -v FALSE
You can always re-enable this as long as you have access to the ESXi host. At this point, you do not have to reboot the ESXi host, but the next time it goes through a reboot, the iGPU passthrough settings will persist.
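To check the current value, or to revert and let the VMkernel claim the VGA device again, use the same ESXCLI namespace:
esxcli system settings kernel list -o vga
esxcli system settings kernel set -s vga -v TRUE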
Here is a screenshot of accessing the MacOS VM via SSH and using the system_profiler utility to show our iGPU:
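For reference, the system_profiler invocation that lists the GPU is:
system_profiler SPDisplaysDataType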
Here is a screenshot of accessing the MacOS VM using Apple Remote Desktop and System Report to show our iGPU:
Lastly, I was also able to get iGPU passthrough working with the recently announced MacOS 11 (Big Sur) Beta 1 release. For details on installing Big Sur on ESXi, please see this blog post.
Kevin Ou says
Hmm, I am able to get the iGPU passthrough working persistently across ESXi reboots on ESXi 6.7 without any tweaks, even though the console stops updating and eventually goes black after showing the message that the dma_iommu_mapper module has loaded. But I have had no luck with an eGPU connected via Thunderbolt, even with "esxcli system settings kernel set -s vga -v FALSE". The audio portion of the GPU card stays at "Enabled/Needs reboot", and the video portion of it just stays in a disabled state after reboot. Any insight into that?
William Lam says
Is this a Mac Mini 2018? What type of eGPU are you using (enclosure model / GPU device)?
Kevin Ou says
Yes, it's a Mac mini 2018. I am using a Sonnet Breakaway Box 550 enclosure for my testing. So far I have tried a Radeon RX 580 and a Radeon Pro W5500. They both show the same issue. Also, when the eGPU is plugged in, the iGPU becomes invisible to ESXi, whereas on bare metal macOS, both the eGPU and iGPU are visible and both can be used.
E_DESCLAUX says
Hi,
what about the non-ECC RAM on the Mac mini? Is this config usable in a "production cluster"?
Regards.
William Lam says
All Mac Minis have always used non-ECC memory; I'm not even sure the system would support ECC memory, and this is how customers have been running for many years now 🙂
Given this is a consumer platform, you should design your infrastructure to assume the HW will fail (not if, but when) and build in availability, either in your application or by taking that into consideration. This is how many customers have done it at scale.
MichalM.Mac says
Is this going to be possible to do with a Mac mini 2012?
William Lam says
Give it a try 🙂
GarrettSkj says
but can you get nested virtualization enabled at the same time...
William Lam says
Yes, why would Nested Virtualization be affected by passthrough?
garrettskj says
Good question, this changed for some reason in 7.x - "Failed to reconfigure virtual machine %MACHINE%. PCI passthrough devices cannot be added when Nested Hardware-Assisted Virtualization is enabled." - Lemme know if you find any tricks. 🙂
William Lam says
What’s the use case for passthrough and Nested? Is this for ESXi or some other Hypervisor?
garrettskj says
Fusion on top of OSX on top of ESXi. Currently we have some developers that use Fusion with Vagrant, which we needed to virtualize with the mass migration to work from home. We ended up standing up a separate 6-node 6.7 cluster (on Minis!) with some .vmx play, to allow the remote workers to get GPU passthrough (for OSX) as well as nested virt, so they could continue their Fusion & Vagrant work. After seeing your work with 7.x and the 2018s, I was hoping that you had overcome similar obstacles.
William Lam says
Thanks for the use case detail. Just so that I understand, the issue is that the iGPU passthrough works in OSX but you're not able to use or set up Fusion? If so, it's possible you didn't pass in the right parameters, but I'm trying to understand what isn't working from a Nested Virtualization standpoint.
garrettskj says
I'm unable to power on a VM in 7.x that has both PCI passthrough and hardware-assisted virtualization enabled; the following error is presented:
"PCI passthrough devices cannot be added when Nested Hardware-Assisted Virtualization is enabled."
Perhaps you're right, and I am not passing the right parameters in 7.x, but when I set it up with 6.7, it works as I'd expect.
William Lam says
Can you try the following to see if this trick still works with 7.x:
Unregister the VM, then manually add the following two entries to the VMX and then re-register the VM:
vhv.enable = "TRUE"
vhv.allowPassthru = "TRUE"
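If it is easier, the entire workflow can also be done from the ESXi Shell. The VM ID (42) and VMX path below are just examples, so adjust them for your environment:
# Find the VM ID (first column)
vim-cmd vmsvc/getallvms
# Unregister the VM
vim-cmd vmsvc/unregister 42
# Append the two entries to the VMX file
echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/MacOS/MacOS.vmx
echo 'vhv.allowPassthru = "TRUE"' >> /vmfs/volumes/datastore1/MacOS/MacOS.vmx
# Re-register the VM
vim-cmd solo/registervm /vmfs/volumes/datastore1/MacOS/MacOS.vmx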
garrettskj says
This is the trick that I use to get them working on 6.7, the .vmx play I was referring to. 🙂 However, it doesn't work in 7.x, or perhaps there is an additional step/flag that I am missing...
William Lam says
Just sent you a note offline, Garrett; I may need some more information about the setup.
Anton says
The console going black has been a known issue in vSphere for a long while. I believe there was a KB article, but I cannot seem to find it. Nvidia vGPUs (Grid/Tesla cards) have the same issue, but since this technology was primarily intended for VDI environments, this is not a problem since you can (and should) connect with VMware Horizon or another PCoIP client anyway.
Vadim Bobrenok says
I was able to install Catalina on ESXi 7.0b running on a Mac Mini 2018, but I still can't pass through the iGPU; once I add it to a machine, it hangs in the middle of the boot process. The only difference in my setup from your post is how I enabled "Shared Direct" for the graphics: I did it via the console with the `esxcli graphics host set --default-type SharedPassthru` command, which I believe is the same as what you've described.
Do you have any idea what could be the reason?
William Lam says
This behavior sounds like what I observed when attempting iGPU passthrough on 6.7, where after the reboot the ESXi host would simply hang, as mentioned in the article. Is this what you're observing? If so, could you attempt a fresh install of ESXi 7.0b (rather than an upgrade, if that's what you had done)?
Vadim Bobrenok says
This is a fresh install of ESXi 7 with the 7.0b patch on top of it. For me the ESXi host does not hang, but the guest macOS does. If I launch a VM with SVGA and the iGPU, I see the boot screen and a progress bar that reaches ~50% and stays like that forever.
William Lam says
Ah, then this is expected, as I've already mentioned in the blog post. You'll need to remotely connect using Apple Remote Desktop or via SSH; the VM Console will not work after powering on the guest w/ the iGPU 🙂
Vadim Bobrenok says
I enabled Remote Desktop before adding the iGPU and verified that it works, but once I add the iGPU (with or without SVGA enabled) I can't connect to it anymore, even 10 minutes after powering on.
William Lam says
Is it even responding on the network? How was Catalina installed?
Vadim Bobrenok says
I had a chance to play a bit more with the Mac Mini and this is what I found:
- If I boot a guest with SVGA enabled, it actually boots (you were right, just the console stops working) and I can even see the iGPU if I execute `system_profiler SPDisplaysDataType`, but I can't see the iGPU anywhere else and HW acceleration is not working
- If I boot a guest without SVGA, it doesn't boot at all, because it can't find a bootable disk(!). This is what I see in the logs:
```
2020-07-06T23:52:32.600Z| vcpu-0| I125: Guest: Status upon boot failure: No Media
2020-07-06T23:52:32.678Z| vcpu-0| I125: Guest: About to do EFI boot: EFI Virtual disk (0.0)
2020-07-06T23:52:42.680Z| vcpu-0| I125: Guest: Status upon boot failure: Not Found
2020-07-06T23:52:42.682Z| vcpu-0| I125: Guest: EFI Shell inactive in default boot sequence.
2020-07-06T23:52:42.682Z| vcpu-0| I125: Guest: Status upon boot failure: unsuccessful
2020-07-06T23:52:42.731Z| vcpu-0| I125: Msg_Post: Warning
2020-07-06T23:52:42.731Z| vcpu-0| I125: [msg.Backdoor.OsNotFound] No operating system was found. If you have an operating system installation disc, you can insert the disc into the system's CD-ROM drive and restart the virtual machine.
2020-07-06T23:52:42.731Z| vcpu-0| I125: ----------------------------------------
```
I tried Mojave and Catalina and both behave the same. Both were installed from App Store images made with hdiutil. The issue is so weird that I don't even know where to look further.
Btw, I also tried an eGPU and have the same issues already mentioned in the comments - not possible to enable passthrough, the iGPU is not available if the eGPU is connected, etc.
Seems like MacOS in a VM won't happen for me this time 🙁
jeffer says
How can I pass through an RX 570 to macOS Catalina on ESXi? I tried it on Windows 10 and it worked, but I can't get it working on macOS. Shouldn't it have a native driver for it?
Fábio Loureiro says
Hello Good Sir,
First of all, thank you for the very nice tutorial. I'm using a 2014 Mac Mini (7,1) and followed all of the tutorial, got the GPU passed through, but OSX / macOS doesn't load the kexts. I've tried it on Catalina, Big Sur and El Cap. Any ideas?
Thanks for your time.
Joel Roberts says
Has anyone been able to get GPU passthrough working on the Mac Pro 2013?
Olli K. says
Q on creating the USB installer: I simply can't get the Mac Mini 2018 (Intel) to boot/recognize the USB stick. Works fine with all NUCs but the Mac does not care ... also switched off all security settings ... anyone have a clue?
Jeff says
Has anyone gotten this working without the integrated VMware SVGA? It's a huge pain to always have 2 displays with no way to disable the VMware SVGA device.
You can use SwitchResX to "fake disable" the integrated screen, but this is annoying and doesn't actually disable it.
Ideas?
Justin says
Hey William, have you tried this recently? I have a Mac Mini 2018 running 7.0.3 build 21686933. The error I get is "Module DevicePowerOn power on failed. Failed to start the virtual machine. Failed to register the device pciPassthrough0 for 255.31.7 due to unavailable hardware or software support." I'd ask VMware support, but they made it clear to me that, support contract or not, they have no access to a Mac cluster and cannot actually support it.
William Lam says
No
Justin says
I decided to go the support request route and I just finished the call. One of the more curious parts of it was that they saw the supported processor for the 2018 Mac Mini listed as a Xeon. What a special beast that would be if it existed!
Justin says
I have spent some time on this today, including bringing up a 2018 Mac Mini on a clean install, patching to 21686933, and then trying to enable passthrough. The iGPU will not actually go into passthrough mode. If I configure it that way in vCenter and then reboot the machine, when it comes back up it is no longer in the list of passthrough-enabled devices. If I use the host client, it always shows "Enabled/Needs Reboot" no matter how many times I reboot.
Justin says
I have it working again, though there are some loose ends to track down. After a few rounds of calls and log collection, VMware pointed at there not being a driver loaded in vSphere 7 for the GPU. They had to re-image the host onto 7.0U3g and then manually patch it to 7.0U3i in order to get the driver in place. At that point, in order to eliminate variables, I created a new macOS Monterey VM from scratch using a newly created ISO, installed VMware Tools, enabled the Remote Desktop service, and added the smbios.reflecthost = true flag. After that I was able to fire it up, open Safari, and run a WebGL demo. What I still need to exercise is fully patching the vSphere host, seeing if multiple macOS VMs can share the GPU, and checking reboot persistence of the functionality, both for the VM and the vSphere host. William, you might be mildly entertained to know that VMware support had found your article before our first call and was going to reference it until I did first.
SemoTech says
Great tutorial as always, thank you William!
I think I managed to get this working on a 2018 Mac mini running ESXi 8.0u2b; however, in Ventura System Profiler (or via SSH) I see both the "Display 3MB" GPU and the "Intel HD Graphics CFL CRB" GPU. Can the 3MB one be removed from the VM config to ensure it is not used?
Also, in vCenter 8.0.2.00300 I enabled passthrough for the Intel HD Graphics and set the Host Graphics to "Shared Direct", but the "Default graphics type" still shows "Shared" instead of "Shared Direct" as yours does, and Graphics Passthrough says 0 VMs are using the graphics card!? Why?
FYI, I also disabled claiming of the VGA driver using:
esxcli system settings kernel set -s vga -v FALSE
Here are a few screenshots:
vCenter Host PCI Config: https://app.screencast.com/kmTysTaLVIlmJ
vCenter Shared Direct Enabled: https://app.screencast.com/NlTwotMOj5ZWD
vCenter Host Graphics: https://app.screencast.com/INacxJlGgRJx7
vCenter MacOS VM Settings: https://app.screencast.com/TcELTvXZSTor6
MacOS System Profiler: https://app.screencast.com/Tp12Z58TPGPT2
Did I miss something, given that vCenter says 0 VMs are using the passthrough-enabled graphics card when it is clearly seen by MacOS Ventura?
And why the discrepancies between your vCenter and mine?
Thanks.
360coolp says
Hi SemoTech,
Could you share your .vmx file please? I followed the tutorial and enabled GPU passthrough, but the GPU is still not recognized in the OS. I'm curious what you did to get it working.
Greg Christopher says
Hi All,
Depending on the card, it is likely doable to enable passthrough on the Mac mini (and 2019 Mac Pro) with some settings I mentioned in the 2019 Mac Pro thread here: https://williamlam.com/2020/01/esxi-on-the-new-2019-apple-mac-pro.html BUT ALSO to get rid of the "black screen".
The general steps are:
-use an Apple-supported GPU/chipset. This generally means AMD and specific chipsets/models. I have gotten a perfect setup from a Sapphire Radeon RX 6900 XT because of its Big Navi 2 chipset
-create or download an oprom file for that same GPU. You will place that file (usually a .rom) into the same folder as your macOS virtual machine.
-follow the directions in the other thread about discovering the passthrough ID for the card you are passing through (after adding the PCI device in the macOS VM settings), and add the ".filename" and ".opromEnabled" flags in the vmx file as instructed (see the example after this list). vCenter users will do that in the "Advanced" area under the "VM Options" tab by clicking "Edit Configuration" and then "Add Configuration Params". Make sure you don't type the quotes there and that there are no spaces before or after what you type.
-You may end up playing around later if the VM cannot be shut down and then restarted. Generally you will need to mess with the "reset method" in the /etc/vmware/passthru.map file (example after this list). Getting it working the first time is most important.
-Yes, SVGA is left on in most cases, but in SOME cases (such as the Sapphire Radeon example) the oprom setting "is enough" to avoid the black screen. But to be clear, today I leave "svga.present" set to "TRUE".
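To make the vmx and passthru.map pieces concrete, here is roughly what mine look like. Treat these strictly as examples: pciPassthru0 assumes the GPU is the first passthrough device, the .rom filename is whatever you saved your oprom as, and the passthru.map line applies the d3d0 reset method to all AMD (vendor ID 1002) devices:
# In the .vmx file (quotes are required here; omit them in the vCenter UI)
pciPassthru0.opromEnabled = "TRUE"
pciPassthru0.filename = "RX6900XT.rom"
# In /etc/vmware/passthru.map (columns: vendor-id device-id resetMethod fptShareable)
1002 ffff d3d0 default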
At this point I accidentally found something out. If you use a VMware console (web, Workstation, Fusion, or VMRC should all work) to connect to the black-screened VM, then RESIZE the window and wait 5 seconds, the monitor you have attached via HDMI (not Thunderbolt) should light up with the macOS screen. Something about resizing the window sends a message to VMware Tools which "wakes up" the monitor. But you will have an SVGA monitor dangling. You can both:
-disable the vga monitor
and
-automate the change resolution command
with a startup script. I automatically launch the script from "Login Items", setting the application to open it to "Terminal":
#!/bin/bash
# Nudge VMware Tools with a resolution change so the physical (HDMI) monitor wakes up
/Library/Application\ Support/VMware\ Tools/vmware-resolutionSet 1920 1080
sleep 10
# Find the dangling SVGA display (listed as "Unknown") and disable it
monitorID=$(/Applications/DisableMonitor.app/Contents/MacOS/DisableMonitor -l | grep Unknown | awk '{print $1}')
/Applications/DisableMonitor.app/Contents/MacOS/DisableMonitor -d "$monitorID"
You can find DisableMonitor here: https://github.com/epalzeolithe/DisableMonitor-3.0
I hope this works as well for others as it has been working for me.
SemoTech says
Has anyone managed to get audio working on the 2018 Mac mini on ESXi 7 or 8, maybe via passthrough of the "Apple Audio Device" (PCIe: 0000:02:00.3), in a virtualized MacOS Sonoma VM?
My MacOS Sonoma 14.5 VM shows no audio "Output devices" so there is no sound when accessing the VM via Apple's Screen Sharing and I'd like to change that...