Inquiries from customers about support for ESXi on the latest 2019 Apple Mac Pro 7,1 have slowly been trickling in since the release of the system in late December. Officially, VMware does not currently support this platform, and until we have a unit in-house to investigate further, that remains the official stance.
With that said, several folks from the community have reached out to me and shared some of their findings as they relate to ESXi on the new Mac Pro. A huge thanks goes out to Mike Rimmer, who went through the installation process and identified that the on-board NICs were not automatically detected by ESXi, so the installation was unable to proceed. Thanks to the extensibility of the Mac Pro, Mike was able to add a supported Intel-based NIC to the system so that we could investigate the issue further.
Upon closer investigation, it looks like the new Mac Pro uses two Aquantia-based 10GbE NICs, similar to the 2018 Mac Mini, which require the Aquantia ESXi driver that was developed earlier last year.
AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion]
Vendor ID: 0x1d6a
Device ID: 0x07b1
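For anyone checking hardware programmatically, the vendor/device pair above is enough to spot the controller, e.g. from `lspci -n` output. A minimal sketch (the lookup table below contains only the entry from this post; it is not a complete PCI ID database):

```python
# Known Aquantia controller from this post: vendor 0x1d6a, device 0x07b1
KNOWN_NICS = {
    (0x1D6A, 0x07B1): "AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion]",
}

def identify_nic(vendor_id: int, device_id: int) -> str:
    """Return the controller name for a PCI vendor/device pair, if known."""
    return KNOWN_NICS.get((vendor_id, device_id), "unknown device")

# Example: the on-board NICs in the 2019 Mac Pro
print(identify_nic(0x1D6A, 0x07B1))
```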
Although Mike did not have a chance to confirm this assumption, I did get validation from another customer who made the same observation when attempting to install ESXi: once the Aquantia ESXi driver was incorporated into the latest ESXi 6.7 Update 3 image, both on-board NICs were automatically picked up by ESXi and the installation was successful.
UPDATE (09/02/21) - Per this official blog post, VMware will no longer pursue hardware certification for the Apple 2019 Mac Pro 7,1 for ESXi.
UPDATE (04/28/20) - ESXi 6.7 Patch 02 resolves a number of the issues mentioned below, please take a look at this blog post here for more details.
UPDATE 1 (01/16/20) - Thanks to our Graphics team, who were kind enough to loan me their 2019 Mac Pro, which literally arrived yesterday! I had an experiment in mind: add a PCIe card with an M.2 NVMe SSD and see whether the Apple T2 Security Chip would have any effect on whether ESXi could see the device. I was not super optimistic, but I had a need for an additional M.2 device, so I went ahead and purchased a $15 PCIe adaptor. I was pleasantly surprised to see that ESXi not only detected the device, but I was also able to format a local VMFS volume and power up a functional VM! I guess this makes sense, as only the Apple SSDs are cryptographically tied to the T2 chip; other PCIe devices are not, which would allow customers to take advantage of this system right now for running non-macOS guests (yes, the T2 still affects the SMC).
🔥 BOOM! 🤜🎤🔥
PCIe adaptor w/M.2 NVMe is NOT affected by the Apple T2 Chip! ESXi is able to see the device but more importantly, I was able to format local VMFS volume and power up a VM! Guess it makes sense, Apple SSD are cryptographically tied to T2#ESXiOnMacPro2019 pic.twitter.com/hod8Irckj9
— William Lam (@lamw) January 17, 2020
I also ran another experiment, connecting a Thunderbolt 3 chassis that also contained a supported M.2 NVMe device, to see whether I would be lucky again. Although it looks like ESXi 6.7 Update 3 has resolved the PSOD'ing issue, ESXi was not able to see anything on the other end.
Note: Secure Boot must be disabled on the Mac Pro before you can install ESXi; you can find the instructions in this Apple KB.
This was certainly some good news, but like the 2018 Mac Mini, the new 2019 Mac Pro also ships with the Apple T2 Security Chip, which has proved challenging for ESXi as mentioned here, along with some known caveats. For now, I would hold off on purchasing the new Mac Pro if you intend to run ESXi. VMware does officially support ESXi on the previous generation Mac Pro 6,1, along with the Mac Mini 6,2 and Mac Mini 7,1, all of which are on the official VMware HCL.
I will continue to update this article as new information and findings are shared with me.
NATHAN BARRY says
I think the biggest caveat is the inability to boot macOS VMs due to the T2 blocking the SMC. I really hope Apple starts helping out, since the new rackmount form factor has some serious potential for VM Xcode build and test farms. I'd really like to be able to replace our developers' old Minis with an actual rackmount system that could integrate with vSphere and our FC storage.
Kazuto Okayasu says
I'd put one significant caveat on the 'officially support ESXi on the last current generation of Mac Pro 6,1' comment at the end. We've been running three of them for several years, and they've always had a problem of seemingly randomly dropping their network connections, causing APDs, etc. But the problem I encountered when trying to contact VMware support is that because the PCI IDs of those NICs aren't in the HCL, VMware Support refuses to work on the case and refers me to the 'hardware vendor', since they claim that entry into the HCL is wholly dependent on the vendor. Well, to my knowledge Apple isn't going to support ESXi on their stuff, so we're stuck.
William Lam says
We've had plenty of customers call in for the Mac Pro 6,1, and in fact MacStadium is a very large customer with a huge fleet of these systems, as they're quite popular. I know there were some support changes to educate GSS on the Mac Pro system, as it's not a common system that we get called about compared to the rest of the x86 systems. If you're having trouble, feel free to reach out to me directly via DM and we can make sure you're fully supported, as long as you've got valid SnS with VMware Support.
Benny Rt2 says
Cool. Can you install a vCenter Server Appliance VM on that ESXi host, and then use that appliance's vSphere Client to build guest VMs on the same ESXi host?
Vladislav Rassokhin says
Thanks for the research and investigation. Overall great news, except:
>running non-MacOS guests (yes, T2 still affects the SMC)
Maybe it's possible to emulate the SMC the same way QEMU and Proxmox do? Maybe some extension for ESXi to do so? Anyway, we're OK with the Apple license since it runs on Apple hardware.
The inability to run macOS guests makes the whole ESXi-on-Mac-Pro exercise kind of useless.
Hey, found this very interesting and useful.
I was able to get about as far as you have with the new Mac Pro and ESXi. Having the SMC not recognised by VMs is a bit of a deal breaker, though. Do we know if there is any progress between Apple and VMware, or are both companies waiting for the other? There has been some progress getting Linux to see the internal SSD, but I haven't seen any mention of the SMC in those projects. As things stand with the Mac Mini, I can't see ESXi on the new Mac Pro anytime soon, which is a shame, as it's really perfect for it.
Has anyone tried to install ESXi 7 on the new Mac Pro?
William Lam says
Yes, I tested this a while back (pre-vSphere 7 GA), and the behaviors are the same as described above.
Ok, bad news....
Thanks for your reply and all your great articles !
Dan Pineda says
I installed a PCIe card with a 1TB M.2 NVMe, as you have done, but the OS install won't continue and hangs with the errors below:
More than one module named /sb.v00
More than one module named /s.v00
More than one module named /esxupdt.v01
Boot module signatures are not valid
I also created a USB installer for 6.7 U3 with the Marvell NIC drivers. Is there anything I'm missing?
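In case it helps with debugging: the "More than one module named" errors suggest the same boot module may be listed twice in the installer's boot.cfg (which can happen when driver modules are appended to an image that already contains them). A small sketch (a hypothetical helper, assuming you can read boot.cfg off the USB media) to spot duplicates:

```python
def duplicate_modules(boot_cfg_text):
    """Return module names that appear more than once in boot.cfg's modules= line."""
    seen, dupes = set(), []
    for line in boot_cfg_text.splitlines():
        if line.startswith("modules="):
            # ESXi's boot.cfg separates boot modules with " --- "
            for mod in line[len("modules="):].split(" --- "):
                mod = mod.strip()
                if mod in seen and mod not in dupes:
                    dupes.append(mod)
                seen.add(mod)
    return dupes

# Example with a duplicated /s.v00 entry
print(duplicate_modules("modules=/b.b00 --- /s.v00 --- /sb.v00 --- /s.v00"))
```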
Not having as much success as others seem to be. I can't get 6.7 Patch 3 to install; I get a PSOD on boot. 6.7 Patch 2 installs and runs fine, though.
Ned W. says
I was able to get 7.0U1a installed, and I have a functional Catalina VM. Keep in mind - DO NOT install the 7.0U1b patch update - it screws with the SMC exposed to the guest OS, and the macOS guest will beachball.
I did follow the instructions above for the Marvell NIC drivers, but I just couldn't get the machine to pull an address from DHCP. I went out and bought the standard Intel 2 port SFP+ 10Gb fiber NIC (X540 I think?), installed that, works like a charm out of the box.
For the internal NVMe storage, I used a 4TB WD Black PCIe card. It cost about $999 from Amazon, but I anticipated this and had the Mac Pro configured with the absolute minimum storage.
One thing that I'm unfortunately banging my head against here is the GPU pass through. Each time I enable a GPU on the guest, I start to get a reboot loop. @William Lam if you happen to read these comments, I could really use some help, and I've got a support contract... 🙂
Is anyone having problems with the Aquantia AQC107 10GbE on-board NICs on vSphere 7?
Michael Mast says
Halfway through May, 2021, I'm wondering if anything has improved WRT installing and running ESXi on Mac Pro 7,1.
Hi guys, I just installed 7.0.2 on my Mac Pro 7,1 with an external NVMe, and used an external NIC with a USB-C connector.
Next I'll order an internal PCIe card for the storage and network. Any new updates from anyone?
@Gili I've had great luck with the Western Digital AN1500 NVMe SSD Add-in Card. Performance is screaming fast. Last I checked, the 4TB model can be had on Amazon for about $1,000. VMware recognizes it on install.
Thanks @n3dwilson !
I will try it, and I also need a good internal NIC for the system (the internal one was not working for me).
@Gili I always had issues getting the internal NICs to work, although to be honest, I haven't tried since the Atlantic drivers were included with ESXi out of the box. Startech makes a PCIe 10 Gigabit SFP+ NIC, based on the Intel 82599 chipset, that I have been using.
Out of curiosity, have you actually been able to get a macOS guest to install without a reboot loop? I ran into all kinds of problems with this, and sadly had to use the ESXi unlocker in order to get it to work, which is far from ideal.
Sadly, same here. I tried to install the unlocker because of the reboot loops, but that didn't work either.
The Atlantic driver seems to be identified and connected, but I didn't receive an automatic IP (DHCP), so I'm working with a different USB NIC model.
@William any movement on this from the VMware side of things? I'd love to start testing again, but it doesn't seem like a lot has changed in the last year or so. 🙁
I received a brand new Mac Pro Rack mount and see there is an internal USB port. In my other servers we install to SD cards on internal risers. Can one use that internal USB port to install ESXi onto a USB storage device so the entire internal storage is available?
I was able to get the 7.0U2a installer to run and during installation it saw the USB drive I had in the internal port and I was able to complete the install to it. However, I did notice that no other storage was found, like the internal SSD. And I have not found a way to get it to boot to that internal USB drive so any tips and pointers for that would be appreciated too.
Thank you in advance.
@simplijim please be advised, before you go down this road, that it appears that VMware has pretty much not been responsive to anything regarding ESXi on the 2019 Mac Pro for almost a year and a half. @lamw always shares when he gets new information, but sadly, there hasn't been much for a very long time. It is starting to feel like either VMware has decided not to pursue development or support of the 2019 Mac Pro, or Apple has sent them a cease and desist. It's pretty discouraging.
With that said - I would highly recommend installing an additional PCIe internal storage card. I've used a Western Digital Black 4TB PCIe card on three Mac Pro installs that I have here. You can set ESXi to boot off this internal device, and also use it for your local datastore.
The internal SSD is protected by Apple's T2 security architecture. I know that some folks have gotten ESXi to recognize it, but in my opinion, the juice isn't worth the squeeze.
If you want to boot to the internal SSD, which you will need to do from time to time to install macOS updates - plug an Apple keyboard and a display into the Mac Pro, turn the power on, and hold down the option key on the keyboard until you see the dialog that allows you to select an OS to boot. Pick Macintosh HD, and you will boot from the internal.
@n3dwilson, thank you. I found @lamw's virtuallyGhetto site a long time ago when I was tasked with trying to get Mac Minis to be VM hosts (successfully, with that resource), and then the "coffee can" Mac Pros with the Pegasus external storage arrays (again with this resource), so when the opportunity to try with the latest Pro was hinted at, I jumped.
I can get it to boot from the internal USB drive using the hold-the-Option-key method, but I was looking for a way that would not require user intervention... In the meantime, and while still "in development", I'll have to be physically present to reboot.
I'm willing to try getting it to use the internal as I'm not on a timeline other than "whenever you can get it to work, we'd really appreciate it". 🙂
I just un-boxed it today so I'm just starting to explore. Found the NIC issue to still be there with 7.0U2a: https://williamlam.com/2021/03/aquantia-marvell-aqtion-atlantic-driver-now-inbox-in-esxi-7-0-update-2.html
I'll keep poking at it, I know @lamw and others are as well.
@simplijim ahh, yeah, I see what you are trying to do. It sure would be nice if you could add IPMI functionality to Mac Pros as well!
If your Mac Pro is already booted into macOS, you can ssh into it and try the following:
# get a list of bootable volumes
sudo systemsetup -liststartupdisks
# get current startup disk
sudo systemsetup -getstartupdisk
# set the startup disk
sudo systemsetup -setstartupdisk /path/to/USB/Volume
If you are booted into ESXi, you could maybe try something like:
Hope this helps.
You can set the boot disk at the boot selection menu (pressing Option during POST) by holding down CTRL before hitting Enter to boot from the selected disk. The icon under the disk should change from an arrow to a circular arrow. Subsequent restarts will boot from this disk.
After some trial and error, I was able to get 6.7 U3 Build 15160138 installed and up and running. I added the Marvell driver to a custom ISO build, installed it, and then loaded the community NVMe driver via esxcli.
Interestingly enough, I WAS able to see the SSD and reformat as a VMFS volume (although only 1.8TB instead of the full 2TB).
Now it boots to the internal USB stick where I have ESXi installed. This is what I was after.
Now to get a VM going.....
William Lam says
I've asked for an external update which should answer questions around the 2019 Mac Pro. Will share once it's published.
Long time reader, first time commenter. Thanks for everything you've shared over the years. I've been saved by technical details you've posted more than a few times.
I've been trying to just get our cheese grater to boot an ESXi USB stick or mounted ISO for a couple of days with no luck. I've tried the syslinux + copy + edit approach and every combination of Rufus settings for creating a USB stick. I've disabled Secure Boot protections and even disabled System Integrity Protection. Still no luck. USB sticks that boot on Dell hosts are also not booting on the Mac Pro 2019. I see the device come up as UEFI Boot (no USB symbol on the drive), but I always end up at system recovery to select a boot disk and restart.
After years of managing ESX clusters on Mac Pros it feels bad to need to ask, but how are you all making your bootable media, and is there something other than disabling the protections that's needed?
Hey Tyler, what kind of ISO are you using? I am able to boot off an ISO over KVM on our new Mac Pros with a custom ISO generated via PowerCLI. All it has special is the VIBs for the ATTO adapters. ESXi-6.7.0-20200604001-standard seems to work the best for us. Upgrading or installing any newer version has broken ESXi in our experience.
William Lam says
FYI - In case folks have not seen/heard the news, VMware will no longer pursue hardware certification for the Apple 2019 Mac Pro 7,1 for ESXi.
Ned Wilson says
Hey @lamw - man, that makes me so very sad. Thanks for the information, as always. But it sure would have been nice to make use of these machines in a workflow supported by VMware. Do you think that there is a snowball's chance in hell of VMware supporting ESXi on Apple Silicon, or is this likely the end of the road?
Has anyone found a single PCIe3.0 NIC that's both Mac happy and VMware 7.0 compatible?
You throw a super generic Intel X520 into a 2019 Mac Pro and it just stops booting.
I've used these, and have never had any issues: https://www.startech.com/en-us/networking-io/pex20000sfpi
Keep in mind - it's not easy finding generic Mac drivers for these cards - I think you'd have to purchase a specific adapter from SmallTree to get the driver support. These will work out of the box with VMware, and are 100% stable. For macOS guests, just use VMware's built in networking, or you can also pass through one of the onboard Atlantic 10 GbE ports.
That's interesting. This is the same Intel 82599, just on a PCIe 2.0 platform, as the Intel X520 that doesn't let the Mac boot. SmallTree is an interesting idea, though; this card in particular I'll have to look into more:
thanks for the suggestion n3d.
Quick update for anyone Googling around. The Mac Pro 7,1 has some hidden limit on the number of PCIe 2.0 cards, or lanes, it will talk to simultaneously. The Intel X520 actually will boot on this hardware, assuming you don't have PCIe 2.0 cards in most of the other slots. I don't know what the exact limit is, but if you were trying to pack wall-to-wall PCIe devices into this unit, I would recommend making many, or all, of them PCIe 3.0.
Good to know about the PCIe card limit!
Sadly, I'm still stuck on getting VMs to boot without the unlocker installed. Looks like this is a bit of a lost cause at this point. 🙁
Oh wow, you got macOS VMs to boot with the unlocker? Good job! It gets so much worse than that. With a Mac Pro 7,1 and VMware 7.0.2, 7.0.2d, or 7.0.3, Mac VMs don't boot with OR without the unlocker installed. And of course my dvSwitch is running at version 7.0.2, so I can't try 7.0.1. I've gotten a High Sierra VM to boot maybe 1 in 10 times, but Mojave+ all do the Dont_Steal_MacOS boot loop and gray screen of death. If you have some magic version of the unlocker tool that works on 7.0.2, please link me. It's not violating anyone's EULA to leverage the $20,000 sunk cost that VMware and Apple have abandoned us with. @n3dwilson
@Jesus - damn. Yeah, without the unlocker, I just get the reboot loop as well. So frustrating.
So I have a MacPro7,1 that has been up for 117 days running 4 VMs. The firmware version is 17220.127.116.11.0, which I believe was distributed with the macOS Big Sur 11.6.1 installer.
On the VMware side, the installed profile is ESXi-7.0U3-18644231-standard.
For the unlocker, I grabbed it from GitHub here:
Be sure to build it first (instructions are in the README), and then copy (scp) the resulting .tgz file to the ESXi host. I use a Mac as a desktop computer and have built this several times on it, but have never tried with Windows or Linux. Linux would probably work.
I have only added three configuration options in the .vmx file for the guest. The first is required; the second and third are only needed if you plan on mucking around with PCI passthrough.
smc.present = "TRUE"
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
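If you are editing the .vmx by hand over SSH, a small helper along these lines can set the options idempotently. This is just a sketch operating on the file's text (the sample display name is made up; the three keys are the ones listed above):

```python
def set_vmx_options(vmx_text: str, options: dict) -> str:
    """Set (or replace) key = "value" pairs in a .vmx file's text."""
    # Drop any existing lines for the keys we are about to set
    lines = [l for l in vmx_text.splitlines()
             if l.split("=")[0].strip() not in options]
    for key, value in options.items():
        lines.append(f'{key} = "{value}"')
    return "\n".join(lines) + "\n"

# Example: replace a stale smc.present and add the passthrough options
updated = set_vmx_options('displayName = "macos-guest"\nsmc.present = "FALSE"\n', {
    "smc.present": "TRUE",
    "pciPassthru.use64bitMMIO": "TRUE",
    "pciPassthru.64bitMMIOSizeGB": "64",
})
print(updated)
```

Note this only rewrites the text; copying the file back and reloading the VM's configuration is still up to you.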
Your reply is already more than charitable, and I appreciate you. If you have a chance, do you think you could check whether the smc-test command returns SMC = present regardless of whether or not the unlocker tool is installed? I have observed it to list present both ways, which makes it difficult to know whether I'm encountering an issue with the installer or with the environment more broadly.
I hate to even ask, since I don't want you to uninstall it as a favor to a stranger and accidentally bring down your environment. That Mac firmware version is good info; I'll have to compare it to what I have... I think on the macOS side I had upgraded to Catalina or Big Sur, not that I'll have any way to adjust it or roll it back to match yours.
I've been using the pre-compiled unlocker.tgz files this entire time, since I can't get it to compile on macOS.
The esxi-smctest.sh script should always report "smcPresent = true" if you are running on genuine Apple hardware, regardless of whether the unlocker is installed. ESXi loads a kernel module called "applesmc" on boot if it detects actual Apple hardware.
To build the unlocker from source is pretty easy - it's just Python.
Here are the steps I took - I tested this just now on a lab machine.
**** On Mac Desktop ****
git clone https://github.com/shanyungyang/esxi-unlocker.git
scp esxi-unlocker-302.tgz root@<esxi-host>:/vmfs/volumes/datastore1/
**** On VM Host ****
tar xzvf esxi-unlocker-302.tgz
**** To verify, on VM Host ****
smcPresent = true
custom.vgz false 36014512 B
Just verified that I am able to successfully boot a macOS guest running Monterey 12.2.1. Impossible to do without the unlocker, though - it does the dreaded reboot loop after it hits a kernel panic at Dont_Steal_MacOS.cpp. It appears to be looking for a key in NVRAM - "OSK0" - that it can't find.
It seems like one way to tell whether you're using the native SMC or the unlocker SMC is whether running ./esxi-smctest.sh shows just
smcPresent = true
or the fuller
smcPresent = true
custom.vgz false 34482448 B
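Putting that distinction into code, a tiny sketch that classifies the script output based purely on the two sample outputs above (the assumption, from this thread, is that the custom.vgz boot module only shows up when the unlocker's payload is loaded):

```python
def smc_source(output: str) -> str:
    """Guess whether SMC support comes from Apple hardware or the unlocker."""
    if "smcPresent = true" not in output:
        return "no SMC"
    # The unlocker adds a custom.vgz boot module; native hardware does not
    return "unlocker SMC" if "custom.vgz" in output else "native SMC"

print(smc_source("smcPresent = true"))
print(smc_source("smcPresent = true\ncustom.vgz false 34482448 B"))
```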
It might actually be working now! The difference would somehow be using the self-compiled version of 3.0.2 that I made with Big Sur vs. a pre-compiled version of 3.0.0 or 3.0.3 from the releases area of the GitHub repo. I'll have to test with more macOS versions and more reboots, and check passthrough as well. First time all year this hasn't looked completely hopeless. Thanks for all the notes, @n3dwilson. You can't imagine how helpful they were.
Greg Christopher says
Great to see folks up and running on 7.x with the Mac Pro.
I was stuck running on 6.7U3 with a 2020 patch. That was working perfectly; however, no upgrade after that seems to allow the Aquantias to boot. Since it's still 6.x, I am going to try the community drivers next to see if I can get up to the very latest version of 6.7 with working Aquantias.
@n3dwilson, a couple of questions!
What NICs are you using for your 7.0 build? You mention the StarTech card. I looked all over their site and they don't mention ESX 7.0; they do mention the vmklinux-compatible versions of ESX. So I'm just wondering if you're using that StarTech card on ESX 7.x.
Also a bit hard to figure out, but I'm wondering if that card will support an Ethernet (copper) SFP? It keeps talking about fiber, but it's kind of weird because the SFP is not installed. The SFP is what does the heavy lifting IMHO, but I don't think this will hold an Ethernet SFP (like https://www.amazon.com/10GBASE-T-Transceiver-Copper-Compatible-SFP-10G-T-S/dp/B06XQBFHNL?th=1 )
Greg Christopher says
A couple of updates on my experiments with the Mac Pro 2019 and Aquantia using ESX 6.7 build 19195723 (latest), from last night and this morning:
- I noticed there is a VIB called "qcnic" bundled in later versions of ESX 6.7 that contains the Marvell Aquantia driver. It may be identical to the Atlantic driver we have been adding to the installer.
- Since those newer versions of ESX 6.7 are not working correctly, I tried building the latest ESX 6.7 images without the qcnic VIB while including either the community vmklinux-based driver (AQtion-esxi-18.104.22.168.0-esxi1.6) OR the Atlantic (22.214.171.124 build 8169922) driver. I verified in turn that these drivers were exclusively loaded using "esxcli software vib list". Each build exhibited the same bad behavior I described above with the latest version of ESX 6.7.
So no love for the Aquantia cards past 6.7 U3 so far.
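For anyone repeating this check, one way to script the "which driver VIBs are present" verification is to filter the `esxcli software vib list` output by name. A rough sketch (the column layout and the keyword list below are my assumptions, not from the comment above):

```python
def nic_driver_vibs(vib_list_output, keywords=("aqtion", "atlantic", "qcnic")):
    """Return VIB names from `esxcli software vib list` output that look like
    Aquantia/Marvell NIC drivers (name is assumed to be the first column)."""
    names = []
    for line in vib_list_output.splitlines():
        cols = line.split()
        if cols and any(k in cols[0].lower() for k in keywords):
            names.append(cols[0])
    return names

# Example with a fabricated two-row listing
sample = """Name     Version   Vendor  Acceptance Level  Install Date
qcnic    1.0.0     VMW     VMwareCertified   2020-01-01
ne1000   0.8.4     VMW     VMwareCertified   2020-01-01
"""
print(nic_driver_vibs(sample))
```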
Greg Christopher says
I have some good news for the brave amongst us who don't mind living in unsupported territory.
I am now using vSphere 7.0.3 (7.0U3d) build 19482537 on my Mac Pro 2019 with all the latest firmware updates. It is running steadily under heavy workloads with the Aquantia drivers that were released quite some time ago.
I have been able to use the two on-board 10GbE NICs on a gigabit switch (actually an ASUS AX11000) as well as on a new Netgear XS508M 10GbE switch. All seem to be working perfectly for me. As opposed to my previous experiments with ESX 7.0 on the box, the NICs are able to autonegotiate to the correct speed in both situations.
I am not running the NVMe fling; instead, I reserve the onboard NVMe for running macOS or Windows (Boot Camp) natively. I am booting off USB (actually the slot located on the motherboard), so all external slots are open. I have a Promise J2i caddy up near the 28-core CPU, with two Micron 9TB drives mounted. ESXi is able to see both easily, and both are currently allocated as my VMFS storage.
I have tested USB gigabit Ethernet adaptors from TP-Link, including the UE300C (USB-C style) and the UE306 (USB Type-A style). They all work as well. In theory, just leveraging those gigabit USB adaptors on the available external ports, I could easily have six NICs on the back and a couple more on the top (desktop model); more than enough for any kind of network topology.
Although I'm still searching for a good NVMe RAID that is reliable for the system, I can currently utilize the Promise Pegasus R4i RAID.
I am using the RAID for a Time Machine backup server and a Plex server, utilizing passthrough from a VM running macOS Big Sur.
One caveat: I do still need to use the unlocker (https://github.com/shanyungyang/esxi-unlocker.git) Python script for the macOS VM to launch, even though tests indicate the SMC is visible. So that's something I have to manage. I am accessing the Mac VM right now seamlessly by connecting to it with Apple Screen Sharing in full screen. I attempted to pass through the RX 6900 AMD card; however, I was unsuccessful in getting the VM to boot. The Mac VM has 32GB of RAM and 8 virtual CPU cores allocated, so it's plenty able to run big apps, albeit without the GPU.
It's been a fairly good experience using the hosted Mac VM, with plenty of compute and no visible lag. So I'm still using the Mac "as a Mac" while hosting pretty heavy ESX workloads (including a VM with 600GB of RAM and 32 dedicated virtual cores). With all that, the machine is still at less than half capacity and purring like a kitten, with the fans audible but not nearly as loud as they are able to go.
Caveat emptor, as I said, for the brave. If this configuration breaks in an upcoming ESXi release, it's not VMware's fault; this is not a configuration they need to regression test! However, I am thinking that as long as the 2018 Mac Mini is supported, the Aquantia cards and many other things will continue to work "just fine".
Really, really glad this thing is not the paperweight I thought it was going to be when VMware announced they would not pursue the HCL.
None of this stuff would be working without Martin's help; we are all indebted.
We recently bought another batch of these; the configuration is exactly the same, with the caveat that they came with Monterey. We cannot vMotion to the new hypervisors due to what we suspect is an SMC mismatch. Has anyone come across this yet? Any suggestions on how to maybe downgrade the SMC version? I verified "smcPresent = true".
Greg Christopher says
I am thinking of buying another newer one and hope I don't hit this issue. Some questions:
- Are you running the latest ESX 7.0.3?
- Are you only having trouble with vMotion of macOS guests?
- Have you tried booting the older boxes into macOS and then running all the latest updates in Big Sur and/or Monterey? Those do include firmware updates.
I am hoping to use vMotion with the new box, but for me it won't be macOS that I vMotion. Still important.
So I just tried wiping the internal Monterey drive and installing Big Sur in its place, and that didn't do the trick. I also tried using the esxi-unlocker project and still no luck. Running vim-cmd hostsvc/hosthardware | grep smcPresent returns smcPresent = true on both the working systems and the new busted ones. My hope was that a clean install of Big Sur would flash the SMC back, but I'm not sure that's the case. I haven't tried Catalina yet. Not sure if there is any other mechanism to flash the SMC.
Greg Christopher says
I'll be getting the other one soon, and I will see if it's a problem for me. But you haven't mentioned whether you are vMotioning macOS VMs or Linux? It matters, I think.
Greg Christopher says
Also, just to be clear: you want to run macOS on the OLD system so that you can potentially advance the firmware, so that both the new machine and the old machine are on the same firmware. There is no way to take a new Mac "back" to an older firmware; Macs have never been able to run firmware or software versions older than what the machine originally shipped with. So you need to flash the older box and cross your fingers. But please let us know what you are vMotioning.
Apologies. We are running macOS VMs, and it turns out vMotion was just the beginning. We cannot run macOS at all on these; not even the ISO will boot. We can cold migrate, but the VMs will not boot. The dream is dead.
sorry ISO will NOT boot
Greg Christopher says
To be clear, as mentioned above, everything we are doing is unsupported, but people are making macOS on VMware work.
Maybe that system, for example, is running on a Mac Mini, but I seriously doubt it given the memory ceiling.
The esxi-unlocker Python script is a must for me, even on Apple hardware; are you using it? Of course, it was created to allow running Mac VMs on a non-Mac server, but it works on this Mac too, and others on this forum are using it as well. There seem to be various versions of the script floating around, and 7.0.3d really only works with the latest one. Does that break vMotion? I don't know. I just know that without it I can't boot Mac VMs either.
So my advice would be to make sure you try that.