I started to notice yesterday that a few folks in the community were running into the following error after upgrading their ESXi hosts to the latest 7.0 Update 2 release:
Failed to load crypto64.efi
Fatal error: 15 (Not Found)
Upgrading my #VMware #homelab to #vSphere7Update2 is not going so well. 🙁 #vExpert pic.twitter.com/pGOlCGJIOF
— Tim Carman (@tpcarman) March 10, 2021
UPDATE (04/29/2021) - VMware has just released ESXi 7.0 Update 2a which resolves this issue and includes other fixes. Please make sure to read over the release notes and do not forget to first upgrade your vCenter Server to the latest 7.0 Update 2a release which came out earlier this week.
UPDATE (03/13/2021) - It looks like VMware has just pulled the ESXi online/offline depot and has updated KB 83063 to NOT recommend customers upgrade to ESXi 7.0 Update 2. A new patch is actively being developed and customers should hold off upgrading until that is made available.
UPDATE (03/10/2021) - VMware has just published KB 83063 which includes official guidance relating to the issue mentioned in this blog post.
Issue
It was not immediately clear to me how folks were reaching this state, so I reached out to a few people in the community to better understand their workflow. It turns out the upgrade was being initiated from vCenter Server using vSphere Update Manager (VUM), applying a custom ESXi 7.x Patch baseline to remediate. Upon reboot, the ESXi host would then hit the error shown above.
Interestingly, I personally have only used Patch baselines for applying ESXi patches (e.g. 6.7p03, 7.0p01) and never for major ESXi upgrades; I would normally import the ESXi ISO and create an Upgrade baseline. At least for the couple of folks I spoke with, using a Patch baseline is something they have done for some time, and it had never given them issues, whether for a patch or a major upgrade release.
Workaround
Some folks internally also reached out to me regarding this issue and provided a workaround, though at the time I did not have a good grasp of what was going on. It turns out the community figured out the same workaround, including how to recover an ESXi host that hits this error, since you cannot simply go through the usual recovery workflow.
For those hitting the error above, you just need to create a bootable USB key with the ESXi 7.0 Update 2 ISO using a tool such as Rufus or UNetbootin. Boot the ESXi 7.0 Update 2 installer and select the upgrade option, which will fix the host.
To prevent this from happening in the first place, instead of creating or using a Patch baseline, create an Upgrade baseline using the ESXi 7.0 Update 2 ISO. You will first need to go to the Lifecycle Manager management interface in vCenter Server and, under "Imported ISOs", import your image.
Then create an ESXi Upgrade baseline, select the desired ESXi ISO image and use this baseline for your upgrade.
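Once the host reboots after remediation, a quick sanity check that it actually landed on the new build (just a sketch, run from an SSH or ESXi Shell session; the build numbers in the output will depend on your image) looks like this:

# Short version/build string
vmware -vl
# Detailed product, version, build, update and patch information
esxcli system version get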
I am not 100% sure, but I believe the reason for this change in behavior is mentioned in the ESXi 7.0 Update 2 release notes under the "Patches contained in this Release" section, which someone pointed me to. In any case, for major upgrades I would certainly recommend using an Upgrade baseline, as that is what I have always used, even when I was a customer back in the day.
Awesome, hit this today. But for some reason I only hit it on my 3rd node; the first two I upgraded with the 7.0 U2 patch via LCM didn't have this error. Very weird.
Hi,
I'm getting the same error, but I did use an upgrade baseline to upgrade to vSphere 7.0. Doing the upgrade with the ISO (I had a 7.0 ISO, since Update 2 was not yet on the supported list for HPE Synergy as of 10/3/2021) after hitting the problem does solve it, but the error comes back when you install the Non-Critical Host Patches predefined baseline (100+ patches). (The Host Security Patches and Critical Host Patches baselines don't cause a problem when installing.)
Greetings
J
I ran into this too. To get that error, in vCenter I clicked the host > Updates > Check Compliance. One non-compliant baseline popped up, the "Non-Critical Host Patches (Predefined)" baseline, which I think is a default one. I clicked the attach checkbox and then Remediate, and boom, that error popped up.
After a few min, the host would reboot and the same error would pop up. Going to try the iso upgrade to see if that fixes it.
How did your ISO upgrade go? My upgrade ISO is not bootable. When I examine the ISO I only see a collection of RPMs.
I attempted to boot the ISO through iDRAC, but the virtual media is not bootable. For clarity, I am attempting to boot VMware-vCenter-Server-Appliance-7.0.2.00000-17694817-patch-FP.iso
My host and VMs are all offline and my trouble ticket with VMware has gone unanswered so far. Any help here would be GREATLY appreciated.
UPDATE:
Shift+R was unable to roll back to my previous version. So I booted back into my previous ESXi installer, recovered my host, and performed the host update from the CLI.
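For anyone wanting to take the same CLI route, the usual approach is an esxcli image profile update against a downloaded offline bundle, roughly like this (a sketch only; the datastore path, bundle filename and profile name below are placeholders that need to match whatever bundle you downloaded, and esxcli wants the absolute path to the zip):

# Put the host into maintenance mode before patching
esxcli system maintenanceMode set --enable true
# Apply the image profile from the offline bundle (absolute path required)
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-7.0U2-offline-bundle.zip -p ESXi-7.0U2-17630552-standard
# Reboot to finish the update
reboot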
I was able to upgrade it through iLO by attaching the ISO, but now one of the datastores attached to the host won't connect. The host actually came back up, but in a disconnected state.
Trying to reconnect it gave errors saying the host has a datastore that conflicts with an existing datastore in the datacenter. So far it's the only host with issues, but I won't update any more hosts until I know that updating won't break anything else.
I ended up removing it from the cluster and can't re-add it because it says the datastore conflicts with an existing datastore?
UPDATE:
Just rolled back to a previous version with Shift+R and everything is working just fine
In my case the Upgrade option didn't fix the issue after booting from an ESXi 7.0.2 USB stick. After the upgrade and reboot I got a rather unusual ESXi DCUI screen on the console (a gray/yellow field but without any text: no VMware version, hostname, etc.). When I entered the DCUI through F2, I saw the management network in a completely wasted state: no vSwitch, port groups, VLANs, IPs, etc.
I have the exact same problem as you; it happened 3 times on my two nested ESXi hosts, and the only solution was to reset all configurations and re-configure everything.
I ran into this as well. I made an offline bundle using PowerCLI and updated from that with esxcli software vib update -d [zip-file]
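If it helps anyone else follow the same route, building that offline bundle with the PowerCLI ImageBuilder cmdlets looks roughly like this (a sketch only; the image profile name and output path are placeholders, so list the available profiles first and pick the one you actually want):

# Point PowerCLI at VMware's online software depot
Add-EsxSoftwareDepot https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
# See which image profiles are available
Get-EsxImageProfile | Sort-Object Name
# Export the chosen profile as an offline bundle (zip)
Export-EsxImageProfile -ImageProfile "ESXi-7.0U2-17630552-standard" -ExportToBundle -FilePath C:\Temp\ESXi-7.0U2-offline-bundle.zip

Copy the resulting zip to a datastore and the esxcli software vib update -d command above takes it from there (using the full /vmfs/volumes/... path).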
Thanks, man. Saved my lab!
Ran into the same issue. The HPE custom image and also an offline bundle update with esxcli resulted in the same behavior.
I also hit this and used the USB drive to get around it on 2 hosts. BTW, Shift-R would not work. On the 3rd host, I just went to the USB to start off with and it failed horribly. This time Shift-R worked and VUM worked. I am wondering if simply rebooting the ESXi hosts before starting the upgrade with VUM would have done the trick in the first place, since the third host had just been rebooted before the attempt.
I tried a CLI update to 7.0 U2 from 7.0 U1d and my AMD 3600X homelab wouldn't come up afterwards. Looking at the console screen, everything was blank except for a message to view the logs, and the keyboard was unresponsive. However, once I booted from USB media with the 7.0 U2 ISO on it and ran an in-place upgrade, the system upgraded fine.
One improvement I did notice with GPU passthrough: previously I had to modify the /etc/rc.local.d/local.sh file as per https://tinkertry.com/vmware-vsphere-esxi-7-gpu-passthrough-ui-bug-workaround to get GPU passthrough to "stick", but this now appears to work without that workaround... so all good. We now have home automation/TV and NVIDIA GameStream working again!
Upgrading my #VMware #homelab to #ESXi-7.0U2 did not go very well either. 🙂
Booting from USB and performing the upgrade again solved the issue. The ESXi host runs fine again.
Thank you for sharing the solution so quickly.
I did an upgrade via vLCM and got that error. I found an old KB, https://kb.vmware.com/s/article/76159, and did as mentioned there.
Changed my BIOS setting (Synergy) from UEFI Optimized / Secure Boot: Enabled to Legacy BIOS.
My ESXi host started correctly. Changed back to UEFI, it crashed again; changed back to Legacy, and ESXi works fine.
Changing the firmware from UEFI to BIOS is NOT a solution and we do not recommend it. It can lead to other unknown behaviors. I've already heard of at least one example where this caused the system to come up incorrectly (even though it boots). If you've attempted this for guest OSes, it can prevent them from booting. This is going to be made clear in the recently published KB (which went out yesterday): changing the firmware is NOT a solution.
I have a case open with VMware to investigate this further. The KB they issued on 10/3/2021 is no solution for me (I have used an upgrade baseline and still have the problem). It arises when patching with the "Non-Critical Host Patches (Predefined)" baseline (the Critical and Host Security baselines are fine).
Booting from 7.0U2 ISO gets error "Loading /EFI/BOOT/boot.cfg. Failed to load crypto64.efi. Fatal error:27 (Security violation)"
My friend also got this error "Loading /EFI/BOOT/boot.cfg. Failed to load crypto64.efi. Fatal error:27 (Security violation)"
Did you find a way past this? Thank you
Here too. Installing from scratch on an IBM System x3650 M4 with updated UEFI firmware.
No luck: installed 7.0U1d [VMware-ESXi-7.0.1.-17325551-LNV-20210209-Milan-21A] + [ESXi-7.0U1d-17551050].
Waiting for news.
Can anyone speak to the current status of this issue? It can be frustrating to see the continued marketing blitz behind the 7.0 Update 2 release (near-daily tweets and blog posts touting all the great new Update 2 features), yet this release has been pulled for some time now with guidance in KB 83063 stating "VMware recommends to NOT upgrade to 7.0 U2 until this patch is available." and "please pause all upgrade activity if possible." And per the KB, that guidance even applies to the listed workaround of using the .iso to perform the upgrade. Does that same guidance also hold true for OEM customized 7.0.2 installer CDs from HPE, Lenovo, and DellEMC, or only the vanilla 7.0.2 .iso from VMware? I appreciate it's impossible to put the marketing genie back in the bottle, but I'm just looking for any kind of progress update or ETA for the 7.0.2 patch release.
Hi Patrick,
I can't go into specific timelines, but please know there is definitely progress being made on this issue and hopefully there should be news soon. In terms of your OEM question, I believe those are also affected, but please reach out to your specific hardware vendor to confirm, since those images are built by our partners. Thanks for your patience.
Hi there,
My 5 cents: I confirm this image is buggy. I tried to replace the image in my nested lab (on my PXE server) and I can't deploy new hosts with the current Update 2 ISO.
But I did manage to update all existing nested hosts with it... weird 😉
William,
Two weeks on now. What news?
See my previous reply 🙂
FYI - ESXi 7.0 Update 2a has just GA'ed and resolves this issue along with a few others
Download: https://my.vmware.com/web/vmware/downloads/details?downloadGroup=ESXI70U2A&productId=974&rPId=65022
Release Notes: https://rna.vmware.com/document/preview/html?documentId=2473 (still being staged as of posting this comment)
Hi, for HPE Gen10 servers, is it safe to update to 7.0U2a with the standard VMware image if the server currently has the 7.0.1 HPE custom image?
Hi William,
I've updated my image to 7.0 U2a in my nested lab to test it this morning, but it seems it doesn't run the Python script that I use to join my hosts to the vCenter after the initial deployment. If I roll back to the previous image, no problem. It's as if it doesn't run it at all.
Do you have the same issue too?
PS: I was never able to join the vCenter with DNS names, any hint to make that work? 😉
Using the ISO, I was able to upgrade ESXi from 7.0 U1 to U2.
I upgraded to U2 via ISO through the iLO console, but after the restart the management network was down with no way to recover it. I reverted and then updated to 7.0U2a via Lifecycle Manager in vCenter using the standard VMware image plus the HPE add-on bundle, with success.
I've done a fresh install on a Gen10 using the newly released HPE custom ISO 7.0 U2a and all is good.
However, remediating against the non-critical baseline after the install (which updates 20 things) stops AMS working.
iLO no longer believes AMS is running, and there are entries in ams.log saying "No response from iLO for Hello".
I have tried uninstalling the AMS component and also the iLO native driver, but neither fixes it.
Not sure which of the 20 updates breaks it.
I also tried resetting the iLO.
Perhaps others who have done fresh installs (and applied non-critical patches) could check if AMS still works.
After a little more testing, it is not anything in the non-critical updates baseline that stops AMS working; it is the process of adding an iSCSI server address to Dynamic Discovery on an HBA, after which AMS stops working and fails talking to the iLO. AMS then tries to establish a channel to the iLO over other devices, but eventually fails and the process stops. Looks like a call to HPE.
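In case anyone wants to reproduce this quickly, the same Dynamic Discovery step can also be done from the ESXi shell, which makes it easy to check whether the AMS failure follows immediately after it (a sketch only; the adapter name vmhba64 and the target address below are placeholders for your environment):

# Find the iSCSI adapter (vmhba) name
esxcli iscsi adapter list
# Add a Send Targets / dynamic discovery address to the adapter
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.10.50:3260
# Verify the discovery address was added
esxcli iscsi adapter discovery sendtarget list -A vmhba64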