Update on running ESXi on Intel NUC Hades Canyon (NUC8i7HNK & NUC8i7HVK)

11.02.2018 by William Lam // 55 Comments

The Intel NUC is one of the most popular and affordable hardware platforms for running vSphere and vSAN home labs. For customers who want a bit more computing power, Intel also has its Skull Canyon platform, which was released back in 2016 and has also gained popularity amongst VMware home labbers. To be clear, none of the Intel NUC platforms are on the VMware HCL and therefore they are not officially supported.

Earlier this year, Intel released the second generation of their higher-end Intel NUCs, dubbed Hades Canyon, which comes in two flavors, the NUC8i7HNK and NUC8i7HVK, with the latter being the higher-end unit. Based on the previous generation of hardware, most customers assumed ESXi should just work, went out and purchased the lower-end "HNK" version, and only then found out that was not the case. The ESXi installer would boot up to a certain point and then stop with the following error:

“Shutting down firmware services…

Using ‘simple offset’ UEFI RTS mapping policy”

To add to the confusion, this issue was not observed with the higher-end NUC8i7HVK model, which was quite interesting. Over on nucblog.net, they also confirmed ESXi runs fine on the "HVK" model, so the issue seems to be isolated to the lower-end "HNK" model.

UPDATE (01/15/19) - For those interested in passing through the iGPU in Hades Canyon, take a look at this blog post for more details.

UPDATE (11/02/18) - After publishing this article, I noticed Intel had just released a new BIOS update (HNKBLi70.86A) v51, and while reading the release notes I was surprised to find a fix from Intel for the ESXi issue:

Fixed the issue where an error would occur when installing VMware* ESXi versions 6.5 and 6.7. 

Given this breaking news, I immediately flashed my system, which had been running v50, and I can confirm that I am now able to successfully boot and install ESXi 6.7 Update 1 without any issues; I suspect this should also work for ESXi 6.5 Update 2. No additional tweaks are required, simply follow Intel's instructions for downloading the latest BIOS update and updating your system using either the UEFI Shell or the interactive BIOS menu.

Given the number of reports from the community, I wanted to see if there was something I could help investigate from a VMware standpoint, with the understanding that this is an unsupported platform and any investigation is best effort on our end.


Note: A really cool feature of the Hades Canyon platform is that the color of the "Skull" logo on the top of the chassis is now configurable, along with the power button and disk-activity light. In fact, the screenshot above is not the default color the system ships with. This can be done by going into the interactive Intel BIOS (F2 during bootup), or, if you decide to run Windows on the system first, you can use the Intel LED Manager application.

After getting access to the hardware in-house and reproducing the issue, we think we have a workaround that can be used with the latest version of ESXi 6.7 Update 1. This would not have been possible without the amazing help of one of our engineers, Andrei Warkentin, who is also one of the tech leads for our ESXi on ARM initiative, which was just announced at VMworld US 2018. We are still working out the details on how to get the fix out, as it requires some changes to our bootloader, but the fix will automatically be included in future updates and releases of ESXi. I will update this article as I have more information, so please stay tuned.

Here is a screenshot of the Hades Canyon using the ESXi Embedded Host Client, and as you can see, everything is fully functional and I also have vSAN configured!

Networking

The Hades Canyon comes with two on-board NICs, and both are automatically recognized by ESXi, so no additional drivers or tweaks are required. Having dual NICs is very useful, especially for those wanting to run vSAN, as you can dedicate one of the interfaces to storage traffic and run everything else on the other interface (a sketch follows the NIC listing below).

[root@hades-canyon:~] esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------ ------------ ------ ------------ ----------- ----- ------ ----------------- ---- -------------------------------------------------
vmnic0 0000:00:1f.6 ne1000 Up Up 1000 Full d4:5d:df:09:b7:36 1500 Intel Corporation Ethernet Connection (2) I219-V
vmnic1 0000:05:00.0 igbn Up Up 1000 Full d4:5d:df:09:b7:37 1500 Intel Corporation I210 Gigabit Network Connection
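
For those who dedicate the second interface to vSAN, the whole setup can be scripted from the ESXi shell. Below is a minimal sketch, assuming vmnic1 is the storage-facing NIC; the vSwitch/portgroup names and the IP address are placeholders to adjust for your own lab:

# Create a new vSwitch backed by the second NIC
[root@hades-canyon:~] esxcli network vswitch standard add -v vSwitch1
[root@hades-canyon:~] esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1

# Add a portgroup and a VMkernel interface for storage traffic
[root@hades-canyon:~] esxcli network vswitch standard portgroup add -v vSwitch1 -p vSAN
[root@hades-canyon:~] esxcli network ip interface add -i vmk1 -p vSAN
[root@hades-canyon:~] esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.2.10 -N 255.255.255.0

# Tag the new VMkernel interface for vSAN traffic
[root@hades-canyon:~] esxcli vsan network ip add -i vmk1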

Storage

The Hades Canyon supports two M.2 slots, and both controllers are automatically recognized by ESXi, so no additional drivers or tweaks are required. This is fantastic for running an all-flash, high-performance vSAN setup (a bootstrap sketch follows the adapter listing below).

[root@hades-canyon:~] esxcli storage core adapter list
HBA Name Driver Link State UID Capabilities Description
-------- ------ ---------- ------------ ------------ ------------------------------------------------------------------
vmhba0 nvme link-n/a pscsi.vmhba0 (0000:72:00.0) Sandisk Corp <class> Non-Volatile memory controller
vmhba1 nvme link-n/a pscsi.vmhba1 (0000:73:00.0) Sandisk Corp <class> Non-Volatile memory controller
vmhba33 vmkusb link-n/a usb.vmhba33 () USB
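
If you want to bootstrap a single-node all-flash vSAN directly from the ESXi shell (before vCenter exists), the rough flow looks like the sketch below. The device identifiers are hypothetical placeholders; pull the real ones from "esxcli storage core device list". One NVMe device serves as the cache tier and the other as the capacity tier:

# Create a new single-node vSAN cluster
[root@hades-canyon:~] esxcli vsan cluster new

# Tag the capacity device as flash (all-flash setup), then claim both devices
[root@hades-canyon:~] esxcli vsan storage tag add -d t10.NVMe____CAPACITY_DEVICE_ID -t capacityFlash
[root@hades-canyon:~] esxcli vsan storage add -s t10.NVMe____CACHE_DEVICE_ID -d t10.NVMe____CAPACITY_DEVICE_ID

# Verify the cluster is up and the devices were claimed
[root@hades-canyon:~] esxcli vsan cluster get
[root@hades-canyon:~] esxcli vsan storage list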

Hades Canyon BOM

I had a number of folks ping me about my particular setup, so below is the bill of materials. Since this hardware was purchased through our internal preferred vendor, you are not limited to what I selected below. In fact, you can use any NVMe PCIe M.2 device or DDR4 2400 SODIMM memory.

  • 1 x NUC8i7HNK
  • 2 x Western Digital 250GB NVMe PCIe M.2
  • 2 x Crucial 16GB DDR4 2400 SODIMM

Categories // ESXi, Home Lab, Not Supported, vSphere Tags // ESXi 6.7 Update 1, Hades Canyon, Intel NUC, NUC8i7HNK, NUC8i7HVK, UEFI

Comments

  1. Ian M. Miller says

    11/02/2018 at 10:11 am

    Great news! Any word on the NUC8I7BEH?

    Reply
    • William Lam says

      11/14/2018 at 7:37 am

      It doesn't look like it, take a look at https://communities.intel.com/message/592674#592674

      Reply
      • Ian M. Miller says

        11/17/2018 at 1:38 pm

        Thanks, William! It looks like there's a path forward: https://www.reddit.com/r/homelab/comments/9k0hm8/esxi_on_nuc8i7beh_a_success_story/. Maybe a combination of Intel BIOS updates and VMware ESXi updates will get the BEH working.

        Reply
  2. Jeff Newman says

    11/02/2018 at 12:27 pm

    I found the sweet spot for home lab vSAN and clustering hardware: Refurb Dell Optiplex desktops. They're often available for under $170 (3rd Gen Core i5) or $200 (3rd Gen Core i7) with free shipping, come with a warranty, take cheap Intel NICs and cheap-ish DIMMs. Two or three of them don't take up that much space and cost about as much as a single NUC. Not sexy, but very workable.

    Reply
  3. mikestoica says

    11/05/2018 at 4:43 am

    So would you rather go for 2 of these instead of building a nested lab on one Supermicro?

    Reply
  4. Arnaldo Pirrone says

    11/08/2018 at 6:52 am

    Hello,
    I'm having the same trouble when I try to install ESXi 6.7 or 6.7 U1 on an HP ProDesk 400 G5 MT (product code 4CZ66EA#ABZ) with the latest UEFI firmware installed (02.04.00). The system hangs on "Shutting down firmware services" / "Using 'simple offset' UEFI RTS mapping policy", just like the Intel NUC. Could it be the same bug?

    Reply
    • William Lam says

      11/14/2018 at 7:39 am

      Hi Arnaldo,

      It's very possible; from my understanding, the updated BIOS that Intel released was more of a workaround for this issue. As mentioned, we've got our own fix that should hopefully solve the problem, and it'll be available in a future update of ESXi.

      Reply
  5. Arnaldo Viegas says

    11/19/2018 at 8:50 am

    I have had a NUC8i7HNK since its debut, and have struggled to install ESXi for my personal lab. Firmware version 51 does indeed allow booting other OSes. Before v51 I could only boot Windows 10: no ESXi, no Linux, etc. Currently I've installed ESXi 6.7 and ESXi 6.7 U1, a few different Linux distros and even macOS Mojave (makes a nice Hackintosh).

    So my guess is that Intel indeed fixed whatever was "broken" in the firmware. Booting anything but Windows was possible with the HVK model since a much earlier firmware, and they finally made it possible on the HNK.

    Whether it's a workaround or a real fix, given that ESXi and Linux boot fine on several other hardware platforms, I think we can call it a fix, as it allows the hardware to boot several different OS versions now, something that was impossible before FW 51.

    Reply
  6. Ray says

    11/30/2018 at 2:08 am

    I want to use VMDirectPath for the GPU. When I tried to install the driver, the host and guest both died...
    Has anyone managed to enable this function?

    Reply
  7. Gene says

    01/03/2019 at 6:36 pm

    I am trying libvirt and QEMU. With several switches I can successfully pass through the USB 3.1 controller and the SD card controller, which are in the same IOMMU group, but I cannot do it with the GPU. The Vega GH GPU passes through, but the driver reverts to Microsoft Basic Display Adapter, and when I try to install the Vega GH drivers I get the same behaviour, i.e. guest and host both hang! I have tried several vfio boot switches and KVM configurations, all the same. Any brave soul capable of making this work for the Hades Canyon (whatever the OS)?

    Reply
  8. Chris78 says

    01/14/2019 at 12:30 am

    Same here. Passing through the Vega GH GPU and installing the driver takes down the host. Using hypervisor.cpuid.v0 = "FALSE" prevents the host from going down, but the VM will keep crashing immediately after installation of the driver (running Windows Server 2016 or 2019 with modified Win10-64Bit-Radeon-Software-Adrenalin-2019-Edition-18.12.3-Dec19 drivers as mentioned here: https://forums.intel.com/s/question/0D70P000006BJE4SAO/nuc8i7hvknuc8i7hnk-cannot-install-amd-vega-driver-on-windows-server-2016?language=en_US on BIOS v53).

    Would love a working Vega GH GPU in my VM for transcoding reasons.

    Reply
    • William Lam says

      01/14/2019 at 11:21 am

      Curious if you've tried Windows 10? I've got that deployed using this driver from Intel: https://downloadcenter.intel.com/download/28194/Radeon-RX-Vega-M-Graphics-Driver-for-Windows-10-64-bit-for-the-Intel-NUC-Kit-NUC8i7HNK-NUC8i7HVK?product=126143. The installer completes and it hasn't crashed, even after launching a few basic benchmark tools.

      Reply
      • Chris78 says

        01/14/2019 at 1:22 pm

        Care to share some ESXi and VM information? Which ESXi and VM versions did you use? I tried it on ESXi 6.5, which failed with Server 2016 and 2019. Trying now on 6.7, but I had to uncheck "Expose hardware assisted virtualization to the guest OS" and "Expose IOMMU to the guest OS" to be able to power on the VM.

        Will try Windows 10 also, although I actually need Server 2016 or 2019 for AD services.

        Reply
        • William Lam says

          01/14/2019 at 2:26 pm

          * ESXi 6.5 Update 2
          * Default VM Settings for a Windows 10 VM (2 vCPU & 4GB memory) - No need to enable VHV
          * Windows 10 Enterprise (details below after patching to latest)
          Major 4
          Minor 0
          Build: 30319
          Revision: 42000

          Reply
          • Illyse says

            01/24/2019 at 1:39 pm

            William, I thought this blog posting was about 6.7 U1?

          • William Lam says

            01/25/2019 at 4:57 am

            Not exactly sure what you mean by this? If you read the article, it mentions BOTH 6.7u1/6.5u2 🙂

      • Chris78 says

        01/14/2019 at 3:11 pm

        Tried Windows 10 also on ESXi 6.7. The driver you linked is for installing the Intel HD Graphics 630 card; that one is not a problem. Problems arise as soon as you install the AMD Radeon RX Vega M GH driver (Intel package or AMD Adrenalin driver, doesn't matter). The host freezes and you can't connect to it; a hard power off and on is the only solution. Using hypervisor.cpuid.v0 = 'FALSE' prevents the host from crashing, but the VM instantly gives a BSOD during or right after installation of the driver.

        So I hope you can share your ESXi and VM settings about how you installed the AMD Radeon RX Vega M card.

        Reply
        • William Lam says

          01/14/2019 at 3:17 pm

          Sorry, the link was wrong. I installed the AMD driver: https://downloadcenter.intel.com/download/28194/Radeon-RX-Vega-M-Graphics-Driver-for-Windows-10-64-bit-for-the-Intel-NUC-Kit-NUC8i7HNK-NUC8i7HVK?product=126143. I'm about to try this on a Windows 2019 system to see if it works. I need to keep the system on 6.5 for testing, but I don't see why it shouldn't work on 6.7 as well.

          Reply
          • Chris78 says

            01/14/2019 at 3:40 pm

            Did you set any advanced settings like hypervisor.cpuid.v0='FALSE' or anything else?

            The driver provided by Intel for Windows Server 2016 is not working. It has already been reported on the Intel forum and they are investigating it. To get the AMD card installed on Windows Server 2016/2019 you need to edit the .inf file in the Adrenalin package.

            I posted a link earlier to the Intel forum showing how I got the driver working on Windows Server 2019 installed straight onto the NUC (so not inside a VM).

            Thank you for your information so far. Unfortunately I can't get it to work inside a VM for Windows 10 or Windows Server 2016/2019 on ESXi 6.5u2 or 6.7u1. Your help is much appreciated.

          • William Lam says

            01/14/2019 at 3:44 pm

            I'll post a blog regarding the steps I took, but I didn't have to do anything special. This was on the HNK model and, as I mentioned a few times, the driver installed in both the Windows 10 (64-bit) and Windows Server 2019 (64-bit) VMs running on the NUC without any issues (see the hypothetical .vmx sketch at the end of the comments).

  9. Chris78 says

    01/14/2019 at 4:30 pm

    I edited the VM before installation of Windows 10 or Windows Server (VMware Paravirtual instead of LSI Logic SAS, VMXNET3 instead of E1000e and EFI instead of BIOS). Will restart with default values and see how that goes. Thank you again.

    Reply
    • Chris78 says

      01/15/2019 at 5:22 am

      So... I really tried. I have a NUC8i7HVK (not the HNK). Tried it with BIOS versions 51 and 53, with ESXi 6.5U2 at the latest patch level and ESXi 6.7U1 at the latest patch level.

      - Toggled Passthrough of Intel GPU, AMD GPU + AMD Audio from GPU (3 checkmarks) + reboot

      - Created a default Windows 10 VM, with the exception of the memory reservation, which is required to power on a VM with passthrough devices
      - Installed Windows 10 Enterprise build 1809
      - Installed VMware tools (run as Administrator)
      - Updated to latest patch level
      - Installed Intel GPU (on ESXi 6.7 VM version 13 this ended in a black screen, on ESXi 6.7 VM version 14 successful, on ESXi 6.5 (also version 13) also successful)
      - Installed AMD GPU but Windows crash during installation on all tries

      - Created custom Windows 10 VM with VMware Paravirtual HDD controller, VMXnet3 NIC and EFI Bios, also dedicated memory
      - Installed Windows 10 Enterprise build 1809, selecting paravirtual driver for HDD controller
      - Installed VMware tools (had to do it twice as administrator, the second time repair only, otherwise ESXi does not detect the installation)
      - Updated Windows to latest patch level
      - Installed Intel GPU without any problems
      - Installed AMD driver results in freeze of host
      - Added hypervisor.cpuid.v0 to the advanced settings of the VM
      - Installed AMD driver results in BSOD
      - tried pciHole.start and pciHole.end without success

      I'm out of options. The only difference seems to be the NUC version (HVK vs HNK). The HNK uses AMD Radeon RX Vega M GL graphics while the HVK uses AMD Radeon RX Vega M GH graphics.

      Reply
      • William Lam says

        01/15/2019 at 10:06 am

        FYI - Just posted the full details https://www.williamlam.com/2019/01/gpu-passthrough-of-radeon-rx-vega-m-in-intel-hades-canyon.html

        I did notice that if you try to use EFI and VMXNET3, that also causes the guest OS to crash after attaching the iGPU, so for now it seems you can only use BIOS firmware and the E1000E driver.

        Reply
  10. mario witdoek says

    01/15/2019 at 11:54 am

    Hi William, does it consume 40% of the RAM with no VMs configured? Or did I misread the screenshot? Mario

    Reply
    • William Lam says

      01/16/2019 at 4:16 am

      That's the vSAN memory overhead that you're seeing.

      Reply
  11. Frank says

    02/28/2019 at 2:02 am

    I have just purchased a NUC7i7BNH for my homelab, configured with two 1TB SSDs in RAID1. When trying to install ESXi 6.7, the installer sees two disks of 1TB each, not the RAID volume.

    Any idea whether it's possible to fix this, or is that setup not possible, i.e. I cannot get the "security" I was hoping for with RAID1?

    Thanks in advance.

    Reply
    • Lucky says

      03/04/2019 at 8:57 am

      I too am trying to install ESXi 6.7U1 and thought I had configured the two SSDs as RAID. But the ESXi installer sees them as two different disks. Is there a way to get ESXi to use the two SSDs in a RAID-1 configuration? Thanks!

      Reply
    • Chris78 says

      03/04/2019 at 3:09 pm

      Software RAID (as used by the Intel NUC) is not supported by ESXi.

      Reply
      • Frank says

        03/06/2019 at 12:50 am

        Thanks for the reply, Chris78 - that spares further frustration - I'll have to work with the secondary disk for backup purposes instead.

        Reply
      • Geoffrey says

        06/25/2019 at 1:33 pm

        Is there a way to do that inside ESXi then?

        Reply
  12. NullByte says

    04/03/2019 at 11:56 pm

    The NUC8I7HNK does not support the "XPG SX8200 Pro PCIe Gen3x4 M.2 2280 SSD" using ESXi 6.7U1. =(

    Reply
    • Geoffrey says

      06/25/2019 at 1:32 pm

      Does it work on the HVK?

      Reply
  13. Alex says

    05/10/2019 at 8:41 pm

    Hi Guys,
    I bought an HNK recently and I'm having issues installing ESXi 6.5 and 6.7...
    I tried both.
    Can we install ESXi on this NUC model? I'm a little frustrated...

    Reply
    • Chris78 says

      05/10/2019 at 11:09 pm

      Installation of ESXi 6.5 and 6.7 can be done without problems if you have at least BIOS version 51. If you need more help, we will need more information about what exactly your issue is.

      Reply
  14. Yasir says

    06/05/2019 at 4:35 am

    Hi, can I use the Intel NUC8i7HNK for building EVE-NG labs? Can I run Cisco ISE and VIRL instances with no issues?

    Reply
  15. Poe82 says

    06/06/2019 at 9:55 am

    William, is there any possible way to pass through the wireless card to a guest? ESXi 6.7 U2. It would make for an amazing portable lab setup. Thanks in advance, keep up all the great work. Thanks to you and the community, I am running my whole lab on a NUC8i7HVK with GPU passthrough for my main gaming system; it's a great option for my trailer, uses way less power and makes almost no noise compared to my HP DL360 G7. Thanks so much!

    Reply
    • Poe82 says

      06/06/2019 at 11:13 am

      Never mind, I now see the wireless AC adapter in the PCI passthrough section. I have passed it through to my guest and all is good.

      Reply
      • David Ross says

        07/04/2019 at 2:49 am

        Hi Poe82, I'm looking to do this as well. Have you had any issues since you set it up? Any tips on config?
        Thanks
        David

        Reply
        • poe82 says

          08/13/2019 at 9:12 am

          Sorry for the late reply, I have been changing my setup a lot. I am seeing issues with the GPU under heavy load, both with a bare-metal Windows install and with ESXi passthrough; that being said, I believe it is not related to the passthrough. The WiFi passthrough works great: I was able to use it in a Windows VM and now also in a Kali VM, and it supports monitor mode for WiFi pentesting. Overall I am happy with the setup, other than the poor high-end gaming performance. It makes a great home lab and is able to run everything I need.

          Reply
      • Karamjot Singh Kohli says

        11/13/2019 at 9:59 pm

        Could you share some details on how to get the NUC WiFi adapter detected on ESXi? Which versions of firmware and ESXi are you running?

        Reply
  16. Geoffrey says

    06/25/2019 at 1:31 pm

    Specific to HVK model:

    > Has anyone used the ADATA XPG SX8200 Pro 1TB successfully?
    > Is onboard Intel RAID possible for ESXi 6.7 on HVK?

    Reply
  17. Ulrik says

    06/26/2019 at 11:31 am

    Hi,
    Started out with the Hades Canyon and the suggested WD Black NVMe 1TB. Unfortunately two Hades in a row had defects, so I decided to go the SuperMicro E200-8D way. BUT the SuperMicro does not recognize the WD NVMe disk. Has anybody had success using a WD Black NVMe SSD in the SuperMicro E200-8D with ESXi 6.7?

    Reply
  18. Chris78 says

    06/28/2019 at 2:33 pm

    William,

    How stable is your ESXi installation on the Hades Canyon (HNK version)? Once in a while (it can be after weeks, but it can also happen multiple times a day) my ESXi seems to crash and all VMs are unreachable. This happens on 6.5 and 6.7. A little bit of a problem when running pfSense as one of the VMs.

    None of the VMs are reachable and the ESXi web interface can't be reached either. Passthrough of the GPU to one of the VMs doesn't make things easier, as I can't see the ESXi screen anymore, so it's hard to troubleshoot.

    Thinking about trying ESXi on my HVK version (my son will kill me for it as he is using it as a gaming PC) to see if I encounter the same crashes. In that case it could be hardware related.

    You got any magical tips or tricks?

    Reply
    • William Lam says

      06/29/2019 at 7:02 pm

      I’ve not had or heard of any issues; I know many folks in the community using this platform for ESXi. You may want to ensure all firmware/BIOS is updated.

      Reply
      • Chris78 says

        11/12/2019 at 8:46 am

        Old post, I know. Just to share some info: when installing ESXi, only one NIC will detect the ethernet connection. I previously solved this by going into the CLI and executing the command

        esxcli system settings kernel set -s preferVmklinux -v TRUE

        This was mentioned on another blog post:

        https://nucblog.net/2018/10/vmware-esxi-on-hades-canyon-nuc/

        However, this made the NUC highly unstable, as reverting back to the legacy drivers caused the NIC to crash, making the NUC unreachable.

        The only solution was to revert to the original setting and disable autonegotiation on the NICs (see the esxcli sketch at the end of the comments). Since then, I have had no problems with the NICs or the NUC.

        Cheers,

        Chris

        Reply
  19. Mars says

    06/30/2019 at 4:16 pm

    Installed ESXi on a DataTraveler USB flash drive for a NUC8i7BEH; it couldn't see the ADATA XPG SX8200 Pro SSD.

    Reply
  20. Alex Rocha says

    07/03/2019 at 6:06 pm

    Hi Guys,

    Thank you for helping me regarding ESXi install in my brand new NUC. It is now working fine after BIOS upgrade to latest version.

    I'm wondering, if someone installs ESXi on one USB stick and Windows on another USB stick, is it possible to swap OSes for labs and fun?

    How will the storage (on the NUC) see the files? Thanks

    Reply
  21. Geoffrey says

    07/10/2019 at 11:09 am

    Regarding the ADATA XPG SX8200 Pro, I have found the post below.

    https://vm.knutsson.it/2019/02/vsan-downgrading-nvme-driver-in-esxi-6-7-update-1/#comment-1377

    It appears to be an issue with the NVMe driver in 6.5 and newer, but I haven't had time to work through the solution. If anyone else can give it a crack and let us know, it would be appreciated.

    Reply
  22. ALEX says

    07/21/2019 at 6:52 pm

    Hi Guys,

    Has anyone installed ESXi and Windows on separate USB flash drives and used the NUC for both purposes, as a home lab and as a Windows PC for fun?
    I'm wondering if it is useful and possible... I don't know how the NUC SSD will see/find the files when I'm using 2 OSes...
    Any help?
    Cheers

    Reply
  23. Dan says

    07/29/2019 at 8:21 pm

    Great stuff William. I'm new to NUCs. Picked up an HVK and have been attempting to get ESXi installed to SD card (so I can leave my USB3 ports for other stuff). An inserted SD card doesn't show up as an available target for the ESXi installation. I thought I was being really clever by sticking the card in a USB2 SD card reader and installing to that, then transferring the card to the built-in reader. I was actually somewhat surprised that it worked--ESXi booted and everything *seemed* fine.

    Then I noticed I couldn't install your USB Network driver fling (got the error that it "cannot be live installed"). Google pointed me towards that error being caused by lack of space on the drive. And I had used a paltry 8GB card. So I went through the whole process again with a 32GB card. Same issue.

    Then I noticed that none of my configuration changes were being preserved on reboots. I double-checked that ScratchConfig.CurrentScratchLocation was set properly to my NVMe datastore drive. So I was puzzled, until I came across a helpful post (https://serverfault.com/a/557950) indicating what is likely the problem. That the NUC's BIOS is able to see the SD card reader in order to boot from it, but ESXi lacks a driver for it and thus can't see the SD card once it's booted. And if it can't see the SD card it can't save my changes.

    Sure enough, if I put the SD card back in a USB reader everything is fine.

    Kind of stupid in retrospect... the fact that the SD reader didn't show up from the installer should have been a clue that it didn't have a driver for it.

    So onto my question: Any chance there is or will be a compatible driver for the built-in SD card reader? It shows as an O2 Micro with vendor id 0x1217 and device id 0x8621. If not, it's not the end of the world. But since I have no other use for the card reader anyway I hoped to be able to spare a USB port.

    Thanks again for all your helpful posts!

    Reply
  24. Damien says

    11/19/2019 at 12:08 pm

    Hi, I currently face an issue with adding devices to the datastore after a fresh install of ESXi 6.7U1 on my Hades Canyon 8i7HNK3.

    Running BIOS version HNKBLi70.86A.0058.2019.0705.1646

    Through the ESXi console, it manages to find my adapters. However, when I tried to set up the datastore, it stated "No device with free space".

    Currently running an ADATA XPG SX8200 512GB NVMe with a single Crucial 16GB RAM module.

    In my BIOS, I have also disabled the UEFI shell and Secure Boot. I do not have the option to disable legacy boot, except for my USB drive.

    I have also tried using the command "esxcli software vib install -v h***s://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/vmware-esx-esxcli-nvme-plugin/VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-0.0.8169922.vib" to install VIB and also " esxcli software vib install -v https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/vmware-esx-esxcli-nvme-plugin/VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-0.0.8169922.vib"

    Could you please help me out? I have researched and tried various different ways.

    Reply
  25. G Leurch says

    02/12/2020 at 10:27 pm

    Hello,

    I am running a Lenovo ThinkCentre M720q i7-8700. I installed 6.5 successfully with no issues. This installation is using the ne1000 driver.

    Now my problem is that the software I am trying to install in VMware only installs on the 5.5 or 6.0 versions of ESXi.

    Is there a way that I can move the files needed to get this box up and running in 6.5 to version 5.5 and have it work there?

    Currently, I am getting the "No network adapters found" error.

    Thanks,

    Reply
  26. Shane says

    08/20/2023 at 2:07 pm

    I've run into this with the 11th gen Intel NUC when using ESXi 8.0 U1 & U1a. I can load the NUC via TFTP, but via HTTP the error "Shutting down firmware services... Using 'simple offset' UEFI RTS mapping policy" appears.

    Reply
  27. Richard Hughes says

    10/17/2023 at 6:26 pm

    Will ESXi 8 run on the Hades Canyon NUC8i7HVK?

    Reply
    • William Lam says

      10/17/2023 at 7:58 pm

      Don’t know, try it

      Reply
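
For those following the Vega M passthrough thread above (comment 8), here is how those scattered settings might look collected in one place. This is a hypothetical sketch, not a configuration William posted: the key names are standard .vmx options, but the 4 GB memory figure and the passthrough device entry are illustrative placeholders.

memsize = "4096"
sched.mem.min = "4096"
sched.mem.pin = "TRUE"
firmware = "bios"
ethernet0.virtualDev = "e1000e"
hypervisor.cpuid.v0 = "FALSE"
pciPassthru0.present = "TRUE"

firmware and ethernet0.virtualDev reflect William's observation that EFI plus VMXNET3 crashed the guest; hypervisor.cpuid.v0 is the workaround Chris78 used to keep the host alive; and the sched.mem entries express the full memory reservation that passthrough devices require.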
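
And for the NIC instability that Chris78 describes in comment 18, disabling autonegotiation can be done from the ESXi shell. A minimal sketch, assuming both NICs should be pinned at 1 Gbps / full duplex (the vmnic names match the NIC listing earlier in the post):

# Pin both NICs to 1 Gbps / full duplex (turns off autonegotiation)
[root@hades-canyon:~] esxcli network nic set -n vmnic0 -S 1000 -D full
[root@hades-canyon:~] esxcli network nic set -n vmnic1 -S 1000 -D full

# Revert to autonegotiation later if needed
[root@hades-canyon:~] esxcli network nic set -n vmnic0 -a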

