
ESXi on 11th Gen Intel NUC (Panther Canyon) & (Tiger Canyon)

01.13.2021 by William Lam // 90 Comments

The highly anticipated 11th Generation Intel NUCs, based on the new Tiger Lake processors, have just been announced by Intel and I am excited to share my firsthand experience with this new NUC platform. There are currently two models in the new 11th Gen lineup: the Intel NUC 11 Performance, codenamed Panther Canyon (pictured on the left), which is the successor to the 10th Gen (Frost Canyon) NUC, and the Intel NUC 11 Pro, codenamed Tiger Canyon (pictured on the right), which is the successor to the 8th Gen (Provo Canyon) NUC.


There are a number of new improvements and capabilities that will make these new NUCs quite popular for anyone looking to build or upgrade their vSphere environment in 2021.

Before diving right in, I must say I love the new aesthetic of the NUC chassis. In previous versions, the lid had a glossy, shiny finish which easily picked up hand prints; these new models have a clean matte finish. The NUC 11 Performance has a smoother feel compared to the NUC 11 Pro, which has more of a texture to the finish, which I personally prefer. The other noticeable change is the power adapter, now half the size, which is nice for those looking to have several of these new kits sitting next to each other.

UPDATE (08/23/21) - For those interested in purchasing the Intel NUC 11 Expansion Module, GoRite is a vendor that is now selling this accessory, which I was recently made aware of.

UPDATE (02/17/21) - The Community Networking Driver for ESXi Fling has been released and is required for ESXi to recognize the new onboard 2.5GbE network adapter on all Intel NUC 11 models.

NUC 11 Performance (Panther Canyon)

The NUC 11 Performance is similar to the previous 4x4 NUC models and will include three different configurations:

  1. "Slim" K chassis (one pictured below)
  2. "Tall" H chassis with a 2.5" SATA3 storage drive bay
  3. "Tall" Q chassis with a 2.5" SATA3 storage drive bay and for the first time, a wireless charging lid!

Here is a quick summary of some of the new hardware specs as they pertain to running ESXi:

  • Includes i3, i5 & i7 SKUs
  • 64GB SO-DIMM (DDR4-3200)
  • 1 x M.2 (2280), PCIe x4 Gen 4 NVMe or SATA3
  • 1 x SATA3 (Tall Chassis, the one pictured below is Slim)
  • 1 x 2.5GbE onboard NIC
  • 2 x Thunderbolt 3
  • 2 x USB 3.1 Gen 2

The NUC 11 Performance is a solid kit for anyone looking to upgrade or purchase a new system for their vSphere homelab. The maximum amount of memory is still 64GB, but it now supports DDR4-3200 DIMMs. On the storage front, the M.2 (2280) slot has been upgraded to support the latest PCIe x4 Gen 4 NVMe for those who may need an extra boost in storage performance, but I suspect for most it will be unnoticeable compared to PCIe Gen 3.

For those considering the Tall chassis model, you also have your standard SATA3 drive bay, which will allow you to set up vSAN or simply have two separate vSphere datastores. The IO connectivity on the system has also been updated with two Thunderbolt 3 ports (one in the front and one in the back), a nice upgrade from previous NUC models, which only had a single Thunderbolt 3 port aside from the Hades and Skull Canyon NUC models. With two Thunderbolt ports capable of 40Gbps, you have even more flexibility in expanding storage and/or networking, including 10GbE, which a number of folks in the community have been doing when deploying vSAN and/or NSX-T. It will be interesting to see what new Thunderbolt peripherals will be available in the market later this year.

Last but not least is the networking, which has also been upgraded from standard 1GbE to a 2.5GbE interface (Intel I225). Multi-gigabit network adapters have been rolling out slowly (here, here and here) and it was only a matter of time before they started to show up on the NUC platform. One of the challenges with a new network device is, of course, driver support that allows ESXi to recognize the device, which I will cover later in this post.
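
If you want to confirm the driver situation yourself, a quick sanity check from the ESXi Shell is all it takes. This is a generic sketch; vmnic names and PCI IDs will differ per system:

  # List the physical NICs that ESXi has claimed; with the Community
  # Networking Driver Fling installed, the onboard I225 should appear here
  esxcli network nic list

  # Confirm the PCI device is at least visible, even before a driver claims it
  lspci | grep -i ethernet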

NUC 11 Pro (Tiger Canyon)

The NUC 11 Pro, as the name implies, is the higher-end version of the NUC 11 Performance, and the biggest differentiators are vPro capability and a new expandability option (more on this in a bit). There will be two different configurations for the NUC 11 Pro:

  1. "Slim" K chassis
  2. "Tall" H chassis with a 2.5" SATA3 storage drive bay

Here is a quick summary of some of the new hardware specs as they pertain to running ESXi:

  • Includes i3, i5, i5 vPro, i7, i7 vPro SKUs
  • 64GB SO-DIMM (DDR4-3200)
  • 1 x M.2 (2280), PCIe x4 Gen 4 NVMe or SATA3
  • 1 x M.2 (2242), PCIe x1 Gen 3
  • 1 x 2.5GbE onboard NIC
  • 1 x Thunderbolt 4 / USB 4
  • 1 x Thunderbolt 3
  • 3 x USB 3.2 Gen 2


I will not rehash the similarities between the NUC 11 Performance and NUC 11 Pro; if you are interested, you can read the assessment above. I do want to focus on the differences and why you might consider getting a NUC 11 Pro. Earlier, I mentioned the biggest difference is expandability, and I literally do mean that. The NUC 11 Pro supports an optional expansion module (pictured below), located at the bottom of the NUC (pictured above), which adds an additional 2.5GbE interface (exactly the same as the onboard 2.5GbE) and two additional USB 2.0 ports. The standard onboard USB ports have also been updated to support the latest USB 3.2 Gen 2.


This is really the first 4x4 NUC that can be expanded, outside of the larger NUC 9 Pro / Extreme, which was just released last spring. The expansion module connects to a newly added M.2 (2242) B-Key slot, which you can see in the picture below. This is definitely going to be useful for those wanting an additional onboard NIC for setting up advanced networking with NSX-T.


If adding a secondary onboard NIC is not your cup of tea, the M.2 B-Key slot can also be used for expanding storage. The number of vendors and options for an M.2 2242 is limited when compared to the traditional M.2 2280 or 22110 form factors. In fact, I was skeptical about whether I would even be able to find an SSD that ESXi would recognize, given that most of the vendors that showed up on Amazon were ones I had never heard of before.

I ended up selecting this 256GB M.2 2242 from a vendor called KingShark 🦈. I figured if it was going to be a random vendor, and without sinking too much money into this test, I might as well pick the coolest name 😉

To use the M.2 2242 slot, you will need to remove the expansion module, if you have purchased it. There are three screws to remove: one for the M.2 slot itself and two more for the back panel. After that, slide out the expansion module. You can see in the picture below that the M.2 SSD has been installed.


To my complete surprise, the KingShark M.2 was fully recognized by the latest version of ESXi! This is a really interesting enhancement with the NUC 11 Pro: with previous 4x4 NUC models, the maximum number of storage devices has always been two. With the addition of a third SSD, customers now have even more options when it comes to configuring their vSphere datastores. You can have three separate VMFS datastores, a combination of a vSAN and a VMFS datastore (especially useful for vSAN traces), or a larger vSAN datastore with two capacity devices.
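
Before deciding on a datastore layout, it is worth confirming that all three devices are actually visible to ESXi. A minimal sketch from the ESXi Shell (device names will vary per system):

  # Enumerate local disks; the two M.2 devices plus the SATA SSD should all show up
  esxcli storage core device list | grep -i "Display Name"

  # Query each disk's vSAN eligibility, handy before claiming disks for vSAN
  vdq -q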

Here are a couple of comparison pictures (front and back) of the NUC 11 Pro (top) and NUC 10 (bottom). You can see the NUC 11 is slightly wider and taller to accommodate the new expanded capabilities.


I personally think the new NUC 11 Pro will give customers the greatest flexibility when it comes to running a vSphere homelab! In terms of availability, Intel will be shipping both Panther and Tiger Canyon to their partners in the coming weeks, and the units will be available for purchase later in Q1 of this year. Intel also has plans to release a successor to Hades Canyon called the Intel NUC 11 Enthusiast (Phantom Canyon). While there is no information on when this system will be available, Intel has shared some technical details: the discrete GPU will be an RTX 2060 with 6GB of GDDR6, and it also looks like they have removed the secondary onboard NIC, which was a very desirable feature in both the Skull and Hades Canyon models. As I learn more about the upcoming Phantom Canyon NUC, I will share that in a future blog post.

Finally, let's now take a look at running ESXi on these new NUC 11 systems 🙂

ESXi on NUC 11

Here is the latest ESXi 7.0 Update 1c running on the new NUC 11 Pro. As mentioned above, there are no issues with storage: I have been able to set up both standard VMFS and vSAN without any problems, as long as you are using an M.2 NVMe/SATA device that ESXi recognizes. Devices from Intel, Samsung and WD are known to just work out of the box.
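
For those wondering how a standalone host like this can run vSAN without vCenter, a single-node vSAN cluster can be bootstrapped from the ESXi Shell. A rough sketch; the device identifiers are placeholders you would substitute from vdq -q, and the default vSAN policy is assumed:

  # Create a single-node vSAN cluster on this host
  esxcli vsan cluster new

  # Claim one device for the cache tier (-s) and one for the capacity tier (-d)
  esxcli vsan storage add -s <cache-device> -d <capacity-device>

  # Verify the host has joined its own cluster
  esxcli vsan cluster get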


On the networking front, because the 2.5GbE onboard network adapters are brand-new devices, ESXi does not recognize them out of the box. With that said, we have developed a new ESXi Native Driver (you can find more details here), which customers will be able to incorporate into a new ESXi custom image for installation. The Fling will support both ESXi 7.0 and 7.0 Update 1, and once incorporated into a custom image, ESXi will automatically detect the onboard network device on both the NUC 11 Pro and NUC 11 Performance.
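
For a host that is already booted (for example, one installed using a USB NIC), one possible way to add the Fling without rebuilding the ISO is the ESXi 7.0 component workflow; the datastore path and bundle filename below are purely illustrative:

  # Apply the Fling offline bundle as a component, then reboot so the driver loads
  esxcli software component apply -d /vmfs/volumes/datastore1/Net-Community-Driver.zip
  reboot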


Here is a screenshot of ESXi 7.0 Update 1c also running on the NUC 11 Performance. As mentioned already, the new Community Networking Driver for ESXi Fling will be required for networking, and customers can set up both standard VMFS and/or vSAN, for those purchasing the "Tall" chassis configuration.


Categories // Home Lab, vSphere Tags // homelab, Intel NUC, Panther Canyon, Tiger Canyon

Comments

  1. Tom C says

    01/13/2021 at 6:46 am

    Difficult choice: 2 NICs or 2 M.2 drives. I think I'd go for the second M.2 drive myself in my vSAN setup and dump the USB boot drive.

    Still, is there any news on perhaps a 5 Gb add-in NIC, or whether booting from an SD card works for ESXi?

    Reply
  2. Bob Swani says

    01/13/2021 at 7:28 am

    4 Core CPU for lab is a joke in 2021. Intel needs to update this to 8 Core CPUs.

    Reply
  3. Steve Ballmers says

    01/13/2021 at 7:36 am

    William, please add Power Adapters to images so we can compare the size difference.

    Reply
    • William Lam says

      01/13/2021 at 10:33 am

      Added

      Reply
  4. Ric L says

    01/13/2021 at 8:31 am

    Wish that the 64GB RAM is doubled in this newer model.

    Reply
    • William Lam says

      01/13/2021 at 10:33 am

      Doubt we'll see that any time soon given NUCs use SODIMMs and there's been no hint of 64GB SODIMM modules ... so until that happens, I think 64GB will be the max

      Reply
  5. Tom says

    01/13/2021 at 10:41 am

    You state in the text (not the bullet list) that the Panther Canyon supports Thunderbolt 4. Also, the Info on USB 3.1 Gen2 / USB 3.2 Gen 2 is contradictory to what Intel has up on their page. I'm confused: https://www.intel.com/content/www/us/en/products/compare-products.html/boards-kits?productIds=205029,205607

    Reply
    • William Lam says

      01/13/2021 at 11:35 am

      That was fixed earlier. Panther Canyon does NOT support TB4 and USB is 3.1 Gen 2

      Reply
      • Pierre says

        06/17/2021 at 6:20 am

        You’ve still got almost a whole paragraph on TB4 for Panther Canyon. It begins: “The IO connectivity on the system has also been updated to support Thunderbolt 4 / USB 4 ports (one in the front and one in the back), this is a nice upgrade…”

        Reply
  6. Curtis B says

    01/13/2021 at 11:27 am

    Hmm. I was considering a second Frost Canyon i7 (I like the 6 cores) and adding a USB-C 2.5GbE port. The 11 Gen i7 is only a quad core (though a bit faster than the 10th gen from what I've read), though the prospect of built in dual 2.5GbE ports is tempting. Decisions...

    Reply
    • Mohammed says

      01/13/2021 at 1:04 pm

      Same here difficult choice !!!

      Reply
  7. BogBeast says

    01/14/2021 at 9:58 am

    Will the Fling update for 2.5GbE support other multi-gig cards, or just the NUCs?

    I can't get my Intel X550-T2 to connect at 2.5 or 5GbE in my homelab machines - just 1GbE or 10GbE

    Reply
    • William Lam says

      01/14/2021 at 10:47 am

      The Fling will add support for the PCI IDs found in the upcoming NUC 11. For other devices, we can consider them based on demand from the community and as time permits.

      For your negotiation issues, try using just Windows or Linux and see if the device is acting properly. If not, it's not an ESXi driver issue (since ESXi already detects it) but rather your setup

      Reply
      • BogBeast says

        01/15/2021 at 3:13 am

        Hello William, thanks for the reply.

        Yup, installed the card in a Windows host and it happily negotiates at 2.5 and 5GbE.

        I posted more over at https://communities.vmware.com/t5/vSphere-Hypervisor-Discussions/Multi-Gig-2-5-5GBe-Support-in-VMware-ESX-7-for-the-Intel-X550-T1/td-p/2821662 and in the Intel Technology network

        Would be more than pleased to provide such things as PCI IDs

        Reply
        • William Lam says

          01/15/2021 at 7:47 am

          Yes, please provide the PCI ID + full details (brand/make) of the device.

          The planned Fling will be for PCIe-based network adapters and, as you've already mentioned, we (VMware) do not have any support for multi-gig, so this would be a first as part of this upcoming Fling; the initial enablement is for the Intel NUC 11
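
          One convenient way to grab those details from the ESXi Shell is vmkchdev (a quick sketch; adjust the vmnic name as needed):

            # Prints the PCI address plus vendor:device and sub-vendor:sub-device IDs per vmnic
            vmkchdev -l | grep vmnic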

          Reply
          • BogBeast says

            01/18/2021 at 10:25 am

            Hi William,

            Here you go:

            Intel® Ethernet Converged Network Adapter X550-T2

            https://ark.intel.com/content/www/us/en/ark/products/88209/intel-ethernet-converged-network-adapter-x550-t2.html

            Partnumber: X550T2
            UPC: 735858307352
            EAN: 5032037080699

            Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address        MTU   Description
            ------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  -----------
            vmnic2  0000:61:00.0  ixgben  Up            Down         0      Half    b4:96:91:77:2b:e8  1500  Intel(R) Ethernet Controller 10G X550
            vmnic3  0000:61:00.1  ixgben  Up            Down         0      Half    b4:96:91:77:2b:e9  1500  Intel(R) Ethernet Controller 10G X550

            \==+PCI Device :
            |----Segment.........................................0x0000
            |----Bus.............................................0x61
            |----Slot............................................0x00
            |----Function........................................0x00
            |----Runtime Owner...................................vmkernel
            |----Has Configured Owner............................false
            |----Configured Owner................................vmkernel
            |----Vendor Id.......................................0x8086
            |----Device Id.......................................0x1563
            |----Sub-Vendor Id...................................0x8086
            |----Sub-Device Id...................................0x0001
            |----Vendor Name.....................................Intel(R)
            |----Device Name.....................................Ethernet Controller 10G X550
            |----Device Class....................................512
            |----Device Class Name...............................Ethernet controller
            |----PIC Line........................................11
            |----Old IRQ.........................................255
            |----Vector..........................................0
            |----PCI Pin.........................................1
            |----Spawned Bus.....................................0
            |----Flags...........................................12289
            \==+BAR Info :
            |----Module Id.......................................42
            |----Chassis.........................................0
            |----Physical Slot...................................3
            |----Numa Node.......................................3
            |----VmKernel Device Name............................vmnic2
            |----Slot Description................................CPU SLOT3 PCI-E 3.0 X8
            |----Device Layer Bus Address........................s00000003.00
            |----Passthru Disabled...............................false
            |----Passthru Capable................................false
            |----Parent Device...................................PCI 0:96:1:3
            |----Dependent Device................................PCI 0:97:0:0
            |----Reset Method....................................1
            |----FPT Shareable...................................true

            Many Thanks

  8. Steve Ballmers says

    01/14/2021 at 1:07 pm

    Great site William!

    Can you please post some comparison of the ASRock Mini 4x4 Box 4800u vs Intel Nuc 11th gen Slim?

    Talk about performance and pictures of them side by side including the power adapters?

    Keep up the great work!

    Reply
  9. Tommy Kuhler says

    01/23/2021 at 11:45 am

    Would you mind posting a screenshot of the tiger canyon NUCs ESXi passthrough PCI-Devices screen?

    Wanna know which devices can be passed through...

    Reply
  10. Michelle Laverick says

    02/04/2021 at 7:36 am

    Is that fling for the NUC 11th Gen online yet? I rather liked the fact the 10th Gen's NIC was recognised natively by ESXi 7.x

    Reply
    • William Lam says

      02/04/2021 at 4:56 pm

      Not yet

      Reply
  11. sealinsd says

    02/16/2021 at 12:45 am

    Can you share the ESXi installation ISO for the NUC 11, or the ESXi driver files for the Intel I225? I just bought a NUC 11, but I cannot install ESXi because there is no network card driver. Thank you very much

    Reply
    • William Lam says

      02/16/2021 at 4:48 am

      Did you actually read the article? It mentions a (yet to be released) driver will be required 🙂

      Reply
      • sealinsd says

        02/16/2021 at 9:38 pm

        I have read the article; I am just anxious to use this NUC. I will wait for the driver to be released. Thank you for your article and tutorial.

        Reply
        • William Lam says

          02/17/2021 at 9:17 am

          The Fling has been released https://www.williamlam.com/2021/02/new-community-networking-driver-for-esxi-fling.html

          Reply
          • sealinsd says

            02/17/2021 at 9:19 am

            Tanks^_^

          • sealinsd says

            02/17/2021 at 9:19 am

            Thanks with the lost h ^_^

  12. Craig says

    02/17/2021 at 2:11 am

    Looks like your content has been copied lock, stock and barrel - https://nucfans.com/p/712.html

    Reply
  13. alexander says

    02/17/2021 at 10:14 am

    Is it possible to buy the second 2.5GbE LAN interface separately?

    Reply
    • Chris says

      04/07/2022 at 10:26 am

      Here I have found only one GbE adapter.
      https://www.gorite.com/catalogsearch/result/?q=Intel+NUC+Front+Panel

      Reply
  14. Gustavo says

    02/20/2021 at 12:58 pm

    Hi William,
    so ESXi will be fine on NUCs without ECC memory?
    Thank you!

    Reply
    • William Lam says

      02/20/2021 at 3:02 pm

      ECC memory is NOT a requirement to run ESXi, but it is certainly recommended in general for x86 platforms, especially for officially supported platforms.

      Outside of the recent Intel NUC 9 Pro, all NUCs do NOT support ECC memory, meaning you do not even have a choice 🙂

      Reply
  15. dazza says

    03/02/2021 at 1:06 am

    Hi William. Great work. I'm looking for the cheapest route to obtaining an ESXi cluster with vSAN. Would love your recommendation here. Have you come across/considered anyone offering cost-effective cloud-hosted ESXi cluster labs?

    Reply
  16. stich86 says

    03/14/2021 at 2:16 pm

    Hi William,

    I want to get a NUC11TNHv50L (the only one that I can find here in Italy) to set up ESXi 7.0 (single host), run multiple VMs (3 at the moment) and consolidate some of my home stuff (a firewall with OPNsense, Ubuntu running Home Assistant and a W10 VM for work). I need some clarification:

    - is it possible to configure this NUC to power on after a power loss?
    - can I pass through the Intel AX card to a Linux VM? I want to use the Bluetooth adapter for my home automation
    - can I pass through a USB device to a Linux VM? I need to pass a Zigbee adapter
    - are IPMI drivers available to see the hardware status in ESXi?

    Thanks in advance!

    Reply
    • Tom C says

      03/14/2021 at 2:31 pm

      1. Yes
      2. Maybe
      3. Maybe
      4. No

      Reply
  17. Marco says

    03/17/2021 at 11:27 pm

    Hi William, thanks for the great article. From what you write and what is found in the Intel specifications, the NUC11TNHi7 or NUC11TNHv7 (for example) models offer essentially three configurations:
    1. 3 total internal drives (2 x M.2 + 1 x 2.5" SATA drive)
    2. 2 total internal drives (1 x M.2 + 1 x 2.5" SATA drive)
    3. 2 total internal drives (1 x M.2 + 1 x 2.5" SATA drive) + expansion slot for dual Ethernet

    Correct?

    Reply
    • William Lam says

      03/18/2021 at 9:00 am

      That's correct. Depending on your needs, you can select one of those configurations

      Reply
  18. alexg says

    03/22/2021 at 1:19 am

    Hi William, I also had a NUC11TNHv50L come in for testing. With the Fling driver, both network cards are detected correctly. However, I have a difficulty with vPro. When I run the ESXi installer with a VNC/KVM session connected, or start the ESXi install, there is a PSOD. If I let the installer or the server start up first and connect with VNC after a few minutes, everything is okay. Maybe I have the wrong settings, but I have already tried a few combinations.

    Reply
    • WHNS says

      03/22/2021 at 7:25 am

      Hi, I have exactly the same problem with my NUC11TNKv7. I managed to install 7U1 with the USB NIC Fling, so it seems to be a problem with the NIC driver and vPro. I am currently running with a USB NIC.

      Another problem is that passthrough of the Iris Xe GPU does not work. I can pass the Iris Xe to Windows 10 but the Intel driver always reports code 43.
      I already tried
      hypervisor.cpuid.v0 = "FALSE" and smbios.reflectHost = "TRUE"

      Reply
      • deividfortuna says

        11/04/2022 at 2:37 pm

        I'm having the same issue :/

        Reply
    • William Lam says

      03/22/2021 at 1:50 pm

      I've not personally played with vPro; I'd need to double-check whether the kit I've got even has access to vPro. Let me see if there are any particular settings that need to be applied or whether this is reproducible on our end.

      For the PSOD, is there any way to get a support bundle w/ core dump when this happens? This might help the Engr team better understand where the issue might be

      Reply
      • alexg says

        03/23/2021 at 12:38 am

        Unfortunately, I can't send a coredump because I only had the device for a few days for testing. It would be great if you could find a solution. Because with vPro, the NUC would become a real alternative to the E200-8D in the Homelab. Maybe WHNS can provide a DUMP and additionally the info if the USB Fling was the solution for the PSOD. Maybe vPro supplies a USB network adapter or something similar with an active KVM/VNC connection....

        Reply
      • WHNS says

        03/23/2021 at 9:48 am

        Hi,
        I don't know how to get the core dump when booting from the USB installer/ISO.
        Here is a screenshot of the PSOD:

        https://pasteboard.co/JTYaWvN.png

        Reply
        • WHNS says

          03/23/2021 at 9:52 am

          http://folio.ink/TwYdxk

          Reply
          • William Lam says

            03/23/2021 at 11:36 am

            Let me share this with Engineering to see if they've got any thoughts ...

            It sounds like there may be two possible workarounds for now:

            1) Let the ESXi installer fully boot up prior to connecting to KVM. If you can let us know this is a functional workaround, that would be helpful

            2) Connect a USB NIC and use that to connect to the KVM which doesn't run into this problem

      • WHNS says

        03/24/2021 at 12:05 am

        Hi,
        I can't reply to your last answer, so I'll do it here.
        1) I was able to work around the problem by disconnecting from vPro Remote Desktop after selecting the USB installer to boot from (F10 Boot Menu), then waiting ~5 min and reconnecting.

        2) I also do not think that Intel vPro can run over a USB NIC. It only works with the integrated NIC of the NUC 11, so that workaround is not possible

        Reply
        • William Lam says

          03/24/2021 at 9:18 am

          With the help of WHNS, we were able to identify the issue and verify the fix. The Engr team will work on producing an updated version of the driver that'll resolve the vPro issue. They're currently busy with other higher priority items, but we'll try to get that released as soon as we can.

          Reply
          • alexg says

            03/25/2021 at 10:17 am

            Hi William,
            Great news. If it works, the new cluster will run with NUCs 🙂
            Is it possible to connect an SSD via USB and install ESXi on it? With the USB stick I've no scratch partition and so on.

  19. PaulMUC says

    03/22/2021 at 9:00 am

    Hi William,
    I just ordered a NUC11TNHi50W and would also like to use the NIC expansion module, but I can't find it anywhere; it's not even mentioned on Intel's NUC site. Could you please provide a part number, a name or a link to where you got yours? Or possibly a source here in Europe?
    Thanks a lot...

    Reply
    • William Lam says

      03/22/2021 at 6:12 pm

      Hi Paul,

      I reached out to Intel and they mentioned that for the European region, you can reach out to one of their partners called GoRite https://www.gorite.com/contact-us who should be able to help.

      Reply
  20. stich86 says

    04/04/2021 at 2:39 am

    Hi William,

    I don't know if you can help me. I've got an 11th Gen NUC with i5 + vPro.
    I'm running ESXi 7.0u1. On the first pNIC I've got one vSwitch with the PG "VM Network" and another with VLAN ID 2 for "IoT Network"; on the second pNIC, just a vSwitch with the PG "WAN Network", directly connected to my ISP router for internet access. I'm running OPNsense as a VM and assigned the PGs in this order:

    - VM Network -> internal LAN, acts as DHCP server
    - IoT Network -> IoT LAN, acts as DHCP server
    - WAN Network -> WAN with PPPoE client

    Until last week everything was fine. After a reboot to add a new SSD to the NUC, DHCP stopped working, but ONLY on pNIC1 and on the untagged PG (the VM Network). Doing some debugging, it looks like the Intel AMT solution broke something on pNIC1. I confirmed this by swapping the pNICs on the vSwitches: when moving the vSwitch that has the VM Network PG to pNIC2 (and swapping on the physical switch as well), DHCP works without any issue. Another strange thing: after a VM starts on the hypervisor, the Intel AMT IP stops responding to any type of traffic (ICMP, TCP, UDP).
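
    For reference, that layout maps roughly to the following standard vSwitch commands from the ESXi Shell (a sketch; the vSwitch and vmnic names are assumptions based on the description above):

      # WAN vSwitch on the second pNIC
      esxcli network vswitch standard add -v vSwitch1
      esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch1
      esxcli network vswitch standard portgroup add -p "WAN Network" -v vSwitch1

      # IoT portgroup tagged with VLAN 2 on the first pNIC's vSwitch
      esxcli network vswitch standard portgroup add -p "IoT Network" -v vSwitch0
      esxcli network vswitch standard portgroup set -p "IoT Network" --vlan-id 2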

    Do you know if there can be an issue using AMT on an interface where a DHCP server is running?

    Thanks!

    Reply
    • William Lam says

      04/05/2021 at 5:39 am

      I've not done anything with Intel AMT, nor do I have a system with the functionality, so I won't be of much help. You may want to post on the Intel forums to see if anyone can help. I will say there is a known issue where connecting to the vPro interface during ESXi bootup can cause a PSOD (but it seems like your issue occurs after it has started, so it's most likely not related)

      Reply
  21. MIAO WANG says

    04/04/2021 at 10:55 pm

    Hi William
    I installed ESXi 7.0 U1c on an 11th Gen NUC with the Community Networking Driver Fling.
    It does not run normally (it cannot find a compatible network driver) when starting up from a powered-off state. I have to restart the machine for it to go back to normal. Do you have any suggestions?

    Reply
    • William Lam says

      04/05/2021 at 5:37 am

      I'm not sure what you mean by not running normally ... did you create a custom image that contains the required driver? At least a couple dozen customers have been able to get it working, so it's possible this is a hardware issue; you may want to ensure all BIOS/firmware is up to date.

      Reply
      • MIAO WANG says

        04/05/2021 at 6:25 am

        Thank you for the reminder. I updated to the latest BIOS and I can now boot ESXi normally.

        Reply
      • Kav says

        05/25/2021 at 6:30 am

        I had the same issue (NUC11PAHi5). I successfully installed ESXi 7U2 with a custom ISO containing the Fling driver. However, if I booted from a shutdown state, the NIC would not get recognised, but if I rebooted, it would.

        I also updated the firmware to the latest (0039) and this seemed to resolve it. Hooray!

        Reply
  22. MIke says

    04/12/2021 at 12:41 pm

    Hi.
    Any idea how I can get this expansion module or USB connectors for the internal headers? I searched for some hours but found nothing.

    Reply
  23. Vincent says

    04/19/2021 at 1:37 pm

    I found this article and a few others online showing off the NUC 11 units, but in the US I'm not seeing them in-stock. Anyone have an idea of when the NUC 11 units are expected to be available in the US? For use as an ESXi system for a home lab are the NUC 11 units a significant upgrade over the NUC 10 units?

    Reply
  24. antonymaja says

    04/25/2021 at 8:05 pm

    Hi Will, I have a NUC 11 (NUC11PAHi5) and have injected the Fling driver into a vSphere 7 Update 2 zip to create and export my custom ISO. Everything works and the correct outputs show; however, when I boot from the USB it gets to the network adapters section and shows "No Network Adapters". I have my network adapter plugged into a UniFi switch, and it shows that it's connected (the LED is flashing).

    Reply
    • Markus Brody says

      04/27/2021 at 12:18 am

      Update your BIOS. Out of the box my NUC (NUC11PAHi5) had the 0035 BIOS; it wouldn't pick up the NIC with the Fling injected into a custom ISO (7.0.1), and it presented errors when scanning for devices using a USB NIC.

      I updated to the latest BIOS (0039), and it picked up both the onboard and USB NICs and installed successfully.

      Reply
      • antonymaja says

        04/27/2021 at 6:48 pm

        You are amazing. Thanks so much! I didn't think of updating the BIOS. I updated to 0039 and ran the same custom ISO with the Fling injected into 7.0.2. The install went smoothly; however, ESXi didn't pick up my newish HP mechanical keyboard so I had to use an older one.

        Thanks again!

        Reply
  25. Cinvivo says

    04/29/2021 at 9:51 am

    Hi William, can I add a 2nd WiFi module using the 2242 plus adapter, or is there something internally that would prevent me from running two M.2 WiFi modules? Thanks. Bernard

    Reply
    • William Lam says

      04/29/2021 at 4:14 pm

      Any 2242 device should work, assuming it fits within the system

      Reply
  26. Sam says

    06/08/2021 at 7:04 pm

    Thanks for this great article William. My requirement for a NUC is more about setting up a Kubernetes cluster than graphical usage. Typically more CPU is better for me, but I'm still unsure whether I should choose the Frost Canyon i7 (hexa-core) or the Panther Canyon i7 (quad-core). Any thoughts on this?

    Reply
    • Kav says

      06/09/2021 at 4:24 pm

      Multiply the number of cores by the clock speed; this is your CPU 'capacity', and the higher the better for your case. Without knowing the clock speeds, I would guess that the hexa-core will give you the higher figure. Keep in mind that for single-threaded applications this makes no difference; single-core clock speed is the most important factor in that case.

      Reply
  27. ChrisD says

    06/12/2021 at 9:08 pm

    Hey William, have you encountered any issues where issuing a shutdown from ESXi results in a reboot immediately afterwards?

    I'm running BIOS version 0056 on a NUC11TNHi5 with ESXi 7.0.2 Build 17867451.

    I've tried various combinations of BIOS settings but no change.

    Booting an Ubuntu live CD and running a 'shutdown -h now' results in a proper shutdown.

    Reply
  28. flyzipper says

    07/13/2021 at 1:20 pm

    Thanks for the write-up!
    Any insights into vGPU passthrough for Intel XE graphics using ESXi?
    It looks like Workstation 16 can do it, with the appropriate tools installed on the Windows 10 vm, but haven't found confirmation for ESXi.

    Reply
    • William Lam says

      07/13/2021 at 1:31 pm

      See https://williamlam.com/2021/07/passthrough-of-intel-iris-xe-integrated-gpu-on-11th-gen-nuc-results-in-error-code-43.html

      Reply
      • flyzipper says

        07/13/2021 at 1:48 pm

        Thank you!
        I should have kept reading your site 🙂

        Reply
  29. kang says

    08/24/2021 at 6:23 pm

    Hello, does your NUC 11 running ESXi 7.0 shut down normally?
    My NUC 11 model is NUC11TNHv5. When I run ESXi 7.0, I cannot shut down the host: when I click shutdown in the web console, it reboots automatically, as if I had clicked reboot. Do you know the possible reasons, or how to troubleshoot the problem?

    Reply
    • Steve says

      01/18/2022 at 4:20 am

      Hello Kang,

      the problem is also described here:
      https://www.sbcpureconsult.co.uk/2021/04/12/lab-problems-with-intel-nuc-11th-generation-hardware-with-vmware-esxi-7-0-1/

      But no good solution yet.

      Regards
      Steve

      Reply
    • veilus says

      02/27/2022 at 3:47 am

      I have the same issue and it seems like it will never be fixed, seeing as that post is from April 2021.

      Reply
      • Tom C says

        04/17/2022 at 6:11 am

        It's a shame but I think you're right. No one seems to care. No big surprise though; Intel doesn't market these things as virtualization hosts. I am happy that I can run vSAN on a NUC again since VMware killed SD and USB drives as boot devices.

        Reply
  30. krinix_rog says

    11/22/2021 at 9:44 am

    I installed a customized ESXi ISO with the community driver on my new NUC11PAHi7.
    This has an Intel Ethernet Controller I225-V.

    But it shows:

    Link speed: 1000 Mbps
    Driver: cndi_igc

    Have I got the wrong driver installed? How do I get 2.5Gbit/s for vmnic0?

    I need help with this.

    Reply
  31. Steve says

    01/18/2022 at 2:31 am

    Hello William,

    thank you for this great website. What memory option are you using, 2x 32GB or 1x 64GB?
    Which manufacturer do you have experience with on the NUC 11?

    Regards
    Steve

    Reply
    • William Lam says

      01/20/2022 at 5:11 am

      All Intel NUCs use SODIMM memory and the largest capacity for a single DIMM is 32GB (64GB modules don't exist, sadly)

      You can check out https://williamlam.com/2019/03/64gb-memory-on-the-intel-nucs.html for memory options

      Reply
  32. Akushida says

    04/15/2022 at 9:24 pm

    Hi William,

    Need your insights please! My NUC BNUC11TNHI70L00 is running ESXi 7.0U3d-19482537, BIOS version TN0064. It runs into the shutdown issue stated above: ESXi is not able to actually shut down the NUC. It seems that when 'Shut down' of an ESXi host is performed, the system ignores the BIOS power setting (e.g. to remain off, or power on, etc.) and immediately restarts back to a running state (almost as if a reboot instead of a shutdown were chosen). Any thoughts or recommendations would be greatly appreciated!

    Reply
    • William Lam says

      04/17/2022 at 8:05 am

      I noticed the same with the 11th Gen; it's possibly a change in their BIOS. I'd recommend posting on the Intel NUC community forums to see if anyone can help

      Reply
    • Federico says

      03/10/2023 at 9:01 am

      It depends on the ESXi shutdown method, but there seems to be no solution that works for all NUC 11 models. I have 2x NUC 11 i5 with the same problem.
      The BIOS is updated to the latest version.

      Reply
  33. Danny says

    06/11/2022 at 1:06 pm

    My install went pretty smoothly, but I have random lockups, and I'm beginning to believe it is the NUC. I have swapped the SSD and memory. ESXi doesn't log anything and the screen shows static stripes when it occurs. The only thing you can do is hold the power button to reset. I assume that I could load a supported OS and, if the issue occurs, get warranty support/replacement?

    Reply
  34. Martin says

    07/28/2022 at 2:52 pm

    Hello William!
    Great post that got me buying 3 NUC 11 Pros (NUC11TNHv5) for a vSAN + Tanzu cluster. My goal is to use a QNAP TB3-->10GbE adapter and an external SSD in a TB3 enclosure as the boot drive, with the internal M.2 for cache and a SATA SSD for capacity. I have an issue when using 2 TB3 devices at the same time on the WindowsToGo USB drive: the 2nd TB3 device I plug in isn't detected.
    Were you able to use 2 TB3 devices at the same time on your NUC 11 Pro?
    Thank you!

    Reply
  35. Ville says

    10/30/2022 at 6:01 am

    Anyone having problems with the NUC 11 and a 2.5" SATA drive?
    I have a Kingston A400 2.5" drive and it is not recognized in ESXi 8.0...

    Reply
    • William Lam says

      10/30/2022 at 11:41 am

      If you’ve checked your connections, then it’s most likely due to drivers for device or lack there of … I’ve got NUC 11 Pro and SATA is fine, it’s Intel SSD

      Reply
      • Ville says

        10/30/2022 at 12:51 pm

        Hmm.. OK. I have checked the cables, and the disk works in another system, so I think it is the same kind of driver issue. I have a couple of other SSDs to test. At least the Kingston A2000 NVMe SSD is working fine in the same NUC11TNHv50L Pro machine.

        Reply
        • Ville says

          11/04/2022 at 11:33 am

          Damn. This SSD issue was user error. The small cable from the board to the SSD bracket was pulled up a little bit, so the contact was not good. All good now!!

          Reply
  36. Loic says

    03/16/2023 at 7:19 am

    Hi,

    I just tested installing vSphere 8 on a NUC11PAHI5 but it shows a purple screen on each reboot or shutdown. Do you have any experience with that?

    Reply
  37. Baz Curtis says

    07/18/2023 at 2:44 am

    Great article. Can you use the Thunderbolt ports for ESXi storage or as an extra network port?

    Reply
    • William Lam says

      07/18/2023 at 5:56 am

      Yes. There's both TB Storage & Network options. See https://williamlam.com/2015/01/thunderbolt-storage-for-esxi.html and https://williamlam.com/2019/04/new-thunderbolt-3-to-10gbe-options-for-esxi.html

      Reply
  38. Duncan John Butcher says

    01/20/2024 at 4:55 am

    Hi William,
    First off, thanks for all the information you've provided so far; it's really helped me build my home lab. I'm having such a specific issue that you may not be able to help, but perhaps you can point me in the right direction.
    I have 3 Intel NUC 11s which I have configured as a 3-node vSAN cluster. Currently I am running:
    vCenter 8.0.2 - 22617221
    ESXi 8.0.1 - 22088125
    When attempting to use LCM to update via a cluster-based image to 8.0 U2 - 22380479, I get the following errors on each host during the compliance check (remediation also fails):
    vSAN health test 'SCSI controller is VMware certified' reported for cluster 'VSANC1'. Check the vSAN health.
    vSAN health test 'NVMe device is VMware certified' reported for cluster 'VSANC1'. Check the vSAN health.
    I get it, these devices are not VMware certified, but it's a lab so I don't care. Can I suppress or disable these alerts to allow patching to continue? I'd imagine if anyone else has had this issue they might be here; googling turns up KBs which are not related to ESXi 8.0.1, 22088125.

    Reply
    • Duncan John Butcher says

      01/20/2024 at 5:22 am

      After doing some additional poking around, I found a ridiculously simple fix for this issue:
      From the cluster --> Monitor tab --> vSAN --> Skyline Health
      Review each warning and select Silence.

      Hope this helps someone!

      Reply
  39. Thomas says

    02/09/2024 at 1:45 am

    Got a cheap Intel NUC 11 Enthusiast; will it support ESXi 8U2 out of the box or do I need the Fling driver again?
    Would it be possible to pass through the Nvidia card to a Windows 10 or 11 VM?

    Reply
