The highly anticipated 11th Generation Intel NUCs based on the new Tiger Lake processors have just been announced by Intel and I am excited to share my first-hand experience with this new NUC platform. There are currently two models in the new 11th Gen lineup: the Intel NUC 11 Performance, codenamed Panther Canyon (pictured on the left), which is the successor to the 10th Gen (Frost Canyon) NUC, and the Intel NUC 11 Pro, codenamed Tiger Canyon (pictured on the right), which is the successor to the 8th Gen (Provo Canyon) NUC.
There are a number of new improvements and capabilities that will make these new NUCs quite popular for anyone looking to build or upgrade their vSphere environment in 2021.
Before diving right in, I must say I love the new look of the NUC chassis. In previous versions, the lid had a glossy, shiny finish which easily picked up fingerprints. These new models now have a clean matte finish. The NUC 11 Performance has a smoother feel, while the NUC 11 Pro has more of a textured finish, which I personally prefer. The other noticeable change is the power adapter, which is now half the size, a nice touch for those looking to have several of these new kits sitting next to each other.
UPDATE (08/23/21) - For those interested in purchasing the Intel NUC 11 Expansion Module, I was recently made aware that GoRite is a vendor now selling this accessory.
UPDATE (02/17/21) - The Community Networking Driver for ESXi Fling has been released and is required for ESXi to recognize the new onboard 2.5GbE network adapter on all Intel NUC 11 models.
NUC 11 Performance (Panther Canyon)
The NUC 11 Performance is similar to the previous 4x4 NUC models and will include three different configurations:
- "Slim" K chassis (one pictured below)
- "Tall" H chassis with a 2.5" SATA3 storage drive bay
- "Tall" Q chassis with a 2.5" SATA3 storage drive bay and for the first time, a wireless charging lid!
Here is a quick summary of some of the new hardware specs as they pertain to running ESXi:
- Includes i3, i5 & i7 SKUs
- 64GB SO-DIMM (DDR4-3200)
- 1 x M.2 (2280), PCIe x4 Gen 4 NVMe or SATA3
- 1 x SATA3 (Tall Chassis, the one pictured below is Slim)
- 1 x 2.5GbE onboard NIC
- 2 x Thunderbolt 3
- 2 x USB 3.1 Gen 2
The NUC 11 Performance is a solid kit for anyone looking to upgrade or purchase a new system for their vSphere homelab. The maximum amount of memory is still 64GB, but it now supports DDR4-3200 SO-DIMMs. On the storage front, the M.2 (2280) slot has been upgraded to support the latest PCIe x4 Gen 4 NVMe SSDs for those who may need an extra boost in storage performance, but I suspect for most the difference will be unnoticeable compared to PCIe Gen 3.
For those considering the Tall chassis model, you also have a standard SATA3 drive bay which will allow you to set up vSAN or simply have two separate vSphere datastores. The IO connectivity on the system has also been updated to include two Thunderbolt 3 ports (one in the front and one in the back), which is a nice upgrade from previous NUC models that only had a single Thunderbolt 3 port, aside from the Hades Canyon and Skull Canyon NUC models. With two Thunderbolt ports, each capable of 40Gbps, you have even more flexibility in expanding storage and/or networking, including 10GbE, which a number of folks in the community have been doing when deploying vSAN and/or NSX-T. It will be interesting to see what new Thunderbolt peripherals will be available in the market later this year.
Last but not least is the networking, which has also been upgraded from a standard 1GbE to a 2.5GbE interface (Intel I225). Multi-gigabit network adapters have been rolling out slowly (here, here and here) and it was only a matter of time before they started to show up on the NUC platform. One of the challenges with a new network device is of course driver support so that ESXi can recognize the device, which I will cover later in the post.
NUC 11 Pro (Tiger Canyon)
The NUC 11 Pro, as the name implies, is the higher-end version of the NUC 11 Performance, and the biggest differentiators are vPro capability and a new expandability option; more on this in a bit. There will be two different configurations for the NUC 11 Pro:
- "Slim" K chassis
- "Tall" H chassis with a 2.5" SATA3 storage drive bay
Here is a quick summary of some of the new hardware specs as they pertain to running ESXi:
- Includes i3, i5, i5 vPro, i7, i7 vPro SKUs
- 64GB SO-DIMM (DDR4-3200)
- 1 x M.2 (2280), PCIe x4 Gen 4 NVMe or SATA3
- 1 x M.2 (2242), PCIe x1 Gen 3
- 1 x 2.5GbE onboard NIC
- 1 x Thunderbolt 4 / USB 4
- 1 x Thunderbolt 3
- 3 x USB 3.2 Gen 2
I will not rehash the similarities between the NUC 11 Performance and NUC 11 Pro; if you are interested, you can read the assessment above. I do want to focus on the differences and why you might consider getting a NUC 11 Pro. Earlier, I mentioned the biggest difference is expandability, and I literally do mean that. The NUC 11 Pro comes with an optional expansion module (pictured below) that attaches to the bottom of the NUC (pictured above) and includes an additional 2.5GbE interface (exactly the same as the onboard 2.5GbE) and two additional USB 2.0 ports. The standard onboard USB ports have also been updated to support the latest USB 3.2 Gen 2.
This is really the first 4x4 NUC which can be expanded, outside of the larger NUC 9 Pro/Extreme, which was released just last spring. The expansion module connects to a newly added M.2 (2242) B-Key slot, which you can see in the picture below. This is definitely going to be useful for those wanting an additional onboard NIC for setting up advanced networking with NSX-T.
If adding a secondary onboard NIC is not your cup of tea, the M.2 B-Key slot can also be used for expanding storage. The number of vendors and options for an M.2 2242 is limited when compared to the traditional M.2 2280 or 22110 form factors. In fact, I was skeptical about whether I would even be able to find an SSD that ESXi would recognize, given that most of the vendors that showed up on Amazon were ones I had never heard of before.
I ended up selecting this 256GB M.2 2242 from a vendor called KingShark 🦈. I figured if it was going to be a random vendor and I was not going to sink too much money into this test, I might as well pick the coolest name 😉
To use the M.2 2242 slot, you will need to remove the expansion module, if you have purchased it. There are three screws to remove: one for the M.2 slot itself and two more for the back panel. After that, slide out the expansion module. You can see in the picture below that the M.2 SSD has been installed.
To my complete surprise, the KingShark M.2 was fully recognized by the latest version of ESXi! This is a really interesting enhancement with the NUC 11 Pro; with previous 4x4 NUC models, the maximum number of storage devices has always been two. With the addition of another SSD, customers now have even more options when it comes to configuring their vSphere datastores. You can have three separate VMFS datastores, a combination of both a vSAN and VMFS datastore (especially useful for vSAN traces), or a larger vSAN datastore with two capacity devices, as sketched below.
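For those going the vSAN route with all three devices, here is a minimal PowerCLI sketch of claiming the M.2 NVMe as the cache tier and the other two SSDs as capacity. The hostname and device canonical names below are placeholders; pull the real canonical names from Get-ScsiLun on your own host first.

# List the locally attached devices so you can grab their canonical names
$vmhost = Get-VMHost -Name "nuc11-pro.lab.local"
Get-ScsiLun -VMHost $vmhost -LunType disk | Select-Object CanonicalName, CapacityGB, Model

# Claim the NVMe as cache and the two remaining SSDs as capacity in a single disk group
New-VsanDiskGroup -VMHost $vmhost `
    -SsdCanonicalName "eui.0000000000000001" `
    -DataDiskCanonicalName "t10.ATA_____KingShark_2242_SSD", "t10.ATA_____SATA_2.5_SSD"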
Here are a couple of comparison pictures (front and back) between the NUC 11 Pro (top) and NUC 10 (bottom). You can see the NUC 11 is slightly wider and taller to accommodate the new expansion capabilities.
I personally think the new NUC 11 Pro will give customers the greatest flexibility when it comes to running a vSphere homelab! In terms of availability, Intel will be shipping both Panther Canyon and Tiger Canyon to their partners in the coming weeks and they will be available for purchase later in Q1 of this year. Intel also has plans to release a successor to Hades Canyon, which will be called the Intel NUC 11 Enthusiast (Phantom Canyon). While there is no information on when this system will be available, there are some technical details from Intel about the discrete GPU, which will be an RTX 2060 with 6GB of GDDR6. It also looks like they have removed the secondary onboard NIC, which was a very desirable feature in both the Skull and Hades Canyon models. As I learn more about the upcoming Phantom Canyon NUC, I will share that in a future blog post.
Finally, let's take a look at running ESXi on these new NUC 11 systems 🙂
ESXi on NUC 11
Here is the latest ESXi 7.0 Update 1c running on the new NUC 11 Pro. There are no issues with storage as mentioned above, and I have been able to set up both standard VMFS as well as vSAN without any problems, as long as you are using an M.2 NVMe/SATA device that ESXi recognizes; devices from Intel, Samsung and WD are known to just work out of the box.
On the networking front, because the 2.5GbE onboard network adapters are brand-new devices, ESXi does not recognize them out of the box. With that said, we have developed a new ESXi Native Driver (you can find more details here) which customers will be able to incorporate into a custom ESXi image for installation. The Fling will support both ESXi 7.0 and 7.0 Update 1, and once incorporated into a custom ESXi image, ESXi will automatically detect the onboard network device on both the NUC 11 Pro and NUC 11 Performance.
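If you have never built a custom ESXi image before, here is a rough PowerCLI Image Builder sketch. The depot filename, Fling bundle filename, image profile name and driver package name below are placeholders; substitute the exact names from the bits you actually download.

# Add the offline depots for ESXi and the community networking driver Fling
Add-EsxSoftwareDepot .\VMware-ESXi-7.0U1c-depot.zip
Add-EsxSoftwareDepot .\Net-Community-Driver-offline-bundle.zip

# Clone the standard profile, drop the acceptance level to CommunitySupported and add the driver
New-EsxImageProfile -CloneProfile "ESXi-7.0U1c-standard" -Name "ESXi-7.0U1c-NUC11" -Vendor "homelab"
Set-EsxImageProfile -ImageProfile "ESXi-7.0U1c-NUC11" -AcceptanceLevel CommunitySupported
Add-EsxSoftwarePackage -ImageProfile "ESXi-7.0U1c-NUC11" -SoftwarePackage "net-community"

# Export a bootable ISO to install on the NUC 11
Export-EsxImageProfile -ImageProfile "ESXi-7.0U1c-NUC11" -ExportToIso -FilePath .\ESXi-7.0U1c-NUC11.iso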
Here is a screenshot of ESXi 7.0 Update 1c also running on the NUC 11 Performance. As mentioned already, the new Community Networking Driver for ESXi Fling will be required for networking, and customers can set up both standard VMFS and/or vSAN for those purchasing the "Tall" chassis configuration.
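Once the host is up, a quick way to sanity check that the I225 interfaces were picked up by the new driver is to list the physical NICs. A short PowerCLI sketch (the hostname is a placeholder):

# The onboard 2.5GbE (and the expansion module NIC on the NUC 11 Pro) should show up as vmnics
Get-VMHostNetworkAdapter -VMHost (Get-VMHost -Name "nuc11-pro.lab.local") -Physical |
    Select-Object Name, Mac, BitRatePerSec, FullDuplex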
Tom C says
Difficult choice: 2 NICs or 2 M.2 drives. I think I'd go for the second M.2 drive myself in my vSAN setup and dump the USB boot drive.
Still, is there any news on perhaps a 5 Gb add-in NIC or if boot from an SD card works for ESXi?
Bob Swani says
4 Core CPU for lab is a joke in 2021. Intel needs to update this to 8 Core CPUs.
Steve Ballmers says
William, please add Power Adapters to images so we can compare the size difference.
William Lam says
Added
Ric L says
Wish that the 64GB RAM is doubled in this newer model.
William Lam says
Doubt we'll see that any time soon given NUCs use SODIMMs and there's been no hint of 64GB SODIMM modules ... so until that happens, I think 64GB will be the max
Tom says
You state in the text (not the bullet list) that the Panther Canyon supports Thunderbolt 4. Also, the Info on USB 3.1 Gen2 / USB 3.2 Gen 2 is contradictory to what Intel has up on their page. I'm confused: https://www.intel.com/content/www/us/en/products/compare-products.html/boards-kits?productIds=205029,205607
William Lam says
That was fixed earlier. Panther Canyon does NOT support TB4 and USB is 3.1 Gen 2
Pierre says
You’ve still got almost a whole paragraph on TB4 for Panther Canyon. It begins: “The IO connectivity on the system has also been updated to support Thunderbolt 4 / USB 4 ports (one in the front and one in the back), this is a nice upgrade…”
Curtis B says
Hmm. I was considering a second Frost Canyon i7 (I like the 6 cores) and adding a USB-C 2.5GbE port. The 11 Gen i7 is only a quad core (though a bit faster than the 10th gen from what I've read), though the prospect of built in dual 2.5GbE ports is tempting. Decisions...
Mohammed says
Same here difficult choice !!!
BogBeast says
Will the Fling update for 2.5GbE support other Multi-gig cards? or just the NUCs?
I can't get my Intel X550-T2 to connect at 2.5 or 5GbE in my homelab machines - just 1GbE or 10GbE
William Lam says
The Fling will add support for PCI IDs found in the upcoming NUC 11. For other devices, we can consider based on demand from community and as time permits.
For your negotiation issues, try using just Windows or Linux and see if the device is acting properly. If not, it's not an ESXi driver issue (since it already detects it) but rather your setup
BogBeast says
Hello William, thanks for the reply.
Yup, installed the card in a Windows host and it happily negotiates at 2.5 and 5GbE.
I posted more over at https://communities.vmware.com/t5/vSphere-Hypervisor-Discussions/Multi-Gig-2-5-5GBe-Support-in-VMware-ESX-7-for-the-Intel-X550-T1/td-p/2821662 and in the Intel Technology network
Would be more than pleased to provide such things as PCI IDs
William Lam says
Yes, please provide the PCI ID + full details brand/make of the device.
The planned Fling will be for PCIe-based Network Adapters and as you've already mentioned, we (VMware) do not have any support for multi-gig, so this would be the first time as part of this upcoming Fling and initial enablement is for the Intel NUC 11
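(For anyone else wanting to share the same details, here is a quick PowerCLI sketch that pulls the PCI vendor/device/sub-IDs for the network controllers on a host; the hostname is a placeholder:)

# List PCI network controllers with their vendor/device/sub-device IDs
Get-VMHostPciDevice -VMHost (Get-VMHost -Name "esxi-01.lab.local") -DeviceClass NetworkController |
    Select-Object VendorName, DeviceName, VendorId, DeviceId, SubVendorId, SubDeviceId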
BogBeast says
Hi William,
Here you go:
Intel® Ethernet Converged Network Adapter X550-T2
https://ark.intel.com/content/www/us/en/ark/products/88209/intel-ethernet-converged-network-adapter-x550-t2.html
Partnumber: X550T2
UPC: 735858307352
EAN: 5032037080699
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------ ------------ ------ ------------ ----------- ----- ------ ----------------- ---- -----------
vmnic2 0000:61:00.0 ixgben Up Down 0 Half b4:96:91:77:2b:e8 1500 Intel(R) Ethernet Controller 10G X550
vmnic3 0000:61:00.1 ixgben Up Down 0 Half b4:96:91:77:2b:e9 1500 Intel(R) Ethernet Controller 10G X550
==+PCI Device :
|----Segment.........................................0x0000
|----Bus.............................................0x61
|----Slot............................................0x00
|----Function........................................0x00
|----Runtime Owner...................................vmkernel
|----Has Configured Owner............................false
|----Configured Owner................................vmkernel
|----Vendor Id.......................................0x8086
|----Device Id.......................................0x1563
|----Sub-Vendor Id...................................0x8086
|----Sub-Device Id...................................0x0001
|----Vendor Name.....................................Intel(R)
|----Device Name.....................................Ethernet Controller 10G X550
|----Device Class....................................512
|----Device Class Name...............................Ethernet controller
|----PIC Line........................................11
|----Old IRQ.........................................255
|----Vector..........................................0
|----PCI Pin.........................................1
|----Spawned Bus.....................................0
|----Flags...........................................12289
==+BAR Info :
|----Module Id.......................................42
|----Chassis.........................................0
|----Physical Slot...................................3
|----Numa Node.......................................3
|----VmKernel Device Name............................vmnic2
|----Slot Description................................CPU SLOT3 PCI-E 3.0 X8
|----Device Layer Bus Address........................s00000003.00
|----Passthru Disabled...............................false
|----Passthru Capable................................false
|----Parent Device...................................PCI 0:96:1:3
|----Dependent Device................................PCI 0:97:0:0
|----Reset Method....................................1
|----FPT Shareable...................................true
Many Thanks
Steve Ballmers says
Great site William!
Can you please post some comparison of the ASRock Mini 4x4 Box 4800u vs Intel Nuc 11th gen Slim?
Talk about performance and pictures of them side by side including the power adapters?
Keep up the great work!
Tommy Kuhler says
Would you mind posting a screenshot of the tiger canyon NUCs ESXi passthrough PCI-Devices screen?
Wanna know which devices can be passed through...
Michelle Laverick says
Is that fling for the NUC 11th Gen online yet? I rather liked the fact the 10th Gen's NIC was recognised natively by ESXi 7.x
William Lam says
Not yet
sealinsd says
Can you share the ESXi installation ISO for the NUC 11 or the ESXi driver files for the Intel I225? I just bought a NUC 11, but I cannot install ESXi because there is no network card driver. Thank you very much
William Lam says
Did you actually read the article? It mentions a (yet to be released) driver will be required 🙂
sealinsd says
I have read the article. Because I am anxious to use this NUC, I will wait for the driver to be released. Thank you for your article and tutorial.
William Lam says
The Fling has been released https://www.williamlam.com/2021/02/new-community-networking-driver-for-esxi-fling.html
sealinsd says
Tanks^_^
sealinsd says
Thanks with the lost h ^_^
Craig says
Looks like your content has been copied lock stock and all - https://nucfans.com/p/712.html
alexander says
Is it possible to buy the second 2.5GbE LAN interface separately?
Chris says
Here I have found only one GbE adapter.
https://www.gorite.com/catalogsearch/result/?q=Intel+NUC+Front+Panel
Gustavo says
Hi William,
so ESXi will be fine on NUCs without ECC memory?
Thank you!
William Lam says
ECC memory is NOT a requirement to run ESXi, but it is certainly recommended in general for x86 platforms, especially for officially supported platforms.
Outside of the recent Intel NUC 9 Pro, all NUCs do NOT support ECC memory, meaning you do not even have a choice 🙂
dazza says
Hi William. Great work. I'm looking for the cheapest route to obtaining an ESXi cluster with vSAN. Would love your recommendation here. Have you come across/considered anyone offering cost-effective cloud-hosted ESXi cluster labs?
stich86 says
Hi William,
I want to get a NUC11TNHv50L (the only one that I can find here in Italy) to set up ESXi 7.0 (single host) to run multiple VMs (3 at the moment) and consolidate some of my home stuff (firewall with OPNsense, Ubuntu running Home Assistant and a W10 VM for work stuff). I need some clarification:
- is it possible to configure this NUC to power on after a power loss?
- can I pass through the Intel AX card to a Linux VM? I want to use the Bluetooth adapter for my home automation
- can I pass through a USB device to a Linux VM? I need to pass a Zigbee adapter
- are IPMI drivers available to see the hardware status on ESXi?
Thanks in advance!
Tom C says
1. Yes
2. Maybe
3. Maybe
4. No
Marco says
Hi William, thanks for the great article. From what you write and what is found in the Intel specifications, the NUC11TNHi7 or NUC11TNHv7 (for example) models essentially offer the choice of three configurations:
1. 3 total internal drives (2 x M.2 slots + 1 x 2.5" SATA drive)
2. 2 total internal drives (1 x M.2 slot + 1 x 2.5" SATA drive)
3. 2 total internal drives (1 x M.2 slot + 1 x 2.5" SATA drive) + expansion slot for dual Ethernet
Correct?
William Lam says
That's correct, depending on your needs, you can select from one of those configurations
alexg says
Hi William, I also had a NUC11TNHv50L come in for testing. With the Fling driver, both network cards are detected correctly. However, I have a difficulty with vPro. When I run the ESXi installer with a VNC/KVM session connected, or start the ESXi install, there is a PSOD. If I let the installer or the server start up first and connect to the server with VNC after a few minutes, everything is okay. Maybe I have the wrong settings, but I have already tried a few combinations.
WHNS says
Hi, I have exactly the same problem with my NUC11TNKv7. I managed to install 7U1 with the USB NIC Fling, so it seems to be a problem with the NIC driver and vPro. I am currently running with a USB NIC.
Another problem is that passthrough of the Iris Xe GPU does not work. I can pass the Iris Xe to Windows 10 but the Intel driver always reports code 43.
I already tried
hypervisor.cpuid.v0 = FALSE and smbios.reflectHost = TRUE
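(For anyone wanting to try the same thing, a minimal PowerCLI sketch of applying those advanced parameters to the VM; the VM name is a placeholder:)

# Equivalent of adding hypervisor.cpuid.v0 / smbios.reflectHost to the VM's .vmx file
New-AdvancedSetting -Entity (Get-VM -Name "Win10-IrisXe") -Name "hypervisor.cpuid.v0" -Value "FALSE" -Confirm:$false
New-AdvancedSetting -Entity (Get-VM -Name "Win10-IrisXe") -Name "smbios.reflectHost" -Value "TRUE" -Confirm:$false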
deividfortuna says
I'm having the same issue :/
William Lam says
I've not personally played with vPro; I'd need to double check whether the kit I've got even has access to vPro. Let me see if there are any particular settings that need to be applied or whether this is reproducible on our end.
For the PSOD, is there any way to get a support bundle w/core dump when this happens? This might help the Engr team better understand where the issue might be
alexg says
Unfortunately, I can't send a coredump because I only had the device for a few days for testing. It would be great if you could find a solution. Because with vPro, the NUC would become a real alternative to the E200-8D in the Homelab. Maybe WHNS can provide a DUMP and additionally the info if the USB Fling was the solution for the PSOD. Maybe vPro supplies a USB network adapter or something similar with an active KVM/VNC connection....
WHNS says
Hi,
I don't know how to get the core dump when booting from the USB installer/ISO.
Here is a screenshot of the PSOD:
https://pasteboard.co/JTYaWvN.png
WHNS says
http://folio.ink/TwYdxk
William Lam says
Let me share this with Engineering to see if they've got any thoughts ...
It sounds like there may be two optional workarounds for now:
1) Let the ESXi installer fully boot up prior to connecting to KVM. If you can let us know this is a functional workaround, that would be helpful
2) Connect a USB NIC and use that to connect to the KVM which doesn't run into this problem
WHNS says
Hi,
I can't reply to your last answer, so I'll do it here.
1) I was able to work around the problem by disconnecting from vPro Remote Desktop after selecting the USB installer to boot from (F10 Boot Menu), then waiting ~5 min and reconnecting again.
2) I also do not think that Intel vPro can run from a USB NIC. It only works with the integrated NIC of the NUC 11, so that workaround is not possible.
William Lam says
With the help of WHNS, we were able to identify the issue and verify the fix. The Engr team will work on producing an updated version of the driver that'll resolve the vPro issue. They're currently busy with other higher priority items, but we'll try to get that released as soon as we can.
alexg says
Hi William,
Great news. If it works, the new cluster will run with NUCs 🙂
Is it possible to connect an SSD via USB to install ESXi on it? With the USB stick I've no scratch partition and so on.
PaulMUC says
Hi William,
I just ordered a NUC11TNHi50W and would also like to use the NIC expansion module, but I can't find it anywhere; it's not even mentioned on Intel's NUC site. Could you please provide a part number, a name or a link where you got yours from? Or possibly a source here in Europe?
Thanks a lot...
William Lam says
Hi Paul,
I reached out to Intel and they mentioned that for the European region you can reach out to one of their partners called GoRite https://www.gorite.com/contact-us who should be able to help.
stich86 says
Hi William,
I don't know if you can help me. I've got an 11th Gen NUC with i5 + vPro.
I'm running ESXi 7.0u1. On the first pNIC I've one vSwitch with PG "VM Network" and another one with VLAN ID 2 for "IoT Network"; on the second pNIC just a vSwitch with PG "WAN Network", and this port is directly connected to my ISP router for internet access. I'm running OPNsense as a VM and assigned the PGs in this order:
- VM Network -> Internal LAN, acts as DHCP server
- IoT Network -> IoT LAN, acts as DHCP server
- WAN Network -> WAN with PPPoE client
Until last week everything was fine. After a reboot to add a new SSD to the NUC, ONLY DHCP on pNIC1 and on the untagged PG (so the VM Network) stopped working. Doing some debugging, it looks like the Intel AMT solution broke something on pNIC1. This was confirmed by swapping the pNICs on the vSwitches: when moving the vSwitch that has the VM Network PG to pNIC2 (and swapping on the physical switch as well), DHCP works without any issue. Another strange thing: after the VMs start on the hypervisor, the Intel AMT IP stops responding to any type of traffic (ICMP, TCP, UDP).
Do you know if there can be an issue using AMT on an interface where DHCP Server is running?
Thanks!
William Lam says
I've not done anything with Intel AMT nor do I have a system with the functionality, so I won't be of much help. You may want to post on the Intel forums to see if anyone can help. I will say there is a known issue where connecting to the vPro interface during ESXi bootup can cause a PSOD (but it seems like your issue is after it's started, so most likely not related)
MIAO WANG says
Hi William
I installed ESXi 7.0 U1c on an 11th Gen NUC with the Community Networking Driver Fling.
It does not run normally (cannot find a compatible network driver) every time it starts up from a powered-off state. I have to restart the machine and then it comes back to normal. Do you have any suggestions?
William Lam says
I'm not sure what you mean by not running normally ... did you create a custom image that contains the required driver? There have been at least a couple dozen customers who have been able to get it working, so it's possible it could be a hardware issue; you may want to ensure all BIOS/firmware is up to date.
MIAO WANG says
Thank you for the reminder. I updated to the latest BIOS and now I can boot ESXi normally.
Kav says
I had the same issue (NUC11PAHi5). I successfully installed ESXi 7U2 with a custom ISO containing the Fling driver. However, if you booted from a shutdown state, the NIC would not get recognised, but if you rebooted, it would.
I also updated the firmware to the latest (0039) and this seemed to resolve it. Hooray!
MIke says
Hi.
Any idea how I can get this expansion module OR USB connectors for the internal headers? I was searching for some hours but found nothing.
Vincent says
I found this article and a few others online showing off the NUC 11 units, but in the US I'm not seeing them in-stock. Anyone have an idea of when the NUC 11 units are expected to be available in the US? For use as an ESXi system for a home lab are the NUC 11 units a significant upgrade over the NUC 10 units?
antonymaja says
Hi Will, I have a NUC 11 (NUC11PAHi5) and have injected the Fling driver into a vSphere 7 Update 2 zip to create and export my custom ISO. Everything works and the correct outputs show, however when I boot the USB it gets to the network adapters section and shows "No Network Adapters". I have my network adapter plugged into a UniFi switch and it shows that it's connected (LED is flashing).
Markus Brody says
Update your BIOS. Out of the box my NUC (NUC11PAHi5) had the 0035 BIOS; it wouldn't pick up the NIC with the Fling injected into a custom ISO (7.0.1), and presented errors when scanning for devices using a USB NIC.
I updated to the latest BIOS (0039) and it picked up both the onboard and USB NIC and installed successfully.
antonymaja says
You are amazing. Thanks so much! Didn't think of updating the BIOS. I updated to 0039 and ran the same custom ISO with the Fling injected into 7.0.2. The install went smoothly, however ESXi didn't pick up my newish HP mechanical keyboard so I had to use an older one.
Thanks again!
Cinvivo says
Hi William, can I add a 2nd WiFi module using the 2242 plus adapter, or is there something internally that would prevent me from running two M.2 WiFi modules? Thanks. Bernard
William Lam says
Any 2242 device should work, assuming it fits within the system
Sam says
Thanks for this great article William. My requirement for the NUC is more about setting up a Kubernetes cluster and not so much graphical usage. Typically more CPU is better for me, but I'm still torn between the Frost Canyon i7 (with hexa core) and the Panther Canyon i7 (with quad core). Any thoughts on this?
Kav says
Multiply the number of cores you have by the clock speed; this is your CPU 'capacity', and the higher the better for your case. Without knowing the clock speeds, I would guess that the hex core will give you a higher figure. Keep in mind, for single-threaded applications this makes no difference and single-core clock speed is the most important factor in that case.
ChrisD says
Hey William, have you encountered any issues where issuing a shutdown from ESXi results in a reboot immediately afterwards?
I'm running BIOS version 0056 on a NUC11TNHi5 with ESXi 7.0.2 Build 17867451.
I've tried various combinations of BIOS settings but no change.
Booting an Ubuntu live CD and running a 'shutdown -h now' results in a proper shutdown.
flyzipper says
Thanks for the write-up!
Any insights into vGPU passthrough for Intel XE graphics using ESXi?
It looks like Workstation 16 can do it, with the appropriate tools installed on the Windows 10 vm, but haven't found confirmation for ESXi.
William Lam says
See https://williamlam.com/2021/07/passthrough-of-intel-iris-xe-integrated-gpu-on-11th-gen-nuc-results-in-error-code-43.html
flyzipper says
Thank you!
I should have kept reading your site 🙂
kang says
Hello, is your NUC 11 running ESXi 7.0 able to shut down the host normally?
My NUC 11 model is NUC11TNHv5. When I am running ESXi 7.0, I cannot shut down the host. When I click shutdown on the web console, it reboots automatically, as if I had clicked reboot. Do you know the possible reasons or how to troubleshoot the problem?
Steve says
Hello Kang,
the problem is also described here:
https://www.sbcpureconsult.co.uk/2021/04/12/lab-problems-with-intel-nuc-11th-generation-hardware-with-vmware-esxi-7-0-1/
But no good solution so far.
Regards
Steve
veilus says
I have the same issue and it seems like it will never be fixed, seeing as that post is from April 2021.
Tom C says
It's a shame but I think you're right. No one seems to care. No big surprise though, Intel doesn't market these things as virtualization hosts. I am happy that I can run vSAN on a NUC again since VMware killed SD and USB drives as boot devices.
krinix_rog says
I installed a customized ESXi ISO with the community driver on my new NUC11PAHi7.
It has an Intel Ethernet Controller I225-V
But it shows
Link speed: 1000 Mbps
Driver: cndi_igc
Have I got the wrong driver installed? How do I get 2.5Gbit/s for vmnic0?
I need help with this.
Steve says
Hello William,
thank you for this great website. What memory option are you using, 2x 32GB or 1x 64GB?
Which manufacturer do you have experience with together with the NUC 11?
Regards
Steve
William Lam says
All Intel NUCs use SODIMM memory and the largest capacity for a single DIMM is 32GB (64GB doesn't exist, sadly)
You can check out https://williamlam.com/2019/03/64gb-memory-on-the-intel-nucs.html for memory options
Akushida says
Hi William,
Need your insights please! My NUC BNUC11TNHI70L00 is running ESXi 7.0U3d-19482537, BIOS version TN0064. It runs into the shutdown issue as stated above; ESXi is not able to actually shut down the NUC. It seems that when 'Shut down' of an ESXi host is performed, the system ignores the BIOS power setting (e.g. to remain off, power on, etc.) and will immediately restart the operating system back to a running condition (almost as if a reboot instead of shut down were chosen). Any thoughts or recommendations would be greatly appreciated!
William Lam says
I noticed the same with the 11th Gen; it's possibly a change with their BIOS. I'd recommend posting on the Intel NUC community forums and seeing if anyone can help
Federico says
It depends on the ESXi shutdown method, but there seems to be no solution for all NUC 11 models. I have 2x NUC 11 i5 with the same problem.
BIOS is updated to the latest version.
Danny says
My install went pretty smoothly, but I have random lockups. I'm beginning to believe it is the NUC. I have swapped the SSD and memory. ESXi doesn't log anything and the screen will have static stripes when it occurs. The only thing you can do is hold the power button to reset. I assume that I could load a supported OS and if the issue occurs I could get warranty support/replacement?
Martin says
Hello William!
Great post that got me buying 3 NUC 11 Pros (NUC11TNHv5) for a vSAN + Tanzu cluster. My goal is to use a QNAP TB3-->10GbE adapter and an external SSD in a TB3 enclosure as the boot drive, with the internal M.2 for cache and a SATA SSD for capacity. I have an issue when using 2 TB3 devices at the same time on the WindowsToGo USB drive: the 2nd TB3 device I plug in isn't detected.
Were you able to use 2 TB3 devices at the same time on your NUC 11 Pro?
Thank you!
Ville says
Anyone having problems with the NUC 11 and a 2.5" SATA drive?
I have a Kingston A400 2.5" drive and it is not recognized in ESXi 8.0...
William Lam says
If you've checked your connections, then it's most likely due to drivers for the device, or lack thereof … I've got a NUC 11 Pro and SATA is fine; it's an Intel SSD
Ville says
Hmm.. OK. I have checked the cables and the disk is used in another system, so I think it is the same kind of driver issue. I have a couple of other SSDs to test. At least the Kingston A2000 NVMe SSD is working fine with the same machine, a NUC11TNHv50L Pro.
Ville says
Damn. This SSD issue was user error. The small cable from the board to the SSD bracket was pulled up a little bit, so the contact was not good. All good now!!
Loic says
Hi,
I just tried installing vSphere 8 on a NUC11PAHi5 but it gets a purple screen on each reboot or shutdown. Do you have any experience with that?
Baz Curtis says
Great article. Can you use the Thunderbolt ports for storage for ESXi or as an extra network port?
William Lam says
Yes. There's both TB Storage & Network options. See https://williamlam.com/2015/01/thunderbolt-storage-for-esxi.html and https://williamlam.com/2019/04/new-thunderbolt-3-to-10gbe-options-for-esxi.html
Duncan John Butcher says
Hi William,
First off, thanks for all the information you've provided so far, it's really helped me build my home lab. I'm having such a specific issue that you may not be able to help, but perhaps you can point me in the right direction.
I have 3 Intel NUC 11's which I have configured as a 3 node vSAN cluster. Currently I am running:
vCenter 8.0.2 - 22617221
ESXi 8.0.1, 22088125
When attempting to use LCM to update via a cluster-based image to 8.0 U2 - 22380479, I get the following errors on each host during the compliance check (remediation also fails):
vSAN health test 'SCSI controller is VMware certified' reported for cluster 'VSANC1'. Check the VSAN health.
vSAN health test 'NVMe device is VMware certified' reported for cluster 'VSANC1'. Check the VSAN health.
I get it, these devices are not VMware certified, but it's a lab so I don't care. But can I suppress or disable these alerts in order to allow the patching to continue? I'd imagine if anyone else has had this issue they might be here; googling turns up KBs which are not related to ESXi 8.0.1, 22088125.
Duncan John Butcher says
After doing some additional poking around I found a ridiculously simple fix for this issue:
From the cluster --> Monitor tab --> vSAN --> Skyline Health
Review each warning and select silence.
Hope this helps someone!
Thomas says
Got a cheap Intel NUC 11 Enthusiast; will it support ESXi 8U2 out of the box or do I need the Fling driver again?
Would it be possible to pass through the Nvidia card to a Windows 10 or 11 VM?