I am super excited to announce the release of a new Community Networking Driver for ESXi Fling! The idea behind this project started about a year ago when we released an enhancement to the ne1000 driver as a community update which enabled ESXi to recognize the onboard network adapter for the Intel 10th Gen (Frost Canyon) NUC. Although the Intel NUC is not an officially supported VMware platform, it is extremely popular amongst the VMware Community. In working with the awesome Songtao, we were able to release this driver early last year for customers to take advantage of the latest Intel NUC release.
At the time, I knew this would not be the last driver-compatibility issue we would encounter. We definitely wanted an easier way to distribute the various community networking drivers, packaged into a single deliverable that customers can easily consume, and hence this project was born. In fact, it was quite timely, as I had just received engineering samples of the new Intel NUC 11 Pro and Performance (Panther Canyon and Tiger Canyon) at the end of 2020, and work was needed to enable the onboard 2.5GbE (multi-gigabit) network adapter, which is a default component of the new Intel Tiger Lake architecture. As reported back in early Jan, Songtao and his colleague Shu were successful in getting ESXi to recognize the new 2.5GbE network adapter, and that work has also been incorporated into this new Fling.
In addition, we started to receive reports from customers that, after upgrading to newer ESXi 7.0 releases, the onboard network adapters for the Intel 8th Gen NUC were no longer functioning. In an effort to help customers with this older platform, we have also updated the original community ne1000 driver to include the relevant PCI IDs within this Fling.
The new Community Networking Driver for ESXi is for PCIe-based network adapters and currently contains the following two driver modules:
- igc-community - which adds support for Intel 11th Gen NUCs and any other hardware platform that uses the same 2.5GbE devices
- e1000-community - which adds support for the Intel 8th Gen NUC and any other hardware platform that uses the same 1GbE devices
For a complete list of supported devices (VendorID/ProductID), please take a look at the Requirements tab on the Fling website. As with any Fling, this is being developed and supported in our spare time. In the future, we may consider adding other types of devices based on feedback from the broader community. I know Realtek-based PCIe NICs are something that many have been asking about, and as mentioned back in this blog post, I have been engaged with the Realtek team; hopefully in the near future we may see an ESXi driver that can support some of the more popular devices in the community. If there are other PCIe-based network adapters that could fit the Fling model, feel free to leave a comment on the Fling website and we can evaluate them as time permits.
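If you're not sure what your adapter reports, you can pull the PCI VendorID/DeviceID from an existing ESXi host with PowerCLI and compare it against that list. A minimal sketch, assuming a reachable host (the server name below is a placeholder):

```powershell
# Connect directly to the host (placeholder name) and list its network
# controllers with their PCI vendor/device IDs in hex, to compare
# against the Fling's Requirements tab.
Connect-VIServer -Server "esxi01.lab.local"

Get-VMHostPciDevice -VMHost "esxi01.lab.local" -DeviceClass NetworkController |
    Select-Object Name,
        @{Name = "VendorId"; Expression = { "{0:x4}" -f ($_.VendorId -band 0xffff) }},
        @{Name = "DeviceId"; Expression = { "{0:x4}" -f ($_.DeviceId -band 0xffff) }}
```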
Jose Gomes says
Great stuff!
Conor says
Lovely. Now we need one to add back the 10Gb support that was dropped from 6.7 to 7.
Gary says
Thank you very much, I'm a happy NUC 8 user 🙂
Phooba says
I picked up an ASUS PN50 thinking they might be great lab machines, but it seems it suffers from NIC driver issues as well.
William Lam says
The majority of the AMD SFF kits use Realtek-based NICs, which don't have native ESXi drivers. See my last paragraph 🙂
Phooba says
Would be exciting if that were to happen. I know there are workarounds for Realtek on 6.5, but I'm struggling to get Image Builder to work; still in the very early stages with all this.
calc says
Excited to get the new RTL8125B fling driver for 7.0 👍
Stuck on 6.7 for now until it becomes available.
Unfortunately very few Ryzen boards have I225 as an option or I would have gone that route.
Gnanaprakasam Karuppasamy says
Where can we buy the 11th Gen Intel NUC (Panther Canyon & Tiger Canyon)? I don't see availability on Amazon.
William Lam says
My understanding is that they're rolling out from now until the end of the month. There were some production challenges, so the Panther Canyon may (initially) only be available in the EMEA/APAC regions, while the Tiger Canyon should be available everywhere. I know several customers have been able to purchase through local suppliers/resellers. SimplyNUC is one such vendor that's shipping, so you can check there or look at other Intel NUC resellers.
Rob Mallicoat says
I JUST received my NUC 11 Beast Canyon on 1/7/2022 🙂 I used an Intel quad-port 1Gb NIC for the install, as the 2.5Gb Intel NIC was not found. Looking for the Fling for it... having troubles, as this is all new to me after learning ESX 8-10 years ago, then starting at MSFT for the last 8 years and moving to Hyper-V to be cool 😛
William Lam says
Welcome back to the VMware eco-system 🙂
If you've already got ESXi installed, you simply need to install the Community Networking Driver for ESXi Fling and reboot. If you wish to install ESXi with the driver, then you'll need to create a new ESXi ISO that contains it. To do so, you will need access to vCenter Server, since that provides the Image Builder service (which is exposed via both a UI and a CLI). If you prefer the UI, check out https://williamlam.com/2021/03/easily-create-custom-esxi-images-from-patch-releases-using-vsphere-image-builder-ui.html for the steps, or if you prefer the CLI using PowerCLI, check out https://guido.appenzeller.net/2021/12/11/installing-esxi-on-an-intel-nuc11/
It sounds like you've already got ESXi installed, so there's no need to create a custom ISO unless you want one; simply install the driver following the instructions found on the VMware Fling site.
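For readers who go the custom-ISO route, the PowerCLI side boils down to a handful of Image Builder cmdlets. A minimal sketch, where the depot file names, profile names, and output path are placeholders for whichever build you're working with:

```powershell
# Load the stock ESXi offline bundle and the Fling's offline bundle
# (file names are placeholders for the builds you downloaded).
Add-EsxSoftwareDepot "C:\ISO\VMware-ESXi-7.0U2a-17867351-depot.zip"
Add-EsxSoftwareDepot "C:\ISO\Net-Community-Driver.zip"

# Clone the standard image profile and allow CommunitySupported packages,
# since the Fling's driver ships at that acceptance level.
$imageProfile = New-EsxImageProfile -CloneProfile "ESXi-7.0U2a-17867351-standard" `
    -Name "ESXi-7.0U2a-NUC" -Vendor "homelab"
Set-EsxImageProfile -ImageProfile $imageProfile -AcceptanceLevel CommunitySupported

# Add the Fling's driver package, then export a bootable ISO.
Add-EsxSoftwarePackage -ImageProfile $imageProfile -SoftwarePackage "net-community"
Export-EsxImageProfile -ImageProfile $imageProfile -ExportToIso `
    -FilePath "C:\ISO\ESXi-7.0U2a-NUC.iso"
```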
Rob says
Thanks. Downloaded the NUC 11 network Fling, installed it from esxcli, rebooted, set up a switch port, and am using it now. Thx
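For anyone following along, that host-side install can also be driven through PowerCLI's esxcli passthrough rather than an SSH session. A sketch, with the host name and datastore path as placeholders; the install instructions on the Fling site remain the authoritative reference:

```powershell
# Get an esxcli (v2) handle for the host and apply the Fling's offline
# bundle, which was copied to a datastore beforehand.
$esxcli = Get-EsxCli -VMHost "esxi01.lab.local" -V2
$esxcli.software.component.apply.Invoke(@{
    depot = "/vmfs/volumes/datastore1/Net-Community-Driver.zip"
})
# Reboot the host afterwards so the new driver modules load.
```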
Pawel says
Great job!
Thanks.
Peter says
Perfect, thanks a lot. Really appreciate your effort, keep going *thumbs up*
Nexxic says
Awesome! Still waiting for my 11th gen NUC, but trying to prepare by creating a custom ESXi ISO image.
First time doing this via PowerCLI, and it might be a stupid question, but which software package should be added to the new custom image? I've imported the net-Community-driver and ESXi-7.0.0 zip file software depots, but I can't find the igc-community software package. There is one called net-community, and I can see one called ne1000.
Do you think there's something I've missed?
William Lam says
The package name that you'll want to add is "net-community"; it contains BOTH the igc-community and e1000-community modules. It's also much easier to use the Image Builder UI in vCenter: a couple of clicks and you're done.
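One quick way to confirm the package name once both depots are loaded is to list what they expose; a small sketch along these lines:

```powershell
# List packages whose names mention "community" to confirm the exact
# name to pass to Add-EsxSoftwarePackage. The single net-community
# package carries both the igc-community and e1000-community modules.
Get-EsxSoftwarePackage | Where-Object { $_.Name -like "*community*" } |
    Select-Object Name, Version, Vendor
```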
Grzegorz Kulikowski says
Just wanted to confirm that with the latest version of the driver jumbo frames are working. With the previous version it was not working for me (NUC11TNHv50L).
Grzegorz Kulikowski says
Spec: NUC11TNHv5
BIOS: 0054 (latest at the moment of writing)
So, it looks like either I have done something wrong or there is some issue going on. It does not matter whether one uses the old version of the igc-community driver or the newest version of the Fling; the behavior is still the same. If one wants to reproduce this:
1) Configure the NUC, enable both network cards
2) Configure MEBx (Ctrl+P)
3) Install ESXi
At this point everything works: you can connect to AMT using, for example, MeshCommander; you can connect to ESXi, etc.
4) When one switches the MTU on the vSwitch to 9000 (a PowerCLI equivalent of this step is sketched below), both internal network cards go down. The LEDs are off, and esxcfg-nics -l shows both of them as down.
5) Reboot
6) Before ESXi loads up, we can connect to AMT, but the moment ESXi kicks in, the cards go down again, and one can't ping ESXi or use AMT.
Workaround:
Power off the NUC and unplug the Ethernet cables from both ports.
Power on the NUC and wait until ESXi is fully loaded.
Attach the network cables.
AMT is online, ESXi is online, jumbo frames can be configured on the vmk, and they are working just fine.
I have no idea whether this is an AMT problem, an ESXi problem, or a driver problem.
Does anyone have any idea what is going on, or whether I have misconfigured anything here?
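As an aside, step 4 above can be reproduced from a PowerCLI session rather than the host shell; a one-line sketch, with the host and switch names as placeholders:

```powershell
# Raise the MTU of a standard vSwitch to 9000 (jumbo frames).
Get-VirtualSwitch -VMHost "esxi01.lab.local" -Name "vSwitch0" |
    Set-VirtualSwitch -Mtu 9000 -Confirm:$false
```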
Ben says
Any update on Realtek support being added?
Thanks
William Lam says
No
Sergio Kappel says
Hi William,
Thanks for sharing this information with the community. I can confirm that this Fling works with a Lenovo ThinkCentre M70q (003VMH) that comes with a built-in Intel I219-V network card.
In order to get ESXi 7.0u2a (build 17867351) installed on this tiny computer, follow these steps:
- Download the community network driver fling: https://flings.vmware.com/community-networking-driver-for-esxi
- Create a custom ESXi ISO with the vCenter Image Builder UI (under Auto Deploy) or with a PowerCLI script
- Disable Secure Boot in the BIOS
- Create a bootable USB stick with Rufus and use the custom ESXi ISO as your source.
- Install ESXi 7.0.2 from the USB stick.
Coy says
Hello,
I was just wondering if this is normal behavior. I have a NUC 11 and managed to install ESXi already. But what I noticed is that after I shut down ESXi, the next time I turn it on it won't detect the network card, and I have to reboot, which makes the NUC detect the network card again.
Power on the NUC + reboot is what I am doing to make it work. Is this expected, or am I doing something wrong? Thanks in advance
William Lam says
No, this is not expected and I've not seen this with my setup or heard others running into this. You may want to double check that you've got the latest firmware/BIOS update on your NUC
Coy says
Hell yeah! I honestly hadn't thought of that. I've updated the UEFI and now it's fixed!
Thorsten Drönner says
Is it possible to add support for the e1000e - 0x150c?
Freddy says
Hello! Just got my hands on the Intel NUC11PAH model and, little did I know, the NIC driver needs to be added. How do I add the NIC driver so ESXi can be installed? During the boot install of ESXi 7.0.3, no network adapters are detected.
Note: I do not have vCenter in my home lab environment. What would be an easier way to add the NIC driver?
Thanks!
Paul Caffrey says
Does anyone know if this works on U3?
John says
Just wanted to share my experience in case anyone runs into the same issues I had. I was building a vCenter cluster using 9 x 8th gen NUC8v5PNK NUCs. I was able to install ESXi 7.0U3f on the NUC devices and somehow was able to deploy vCenter to one of the NUCs, but was unable to add the remaining hosts to the cluster. vCenter gave an unhelpful "Unspecified condition" error when trying to deploy/communicate with the host and vCenter agent. I couldn't upload files to ESXi on any of the systems due to the issues with the built-in network adapters. I tried updating to the latest BIOS/firmware version available.
I built a custom ESXi 7.0U3f image using the PowerCLI tools, adding this e1000-community driver and removing the VMware ne1000 driver from the profile. Even after installing this image on all the NUCs, I continued to have the same network issues. It turns out there's an obscure setting in the BIOS that I disabled to fix the network issue: under the Power tab -> Secondary power settings, the PCI-e ASPM option needs to be disabled. This is the PCI-Express Active State Power Management feature. I'm guessing that when the NIC didn't see any activity for more than a few seconds, it was being put into some kind of low-power state.
Disable PCI-express Active State Power Management (PCI-e ASPM) in the BIOS before deploying anything.
Ben says
Wow, I have been struggling with this for days. John, you just saved me so much headache.