Many in the VMware Community, including myself, started with the classic 4x4 Intel NUC for running a VMware homelab. Over the years, this powerful little Intel NUC has continued to enable a wide variety of new VMware use cases, from running vSphere, vSAN, NSX and Tanzu to even vRealize (now Aria). It feels like it was just yesterday that I switched from an Apple Mac Mini to the latest Intel NUC (6th Generation) to build my new vSphere/vSAN Homelab, yet that was more than 6 years ago! 😲
While Intel continues to expand and grow their "NUC" portfolio to include many other form factors, the classic 4x4 design still holds a special place for many people in the VMware community. The classic Intel NUC is not only small and portable but also extremely capable, especially with the last few releases, which makes it an ideal kit for those just getting started with a new VMware Homelab.
If you are in the market for an upgrade this year, definitely check out the latest refresh of this classic 4x4 design with Intel's recent launch of the Intel NUC 12 Pro, codenamed Wall Street Canyon.
Let's take a closer look at this new Intel NUC 😀
Compute
There are five different CPU configurations to select from across Intel Core i7, i5 and i3 processors, two of which include support for Intel vPro.
- Intel 12th Generation Intel® Core i7-1270P (vPro)
- 12 Processor Cores (4P+8E), 16 threads, 18MB Intel® Smart Cache, 35W TDP
- P-Cores: 4.8GHz Turbo; E-Cores : 3.5GHz Turbo
- Intel 12th Generation Intel® Core i7-1260P
- 12 Processor Cores (4P+8E), 16 threads, 18MB Intel® Smart Cache, 35W TDP
- P-Cores: 4.8GHz Turbo; E-Cores: 3.4GHz Turbo
- Intel 12th Generation Intel® Core i5-1250P (vPro)
- 12 Processor Cores (4P+8E), 16 threads, 12MB Intel® Smart Cache, 35W TDP
- P-Cores: 4.4GHz Turbo; E-Cores: 3.3GHz Turbo
- Intel 12th Generation Intel® Core i5-1240P
- 12 Processor Cores (4P+8E), 16 threads, 12MB Intel® Smart Cache, 35W TDP
- P-Cores: 4.4GHz Turbo; E-Cores: 3.3GHz Turbo
- Intel 12th Generation Intel® Core i3-1220P
- 10 Processor Cores (2P+8E), 12 threads, 12MB Intel® Smart Cache, 20W TDP
- P-Cores: 4.4GHz Turbo; E-Cores: 3.3GHz Turbo
Similar to the Intel NUC 11 Pro, the Intel NUC 12 Pro is offered in a "Slim" (K) or "Tall" (H) chassis option, with the latter supporting an optional expansion module that adds a secondary network interface and two additional USB ports. All kits support up to 64GB SO-DIMM (DDR4-3200), similar to previous Intel NUC generations.
For a more detailed breakdown across the various Intel NUC 12 Pro kits, please refer to the product brief.
Network
The built-in onboard network interface is an Intel i225 (2.5GbE), which is the same as previous Intel NUC models. It is automatically recognized when using ESXi 8.0, or by using the Community Networking Driver for ESXi Fling for those that wish to run earlier ESXi 7.0 versions.
For those interested in the Tall chassis option, you also have the ability to add a secondary Intel i225 (2.5GbE) expansion module that includes two additional USB ports. Again, ESXi 8.0 will automatically recognize the network adapter, or you can use the Community Networking Driver for ESXi Fling if you plan to run ESXi 7.0. A couple of purchasing options for the expansion module are SimplyNUC and Gorite.
If you need even more networking, you can take advantage of the two Thunderbolt 4 ports using these Thunderbolt 10GbE solutions for ESXi or look at USB-based networking by using the popular USB Network Native Driver for ESXi Fling.
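To quickly double check what ESXi has detected, you can list the network interfaces from the ESXi Shell. This is just a rough sketch; the vmnic name below is an example and will vary depending on which onboard, expansion or Thunderbolt/USB NICs you have attached:
# List all network interfaces ESXi has detected, along with driver, MAC address and link status
esxcli network nic list
# Show additional detail for a specific uplink (vmnic0 is an example name)
esxcli network nic get -n vmnic0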
Storage
There is support for 1 x M.2 PCIe x4 Gen 4 (2280) and 1 x M.2 SATA (2242) with the Slim chassis option. For those interested in vSAN, I recommend looking at the vendor KingShark, which offers a compatible M.2 SATA (2242) SSD; I have shared my experience with their 256GB SATA SSD in a previous blog post. Historically, the Slim chassis only supported a single M.2 NVMe device, and now with the extra 2242 slot, you can run vSAN while still getting the benefit of the Slim chassis option.
For those interested in the Tall chassis option, you also have the ability to add an additional 2.5" SATA SSD on the back of the chassis lid, which will future-proof your investment since you will now have up to three storage devices, one of which can be used to install ESXi and its OSData volume. Although ESXi can be installed on a USB device, that option has been deprecated and will be removed after ESXi 8.0, so it is something to really consider. See this blog post for additional considerations for vSphere 8.
If you need additional or more performant external storage options, you can also use the two Thunderbolt 4 ports and add these Thunderbolt M.2 NVMe solutions for ESXi, which will give you plenty more storage capacity.
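Once ESXi is up and running, a quick way to confirm that the M.2, SATA and any Thunderbolt-attached devices were all detected is from the ESXi Shell. A minimal sketch, assuming SSH or the local shell is enabled:
# List all storage devices ESXi can see (NVMe, SATA and Thunderbolt-attached)
esxcli storage core device list
# NVMe-specific view, handy for confirming the M.2 2280 device is claimed
esxcli nvme device list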
Graphics
Please see this blog post here for details on how to use the iGPU in passthrough mode with a VM.
ESXi
The latest release of ESXi 7.0 Update 3 installs on the Intel NUC 12 Pro without any issues, but it will require the use of the Community Networking Driver for ESXi Fling to recognize the onboard network devices. For those interested in running ESXi 8.0, no additional drivers are required as the Community Networking Driver for ESXi has been productized as part of the ESXi 8.0 release.
It is recommended to disable the E-cores within the Intel NUC BIOS, following the instructions HERE, to prevent ESXi from PSOD'ing due to the non-uniform CPU cores, which results in the error "Fatal CPU mismatch on feature". If for some reason you prefer not to disable either the P-cores or E-cores, you can add the ESXi kernel option cpuUniformityHardCheckPanic=FALSE to work around the issue, which needs to be appended to the existing kernel options by pressing SHIFT+O during boot up. Please see the video HERE for detailed instructions on applying the workaround.
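To summarize the workaround, here is a rough sketch of what the process looks like end to end; the esxcli step is what makes the setting persist across reboots once ESXi is installed:
# At the ESXi installer boot screen, press SHIFT+O and APPEND the option to the
# existing kernel options (do NOT remove the runweasel or cdromBoot entries):
runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE
# After ESXi is installed and booted, make the setting persistent from the ESXi Shell
# so subsequent reboots do not PSOD:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
# Verify the current value of the kernel setting
esxcli system settings kernel list -o cpuUniformityHardCheckPanic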
Does ESXi properly handle the two core types in these 12th-generation processors? I thought that Windows 11 was the only operating system that currently does.
I'd also like to understand if ESXi and its hosted VMs are stable on this big.LITTLE architecture.
Yes, I've been running workloads including VC for the past month+ without any noticeable issues
Did you have to make any changes in the BIOS to run the virtual workload?
No, the defaults should be fine as long as Intel VT is enabled. You may still want to customize based on your own needs, but the typical stock BIOS will work out of the box
Hi, how to order the additional LAN module?
nvm, found the correct kit that includes the additional LAN and USB ports. Look for NUC12WSHv50L.
The i7 Tall with dual LAN and vPro looks like an awesome machine - NUC12WSHv70L - but will it ever actually become available? Why don't they just sell the dual-LAN expansion module separately? This would simplify all the SKUs…
Make sure not to remove runweasel cdromBoot, else you will struggle a lot to get it to work. My boot.cfg statement is like this, without the quotes:
"kernelopt=runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE"
The issue I had was that I wasn't able to install ESXi on the hard drive; it just booted from the USB disk.
Hope this can help others save some hours 😉
Thanks for this tip! William directed me to this and it's installed now. My current issue is configuration changes. It's not saving anything I do. Assigning license, localcli command covered here, etc.
Seems like newer version of ESXi 7 doesn't like USB sticks anymore. Installed on NVMe and it's saving configs now.
I can't get ESXi 8 to install on a NUC12. Fatal CPU Mismatch.
During boot if I do the shift O and type: cpuUniformityHardCheckPanic=FALSE
it lets me continue, but it doesn't finish the install; basically I'm booted from the USB disk.
If I edit the boot.cfg on the USB when installing, it doesn't work at all and I get a PSOD: Fatal CPU Mismatch.
My boot.cfg statement is: kernelopt=runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE
Any help would be appreciated.
Thanks,
The above was the one line from my boot.cfg, this is what my boot.cfg looks like:
bootstate=0
title=Loading ESXi installer
timeout=5
prefix=
kernel=/b.b00
kernelopt=runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE
modules=/jumpstrt.gz --- /useropts.gz --- /features.gz --- /k.b00 --- /uc_intel.b00 --- /uc_amd.b00 --- /uc_hygon.b00 --- /procfs.b00 --- /vmx.v00 --- /vim.v00 --- /tpm.v00 --- /sb.v00 --- /s.v00 --- /atlantic.v00 --- /bcm_mpi3.v00 --- /bnxtnet.v00 --- /bnxtroce.v00 --- /brcmfcoe.v00 --- /cndi_igc.v00 --- /dwi2c.v00 --- /elxiscsi.v00 --- /elxnet.v00 --- /i40en.v00 --- /iavmd.v00 --- /icen.v00 --- /igbn.v00 --- /ionic_en.v00 --- /irdman.v00 --- /iser.v00 --- /ixgben.v00 --- /lpfc.v00 --- /lpnic.v00 --- /lsi_mr3.v00 --- /lsi_msgp.v00 --- /lsi_msgp.v01 --- /lsi_msgp.v02 --- /mtip32xx.v00 --- /ne1000.v00 --- /nenic.v00 --- /nfnic.v00 --- /nhpsa.v00 --- /nmlx5_co.v00 --- /nmlx5_rd.v00 --- /ntg3.v00 --- /nvme_pci.v00 --- /nvmerdma.v00 --- /nvmetcp.v00 --- /nvmxnet3.v00 --- /nvmxnet3.v01 --- /pvscsi.v00 --- /qcnic.v00 --- /qedentv.v00 --- /qedrntv.v00 --- /qfle3.v00 --- /qfle3f.v00 --- /qfle3i.v00 --- /qflge.v00 --- /rdmahl.v00 --- /rste.v00 --- /sfvmk.v00 --- /smartpqi.v00 --- /vmkata.v00 --- /vmksdhci.v00 --- /vmkusb.v00 --- /vmw_ahci.v00 --- /bmcal.v00 --- /clusters.v00 --- /crx.v00 --- /drivervm.v00 --- /elx_esx_.v00 --- /btldr.v00 --- /esx_dvfi.v00 --- /esx_ui.v00 --- /esxupdt.v00 --- /tpmesxup.v00 --- /weaselin.v00 --- /esxio_co.v00 --- /loadesx.v00 --- /lsuv2_hp.v00 --- /lsuv2_in.v00 --- /lsuv2_ls.v00 --- /lsuv2_nv.v00 --- /lsuv2_oe.v00 --- /lsuv2_oe.v01 --- /lsuv2_sm.v00 --- /native_m.v00 --- /qlnative.v00 --- /trx.v00 --- /vdfs.v00 --- /vmware_e.v00 --- /vsan.v00 --- /vsanheal.v00 --- /vsanmgmt.v00 --- /tools.t00 --- /xorg.v00 --- /gc.v00 --- /imgdb.tgz --- /basemisc.tgz --- /resvibs.tgz --- /esxiodpt.tgz --- /imgpayld.tgz
build=8.0.0-1.0.20513097
updated=0
So I figured out my issue. Editing the boot.cfg by adding "kernelopt=runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE" didn't work for me. I had to do the SHIFT-O, however when I was adding the "cpuUniformityHardCheckPanic=FALSE" I wasn't appending it. I was deleting the runweasel cdromBoot. Once I appended it, I still get a quick error, but it allows me to install ESXi to the SSD. After that I added the esxcli command "localcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE" to make the change permanent.
thanks for the feedback
boot.cfg is the same as SHIFT+O, you're basically editing the kernel options and as you said, you do NOT delete the existing lines (blog makes this clear) and you append.
For the boot.cfg, you need to edit the one located in the EFI directory, since you're most likely using EFI firmware
I didn't think this would be challenging, but it seems a number of folks may not be familiar with ESXi kernel boot options and have run into issues, so I've recorded a video that outlines this process to hopefully help those that need to see the steps visually. Should have that posted later today
Here's the blog post w/video demonstrating the workaround visually https://williamlam.com/2023/01/video-of-esxi-install-workaround-for-fatal-cpu-mismatch-on-feature-for-intel-12th-gen-cpus-and-newer.html
The TPM chip of the NUC 10 is not supported in combination with ESXi. Do you know whether the TPM chip of the NUC12 works?
Wondering that too. You need at least the vPro version, as this is the only version with TPM onboard.
See the new post on this forum: the TPM chip in the Intel NUC 12 Pro is not supported by ESXi 8.0. Disappointing
Nuts!
I noticed that when I powered on the NUC and added it to vCenter. Very disappointing.
"Host TPM attestation alarm"
"TPM 2.0 device detected but a connection cannot be established"
Honestly, I even have issues with TPM 2.0 devices on Dell servers, that came preinstalled with ESXi. -sigh-
Not all TPM devices are implemented equally ... see https://williamlam.com/2022/10/quick-tip-tpm-2-0-connection-cannot-be-established-after-upgrading-to-esxi-8-0.html for more details on the requirements for ESXi to use a TPM, this is really an issue with HW vendors and what they've chosen to implement
Why does your ESXi GUI screen show only 12 CPUs?
Shouldn't it be 16, with:
* 4 P-cores with hyperthreading --> 8?
* 8 E-cores
Sorry, my bad, this amount is displaying the overall logical count,
Thanks for your work
And again xD
I installed ESXi on a NUC12 today with the 1260P and only 12 logical cores appear :/
I just managed to get 7.0 U3g installed on my i5 12th gen NUC. I was also surprised to see that I see 12 logical processors rather than 16, so I wasn't sure if it was reading hyper-threading correctly.
Also, in the ESXi UI, when I look at hyperthreading it says "Yes, Disabled", which is weird.
Will a Samsung 980 Pro 2TB NVMe disk with heatsink fit in the small NUC version?
I am curious about GPU passthrough.
The NUC11 doesn't work.
Does this get error code 43 too?
Just a shout out to William as the tip about the "cpuUniformityHardCheckPanic=FALSE" really saved my frustration levels.
I have one question for you, when you share your UI screenshots, they always look nicer than what I ever see. I thought it was a vCentre theme or something, but your ESXi 8 screenshot was a stand-alone. Keep up the great work, us home lab geeks really appreciate it. Thanks very much. Colin.
Thanks Colin. The screenshot in this post is of a standalone ESXi host; the vSphere UI from vCenter Server will look a bit different. You can also customize the standalone ESXi Embedded Host Client UI, see https://williamlam.com/2022/10/quick-tip-accessing-new-custom-theme-editor-for-esxi-8-0-host-client.html
I am new to building a VMware lab with a NUC and just doing my research. What do I need the vPro platform for? What is it used for in general terms and for running a VMware lab?
Thank you
Great article, I was planning to refresh my lab with AMD-built machines and now I've gone with these instead. One note: you should change the 1260P comment at the top. The CPU does do 16 threads, but ESXi v7 (maybe 8 also) doesn't enable it. I am guessing that the disable-panic directive also disables the hyperthreading on the P-cores, as you just get "Inactive (Active on restart)" as a comment.
Is this what the boot.cfg is supposed to look like to correct the PSOD ?
bootstate=0
title=Loading ESXi installer
timeout=5
prefix=
kernel=/b.b00
kernelopt=runweasel cdromBoot
modules=/jumpstrt.gz --- /useropts.gz --- /features.gz --- /k.b00 --- /uc_intel.b00 --- /uc_amd.b00 --- /uc_hygon.b00 --- /procfs.b00 --- /vmx.v00 --- /vim.v00 --- /tpm.v00 --- /sb.v00 --- /s.v00 --- /atlantic.v00 --- /bnxtnet.v00 --- /bnxtroce.v00 --- /brcmfcoe.v00 --- /elxiscsi.v00 --- /elxnet.v00 --- /i40en.v00 --- /iavmd.v00 --- /icen.v00 --- /igbn.v00 --- /ionic_en.v00 --- /irdman.v00 --- /iser.v00 --- /ixgben.v00 --- /lpfc.v00 --- /lpnic.v00 --- /lsi_mr3.v00 --- /lsi_msgp.v00 --- /lsi_msgp.v01 --- /lsi_msgp.v02 --- /mtip32xx.v00 --- /ne1000.v00 --- /nenic.v00 --- /netcommu.v00 --- /nfnic.v00 --- /nhpsa.v00 --- /nmlx4_co.v00 --- /nmlx4_en.v00 --- /nmlx4_rd.v00 --- /nmlx5_co.v00 --- /nmlx5_rd.v00 --- /ntg3.v00 --- /nvme_pci.v00 --- /nvmerdma.v00 --- /nvmetcp.v00 --- /nvmxnet3.v00 --- /nvmxnet3.v01 --- /pvscsi.v00 --- /qcnic.v00 --- /qedentv.v00 --- /qedrntv.v00 --- /qfle3.v00 --- /qfle3f.v00 --- /qfle3i.v00 --- /qflge.v00 --- /rste.v00 --- /sfvmk.v00 --- /smartpqi.v00 --- /vmkata.v00 --- /vmkfcoe.v00 --- /vmkusb.v00 --- /vmw_ahci.v00 --- /bmcal.v00 --- /crx.v00 --- /elx_esx_.v00 --- /btldr.v00 --- /esx_dvfi.v00 --- /esx_ui.v00 --- /esxupdt.v00 --- /tpmesxup.v00 --- /weaselin.v00 --- /esxio_co.v00 --- /loadesx.v00 --- /lsuv2_hp.v00 --- /lsuv2_in.v00 --- /lsuv2_ls.v00 --- /lsuv2_nv.v00 --- /lsuv2_oe.v00 --- /lsuv2_oe.v01 --- /lsuv2_oe.v02 --- /lsuv2_sm.v00 --- /native_m.v00 --- /qlnative.v00 --- /trx.v00 --- /vdfs.v00 --- /vmware_e.v00 --- /vsan.v00 --- /vsanheal.v00 --- /vsanmgmt.v00 --- /tools.t00 --- /xorg.v00 --- /gc.v00 --- /imgdb.tgz --- /basemisc.tgz --- /resvibs.tgz --- /imgpayld.tgz
build=7.0.3-0.55.20328353
updated=0
cpuUniformityHardCheckPanic=FALSE
No. As the article says, it's appended to the kernelopt line
I've got a problem installing ESXi 7.x and 8.0 on the NUC 12 series. I've tried two ways: a bootable flash drive with the installation files, and a Zalman HDD case with an ISO mount function, but got the same result - ESXi boots from the flash drive or ISO (Zalman) as if it was already installed before... The installation wizard didn't start. Any suggestions?
You most likely did not correctly create the bootable installer. Use UNetbootin, it's multi-platform; you give it the ISO and USB device and it's ready to go
Thanks for the suggestion William, but it didn't help. I've got the same - ESXi boots from the flash drive as if it was already installed on it... I've tried to install Ubuntu and other Linux distributions - no problem...
Did you change any of the kernel options by chance when creating the installer? If you properly created an installer via USB, you will be prompted to install after it boots up; it will not just boot into ESXi. This would only happen if you somehow manually edited and specifically deleted lines that existed prior, like the runweasel string
I've re-downloaded ESXi from VMware, used the app you suggested, and pressed Shift+O before the installation starts to add the option "cpuUniformityHardCheckPanic=FALSE", that's all. Should I check any BIOS settings?
Correct, but make sure you append to the line and do not remove any of the existing entries. If you do that, it will boot into the installer and you should get to the EULA screen. If you don't see that, then something is wrong with your setup
I read all the messages in this post once again and found
Martin Aslak's post from 10/06/2022 at 11:00 am - it helped
which is exactly what I said in the blog post and in my reply: APPEND, do not remove existing entries 🙂
I am interested in purchasing the Intel® NUC 12 Pro X Kit - NUC12DCMv9 with vPro. Will this model work with ESXi 8.0?
Processor: Intel® Core™ i9-12900 Processor (30M Cache, up to 5.10 GHz)
Chipset Support: Intel® W680 Chipset
That's the NUC 12 Extreme and yes, it works fine. See https://williamlam.com/2022/02/esxi-on-intel-nuc-12-extreme-dragon-canyon.html 🙂
Hi William!
Thanks a lot for all this information.
I already have a home-made ESXi lab with an Intel® Core™ i5-10500 processor and I'd like to add a new server to create a cluster.
I did some research and I'd like to order an Intel NUC NUC12WSHI50Z with an Intel® Core™ i5-1240P processor, and I was wondering if the CPUs will be compatible or if I won't be able to create a cluster and use vMotion.
Maybe I should use EVC mode? In that case I really don't know which option I'll have to use and whether I'll have some performance trouble.
Can you help me with this please? Thanks a lot
PS: sorry, English isn't my first language
Hi William,
First of all, great blog, very useful and informative!
I recently purchased a NUC12 (NUC12WSHi7), with 500GB NVMe SSD as well as a 4TB SATA SSD. ESXi is installed on the NVMe storage and after some time once it's booted up (could be 10min or 2 hours), I get this message in the ESXi GUI:
Lost connectivity to the device t10.NVMe____WD_BLACK_SN770_500GB____________________8B45634E8B441B00 backing the boot filesystem /vmfs/devices/disks/t10.NVMe____WD_BLACK_SN770_500GB____________________8B45634E8B441B00. As a result, host configuration changes will not be saved to persistent storage.
I tried with both ESXi 7.0.3 and 8.0, same result. I am now thinking of maybe installing ESXi on the SATA SSD instead of the NVMe to see if that changes anything. But this is a strange issue. I have a NUC10 with the same configuration and I don't have any problem.
Would you have any idea what could cause this?
Thanks!
The system is literally telling you what's wrong 🙂
>>Lost connectivity to the device ....
It is probably the drive and/or connector, or just a bad NUC; it's hardware and these things will happen. Double-check that everything is secured, as even slight looseness could cause issues
Thanks, you just confirmed what I was suspecting but didn't want to admit to myself yet... now I will try to find out which piece of hardware is causing this.
For anyone facing a similar issue, what worked for me was to upgrade the firmware of the NVMe SSD. The NUC is now fully operational!
How did you find the correct firmware for your NVMe SSD?
I had to install the SSD in a Windows setup and use the firmware update utility from the vendor (WD in my case). I simply updated to the latest firmware version.
Hi:
Happy Holidays and thanks for all you do! I've edited my boot.cfg file like so and am still getting the PSOD! What am I doing wrong?
bootstate=0
title=Loading ESXi installer
timeout=5
prefix=
kernel=/b.b00
kernelopt=runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE
modules=/jumpstrt.gz --- /useropts.gz --- /features.gz --- /k.b00 --- /uc_intel.b00 --- /uc_amd.b00 --- /uc_hygon.b00 --- /procfs.b00 --- /vmx.v00 --- /vim.v00 --- /tpm.v00 --- /sb.v00 --- /s.v00 --- /atlantic.v00 --- /bcm_mpi3.v00 --- /bnxtnet.v00 --- /bnxtroce.v00 --- /brcmfcoe.v00 --- /cndi_igc.v00 --- /dwi2c.v00 --- /elxiscsi.v00 --- /elxnet.v00 --- /i40en.v00 --- /iavmd.v00 --- /icen.v00 --- /igbn.v00 --- /ionic_en.v00 --- /irdman.v00 --- /iser.v00 --- /ixgben.v00 --- /lpfc.v00 --- /lpnic.v00 --- /lsi_mr3.v00 --- /lsi_msgp.v00 --- /lsi_msgp.v01 --- /lsi_msgp.v02 --- /mtip32xx.v00 --- /ne1000.v00 --- /nenic.v00 --- /nfnic.v00 --- /nhpsa.v00 --- /nmlx5_co.v00 --- /nmlx5_rd.v00 --- /ntg3.v00 --- /nvme_pci.v00 --- /nvmerdma.v00 --- /nvmetcp.v00 --- /nvmxnet3.v00 --- /nvmxnet3.v01 --- /pvscsi.v00 --- /qcnic.v00 --- /qedentv.v00 --- /qedrntv.v00 --- /qfle3.v00 --- /qfle3f.v00 --- /qfle3i.v00 --- /qflge.v00 --- /rdmahl.v00 --- /rste.v00 --- /sfvmk.v00 --- /smartpqi.v00 --- /vmkata.v00 --- /vmksdhci.v00 --- /vmkusb.v00 --- /vmw_ahci.v00 --- /bmcal.v00 --- /clusters.v00 --- /crx.v00 --- /drivervm.v00 --- /elx_esx_.v00 --- /btldr.v00 --- /esx_dvfi.v00 --- /esx_ui.v00 --- /esxupdt.v00 --- /tpmesxup.v00 --- /weaselin.v00 --- /esxio_co.v00 --- /loadesx.v00 --- /lsuv2_hp.v00 --- /lsuv2_in.v00 --- /lsuv2_ls.v00 --- /lsuv2_nv.v00 --- /lsuv2_oe.v00 --- /lsuv2_oe.v01 --- /lsuv2_sm.v00 --- /native_m.v00 --- /qlnative.v00 --- /trx.v00 --- /vdfs.v00 --- /vmware_e.v00 --- /vsan.v00 --- /vsanheal.v00 --- /vsanmgmt.v00 --- /tools.t00 --- /xorg.v00 --- /gc.v00 --- /imgdb.tgz --- /basemisc.tgz --- /resvibs.tgz --- /esxiodpt.tgz --- /imgpayld.tgz
build=8.0.0-1.0.20513097
updated=0
TIA, Seth
UPDATE:
I hit shift-O on boot and manually entered " kernelopt=runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE" and then hit enter, the unit boots up but stops at "vmtoolsd". After about 2 minutes, I can see a PSOD in the background and then it reboots into an Install screen again.
I'm using just one WD SN570 NVMe SSD on board as my install drive, do I need to load drivers for it? Why am I getting stuck on "vmtoolsd"?
Thanks..
This is exactly where I am stuck, and I'm using the same hard drive.
Did you find any solution
Did you disable secure boot?
It is now. Same error. Still won't let me finish the install.
Anything else that should be disabled in the BIOS?
thanks,
Not yet. Tried with ESXi 8.0. Created a bootable USB installer from the ISO via Rufus and edited /EFI/boot/boot.cfg as per the above. The 1st install works fine, but after taking out the USB and rebooting, it gets stuck in an install loop. Going to try recreating the USB installer and *not* editing anything on the USB itself to see if that works.
As far as I understand, after the reboot, you have to press Shift-O again. Only once ESXi is running can you make the change permanent.
Seth, try this:
So I figured out my issue. Editing the boot.cfg by adding "kernelopt=runweasel cdromBoot cpuUniformityHardCheckPanic=FALSE" didn't work for me. I had to do the SHIFT-O, however when I was adding the "cpuUniformityHardCheckPanic=FALSE" I wasn't appending it. I was deleting the runweasel cdromBoot. Once I appended it, I still get a quick error, but it allows me to install ESXi to the SSD. After that I added the esxcli command "localcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE" to make the change permanent.
thanks for the feedback
Looking at purchasing 4 x NUC12WSHi5 (tall option) to run vSAN. Should I get two drives or three? With the deprecation of USB, I was thinking of the 256GB Kingshark (2242) for the equivalent of a boot drive, 500 GB M2 (2280) for a general drive, and 2TB SSD in the expansion slot in the tall module for the vSAN capacity drive. Is this necessary, or can I get by with two drives overall? Do I need a dedicated cache SSD for vSAN?
Thanks,
Robert
In one of the posts, someone mentioned the use of EVC with the CPUs for the NUC 12 Pro i5-1240Ps.
I did some playing around with the EVC settings, and unfortunately if you have old Broadwell-based hosts (think TinkerTry Supermicro hosts), you will not get an EVC setting compatible with the NUC12WSHi5 and other variants.
(I got the Supermicro mini servers when I was still working for VMware as they supported 128GB of RAM and were very low power. Unfortunately, they are old now...)
Hi William. I am at the point of ordering an Intel 12th gen NUC for my ESXi 8.0 environment. I am torn between the NUC12WSKi5 and NUC12WSKv5. They are exactly the same, except for the NUC12WSKv5 having vPro and TPM. The TPM version in this NUC is currently not supported in ESXi 8.0. I don't know if it will be supported in the future, maybe after a firmware update? Might it be wise anyway to choose the vPro version? Or is it just a waste of money?
I've got 3 x NUC12WSKi5 "Pros" setup as ESXi 8 Hosts.
I'm currently using Thunderbolt 3 > 10G SFP+ adapters.
I've tried to add NVMe Thunderbolt 3 storage, but the ESXi 8 hosts never detect it.
Anyone know if there is a specific driver set that needs to be added? Or if the NUC12s don't like being fully populated? (They do not have the additional 2.5GbE expansion module installed, but do have SATA storage in the lid; no SATA storage on the mainboard.)
Not all SSDs will be detected, especially lesser-known consumer brands. I typically recommend Samsung, Intel and WD as these just work
Thanks for the quick reply SuperLam!!
Humph. They are all Samsung. 940/950/970 EVOs and one Pro.
I'm going to assume ESXi doesn't like the OWC Envoy Express.
A few things to try - reseat the devices; you'd be surprised at how often the issue is physical, including the SSDs themselves 🙂
Ensure the TB ports are enabled; the BIOS may not always have everything enabled, so it is a good thing to check and ensure the BIOS sees the devices. You can always rule out the TB chassis by simply plugging the device directly into the motherboard and ensuring it is functional. I know folks use the 970 pretty often, not sure about the others, but I'd expect ESXi to be able to see them. Also, make sure the devices are clean (use partedUtil or something, as leftover partitions may cause issues, though typically you'd still see the device and just not be able to use it because it contains data)
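If you want to check for leftover partitions from the ESXi Shell, a rough sketch looks like this; the device path below is just a placeholder, so grab the real identifier from the device list first:
# Find the device identifier (t10.* / naa.* name) of the SSD in question
esxcli storage core device list
# Show the current partition table (device path is a placeholder example)
partedUtil getptbl /vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE_NAME
# Remove a leftover partition by number (partition 1 in this example)
partedUtil delete /vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE_NAME 1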
Interestingly, looking at the BIOS, there is really nothing related to thunderbolt other than a single checkbox.
I did check to see whether everything was seated properly, etc., but it doesn’t seem to be the issue.
Make sure your hardware is also on the NUC qualified vendor list.
Oddly enough, when you type in NUC ESXi HCL or “hardware compatibility list” you don’t get anything useful.
Can you provide a link to the NUC HCL?
https://compatibleproducts.intel.com/ProductDetails?activeModule=Intel%C2%AE%20NUC&prdName=NUC12WSKi5
Rock on, thanks!
After playing around with a ton of Thunderbolt 3 devices, I've decided that the NUC12WSHi5 doesn't want the second Thunderbolt port to light up. I suspect that the power draw might be an issue. ...but that's just a suspicion.
It's very interesting that all three NUC12s exhibit the same symptoms.
The BIOS is the latest available, and there are no robust BIOS options for Thunderbolt.
FWIW: The HCL for Thunderbolt on Intel's site is extremely limited, and most of the devices are older. It's clear that Thunderbolt is being eclipsed by USB4. (...but I'm glad to have it.)
Humph. The mystery continues.
You could also look at TB chassis that support more than a single NVMe; this way you can use it if it's due to power (I've not tested beyond a single device BUT I have done daisy-chaining since TB supports that) https://williamlam.com/2019/06/thunderbolt-3-enclosures-with-single-dual-quad-m-2-nvme-ssds-for-esxi.html
Ok, How to get both Thunderbolt Ports to work with ESXi 8 on the NUC 12 Pro WSH:
Turns out that the ports on the NUC 12 Pro are REALLY sensitive to detecting and allowing newly plugged in devices to work.
The primary thing you have to do (aside from turning around two times in a circle while patting your head and rubbing your belly) is that you MUST unplug the NUC prior to connecting new Thunderbolt devices. TURNING OFF THE NUC DOES NOT WORK!!
I tried every permutation to get a thunderbolt 10G NIC (QNAP) and an OWC Envoy Express (Samsung 970 EVO 1TB) to work AT THE SAME TIME. Nothing.
As such, there is clearly another issue: It doesn't appear that multiple non-self powered Thunderbolt devices will work on the NUC 12 Pro. (There is also squat for information on Intel's website regarding this.)
However, IF you get a Thunderbolt-powered HUB, you can then plug one of the devices into the hub... and then BOTH devices work. (One powered by the NUC 12 Pro, and the other powered by the hub.) [OWC Thunderbolt Hub - OWCTB4HUB5P].
I have noticed an issue connecting the 10G QNAP and storage device to the HUB at the same time. Neither appear on the ESXi host, and the hub never appears to come online. However, if I plug only one device on the hub, I can get it to work IF, and only IF, I plug all the devices in as I want them with the power cord UNPLUGGED from the NUC 12 Pro.
I'm going to try adding additional storage devices to the hub and see if they are all detected / presented under ESXi.
The cool thing about the OWC hub is that it lights up blue if there is a viable connection established with the NUC upon boot / driver load completion; it remains white if it's not working.
Oh, and one more thing: I had to use my MacBook Pro to erase the prior ESXi installations from the thunderbolt NVMe SSDs, or the hosts would boot into the previous ESXi installation on those NVMe drives.
Hi there
Could you confirm if you tested any further with multiple devices using a hub?
I would like to run a few 10GbE SFP+ Thunderbolt adaptors; currently I can only get one working, which I understand is due to the lack of power on the second Thunderbolt interface.
So I was thinking that with a powered Thunderbolt hub connected to the second interface I could hopefully use two adaptors. Is it possible to use more than one device on the second interface via a powered Thunderbolt hub?
I’ve had hit and miss luck… But it does appear that if you connect multiple 10 gig thunderbolt adapters to the external hub, they will all work.
It seems that if the devices are a mix of thunderbolt three and thunderbolt four, they don’t work.
The advantage of the external hub is that it's actually powered and this allows your devices to work, but I haven't been able to plug all my devices into the hub and get that to work. Specifically, I haven't been able to get network and storage to work plugged into the hub, but I can get it to work if I plug the Thunderbolt 10G adapter into the NUC and the hub gets the storage.
One other thing that I've noticed is that the ESXi hosts are crashing because they get confused with the P-cores and the E-cores.
I’m getting purple screens of death as the NUC 12 becomes heavily utilized during operations, such as cloning a virtual machine.
I wish there was an effective way to turn off the E or P cores, and allow hyperthreading as necessary.
Honestly, if the hosts keep crashing, the new lab won't be a usable thing.
Anyone else having this problem? I am thinking about getting a few of these for lab builds, but if they crash under load I will look of another option.
No problem here. Did some vMotions and clones, no sweat.
Hi.. I am grateful for your article, but my question is a bit different.. from which store can I get the secondary Intel i225 (2.5GbE) expansion module? I really want to get it but am unable to find it on the web. Would you please help by sharing a link to the NIC card?
SimplyNUC either in the US/CA, UK, or EU.
I was able to order them from there.
Thanks a lot.. I will check if they will take an order for just the expansion module
You can also purchase from https://www.gorite.com/lan-gigabit-add-on-module-with-dual-usb-2-0-ports-for-tiger-canyon-and-wall-street-nuc which Intel shared a while back when Tiger Canyon was released. I'll update the article to mention this and SimplyNUC as options for purchasing the expansion adapter
On SimplyNUC and gorite.com it is out of stock.. but thanks a lot
SimplyNUC ordered them for me and it took a week for them to arrive. So they can be ordered….
Did anyone already figure out the most optimal BIOS settings for the NUC12? Especially the power settings?
I left them at the default settings and set the fans to balanced.
I then used the NUC 11 settings from either your post or William Lam's post on the matter.
I will say that it is very important to update the BIOS the moment you get the computer, because there are some changes in the most recent versions of the BIOS.
Yes, I updated the BIOS too. But set the power to max performance. The NUC seems pretty fast now 🙂 Sadly, there are many settings in the BIOS where there is no description to be found. Intel's documentation is way behind...
Yeah, I have to agree that detailed documentation on pretty much everything with the NUC12 is nonexistent. I've been looking for any hint of Thunderbolt compatibility / Thunderbolt use caveats with the NUC12… and there is nothing to be found
Well, I have had my NUC running for 2 days now. It works pretty well. I have set the power to max performance in the BIOS. However, when I look in esxtop > p, I notice that P-states P0, P1, P2 and P3 are not used. The first used P-state is P4. According to the P-state MHz table, this is 1700MHz, whereas P0 would be 2101MHz. How can we get P0 to P3 to be active as well? Sadly, I can't post a screenshot here...
Anyone wanting to build an ESXi server that supports TPM in Windows 11, please read this.
I have a NUC setup which natively supports the TPM chip. To be able to do so, you'll need the vPro version of the NUC. These are the NUCs whose model number ends with "v5" or "v7". I purchased the NUC12WSKv5. It has vPro onboard, which means a discrete TPM chip is present (instead of a virtual TPM chip like in other models, which is not supported in ESXi 8).
If you install ESXi 8.0 and vCenter, you can create a cluster. This will allow the TPM chip to be passed on to any VM. Now, you can install Windows 11 with the TPM chip (add the TPM security device from vCenter). vCenter then reports for the VM "Encrypted with a native key provider”
My setup:
1x Intel NUC 12 Pro NUC12WSKv5
2x Crucial CT32G4SFD832A
1x Transcend 430S 256GB (as boot device)
1x Samsung 980 Pro (without heatsink) 2TB (primary datastore)
I’m confused: Don’t the NUC12WSHi5 / NUC12WSKi5 have TPM 2.0s?
Not according to the specs as listed on the Intel site. These offer Intel Platform Trust Technology (PTT), which includes fTPM 2.0. This might be the reason for the "TPM 2.0 device detected but a connection cannot be established" errors, as it is not a real chip but a firmware implementation of TPM. The v5 NUC has a real chip, which works fine as I discovered.
Why does ESXi display 12 CPUs instead of 16 in your screenshot? Is hyperthreading not supported?
Correct, due to the non-uniform CPU features, ESXi can't support HT and those are disabled. For systems that allow the E-cores to be turned off, you'd get the benefit of P-cores + HT
I have a NUC12WSHI5 (Core i5-1240P, 2000GB, 64GB). I've tried to install ESXi 8 but it doesn't see the SSD drive. How can I fix it?
what SSD?
An SSD hard drive (M.2), 2TB
I mean, what brand and type etc.
SATA III, PCIe type M.2; I will check the rest of the information in the BIOS
NVMe SSD - SAMSUNG MZVL22T0HBLB-00B00
I can't install the system because ESXi doesn't see any hard drive
Although the SAMSUNG MZVL22T0HBLB-00B00 isn't on Intel's compatible devices list, it's the OEM version of the 980 Pro 2TB I have, and the 512GB and 256GB versions of the PM9A1 are on the list. So, you might assume it should work. Can you see the device in the BIOS? Did you re-seat the card? Did you do a BIOS update? If you install Windows, do you see the device? I suspect there is something off, but I can't pinpoint it from here.
Hi, is there any way to get the temperature from the NUC12? I haven't found anything. I installed ESXi 8 successfully and now I would like to have monitoring. Thanks
No, there are no IPMI drivers to get that information
I noticed on the Wall Street Canyon that with the 101.4146 driver it doesn't throw the BSOD thread error. When it starts with SVGA present, it has error code 43 in Device Manager. If I disable and then re-enable it, it shows as normal, but I can't seem to get any displays recognized. It seems really close to working, but I'm not sure what else it might be
How do I use the Thunderbolt 3 interface in ESXi 8.0? It cannot be assigned to a virtual machine on my NUC9
Hi - Love this blog, so much that I'm building an ESXi lab based on the NUC 12 tall chassis. I have nearly all my parts and am gearing up to install ESXi v7, however I have one question: I read that I need to use the Fling driver for the onboard NIC to work, but the instructions say to transfer the driver to the host, then install it via the command line. My question is, if the NIC doesn't work out of the box, how do I get the driver onto the host so I can install it? I'm sure I'm missing something simple here, but I just can't seem to work it out. Thanks! Can't wait to get this up and running!
On NUC12, the onboard NIC is supported by default, so no fling required.
Thanks VirtuGuru. Is that true for ESXi v7 or only for v8?
ESXi 8
See https://williamlam.com/2022/07/quick-tip-esxi-7-0-update-3f-now-includes-all-intel-i219-devices-from-community-networking-driver-fling.html and https://williamlam.com/2022/09/vsphere-8-productizes-community-networking-driver-fling-for-esxi.html for the specific details as ESXi 7.x has some aspect of the Networking Fling Driver, but as mentioned already, ESXi 8.x completely productizes the Fling
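For anyone who does need to add the Fling to an existing ESXi 7.x host, a minimal sketch of the install from the ESXi Shell looks like the following; the bundle filename and datastore path are placeholders, so use the actual offline bundle you download from the Fling site and copy it to a local datastore first:
# Install the Community Networking Driver offline bundle (filename/path is a placeholder)
esxcli software component apply -d /vmfs/volumes/datastore1/Net-Community-Driver-offline-bundle.zip
# Reboot the host for the driver to take effect
reboot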
Ok so I downloaded ESXi v7 Update 3g from VMware (build no. 20328353), but the installer still says no network adapters detected. Is it possible that the installer MUST be exactly v7 U3f? I must be doing something wrong here??
Outstanding! Thank you. I wanted to start with v7 so I can replicate what I have in production, then utilize my new lab to test out upgrade procedures and such, that's why I want to start with v7, I just know now to start with v7 U3f or newer. Thanks!!
There is an updated firmware for the NUC 12 series:
https://downloadmirror.intel.com/774931/WS_0087_ReleaseNotes.pdf
Seems to update a bunch of stuff and handle a vulnerability with OpenSSL
Anyone know how to get past the ESXi install hanging at "vmkusb loaded successfully"?
Hardware
* 3x NUC 12's (NUC12WSH)
* Single NIC on all NUCs
1st Attempt
ESXi-7.0U3sd-19482531 + Community NIC Driver
* All 3 NUC's hang at "vmkusb loaded successfully"
2nd Attempt
ESXi-7.0U2d-18538813 + Community NIC Driver
* All 3 NUC's hang at "vmkusb loaded successfully"
Forgot to mention... I've disabled E-Cores & Secure boot for all 3 devices.
Are you installing onto a USB device? If so, double-check that the device is okay (it fails more often than folks realize). Try a different USB device if you're using that. If not, switch to the console with F11/F12 to see if there are any errors on screen. You could also go into the ESXi Shell (F1) and take a look at /var/log/esxi_install.log to see if there's any indication of where it's stuck
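If it helps, this is roughly what that looks like from the installer's ESXi Shell (reachable with Alt+F1); the second log is optional and may not always be present during install:
# Follow the installer log to see the last step it reached
tail -f /var/log/esxi_install.log
# The live VMkernel log can also hint at what the vmkusb driver is waiting on
tail -f /var/log/vmkernel.log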
I am also getting this issue on a NUC11
have you found a fix yet?
Did anyone find a resolution for this? I am trying to install ESXi 7.0.3 on a NUC 13 i5 (NUC13ANHi5) and am getting stuck at the same spot. I start the installation of ESXi 8.0 and it does get past this and looks like it would finish (though I did not complete the installation). I have tried several different USB sticks to install from and get the same issue with all of them. I am using a wireless keyboard with a USB receiver.
I was able to find a solution to this issue. The problem turned out to be with the custom image that was being created with PowerCLI. If I used the custom ISO with the network fling the install hung at the "vmkusb loaded successfully" part of the install. If I used the standard ISO the install would get past that but then fail because there is no network card detected. I found two ways to solve this.
1) Workaround solution - based on these links, https://williamlam.com/2022/02/usb-network-adapters-without-using-the-usb-network-native-driver-for-esxi.html and https://www.virten.net/2020/07/solution-esxi-installation-with-usb-nic-only-fails-at-81/, I was able to use the Cable Matters USB NIC to complete the installation of ESXi 7.0U3 using the standard ISO. I could then go back in and add the network fling to gain the use of the integrated NIC and I had a complete and working installation of ESXi.
2) Solution - it turns out that the issue with PowerCLI and the custom image that was being created was that I was using too new a version of Python. I had tried multiple versions of PowerCLI and Python and always had the same result. I tried a number of Python versions but only went back a couple of years. I found this link, https://vkasaert.com/2023/01/05/extending-my-homelab-with-the-vexpert-gift/, which indicated that it had to be version 3.7. I used the latest version of PowerCLI and version 3.7.9 of Python and created a custom image with the network Fling installed. Using the custom ISO created, I was able to successfully complete the installation of ESXi 7.0U3 with full network support.
Can you share your ISO? I have the same problem
Warning! Do NOT upgrade to vSphere 8 Update 1 on the NUC12 if you did not disable the E or P cores. My NUC12 has been happily running vSphere 8.0.0 since early this year (using 8+4 cores without HT), with the cpuUniformityHardCheckPanic setting set to FALSE. Today I upgraded to 8.0.1. Upon starting the first VM, I was rewarded with the PSOD. I checked the uniformity setting, and it was still correct. So it seems VMware broke something. I reverted back to 8.0.0 (using the Shift-R key during the bootloader), so no harm done. Hope this can be fixed.
Sounds like you might be hitting https://williamlam.com/2023/04/esxi-psod-due-to-gp-exception-13-in-world-with-intel-13th-generation-cpu.html See Option 2 for workaround
Update: fixed it by adding another key: esxcli system settings kernel set -s ignoreMsrFaults -v TRUE
Getting the occasional purple screen of death…
I wish I knew what was causing this. I have a screen shot, but no way to post it .
William, I've spent a day trying to sort out a time issue running ESXi 8.0.1 on a NUC12WSKi7.
The issue: when rebooting the ESXi server, the date/time gets set to January 1, 2022 00:00:00 UTC.
My attempts to fix it:
Manually changed the date/time in the ESXi GUI - it accepts it but reverts back to Jan 1, 2022 on reboot.
Set the date/time in the NUC BIOS to the correct UTC time; when ESXi boots up, it's back to Jan 1, 2022.
Updated the NUC BIOS to the latest version as of today, set the date/time in the BIOS to the correct UTC; when ESXi boots up, it's back to Jan 1, 2022.
Connected the NUC to the internet; when ESXi came back up, attempted to set the date/time by NTP, enabled ntpd, set an NTP server to pool.ntp.org and also tried time.windows.com. Every attempt to set up NTP results in an ESXi error stating it couldn't change the date/time settings. I really don't want to have to set the date/time on ESXi every time I reboot, but I'm not finding a solution and striking out finding a fix with my Google-fu. However, your site has been very helpful and I'm hoping that maybe you've seen this and have a solution.
Update: Check the logs and I'm seeing the following lines in the /var/log/vmkwarning.log
NTPClock: 1572: Failed to read time from RTC: I/O error
NTPClock: 1972: Invalid time from RTC - setting clock to 00:00:00, 01,01,2022
So it appears ESXi 8.0.1 can't read the RTC when installed on a NUC12WSKi7.
This seems validated when I run "hwclock" via the command line in ESXi - the output is crazy and changes a lot between re-runs of the command. The output looks like:
1175456608:890: -350389511 00/205/0000 UTC
I run ESXi 8.0.1 on an Intel NUC 12 Pro NUC12WSKv5, which is similar to the NUC12WSKi7, and have no time issues at all. So AFAIK, it should just work fine.
And when I run hwclock I get:
[root@ESX8:~] hwclock
21:00:18 06/22/2023 UTC
My system connects fine to NTP server:
2022-01-01T00:00:24Z Wa(180) vmkwarning: 0:00:00:04.745 cpu0:1048576)WARNING: NTPClock: 1572: Failed to read time from RTC: I/O error
2022-01-01T00:00:24Z Wa(180) vmkwarning: 0:00:00:04.745 cpu0:1048576)WARNING: NTPClock: 1592: Invalid time from RTC - setting clock to 00:00:00, 01/01/2022
2023-05-19T08:06:17.519Z Wa(180) vmkwarning: cpu7:1049324)WARNING: NTPClock: 1771: system clock synchronized to upstream time servers
@virtguru - Thanks for the reply! Connecting to NTP was just a test to see if, after syncing the time once online, it would then accept the hwclock time when ESXi powers up in an offline situation. BTW, I figured out getting ESXi to sync to an NTP server: I just had to use an IP versus a domain name. Once I did that, I was able to sync to UTC and "hwclock" shows a proper date. That said, the purpose of this build is a portable cyber range environment. These NUCs will be thrown in a backpack and transported to various locations for onsite training/exercises where internet may or may not be available. So I want these to be capable of operating offline. This is still a problem: when I remove the internet connection from the NUC and it reboots, it will again not be able to read the RTC and will set its date/time to Jan 1, 2022. What I want is for ESXi to use the NUC's RTC when offline versus doing this reset to Jan 1, 2022.
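For anyone wanting to do the NTP piece from the ESXi command line, a rough sketch is below; the flag names and values are my assumption for recent builds, so double-check with esxcli system ntp set --help on your version:
# Point the host at an NTP server and enable the NTP client (flags assumed; verify with --help)
esxcli system ntp set -s pool.ntp.org -e true
# Confirm the resulting configuration
esxcli system ntp get
# Restart the NTP daemon so the change takes effect immediately
/etc/init.d/ntpd restart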
I have exactly the same problem with a fresh install of ESXi 8.0U1a onto a NUC12WSHi70Z.
I also tried all the various ways to resolve it like Chip the original poster mentioned.
Whenever ESXi boots, even the installer from USB, it resets the BIOS date and time to Jan 1, 2022 00:00.
After multiple installations to try and fix it, I then started the 'ntp' service with 'pool.ntp.org' and set it to 'Start with host'. This solves the issue by setting the correct UTC date and time at every boot. I can then shut down ESXi, go into the BIOS, and the date and time are correct as UTC.
ESXi must therefore be able to set the date and time but has a problem reading it at initial boot time and then, before ntp starts, sets it to Jan 1, 2022 00:00. The logs indicate this at every boot with the Jan 1, 2022 00:00 shown up until 'ntp' starts and then it records the correct date.
It is therefore 'working' with 'ntp', but does anyone know if this is a stable enough system for me to start moving my home lab onto it?
Also any suggestion of what is causing it and how it could be fixed would help too 🙂
To add:
From /var/log/vmkwarning.log
WARNING: EC: 767: Expected exactly 2 resources for embedded controller (got 6)
WARNING: TAD: 145: Initial write failed: Failure; will not use TAD
WARNING: NTPClock: 1572: Failed to read time from RTC: I/O error
WARNING: NTPClock: 1592: Invalid time from RTC - setting clock to 00:00:00, 01/01/2022
After boot with ‘ntp’ running and showing the correct date in the GUI:
[root@localhost:~] hwclock
741522304:972:1702132473 00/140/0000 UTC
[root@localhost:~] date
Sun Sep 17 08:29:49 UTC 2023
Sorry but I forgot to add that I also updated to the latest BIOS version before starting with ESXi
BIOS version: WSADL357.0088.2023.0505.1623
BIOS release date: Friday, May 05, 2023, 02:00:00 +0200
Just to add that a combination of upgrading to a recently made available Intel BIOS update '90' and ESXi 8.0.2 resolved this issue. I am not sure which was at fault...
Update:
When William originally posted this article, I decided that I would replace my Supermicro hosts with their unsupported processors with a bunch of NUCs.
(That’s right William, you made me spend a lot of money… 😉 )
Basically, I used four NUCs to replace two Supermicro hosts.
The only issue I've faced so far is that I have discovered that if you use Thunderbolt for peripherals and you move the connector for any reason that causes an interruption in the device's connection to the host, the host will pink screen. At least that's what I think is going on….
Intel has had a number of BIOS updates over the last six months, so updating the BIOS regularly has been somewhat important as they iron out both security vulnerabilities and normal stuff.
Thunderbolt compatibility with certain devices seems to be hit or miss. Even though it should work with a thunderbolt hub, I find that that’s not really true. That being said, network devices do seem to work pretty well.
Otherwise, they work pretty darn well with ESXi 8.
Maybe a newb question, but with the E-cores disabled, what would be the total number of cores I can assign to my VMs? Assuming HT is active.
Also, is there a difference in usable cores if I keep the E-cores enabled and thus use the workaround option described in your tutorial?
Thanks
For clarification: I am using the i3-1220P NUC
Look up your model on Intel Ark to get your answer
Update:
William, I've been using your NUC 12 layout for a 4 NUC vSphere 8 lab setup. While playing around I've had various issues with adding NICs to the setup.
(Basically, the NUC can be finicky with respect to Thunderbolt devices, especially so when you add the requirement that ESXi supports the device.)
I've come to suspect power, as Intel says that Thunderbolt / USB-C does not provide power to peripherals. (Which makes little sense... https://www.intel.com/content/www/us/en/support/articles/000093355/intel-nuc.html#:~:text=Intel%C2%AE%20NUC%20products%20do,USB%2DC%2FThunderbolt%E2%84%A2. )
I found a post from another personality online where they purchased an OWC Mercury Helios 3S as an expansion module ( https://www.owc.com/solutions/mercury-helios-3s ). It provides a x16 PCIe slot (x4 electrical), which seems to work great with an Intel 82599EN 10G NIC.
More importantly, I was able to add another Sonnet SOLO 10G NIC to the external port on the OWC Helios, and have a total of 4 NICs (3 of them 10G), WITH the NVMe internal storage AND the 256GB SATA NV type storage you've previously recommended for VFC 4/5.
Is this cost effective.... not really. However, IF you have a NUC lab and want to get a little crazy... sure.
What it is a solution for is the situation where the NUC 12 won't allow for the use of a second external NIC most likely due to power issues. ( I played with all sorts of configurations and nothing worked outside of the single QNAP 10G NIC.)
GB
FYI: As part of our NUC 12 Pro HCL: the Samsung 512GB 950 Pro NVMe causes a PSOD when set up as external Thunderbolt storage.