There has been a lot of great technical content from both VMware and the broader community since the announcement of vSphere 8 a few weeks ago. I know many of you are excited to get your hands on both vSphere 8 and vSAN 8, and while we wait for GA, I wanted to share some of my own personal experiences along with some considerations for those interested in running vSphere 8 in their homelab.
As with any vSphere release, you should always carefully review the release notes when they are made available and verify that all of your hardware and the underlying components are officially listed on the VMware HCL, which will be updated when vSphere 8 and vSAN 8 go GA. This is the only way to ensure that you will have the best possible experience and a supported configuration from VMware.
Disclaimer: The following considerations are based on early observations using pre-GA builds of vSphere 8 and they do not reflect any official guidance or support from VMware.
CPU Support
The following CPU generations for both Intel and AMD will not be supported with vSphere 8:
- Intel
  - SandyBridge-DT, SandyBridge-EP/EN
  - IvyBridge-DT, IvyBridge-EP/EN, IvyBridge-EX
  - Haswell-DT, Haswell-EP, Haswell-EX
  - Broadwell-DT/H
  - Avoton
- AMD
  - Bulldozer - Interlagos, Valencia, Zurich
  - PileDriver - Abu Dhabi, Seoul, Delhi
  - Steamroller - Berlin
  - Kyoto
If you boot the ESXi 8.0 installer and it detects that you have an unsupported CPU, you will see the following error message.
The default behavior of the ESXi 8.0 installer is to prevent users from installing ESXi on a system that has a CPU that is not officially supported.
UPDATE (10/05/23) - ESXi 8.0 Update 2 requires a CPU that supports the XSAVE instruction or you will not be able to upgrade, which means you will need hardware with a minimum of an Intel Sandy Bridge or AMD Bulldozer processor or later.
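If you want to check for XSAVE ahead of time, one quick way (my suggestion, assuming you can boot any Linux live image on the host) is to look at the CPU flags:

# Hypothetical pre-check from a Linux live environment; if this prints
# "xsave", the CPU meets the ESXi 8.0 Update 2 requirement
grep -o -m1 xsave /proc/cpuinfo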
ProTip: With that said, there is a workaround for those who wish to forgo official support from VMware, or for homelab and testing purposes: you can add the following ESXi kernel option (SHIFT+O):
allowLegacyCPU=true
which will turn the error message into a warning message and enable the option to install ESXi, with the understanding that the CPU is not officially supported and that you accept any risks in doing so.
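For example, after pressing SHIFT+O at the installer boot prompt, you would append the option to whatever is already on the line (a sketch; the stock ISO defaults are shown here and may differ on your media):

# Existing installer boot options with the override appended
cdromBoot runweasel allowLegacyCPU=true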
ProTip: If you are using Intel 12th Generation or newer consumer CPUs, an additional workaround is required due to the fact that ESXi does not support the new big.LITTLE CPU architecture in these CPUs, which I had initially discovered when working with the Intel NUC 12 Extreme (Dragon Canyon).
The ESXi kernel option cpuUniformityHardCheckPanic=FALSE still needs to be appended to the existing kernel line by pressing SHIFT+O during the initial boot up. Alternatively, you can add this entry to the boot.cfg when creating your ESXi bootable installer. Again, you need to append the entry; do not delete or modify the existing kernel options or ESXi will boot into ramdisk only. If this entry is not added, booting ESXi on processors that contain both P-Cores and E-Cores will result in a purple screen of death (PSOD) with the message "Fatal CPU mismatch on feature".
Note: Once ESXi has been successfully installed, you can permanently set the kernel option by running the following ESXCLI command before rebooting: localcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE. Alternatively, you can reboot the host, take out the USB device, and manually edit EFI\boot\boot.cfg to append the kernel option; this ensures subsequent reboots will contain the required kernel option.
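Putting that together, the post-install sequence looks roughly like this (the verify step is my addition and assumes the standard ESXCLI kernel settings namespace; esxcli can be used in place of localcli here):

# Persist the workaround from the ESXi Shell before rebooting
localcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
# Verify the value took effect
localcli system settings kernel list -o cpuUniformityHardCheckPanic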
I/O Device Support
Similarly, for I/O devices such as networking, storage, etc. that are not supported with vSphere 8, the ESXi 8.0 installer will also list the type of device and its respective Vendor & Device ID (see the screenshot above for an example).
To view the complete list of unsupported I/O devices for vSphere 8, please refer to VMware KB 88172 for more information. I know many in the VMware Homelab community make use of the Mellanox ConnectX-3 for networking, so I just wanted to call out that it is no longer supported; folks should look to using either the ConnectX-4 or ConnectX-5 as an alternative.
ProTip: A very easy and non-impactful way to check whether your existing CPU and I/O devices will run ESXi 8.0 is to simply boot the ESXi 8.0 installer from USB and check whether it detects all devices. You do NOT have to perform an installation to check for compatibility, and you can also drop into the ESXi Shell (Alt+F1) using root and no password to perform additional validation. If you are unsure whether ESXi 8.0 will run on your platform, this is the easiest way to validate without touching your existing installation and workloads.
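To give a concrete idea of the kind of validation I mean, here are a few commands you can run from that shell (a sketch; command availability inside the installer environment may differ slightly from a full ESXi install):

lspci | grep -i ethernet    # confirm PCI NICs are at least visible
esxcfg-nics -l              # confirm NICs were claimed by a driver
esxcfg-scsidevs -a          # confirm storage adapters were claimed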
For those that require the use of the Community Networking Driver for ESXi Fling to detect onboard networking, like some of the recent Intel NUC platforms, folks should be happy to learn that this Fling has been officially productized as part of vSphere 8, and a custom ESXi ISO image will no longer be needed. For those that require the USB Network Native Driver for ESXi Fling, a new version of the Fling that is compatible with vSphere 8 will be required, and folks should wait for that to be available before installing and/or upgrading to vSphere 8.
USB Install/Upgrade Support
Last year, VMware published revised guidance in VMware KB 85685 regarding the installation media for ESXi, which includes ESXi 8.0, specifically when using an SD or USB device. While ESXi 8.0 will continue to support installation and upgrade using an SD/USB device, it is highly recommended that customers consider more reliable installation media like an SSD, especially for the ESXi OSData partition. Post-ESXi 8.0, installation and upgrades using an SD/USB device will no longer be supported, and it is better to have a solution now than to wait for that to happen, if you ask me.
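If you are not sure what your current installation boots from or where OSData lives, a couple of commands can help (my addition; output details will vary by setup):

vmkfstools -P /bootbank           # shows the device backing the bootbank
esxcli storage filesystem list    # look for the OSDATA (VMFS-L) volume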
If you do decide to install and/or upgrade ESXi 8.0 using an SD/USB device, the following warning message will be presented before allowing you to proceed with the install or upgrade.
Hardware Platform Support
While this is not an exhaustive list of hardware platforms that can successfully run ESXi 8.0, I did want to share the list of systems that I have personally tested and hope others may also contribute to this list over time to help others within the community.
The following hardware platforms can successfully run ESXi 8.0:
- Intel NUC 5 (Rock Canyon) - Courtesy of Anthony
- Intel NUC 6 (Swift Canyon) - Courtesy of Anthony
- Intel NUC 8 (Dawson Canyon) - Courtesy of Anthony
- Intel NUC 9 Extreme (Ghost Canyon)
- Intel NUC 9 Pro (Quartz Canyon)
- Intel NUC 10 Performance (Frost Canyon)
- Intel NUC 11 Performance (Panther Canyon)
- Intel NUC 11 Pro (Tiger Canyon)
- Intel NUC 11 Extreme (Beast Canyon)
- Intel NUC 12 (Dragon Canyon)
- Intel NUC 12 Pro (Wall Street Canyon)
- Supermicro E200-8D
- Supermicro E302-12D
VCSA Resource Requirements
With all the new capabilities in vSphere 8, it should come as no surprise that additional resources are required for the vCenter Server Appliance (VCSA). Compared to vSphere 7, the only change is the amount of memory for each of the VCSA deployment sizes, which has increased by 2GB. For example, in vSphere 7, a "Tiny" configuration required 12GB of memory; in vSphere 8, it will now require 14GB of memory.
ProTip: It is possible to change the memory configuration after the initial deployment, and from my limited use, I have been able to run a Tiny configuration with just 10GB of memory without noticing any impact or issues. Depending on your usage and feature consumption you may need more memory, but so far it has been working fine for a small 3-node vSAN cluster.
Nested ESXi Resource Requirements
Using Nested ESXi is still by far the easiest and most efficient way to try out all the cool new features that vSphere 8 has to offer. If you plan to kick the tires on the new vSAN 8 Express Storage Architecture (ESA), at least from a workflow standpoint, make sure you can spare at least 16GB of memory (per ESXi VM), which is the minimum required to enable this feature.
Note: If you intend to only use the vSAN 8 Original Storage Architecture (OSA), then you can ignore the 16GB minimum as that only applies to enabling vSAN ESA. For basic vSAN OSA enablement, 8GB (per ESXi VM) is sufficient; if you plan to run workloads, you may want to allocate more memory, but the behavior should be the same as vSphere 7.x.
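For illustration, here is a minimal sketch of the relevant settings for a nested ESXi 8 VM defined by hand (the guestOS identifier and vCPU count are my assumptions; the 16GB figure comes from the vSAN ESA requirement above):

# Hypothetical .vmx excerpt for a nested ESXi 8 VM targeting vSAN ESA
guestOS = "vmkernel8"    # assumption: ESXi 8.x guest OS identifier
vhv.enable = "TRUE"      # expose hardware-assisted virtualization to the guest
memSize = "16384"        # 16GB minimum to enable vSAN ESA
numvcpus = "4"           # assumption: size to your workload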
A bonus capability that I think is worth mentioning is that configuring MAC Learning on a Distributed Virtual Portgroup is now possible through the vSphere UI as part of vSphere 8. The MAC Learning feature was introduced back in vSphere 6.7 but was only available using the vSphere API, and I am glad to finally see this available in the vSphere UI for those looking to run Nested Virtualization!
I've not had a chance to install directly on my E300-9D; any idea if the Intel Xeon D-2146NT CPU is on the supported list? (fingers crossed)
I'd expect it to work, especially since the much older E200-8D works fine
The Mellanox ConnectX-4 is a good alternative to the unsupported and deprecated ConnectX-3, but the HP 530SFP+ is way cheaper than the ConnectX-4 on eBay!
You mentioned the HP 530sfp+ but this card is not on the ESXi 8 HCL. So how would this be a good replacement for the ConnectX-3? If it's not supported, we're in the same boat. Can you verify that this card does indeed work with 8? Otherwise, this post could lead people to purchase something and be in the same boat as the X-3.
It's listed in the I/O devices on the HCL. It's a fraction of the cost of a ConnectX-4, even including replacing the existing twinax cable. The Intel X520 10Gb is still supported also.
I have an HP laptop with an Intel Xeon processor, 32 GB RAM and a 512 GB SSD, and I can install ESXi 7.0 but it doesn't support VCSA. Very sad... my money has been wasted.
What about getting hardware visible which is not in the compatibility matrix? The drivers load successfully and the card is visible with "lspci | grep Ethernet", but it is blocked in the GUI.
0000:03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection
0000:04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection
0000:07:00.0 Ethernet controller: Intel Corporation Gigabit ET Quad Port Server Adapter
0000:07:00.1 Ethernet controller: Intel Corporation Gigabit ET Quad Port Server Adapter
0000:08:00.0 Ethernet controller: Intel Corporation Gigabit ET Quad Port Server Adapter
0000:08:00.1 Ethernet controller: Intel Corporation Gigabit ET Quad Port Server Adapter
The EXPI9404PT Intel PRO/1000 PT Quad Port Server ExpressModule (the four lowest entries above) isn't working, while it was in ESXi 7.x. In what file is the whitelisting of devices located?
VID : 8086
SVID : 8086
DID : 10bc
SSID : 11bc
Tweaking for the homelab should be possible.
Hey. Do you know if P/E cores are supported in 8.0?
ESXi is not aware of the big.LITTLE CPU architecture that contains P/E cores. You will need to apply the same workaround as vSphere 7.x with ESXi kernel boot option: cpuUniformityHardCheckPanic=FALSE to allow ESXi to boot and install
But if you apply this, are there any issues or crashes?
As I said ... YMMV
I've been using my setup (Intel NUC 12 Pro) for several weeks with no issues; it's also using most of the RAM (32GB) and a fair amount of CPU.
Hi,
I installed ESXi 8.0u2 on my HP Elite Mini 800 with Intel i9 12900 processor and i219-LM network card and, to get around the Efficiency Core problem, I followed your procedure (cpuUniformityHardCheckPanic=FALSE).
The system starts correctly, but when I try to upload a file larger than 2-3 KB the connection freezes and the system also becomes less responsive; this problem also occurs with version 7.0U3n.
On vmkwarning.log and vmkernel.log I found many warning messages.
Do you have any suggestions on this problem?
Thank you very much
vmkernel: cpu1:1048998)INFO (ne1000): false RX hang detected on vmnic0
vmkernel: cpu1:1048998)INFO (ne1000): false RX hang detected on vmnic0
vmkernel: cpu15:1050893)CpuSched: 873: user latency of 1050894 J6AsyncReplayManager 0 changed by 1050893 sftp-server -6
vmkwarning: cpu15:1048742)WARNING: HBX: 2720: Failed to cleanup registration key on volume 664c6c52-5f1cd4ee-3510-30138b75e14a: Failure
vmkwarning: cpu15:1048742)WARNING: Vol3: 4342: Error closing the volume: . Eviction fails: Failure
vmkernel: cpu0:1048998)INFO (ne1000): false RX hang detected on vmnic0
cpu0:1050344)WARNING: Cpu: 2220: Unable to get cpuid level eax = 24 ecx 7 from CPU_Getid(8)
cpu0:1050344)WARNING: Cpu: 2220: Unable to get cpuid level eax = 24 ecx 8 from CPU_Getid(8)
HBX: 2720: Failed to cleanup registration key on volume 664c6c52-5f1cd4ee-3510-30138b75e14a: Failure
Vol3: 4342: Error closing the volume: Eviction fails: Failure
Hello,
Where can I download vSphere 8, do you have an easy link?
Or is it already available on My VMware?
Did you get the download link?
vSphere 8 has NOT GA'ed (as mentioned several times in the blog post :))
It's already available from vmware customer connect for download.
Great article William.
Thanks.
Intel NUC 8 won’t be supported?
I don't have a NUC 8, so can't say if it'll work or not. If I had to guess ... it probably will work
I got error:
[HardwareError]
Hardware precheck of profile ESXi-8.0.0-20513097-standard failed with warnings:
when I'm trying to update my NUC8i7BEH ;(
The warning is:
TPM_VERSION WARNING: TPM 1.2 device detected. Support for TPM version 1.2 is discontinued. Installation may proceed, but may cause the system to behave unexpectedly.
NUC8 has TPM 2.0 as far as I know.
Earlier NUCs implemented TPM using Intel PTT. Only the recent NUC 11/12 started to show TPM 2.0, and even then, it may not be fully compliant to work with ESXi. I've only had success using the NUC 9 (Xeon), which didn't have any issues. You may need to disable TPM to upgrade …
So I guess I can't install it on my Celeron with 512 MB RAM. Shame!
Is ESXi 8 supported with E5-2690 v4 processors? Regards..
The E5-2690 v4 is Broadwell architecture and is listed above as unsupported. Currently you get a message in vSphere 7 about discontinued support in the next major release (vSphere 8). I'm very sad about this because the Dell PowerEdge R730 is still expensive and now you cannot officially upgrade to vSphere 8.
Not true. Broadwell-EP is supported in vSphere 8.
Hi William,
This is a great post, and I am really looking forward to the vSphere 8.0 GA. Could you please also suggest homelab setups that we can use for testing out SmartNIC and DPU configurations?
You'll need to wait for the VMware HCL to get updated when vSphere 8 GAs if you're interested in trying out SmartNICs/DPUs.
Support for Realtek PCI NICs? Has that been considered?
Realtek had no interest in the VMware ecosystem when we tried to engage them several years ago to develop a driver. I recommend looking at platforms that do not contain an RTL-based NIC.
Does the Apple Mac Mini support vSphere 8?
No. See https://williamlam.com/2022/08/vsphere-esxi-7-x-will-be-last-version-to-officially-support-apple-macos-virtualization.html
William,
Truly appreciate all the knowledge you've always shared with us. My question: does vSphere 8 require DPUs for installation? I've searched many blogs and no one has directly answered the question or speaks of v8 without added hardware cost.
vSphere 8 does NOT require a DPU for installation. vSphere 8 is the first release to support DPUs, if you have a need for them, which can help by offloading services that would typically run on your ESXi (x86) host onto the DPU. See vSphere DSE for more details https://core.vmware.com/resource/whats-new-vsphere-8#sec21112-sub1
Hi William,
I've a question about automated deployment and unsupported CPUs.
In order to (re)deploy nested hosts, I use a PXE environment and use the following parameter in my boot.cfg:
kernelopt=ks=https://iis_local_ip/VMware/ks80.cfg
This worked great with vSphere 7, but with the vSphere 8 beta I need to add "allowLegacyCPU=true" somewhere because my server is a bit too old. If I put it as-is at the end of the line, it doesn't seem to work and the install gets into a loop after showing the unsupported CPU warning.
This is how I tested it:
kernelopt=ks=https://iis_local_ip/VMware/ks80.cfg allowLegacyCPU=true
Do you know if it is a bug or a syntax issue? It might also not be the correct place for it, but I find very few people/posts mentioning it.
Thanks and best regards ! 🙂
I've not seen this before. Can you try placing allowLegacyCPU at start of the string (e.g. kernelopt=allowLegacyCPU=true ks=....) and see if that helps
Hi William,
In the meantime, I've tried that syntax too and it didn't change anything. I also tried with the new IA/GA ISO file (I previously had the RC in hand), same result.
Don't know if anybody else can confirm that result, but I think I'll have to try the in-place upgrade instead for a while... :-/
If anybody can try that too, thanks in advance! 😉
I have installed vSphere 8 on an E5-2640v3 CPU. I didn't need any switch at the boot prompt. During setup, information pops up about the unsupported CPU, and if I really want to install I need to acknowledge this warning. After pressing Enter a second time (because another warning pops up about the CPU), installation runs flawlessly.
Hi Sebastian,
Did you install on a physical server or on a nested VM? ISO/USB or PXE deployment?
Thx for sharing the details! 😉
I installed bare-metal on a Fujitsu RX2540 M1, via KVM (vSphere ISO mounted).
Hi Franck, I can confirm I am seeing the same. I have previously used allowLegacyCPU=true on ESXi 7.0 without issue. However, it seems to have no effect with 8.0; I always get a warning prompt that I have to press Enter past. This is preventing automated deployments in my lab currently 🙁
Further update..... Digging about in UPGRADE\PRECHECK.PY and comparing it to ESXi 7.0, I can see that a new CPU_OVERRIDE message has been added in addition to the CPU_WARNING and CPU_ERROR messages. CPU_OVERRIDE is not evaluated against allowLegacyCPU, which I suspect is not the intended behaviour and possibly a bug. If the CPU falls into the CPU_OVERRIDE condition, the allowLegacyCPU boot option will not do anything.
Not sure if you were the same person posting on either Slack or VMTN (I forget), but check out https://williamlam.com/2022/10/quick-tip-automating-esxi-8-0-install-using-allowlegacycputrue.html for a workaround 🙂
Hi,
Well, I tried to add "--ignoreprereqwarnings --ignoreprereqerrors" at the end of the command, but it didn't help; I got the same results :-/
Did you actually append the options in the right area? This is NOT for interactive installation, AND make sure you are doing it in the EFI boot.cfg and not the main boot.cfg, which applies to BIOS mode only.
Hi,
Not 100%; I haven't mastered the whole thing (learned almost everything on your site)... 😉👌
My boot CFG looks like that:
bootstate=0
title=Loading ESXi 8.0 IA auto-installer
timeout=5
kernel=b.b00
kernelopt=ks=https://IP_of_a_web_server/VMware/ks80.cfg
prefix=/Boot/x64/VMware/ESXi80
modules=the whole list without the "/"
updated=0
Then I have that at the first stage of this ks80.cfg:
### Accept the VMware End User License Agreement
vmaccepteula
### Set the root password for the DCUI and Tech Support Mode
rootpw MySecretPass
### The install media (priority: local / remote / USB)
clearpart --firstdisk=local --overwritevmfs
install --firstdisk=local --overwritevmfs --novmfsondisk --ignoressd --ignoreprereqwarnings --ignoreprereqerrors
### Set the keyboard layout
keyboard "Swiss German"
### Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
### Reboot ESXi Host
reboot --noeject
This is a working 7.0 setup; I just copy/pasted the whole thing to customize it for 8.0 (but of course, I might be wrong - I don't pretend it's a working config for the latest! 😁)
It is most likely stalling because of the clearpart section, which happens first and doesn't support these options. Try taking that out and it should work, as that is what I had used for my setup. That was a physical system which had some data on it before, but the install still took care of it.
Thanks William, and yes I was the person on Slack 🙂 Unfortunately as Franck says this does not appear to work. For the avoidance of doubt I used the complete kickstart file from your post and I'm using EFI. "CPU_OVERRIDE" seems to be a new thing in 8.0 which you may or may not see depending on how PRECHECK evaluates your CPU. The "override" appears to be a different thing from "error" or "warning" 🙁 I currently have an SR open with VMware but don't hold out much hope as it's not a supportable item.
Hi there,
So I've tried removing the clearpart section and also used only the same settings as suggested in the other article, but no success/same result (with or without allowLegacyCPU=true).
I'm wondering why; it's like they don't want us to test! 😏
Franck - Can you try adding one last option to the install section, --forceunsupportedinstall, and see if that helps?
Hi there,
I have some progress!!!
So the switch --forceunsupportedinstall didn't help at first, but it gave another error message about the disk. Then I remembered those nested VMs were used for my 6.7 lab, so I decided to create a fresh empty VM, and it worked.
Now, I took the opportunity to test it further, and these are the combinations of switches I tried:
Doesn't work
install --firstdisk=local --overwritevmfs --ignoreprereqwarnings --ignoreprereqerrors
Works with a warning (press Enter)
install --firstdisk=local --overwritevmfs --forceunsupportedinstall
Completely unattended, all together 🙂
install --firstdisk=local --overwritevmfs --ignoreprereqwarnings --ignoreprereqerrors --forceunsupportedinstall
allowLegacyCPU is not necessary (probably ignored)
Now I have to troubleshoot the second part of the KS; it doesn't join the vCenter as the 7.0 one does! 😁
PS: any hint about the right log to check is welcome!
Ah, so this was the missing piece: your CPU allowed for override, but strangely, this should still cause either an Error/Warning, and those flags should have been supported.
The only other option that I see that could help is adding --forceunsupportedinstall
Unfortunately no success. If this is a permanent "feature", I think the option of last resort is to alter the PRECHECK script on the fly before I build the ISO. Setting the "if" statement to include OVERRIDEWARNING should result in the same behaviour as before. It's not supported anyway, right 🙂
Take a look at Franck's recent comment, the parameter helped but he needed to use all three options
Thanks William, it did indeed work. I think I didn't reload my module properly when I made the changes, oops. I'm going to build this into my media creation tool. This appears to be a new parameter in 8.0:
https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-esxi-8.0-installation-setup-guide.pdf
Going by the description I wonder if it's meant to replace the kernel boot option.
I will now need to build in a bit of media version detection so I know what alterations to make for unsupported CPUs.
Thanks for your help.
Hi again,
Now I'm facing another issue: the second part of my kickstart (still working with 7) is apparently not taken into consideration ("post installation").
This is an extract (I have more commands for the network part, but as the host doesn't have the second vSwitch nor is it in maintenance mode, I think this complete section of the ks is ignored).
##### Stage 01 - Pre installation:
### Accept the VMware End User License Agreement
vmaccepteula
### Set the root password for the DCUI and Tech Support Mode
rootpw mysecretpass
### The install media (priority: local / remote / USB)
install --firstdisk=local --overwritevmfs --ignoreprereqwarnings --ignoreprereqerrors --forceunsupportedinstall
### Set the keyboard layout
keyboard "Swiss German"
### Set the network to DHCP on the first network adapter
network --bootproto=dhcp --device=vmnic0
### Reboot ESXi Host
reboot --noeject
##### Stage 02 - Post installation:
### Open busybox and launch commands
%firstboot --interpreter=busybox
### Enable maintenance mode
esxcli system maintenanceMode set -e true
### Set Search Domain
esxcli network ip dns search add --domain=mydomain.local
## Add second vSwitch & portgroup
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add -v vSwitch1 -p "VSAN Network"
----- some more network settings----
### Disable IPv6 support (reboot is required)
esxcli network ip set --ipv6-enabled=false
## register with vcenter
esxcli network firewall ruleset set -e true -r httpClient
wget --no-check-certificate -O vcenter80.py https://webserverip/VMware/vcenter80.py
/bin/python vcenter80.py
### Reboot
esxcli system shutdown reboot -d 15 -r "rebooting after ESXi 8.0 host configuration"
Don't know if anything changed from the syntax point of view, but where can I start looking? (I looked at esxi_install.log but there is a lot in there) 😊
If it's any use to anyone else, here is the function that takes a stock VMware ESXi ISO and builds an unattended ISO. It configures the basics to get the host to a manageable state: hostname, DNS, management IP, etc. It also does some useful stuff for nested labs: recreating VMK0 and supporting deprecated CPUs on 6.7/7.x/8.x.
https://github.com/TheDotSource/tds-vSphere/blob/main/Public/New-EsxiAutoIso.ps1
It depends on the New-ISOFile function available here:
https://github.com/TheDotSource/New-ISOFile
All done natively in PowerShell which might be useful to some.
It would be great if you can add a link to where you sourced the original parameters 🙂
Oh dear, how remiss of me 🙁 Done.
Just in case, I've opened a post in the VMware community about my kickstart issue. Hopefully somebody can help me with this ignored post-deployment part 😉
https://communities.vmware.com/t5/vSphere-Upgrade-Install/vSphere-8-post-deployment-Kickstart-issue/m-p/2935168#M34218
Hi there,
Just wanted to give the solution: as mentioned by Jangari on VMware communities, the firstboot section WILL BE IGNORED if you have Secure Boot enabled. When I created the new VMs, I activated it without knowing I was creating a new problem. I learned something today! 😉
Official documentation here: https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-esxi-installation/GUID-51BD0186-50BF-4D0D-8410-79F165918B16.html#firstboot-21
https://williamlam.com/2018/06/using-esxi-kickstart-firstboot-with-secure-boot.html
Hi William, thanks! My test system, an HPE DL380 G9 (not on the HCL), hangs at the installation step storage-path-claim. Anyone have an idea to work around this? Thanks, Werner
One step further... I did the test installation on a DL380 G8 and the same "hang" occurred during the installation (activating: storage-path-claim). I disabled the SAS controller and connected a USB stick; the installation routine then continued. Will now do the same on the HPE DL380 G9 server with the USB stick.
@Werner & William, I've tried to disable the controller with no luck in continuing the install process. Any further recommendations? About to find my old ISO of 7.
Thanks,
GG
Hi William, I am setting up a home lab with the NUC 12 Extreme. I got ESXi loaded on it right now, but have been unable to get the USB Network Native Driver for ESXi loaded to fix the 100Mbps limitation. Any ideas or suggestions?
A couple of further notes on how I got things to load properly. I followed your tips on how to bypass the CPU mismatch error. I did, however, encounter an issue with configuration change persistence after rebooting, affecting the permanent CPU mismatch fix as well as any other configuration settings made in ESXi.
I did find a fix for this. The bootbank loads from a temp directory, and all of the changes that were being applied get destroyed on reboot. I found that if I followed the fix in this KB article https://kb.vmware.com/s/article/2149444 and then applied the permanent CPU mismatch fix, it resolved my issue.
re: persistency issue - You are most likely doing following https://www.reddit.com/r/vmware/comments/y3nj8t/comment/isaleqn/?utm_source=share&utm_medium=web2x&context=3
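For anyone else hitting this, a quick way to tell whether the host is in that non-persistent state (my addition; the exact target path varies):

# On a healthy install, /bootbank resolves to a /vmfs/volumes/... path;
# if it points at a ramdisk path such as /tmp/..., changes will not persist
readlink /bootbank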
Successfully upgraded to ESXi 8 (allowLegacyCPU=true) on the following NUCs:
5th Gen
6th Gen
8th Gen
All using Haswell EVC.
Thank you Anthony for confirming those NUCs! I'll update the article
I just successfully upgraded my Intel NUC8i5BEH vSAN cluster from version 7.0U3g to version 8.0.0 build 20513097 without using allowLegacyCPU.
Thank you! I have that model NUC so this is reassuring.
FYI... I just installed ESXi 8.0 build 21203435 onto a NUC 7i7 and installed the USB Network Native Driver for ESXi Fling; so far no issues. But I haven't done anything with it yet.
Hi William, I installed ESXi 8 without problems on an Intel NUC10i7FNH, but at every restart/shutdown via vCenter 8 or via the console I get a purple screen with the following error message:
VMware ESXi 8.0.0 [Releasebuild-20513097 x86_64]
#PF Exception 14 in world 1052102: devlayer par IP 0x420026bfff6c addr 0x0
Module(s) involved in panic: [vmkbsd] [vmksdhci 1.0.2-2vmw.800.1.0.20513097 (External)]
Backtrace (cleaned up from the screen capture, abridged): bus_dmamap_destroy@com.vmware.vmkbsd -> sdhci_cleanup_slot@(vmksdhci) -> sdhci_pci_detach@(vmksdhci) -> vmkbsd_device_detach@com.vmware.vmkbsd -> Driver_DetachDevice -> DeviceDetach -> DeviceRemove -> DeviceLayerShutdown -> DeviceTeardownHelper -> HelperQueueFunc -> CpuSched_StartWorld
Ignore first message, power adapter was root cause.
Successfully upgraded to ESXi 8 on the following NUCs:
7th Gen - Intel NUC7i3BNH
8th Gen - Intel NUC8i5BEH
10th Gen - Intel NUC10i7FNH
11th Gen - Asrock NUC 1100 SERIES (i7-1165G7)
ESXi 8 installs fine on a Dell Precision 3571 running an i9-12900H. And as per the OP, shutting down/restarting will give me a PSOD.
The i9-12900 is an Intel 12th Gen consumer CPU; this introduces the new big.LITTLE architecture, which ESXi doesn't understand, and the PSOD is expected. However, there is an ESXi kernel option to bypass this, which I've blogged about several times on a number of the 12th Gen platforms requiring the same workaround. See https://williamlam.com/2022/11/esxi-on-intel-nuc-12-enthusiast-serpent-canyon.html as an example
Hi Gabriel, I'm dealing with the same problem - every time I reboot or shut down I get the error you mention above (Module(s) involved in panic: [vmkbsd] [vmksdhci 1.0.2-2vmw.800.1.0.20513097 (External)]). You write that the problem was the power adapter; I don't really understand that. How did you finally solve it?
Disabling the SD card reader in the NUC10 BIOS seems to solve the issue...
Many Thanks Abudef.
Had the same issue with the PSOD message:
Exception 14 in world 1052102: devlayer
After upgrading a NUC 11th Gen from 7 to vSphere 8, the PSOD showed up every time I shut down/rebooted. Disabling the SD Card Controller fixed this!
Same issue here (it does not happen on v7); after disabling the SD Card 3.0 Controller in UEFI, the pink screen on reboot does not happen.
Just installed ESXi 8.0 on two new NUC10i7FNH and had the same issue (PSOD at shutdown/reboot). Thank you so much for this hint, sengork.
Thank you so much for the hint!! I was having the same issue when installing ESXi 8 on my NUC 10s
This fixed my pink screen, too - thank you very much!
Thanks a bunch, sengork! This really helped.
OMG THANK YOU! I had this problem with my NUC10 and just couldn't find the problem until I found your comment (hours later..)
When I installed vSphere 8 on an Ivy Bridge system, the message said it was not supported but gave me the option to continue with the installation anyway. I did, and everything is working, but this was in a nested environment. Would the install totally stop if I had installed on bare metal?
There are CPUs that are NOT supported, and there are ones that may work now but won't in the future; you're in the latter scenario. It'll install the same on physical.
I am using a NUC 11, and the community Flings for network and NVMe are required.
I am facing some issues when building a custom image with ESXi-8.0.0-20513097-standard.
I'm using VMware PowerCLI for this and have no issues with ESXi 7.x.
I downloaded the depot from the official VMware site: VMware-ESXi-8.0-20513097-depot.zip.
I got many "claimed by multiple non-overlay VIBs" errors when executing this command:
New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -name "ESXi-8-NUC11" -Vendor "jc"
Here's an example of the error
New-EsxImageProfile : File path of '/lib64/python3.8/site-packages/hostprofiles/pyEngine/statusManager.pyc' is claimed by multiple
non-overlay VIBs: {'VMware_bootbank_esxio-base_8.0.0-1.0.20513097', 'VMware_bootbank_esx-base_8.0.0-1.0.20513097'}
At line:1 char:1
+ New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -nam ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (VMware.ImageBuilder.Types.ImageProfile:ImageProfile) [New-EsxImageProfile], Exception
+ FullyQualifiedErrorId : EsxImageProfileValidationError,VMware.ImageBuilder.Commands.NewImageProfile
New-EsxImageProfile : File path of '/lib64/python3.8/idlelib/debugger.pyc' is claimed by multiple non-overlay VIBs:
{'VMware_bootbank_esxio-base_8.0.0-1.0.20513097', 'VMware_bootbank_esx-base_8.0.0-1.0.20513097'}
At line:1 char:1
+ New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -nam ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (VMware.ImageBuilder.Types.ImageProfile:ImageProfile) [New-EsxImageProfile], Exception
+ FullyQualifiedErrorId : EsxImageProfileValidationError,VMware.ImageBuilder.Commands.NewImageProfile
New-EsxImageProfile : File path of '/lib64/python3.8/idlelib/filelist.pyc' is claimed by multiple non-overlay VIBs:
{'VMware_bootbank_esxio-base_8.0.0-1.0.20513097', 'VMware_bootbank_esx-base_8.0.0-1.0.20513097'}
At line:1 char:1
+ New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -nam ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (VMware.ImageBuilder.Types.ImageProfile:ImageProfile) [New-EsxImageProfile], Exception
+ FullyQualifiedErrorId : EsxImageProfileValidationError,VMware.ImageBuilder.Commands.NewImageProfile
Any comments will be greatly appreciated
There have been changes in vSphere 8 where a new version of PowerCLI (not yet released) is required for constructing a custom ESXi 8.0 image, OR you will need a vCenter Server 8.0 to build one.
Historically, it was simply assumed that prior versions of both vCenter and PowerCLI would be "forwards" compatible for image building, and that is actually an incorrect assumption (one I too had made). While it may have worked in prior versions, there have been major changes in vSphere 8 where this will no longer work, and a new version will be required as mentioned above.
Thanks William for the prompt response. Really appreciate it.
Initially I had an issue setting up the VCSA on Workstation. I found your article that resolved the issue.
My ESXi 8 on NUC 11 and vCenter on Workstation are up and running.
I'm exploring ways to set up MFA for vCenter and ESXi with AAD instead of ADFS.
Hey William, do we know if Intel X520-DA2 is supported on ESXi 8?
I have a couple of these NICs inside x2 NUC9VXQNX; 8.0 has been installed but only lists the onboard NICs.
Check the VMware HCL for all device hardware compatibility
Hello Will,
I am running an i5-13600K with ESXi 8.0a, and for now I have disabled the E-cores as I was getting random PSODs after starting to use it intensively with multiple VMs.
Seems very stable now.
I am only having an issue with which maybe somebody could help out.
I am running with ASUS PRIME Z790-A WIFI Motherboard and using the Integrated GPU + Intel i350T2V2 NIC.
I am having some issues with the NIC and iGPU.
As for the iGPU, it is only getting recognized as "Intel Corporation VGA compatible Card" and I cannot use it for passthrough.
Regarding the Intel i350T2V2, all works fine except SR-IOV.
I have enabled SR-IOV support and VT-d on the motherboard, and the card is on the HCL for this ESXi 8.0 version.
When I enable SR-IOV it always just says Enabled / Reboot needed.
I can reboot as many times as I want; it won't make a difference.
VF is always 0, maximum 8.
Thanks in advance,
awesome, thank you so much! allowLegacyCPU made my day! 😀
Hi Will,
First, first thank you for this article and all the other ones over the years.
I wanted to ask if you (or anyone else) knows of a way to get Skyline Health to elaborate on the "Devices deprecated and unsupported" check. It seems to check for ESXi v7.0 but is also forward-looking from 6.7 to 7.0. Do we know of any way to have it evaluate against 8.0? Also, is there any way to have it provide a specific list of what it is complaining about?
For reference, and because it might help others, I have an affordable 3-host lab running very well on 3x HP Z420s. Two hosts are running ESXi 6.7 and the third is running 7.0U3. They are managed by a virtual VCSA 7.0.3.
Skyline Health calls out the Z420 running 7.0U3 as having deprecated and unsupported devices, but I have no actual issues running it.
Planning my upgrade to 8.0 soon and trying to determine if I need to buy new hardware.
Thank you Will and all,
David
Hello William, I know I'm a bit late to the game, but wondering if you can share some EVC details for your NUCs on ESXi 8.
I recently upgraded my lab from 7.0 to 8.0a on my two i7-6700T hosts. Zero issues. A few days ago, I went to add a new i7-11700 host to the cluster. Naturally I went to flip on EVC... only to have the VCSA tell me that my Skylakes only support Broadwell... but my Rocket Lake only supports Haswell??
What do you see on your 6th and 11th gen processors for MaxEVC?
powershell> Get-VMHost -Name virtual* | Select-Object Name,MaxEVCMode,ProcessorType
Name     MaxEVCMode      ProcessorType
----     ----------      -------------
virtual3 intel-haswell   11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
virtual2 intel-broadwell Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz
virtual1 intel-broadwell Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz
No issues upgrading a NUC8i7HNK. Just had to disable the TPM in the BIOS to remove the warning post-upgrade.
Hey, just wanted to report a successful upgrade:
Qotom Q355G4 minipc
i5-5250U
8GB RAM
upgraded to 8.0.0 build 21203431 successfully (after setting allowLegacyCPU=true and using --no-hardware-warning while doing the upgrade via the CLI)
My Lenovo D20 with an Intel(R) Xeon(R) CPU E5506 is version-locked at ESXi 6.5. I've tried the allowLegacyCPU workaround and it allows me to boot to the installer, but after selecting upgrade I get the error that my CPU is not supported and I cannot continue past it. In any case, the drivers for my Broadcom NICs were dropped after ESXi 6.7, so I think it's just time I threw in the towel and got some new HW.
With that in mind I'm seriously considering Intel NUCs, but I'm not familiar with them, and I find the options a little overwhelming and would appreciate any advice you might have.
I'd like to play around with vSAN, so I'll need 3 nodes. I was thinking of getting 3 Intel NUCs with 32GB RAM and 1TB M.2 NVMe drives. If it all works out, I believe I should be able to use the vSAN to replace my aging Iomega iSCSI NAS.
My main question is: what could you recommend that won't break the bank but will hopefully still be supported as long as possible? I'd hate to be back in this situation again when vSphere 9 gets released in two years.
See the Intel NUC section at https://williamlam.com/home-lab
12th Gen NUCs are the latest, and you can always go back a generation if cost is a factor. As long as you get a system capable of 2 x SSDs, you'll be able to use it for vSAN. Starting with the 11th Gen (Tall), you can squeeze in 3 disks, which will future-proof you because ESXi should be installed on an SSD including ESX-OSData; while USB boot is supported, it will eventually go away in a future release, and this is an easy way to be prepared. Based on the generation of NUC, you can choose from CPU options based on your needs.
Thanks for the quick reply! Since this is a home lab I'd like to keep costs down as much as possible, so as you suggested I'm going to focus on the Gen 11 NUCs. When it comes to form factor, does the ultra-compact have space for an SSD drive? I realize that booting from USB is not recommended for ESXi 8, but since this is a home lab I'm willing to risk it. I'm also thinking I can boot from SAN in the event that booting from USB becomes impossible in a future release. This would allow me to install the OS on USB, use the M.2 cards for vSAN, and potentially install Windows on the SSD drives in the event that I want to dual boot and use one of the NUCs as a dual-purpose media PC. Any thoughts?
Please see the detailed blog post on the 11th Gen; literally all your questions and what you can do are covered there, as well as for every other generation of NUC.
Will do, thanks again.
Worked for me on an HP MicroServer Gen8 with an older Intel G2020T CPU! Thanks
Hi there,
Just for information, I migrated my nested and my "physical" home labs.
I have 3 Dell Rx30 servers, so officially the CPUs are not supported.
Even though it warned me, I didn't have to put in the option to allow unsupported CPUs, and it has updated just fine since then (latest version 8U1).
It's cool it works "out of the box"!
Hi Will,
Is this VIB available:-
nested-esxi-customization(8.0.0-1.0.0)
It's required in vCenter 8 as part of the image update.
Thanks,
John
No, it is not relevant to deploying vCenter Server, nor do you need it.
This is the error I receive when checking "image compliance" on the nested ESXi 8 host in vCenter 8:
"The following VIBs on the host are missing from the image and will be removed from the host during remediation: nested-esxi-customization(8.0.0-1.0.0).
To prevent them from being removed, include appropriate components that are equivalent to these VIBs. If this is seen while switching from using Baselines to using Images, please refer to KB 90188."
Any ideas what is needed?
Thanks,
John
What version are you starting from and going to? More details of your setup would be useful for troubleshooting
Hi William,
Firstly, thanks for all the amazing posts.
I ran into issues upgrading from 8.0c to 8.0U1 with the following:
2023-05-20T23:55:30Z Er(11) esxupdate[527037]: RuntimeError: failed to execute mtools command:
2023-05-20T23:55:30Z Er(11) esxupdate[527037]: COMMAND: export MTOOLS_SKIP_CHECK=1 && mcopy -i /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0@@32768 -Do /tmp/tmp15bzt366 ::/syslinux.cfg
After trying to troubleshoot without success, I decided to save my config and perform a reinstall. Using PowerCLI I pulled down the 8.0c repository from the depot into an ISO. The ISO loaded fine but about 75% of the way through, the install hung trying to initialize vmkdevmgr. F11 didn't reveal much. So I tried the same procedure with each of the earlier versions of 8.0 (I figured I would upgrade them to 8.0c). Unfortunately they too all hung initializing vmkdevmgr.
So I experimented with the 8.0U1 ISO from the VMware website. This loaded and installed fine. I then tried the same procedure with the ISO created from PowerCLI, and the install hung trying to initialize vmkdevmgr. Extracting the folders/files from both ISOs, I did a compare and noted most everything was exactly the same except for a few XML files with some obvious, but seemingly unimportant, differences.
Is there some additional procedure to ensure the ISOs created through PowerCLI from the depot work the same as those downloaded from the website? Any additional information that may help me understand why I couldn't upgrade?
Many thanks,
Greg
Works fine and out of the box with the Minisforum Venus UM560 XT; no customization needed.
One mildly infuriating thing I've run into with vSphere 8 is that CommunitySupported VIBs and UEFI Secure Boot are incompatible with one another. Secure Boot is a strong recommendation now, if possible, from a security standpoint. I filed a feature ticket for it, so maybe someday they'll think about it (https://vsphere.ideas.aha.io/ideas/VSP-I-1459).
This is not unique to vSphere 8, this has always been the case. SB by definition can’t attest random VIBs which aren’t properly signed …
You can sign your custom VIBs using vibauthor, but you can't get a trusted signing cert from VMware. There are times when you need to step outside the official VMware/partner ecosystem, and with Secure Boot your only option is through custom VIBs. Within the community acceptance level there's no way to specify that one VIB is allowed where another isn't. Locking down admin accounts and using host profiles are standard practice regardless, but that seems to be the best you can do currently.
If there are legitimate use cases where something can't be done using official interfaces like UI/API (even through Lockdown Mode), it would be good to understand what those are so it can be fed back to product teams. You should be able to do everything via API/Automation such that a custom VIB isn't needed and if you're doing things in an unsupported manner, then that doesn't count in my books given the point of SB is to ensure changes are well understood including potential things being introduced by admins such as a custom VIB ...
In my primary case it's just to allow syslog outbound on a non-standard port.
Will v8 work on the Supermicro E302-12D?
Yes.
When deploying the vSAN OVA, it encountered an error and couldn't proceed. Error message below.
Provider method implementation threw unexpected exception: com.vmware.vapi.std.errors.OperationNotFound: OperationNotFound (com.vmware.vapi.std.errors.operation_not_found)
Any idea on how to fix it?
Hi,
What are the hardware details for this lab? How much did it cost you?
I have a NUC 9 Extreme i5-9300H which successfully runs ESXi 8.0U2 and recognizes all hardware.
IDKW, but this KB article can be challenging to locate. Here’s the link: https://kb.vmware.com/s/article/78914
The VCSA Installation wizard incorrectly detects the storage deployment size of the source vCenter during migration or upgrade (78914)
Quotes from the KB:
“…Workaround
Note: while there are articles available on the Internet, describing how to shrink the disks of the VCSA after you finished the upgrade, please be aware that none of them are officially supported.
Instead, to work around the issue, you can follow this two-stages approach during upgrade:…”
“Related Information
Note: The same approach can be used to downsize a VCSA during the upgrade.
For example, consider a scenario where the VCSA was originally deployed with XLarge storage size, and the new VCSA should only have a “Large” or “Normal” storage size. However, keep in mind that the new VCSA needs to have enough storage space to accommodate the data imported from the source appliance.”
With the Broadcom announcements about vSphere and other products going to subscription licensing, does that mean homelabs will no longer be able to license vSphere and other products without paying very high subscription prices, which makes no sense for home labs?
A word of caution on patching and updating vSphere 8: there is a much tighter coupling between vCenter and ESXi interoperability since version 8.0. I haven't seen any information published about this, but in previous versions (7.0 for example) you could run ESXi hosts on version 7.0U3 managed by a vCenter Server on version 7.0U2. This was not only supported, but incredibly useful when working with third parties who do not update their compatibility in line with vSphere releases. You could leave your vCenter Server on a version supported by your third-party vendors and continue to patch your hosts for security vulnerabilities and bug fixes.
I would encourage you to check out the interoperability matrix for version 8.x vCenter and ESXi. You must update vCenter to the new update version before doing your hosts. Additionally, you won't be warned or prevented from doing it the other way round. You can even add updated hosts to the inventory of a non-updated vCenter; it won't stop you or warn you from doing so. We actually had vMotion operations failing and bricking our VMs in this state (hosts on 8.0U2 and vCenter on 8.0U1). Once vCenter had been patched, the issue went away. VMware support were not aware of the issue.
William ---
I see the Hardware Platform Support statement posted above (https://williamlam.com/2022/09/homelab-considerations-for-vsphere-8.html), so I am assuming that it will include 8.0U2 and all possible future patches in the 8.0.x line. The hardware I will be using is noted below.
Hardware for LAB deployment --- Intel NUC 10 Performance (Frost Canyon)