WilliamLam.com


Homelab considerations for vSphere 8

09.14.2022 by William Lam // 102 Comments

There has been a lot of great technical content from both VMware and the broader community since the announcement of vSphere 8 a few weeks ago. I know many of you are excited to get your hands on both vSphere 8 and vSAN 8, and while we wait for GA, I wanted to share some of my own personal experiences as well as some considerations for those interested in running vSphere 8 in their homelab.

As with any vSphere release, you should always carefully review the release notes when they are made available and verify that all of your hardware and underlying components are officially listed on the VMware HCL, which will be updated when vSphere 8 and vSAN 8 reach GA. This is the only way to ensure that you will have the best possible experience and a supported configuration from VMware.

Disclaimer: The following considerations are based on early observations using pre-GA builds of vSphere 8 and do not reflect any official guidance or support from VMware.

CPU Support

The following CPU generations for both Intel and AMD will not be supported with vSphere 8:

  • Intel
    • SandyBridge-DT, SandyBridge-EP/EN
    • IvyBridge-DT, IvyBridge-EP/EN, IvyBridge-EX
    • Haswell-DT, Haswell-EP, Haswell-EX
    • Broadwell-DT/H
    • Avoton
  • AMD
    • Bulldozer - Interlagos, Valencia, Zurich
    • PileDriver - Abu Dhabi, Seoul, Delhi
    • Steamroller - Berlin
    • Kyoto

If you boot the ESXi 8.0 installer and it detects that you have an unsupported CPU, you will see the following error message.


The default behavior of the ESXi 8.0 installer is to prevent users from installing ESXi on a system that has a CPU that is not officially supported.

ProTip: With that said, there is a workaround for those who wish to forgo official support from VMware, or for homelab and testing purposes: you can add the following ESXi kernel option (SHIFT+O):

allowLegacyCPU=true

This turns the error message into a warning message and enables the option to install ESXi, with the understanding that the CPU is not officially supported and that you accept any risks in doing so.

ProTip: If you are using Intel 12th Generation or newer consumer CPUs, an additional workaround is required due to the fact that ESXi does not support the new big.LITTLE CPU architecture in these CPUs, which I initially discovered when working with the Intel NUC 12 Extreme (Dragon Canyon).

The ESXi kernel option cpuUniformityHardCheckPanic=FALSE still needs to be appended to the existing kernel line by pressing SHIFT+O during the initial boot. Alternatively, you can add this entry to the boot.cfg when creating your ESXi bootable installer. Again, you need to append the entry; do not delete or modify the existing kernel options, or ESXi will boot into ramdisk only. If this entry is not added, booting ESXi on processors that contain both P-Cores and E-Cores will result in a purple screen of death (PSOD) with the message "Fatal CPU mismatch on feature".

Note: Once ESXi has been successfully installed, you can permanently set the kernel option by running the following ESXCLI command before rebooting: localcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE. Alternatively, you can reboot the host, remove the USB device, and manually edit EFI\boot\boot.cfg to append the kernel option, which ensures subsequent reboots contain the required setting.
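For those scripting their installer media, the boot.cfg edit described above can be automated. Here is a minimal Python sketch (my own illustration, not a VMware tool) that appends a kernel option such as cpuUniformityHardCheckPanic=FALSE to the kernelopt= line while preserving the existing options:

```python
def append_kernel_option(bootcfg_text: str, option: str) -> str:
    """Append a kernel option to the kernelopt= line of an ESXi boot.cfg.

    Existing options are preserved: deleting or modifying them would
    cause ESXi to boot into ramdisk only, as noted above.
    """
    out = []
    for line in bootcfg_text.splitlines():
        if line.startswith("kernelopt=") and option not in line:
            line = line.rstrip() + " " + option
        out.append(line)
    return "\n".join(out) + "\n"


# Example boot.cfg fragment (actual contents differ per installer build)
sample = (
    "bootstate=0\n"
    "title=Loading ESXi installer\n"
    "kernel=b.b00\n"
    "kernelopt=cdromBoot runweasel\n"
)
print(append_kernel_option(sample, "cpuUniformityHardCheckPanic=FALSE"))
```

The same helper works for allowLegacyCPU=true; running it twice is harmless since the option is only appended if not already present.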

I/O Device Support

Similarly, for I/O devices such as networking and storage that are not supported with vSphere 8, the ESXi 8.0 installer will also list the type of device and its respective Vendor & Device ID (see screenshot above for an example).

To view the complete list of unsupported I/O devices for vSphere 8, please refer to VMware KB 88172 for more information. I know many in the VMware homelab community make use of the Mellanox ConnectX-3 for networking, so I wanted to call out that it is no longer supported; folks should look at the ConnectX-4 or ConnectX-5 as an alternative.

ProTip: A very easy and non-impactful way to check whether your existing CPU and I/O devices will run ESXi 8.0 is to simply boot an ESXi 8.0 installer from USB and check whether it detects all devices. You do NOT have to perform an installation to check for compatibility, and you can also drop into the ESXi Shell (Alt+F1) using root and no password to perform additional validation. If you are unsure whether ESXi 8.0 will run on your platform, this is the easiest way to validate without touching your existing installation and workloads.
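As a rough illustration of that validation step, the sketch below flags PCI devices whose IDs appear on a deny list. Both the `vmkchdev -l`-style listing format and the deny list entries here are assumptions for the example; you would populate the list yourself from VMware KB 88172:

```python
def find_unsupported_devices(pci_listing: str, deny_list: set) -> list:
    """Return lines from a `vmkchdev -l`-style PCI listing whose
    vendor:device ID appears in deny_list (IDs as lowercase hex,
    e.g. "15b3:1003")."""
    flagged = []
    for line in pci_listing.splitlines():
        for token in line.split():
            # vendor:device tokens look like "15b3:1003" (4 + 1 + 4 chars)
            if len(token) == 9 and token[4] == ":":
                if token.lower() in deny_list:
                    flagged.append(line.strip())
                    break
    return flagged


# Hypothetical listing; "15b3:1003" stands in for a ConnectX-3-style NIC ID
listing = (
    "0000:02:00.0 15b3:1003 15b3:0050 vmkernel vmnic2\n"
    "0000:00:1f.6 8086:15d7 8086:2070 vmkernel vmnic0\n"
)
print(find_unsupported_devices(listing, {"15b3:1003"}))
```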

For those that require the Community Networking Driver for ESXi Fling to detect onboard networking, like some of the recent Intel NUC platforms, folks should be happy to learn that this Fling has been officially productized as part of vSphere 8, and a custom ESXi ISO image will no longer be needed. For those that require the USB Network Native Driver for ESXi Fling, a new version of the Fling compatible with vSphere 8 will be required; folks should wait for that to be available before installing and/or upgrading to vSphere 8.

USB Install/Upgrade Support

Last year, VMware published revised guidance in VMware KB 85685 regarding installation media for ESXi, which applies to ESXi 8.0, specifically when using an SD or USB device. While ESXi 8.0 will continue to support installation and upgrades using an SD/USB device, it is highly recommended that customers consider more reliable installation media such as an SSD, especially for the ESXi OSData partition. After ESXi 8.0, installations and upgrades using an SD/USB device will no longer be supported, and it is best to have a solution in place now rather than wait for that to happen, if you ask me.

If you do decide to install and/or upgrade ESXi 8.0 using an SD/USB device, the following warning message will be presented before allowing you to proceed with the install or upgrade.

Hardware Platform Support

While this is not an exhaustive list of hardware platforms that can successfully run ESXi 8.0, I did want to share the list of systems that I have personally tested and hope others may also contribute to this list over time to help others within the community.


The following hardware platforms can successfully run ESXi 8.0:

  • Intel NUC 5 (Rock Canyon) - Courtesy of Anthony
  • Intel NUC 6 (Swift Canyon) - Courtesy of Anthony
  • Intel NUC 8 (Dawson Canyon) - Courtesy of Anthony
  • Intel NUC 9 Extreme (Ghost Canyon)
  • Intel NUC 9 Pro (Quartz Canyon)
  • Intel NUC 10 Performance (Frost Canyon)
  • Intel NUC 11 Performance (Panther Canyon)
  • Intel NUC 11 Pro (Tiger Canyon)
  • Intel NUC 11 Extreme (Beast Canyon)
  • Intel NUC 12 (Dragon Canyon)
  • Intel NUC 12 Pro (Wall Street Canyon)
  • Supermicro E200-8D
  • Supermicro E302-12D

VCSA Resource Requirements

With all the new capabilities in vSphere 8, it should come as no surprise that additional resources are required for the vCenter Server Appliance (VCSA). Compared to vSphere 7, the only change is the amount of memory for each of the VCSA deployment sizes, which has increased by 2GB. For example, a "Tiny" configuration required 12GB of memory in vSphere 7 and now requires 14GB in vSphere 8.


ProTip: It is possible to change the memory configuration after the initial deployment, and in my limited use I have been able to run a Tiny configuration with just 10GB of memory without noticing any impact or issues. Depending on your usage and feature consumption you may need more memory, but so far it has been working fine for a small 3-node vSAN cluster.

Nested ESXi Resource Requirements

Using Nested ESXi is still by far the easiest and most efficient way to try out all the cool new features that vSphere 8 has to offer. If you plan to kick the tires on the new vSAN 8 Express Storage Architecture (ESA), at least from a workflow standpoint, make sure you can spare at least 16GB of memory per ESXi VM, which is the minimum required to enable this feature.


Note: If you intend to only use vSAN 8 Original Storage Architecture (OSA), then you can ignore the 16GB minimum, as that only applies to enabling vSAN ESA. For basic vSAN OSA enablement, 8GB per ESXi VM is sufficient; if you plan to run workloads, you may want to allocate more memory, but behavior should be the same as in vSphere 7.x.
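To make the sizing above concrete, here is a trivial helper (my own sketch; the 16GB/8GB figures are the minimums discussed above) for budgeting memory in a nested vSAN lab:

```python
def min_nested_esxi_memory_gb(vsan_architecture: str) -> int:
    """Minimum memory per nested ESXi VM to enable vSAN 8:
    16GB for the Express Storage Architecture (ESA),
    8GB for basic Original Storage Architecture (OSA) enablement."""
    minimums = {"ESA": 16, "OSA": 8}
    return minimums[vsan_architecture.upper()]


# A 3-node nested vSAN ESA cluster needs at least 3 x 16GB = 48GB to spare
print(3 * min_nested_esxi_memory_gb("esa"))  # → 48
```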

A bonus capability that I think is worth mentioning is that configuring MAC Learning on a Distributed Virtual Portgroup is now possible through the vSphere UI as part of vSphere 8. The MAC Learning feature was introduced back in vSphere 6.7 but was only available via the vSphere API, and I am glad to finally see it available in the vSphere UI for those looking to run Nested Virtualization!

More from my site

  • Quick Tip - How to deploy vCenter Server Appliance (VCSA) to legacy CPU without VMX Unrestricted Guest feature?
  • Quick Tip - Automating allowed and not allowed Datastores for use with vSphere Cluster Services (vCLS)
  • How to bootstrap vSAN Express Storage Architecture (ESA) on unsupported hardware?
  • Nested ESXi installation using HTTPS boot over VirtualEFI in vSphere 8
  • ACPI motherboard layout requires EFI - Considerations for switching VM firmware in vSphere 8 

Categories // ESXi, Home Lab, vSphere 8.0 Tags // vSphere 8.0

Comments

  1. sargonkhizeran says

    09/14/2022 at 9:05 am

    I've not had a chance to install directly on my E300-9D. Any idea if the Intel Xeon D-2146NT CPU is on the supported list? (fingers crossed)

    Reply
    • William Lam says

      09/14/2022 at 1:34 pm

      I'd expect it to work, especially since the much older E200-8D works fine

      Reply
      • Theo Potts (@tmotts) says

        10/07/2022 at 11:30 am

        The Mellanox 4 is a good alternative to the unsupported and deprecated Mellanox 3, but the HP 530SFP+ is way cheaper than the Mellanox 4 on eBay!

        Reply
        • Obsidian Group says

          01/07/2023 at 10:33 am

          You mentioned the HP 530sfp+ but this card is not on the ESXi 8 HCL. So how would this be a good replacement for the ConnectX-3? If it's not supported, we're in the same boat. Can you verify that this card does indeed work with 8? Otherwise, this post could lead people to purchase something and be in the same boat as the X-3.

          Reply
          • Tmotts says

            01/27/2023 at 4:11 am

            It's listed in the I/O devices on the HCL. It's a fraction of the cost of a ConnectX-4, even including replacing the existing twinax cable. The Intel 520 10Gb is still supported also.

    • Aamir Aqueel says

      09/16/2022 at 2:24 pm

      I have an Intel Xeon processor HP laptop with 32GB RAM and a 512GB SSD, and I can install ESXi 7.0 but it doesn't support VCSA. Very sad... my money has been wasted.

      Reply
  2. Joakim says

    09/14/2022 at 1:32 pm

    Hey. Do you know if P/E cores are supported in 8.0?

    Reply
    • William Lam says

      09/14/2022 at 1:33 pm

      ESXi is not aware of the big.LITTLE CPU architecture that contains P/E cores. You will need to apply the same workaround as vSphere 7.x with ESXi kernel boot option: cpuUniformityHardCheckPanic=FALSE to allow ESXi to boot and install

      Reply
      • xhomer says

        09/17/2022 at 3:28 am

        But if you apply this, are there any issues or crashes?

        Reply
        • William Lam says

          09/17/2022 at 8:30 am

          As I said ... YMMV

          Reply
        • William Lam says

          09/19/2022 at 12:59 pm

          I've been using my setup (Intel NUC 12 Pro) for several weeks and no issues, its also using most of the RAM (32GB) and fair amount of CPU

          Reply
  3. Ettore says

    09/14/2022 at 3:14 pm

    Hello,
    Where can I download vSphere 8, do you have an easy link?
    Or is it already available on my VMware ?

    Reply
    • Raja says

      09/15/2022 at 10:05 pm

      Did you get the download link?

      Reply
      • William Lam says

        09/16/2022 at 7:36 am

        vSphere 8 has NOT GA'ed (as mentioned several times in blog post :))

        Reply
        • Axel says

          10/15/2022 at 4:17 pm

          It's already available from vmware customer connect for download.

          Reply
  4. Bill Nates says

    09/14/2022 at 4:59 pm

    Great article William.
    Thanks.

    Reply
  5. Carlos says

    09/14/2022 at 6:35 pm

    Intel NUC 8 won’t be supported?

    Reply
    • William Lam says

      09/16/2022 at 7:37 am

      I don't have a NUC 8, so can't say if it'll work or not. If I had to guess ... it probably will work

      Reply
      • Filip Niezbrzycki says

        10/12/2022 at 9:26 am

        I got error:
        [HardwareError]
        Hardware precheck of profile ESXi-8.0.0-20513097-standard failed with warnings:
        when I'm trying to update NIC8i7BEH ;(

        Reply
        • Filip Niezbrzycki says

          10/12/2022 at 9:28 am

          The warning is:
          TPM_VERSION WARNING: TPM 1.2 device detected. Support for TPM version 1.2 is discontinued. Installation may proceed, but may cause the system to behave unexpectedly.

          NUC8 has TPM 2.0 as far as I know.

          Reply
          • William Lam says

            10/12/2022 at 10:25 am

            Earlier NUC implemented TPM using Intel PTT. Only recent NUC 11/12 started to show TPM 2.0 and even then, it may not be fully compliant to work with ESXi. I’ve only had success using NUC 9 (Xeon) which didn’t have any issues. You may need to disable TPM to upgrade …

  6. ESXiCeleron says

    09/15/2022 at 1:56 am

    So I guess I can't install it on my Celeron with 512 MB RAM. Shame!

    Reply
  7. Larry Day says

    09/15/2022 at 1:34 pm

    Is ESXi 8 supported with E5-2690 v4 processors? Regards..

    Reply
    • Sebastian Busch says

      09/18/2022 at 9:29 pm

      The E5-2690 v4 is Broadwell architecture and is listed above as out of date. Currently you get a message in vSphere 7 about discontinued support in the next major release (vSphere 8). I'm very sad about this because the Dell PowerEdge R730 is still expensive and now you cannot officially upgrade to vSphere 8.

      Reply
      • Rana says

        09/19/2022 at 11:21 pm

        Not true. Broadwell-EP is supported in vSphere 8.

        Reply
  8. virtualmystery says

    09/17/2022 at 9:52 am

    Hi William,
    This is a great post, and I am really looking forward to the vSphere 8.0 GA. Could you please also suggest homelab setups that we can use for testing out smart nics and DPU configurations?

    Reply
    • William Lam says

      09/19/2022 at 1:00 pm

      You'll need to wait for the VMware HCL to get updated when vSphere 8 GAs if you're interested in trying out SmartNICs/DPUs

      Reply
  9. peterk says

    09/27/2022 at 8:15 pm

    Support for Realtek pci nics? Has that been considered?

    Reply
    • William Lam says

      09/27/2022 at 8:22 pm

      Realtek had no interest in the VMware ecosystem when we tried to engage them several years ago to develop a driver. I recommend looking at platforms that do not contain RTL-based NICs

      Reply
      • Uthy says

        09/28/2022 at 6:13 am

        Does Apple Mac Mini support VSphere 8?

        Reply
        • William Lam says

          09/28/2022 at 6:17 am

          No. See https://williamlam.com/2022/08/vsphere-esxi-7-x-will-be-last-version-to-officially-support-apple-macos-virtualization.html

          Reply
  10. techbrain says

    10/11/2022 at 4:13 am

    William,

    Truly appreciate all the knowledge you've always shared with us. My question: does vSphere 8 require DPUs for installation? I've searched many blogs and no one has directly answered the question or speaks of v8 without added hardware cost.

    Reply
    • William Lam says

      10/11/2022 at 6:21 am

      vSphere 8 does NOT require a DPU for installation. vSphere 8 is the first release to support DPUs, if you have a need for them, which can help by offloading services that would typically run on your ESXi host (x86) onto the DPU. See vSphere DSE for more details https://core.vmware.com/resource/whats-new-vsphere-8#sec21112-sub1

      Reply
  11. Franck says

    10/13/2022 at 12:15 am

    Hi William,

    I've a question about automated deployment and unsupported CPUs.

    In order to (re)deploy nested hosts, I use a PXE environment and use the following parameter in my boot.cfg:

    kernelopt=ks=https://iis_local_ip/VMware/ks80.cfg

    This worked great with vSphere 7, but with the vSphere 8 beta I need to add "allowLegacyCPU=true" somewhere because my server is a bit too old. If I put it as-is at the end of the line, it doesn't seem to work and the install gets into a loop after showing the unsupported CPU warning.

    This is how I tested it:
    kernelopt=ks=https://iis_local_ip/VMware/ks80.cfg allowLegacyCPU=true

    Do you know if it is a bug or a syntax issue? It might also not be the correct place for it, but I find very few people/posts mentioning it.

    Thanks and best regards ! 🙂

    Reply
    • William Lam says

      10/13/2022 at 6:55 am

      I've not seen this before. Can you try placing allowLegacyCPU at start of the string (e.g. kernelopt=allowLegacyCPU=true ks=....) and see if that helps

      Reply
      • Franck says

        10/13/2022 at 7:25 am

        Hi William,

        In the meantime, I've tried that syntax too and it didn't change. I also tried with the new IS/GA ISO file (I had previously the RC in hand), same result.

        Don't know if anybody else can confirm that result, but I think I'll have to try the in-place upgrade instead for a while... :-/

        If anybody can try that too, thanks in advance! 😉

        Reply
        • Sebastian Busch says

          10/13/2022 at 9:19 am

          I have installed vSphere 8 on an E5-2640 v3 CPU. I didn't need any switch at the boot prompt. During setup, information pops up about the unsupported CPU, and if I really want to install I need to acknowledge this warning. After pressing Enter a second time (another warning pops up about the CPU), installation runs flawlessly.

          Reply
          • Franck says

            10/13/2022 at 9:59 am

            Hi Sebastian,

            Did you install on a physical server or on a nested VM? ISO/USB or PXE deployment?

            Thx for sharing the details! 😉

          • Sebastian Busch says

            10/13/2022 at 10:16 am

            I have installed baremetal on Fujitsu RX2540 M1. KVM (vSphere ISO mounted)

        • The Dot Source says

          10/17/2022 at 2:13 am

          Hi Franck, I can confirm I am seeing the same. I have previously used allowLegacyCPU=true on ESXi 7.0 without issue. However it seems to have no effect with 8.0, I always get a warning prompt that I have to press enter past. This is preventing automated deployments in my lab currently 🙁

          Reply
        • The Dot Source says

          10/17/2022 at 6:18 am

          Further update... Digging about in UPGRADE\PRECHECK.PY and comparing it to ESXi 7.0, I can see that a new CPU_OVERRIDE message has been added in addition to the CPU_WARNING and CPU_ERROR messages. CPU_OVERRIDE is not evaluated against allowLegacyCPU, which I suspect is not the intended behaviour and possibly a bug. If the CPU falls into the CPU_OVERRIDE condition, the allowLegacyCPU boot option will not do anything.

          Reply
          • William Lam says

            10/17/2022 at 9:09 am

            Not sure if you were the same person posting on either Slack or VMTN (forget) but check out https://williamlam.com/2022/10/quick-tip-automating-esxi-8-0-install-using-allowlegacycputrue.html for workaround 🙂

          • Franck says

            10/17/2022 at 12:32 pm

            Hi,

            Well, I tried to add "--ignoreprereqwarnings --ignoreprereqerrors" at the end of the command, but it didn't help, I got the same results :-/

          • William Lam says

            10/17/2022 at 12:38 pm

            Did you actually append the options in the right area? This is NOT for interactive installation AND make sure you are doing it in EFI boot.cfg and not main boot.cfg which only applies to BIOS mode only

          • Franck says

            10/17/2022 at 12:46 pm

            Hi,

            Not 100% sure, I haven't mastered the whole thing (learned almost everything on your site)... 😉👌

            My boot CFG looks like that:

            bootstate=0
            title=Loading ESXi 8.0 IA auto-installer
            timeout=5
            kernel=b.b00
            kernelopt=ks=https://IP_of_a_web_server/VMware/ks80.cfg
            prefix=/Boot/x64/VMware/ESXi80
            modules=the whole list without the "/"
            updated=0

            Then I have that at the first stage of this ks80.cfg:

            ### Accept the VMware End User License Agreement
            vmaccepteula

            ### Set the root password for the DCUI and Tech Support Mode
            rootpw MySecretPass

            ### The install media (priority: local / remote / USB)
            clearpart --firstdisk=local --overwritevmfs
            install --firstdisk=local --overwritevmfs --novmfsondisk --ignoressd --ignoreprereqwarnings --ignoreprereqerrors

            ### Set the keyboard layout
            keyboard "Swiss German"

            ### Set the network to DHCP on the first network adapter
            network --bootproto=dhcp --device=vmnic0

            ### Reboot ESXi Host
            reboot --noeject

            This is a working 7.0 setup, I just copy/pasted the whole to customize it with 8.0 (but of course, might be wrong - I don't pretend it's a working config for the latest! 😁

          • William Lam says

            10/17/2022 at 5:05 pm

            It is most likely stalling because of the clearpart section, which happens first and doesn't support these options. Try taking that out and it should work, as that is what I used for my setup. That was a physical system which had some data on it before, but the install still took care of it

          • The Dot Source says

            10/18/2022 at 7:42 am

            Thanks William, and yes I was the person on Slack 🙂 Unfortunately as Franck says this does not appear to work. For the avoidance of doubt I used the complete kickstart file from your post and I'm using EFI. "CPU_OVERRIDE" seems to be a new thing in 8.0 which you may or may not see depending on how PRECHECK evaluates your CPU. The "override" appears to be a different thing from "error" or "warning" 🙁 I currently have an SR open with VMware but don't hold out much hope as it's not a supportable item.

          • Franck says

            10/18/2022 at 8:40 am

            Hi there,

            So I've tried removing the clearpart section and also used only the same settings as suggested in the other article, but no success/same result (with or without allowLegacyCPU=true)

            I'm wondering why; it's like they don't want us to test! 😏

          • William Lam says

            10/18/2022 at 1:15 pm

            Franck - Can you try adding one last option to install section --forceunsupportedinstall and see if that helps?

          • Franck says

            10/19/2022 at 2:38 am

            Hi there,

            I have some progress!!!

            So the switch --forceunsupportedinstall didn't help at first, but it gave another error message about the disk. Then I remembered those nested VMs were used for my 6.7 lab, so I decided to create a fresh empty VM, and it worked.

            Now, I took the opportunity to test it further and those are the combination of the switches I tried:

            Doesn't work
            install --firstdisk=local --overwritevmfs --ignoreprereqwarnings --ignoreprereqerrors

            Works with a warning (press Enter)
            install --firstdisk=local --overwritevmfs --forceunsupportedinstall

            Completely unattended, all together 🙂
            install --firstdisk=local --overwritevmfs --ignoreprereqwarnings --ignoreprereqerrors --forceunsupportedinstall

            allowLegacyCPU is not necessary (probably ignored)

            Now I have to troubleshoot the second part of the KS, it doesn't join the vcenter as the 7.0 does! 😁

            PS: any hint about the right log to check is welcome!

          • William Lam says

            10/18/2022 at 10:14 am

            Ah, so this was the missing piece: your CPU allowed for override, but strangely this should still cause either an Error/Warning, and those flags should have been supported.

            The only other option that I see that could help is adding --forceunsupportedinstall

          • The Dot Source says

            10/19/2022 at 1:04 am

            Unfortunately no success. If this is a permanent "feature" I think the option of last resort is to alter the PRECHECK script on the fly before I build the ISO. Setting the "if" statement to include the OVERRIDEWARNING should result in the same behaviour as before. It's not supported anyway right 🙂

          • William Lam says

            10/19/2022 at 6:01 am

            Take a look at Franck's recent comment, the parameter helped but he needed to use all three options

          • The Dot Source says

            10/19/2022 at 6:17 am

            Thanks William, it did indeed work. I think I didn't reload my module properly when I made the changes, oops. I'm going to build this into my media creation tool. This appears to be a new parameter in 8.0:

            https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-esxi-8.0-installation-setup-guide.pdf

            Going by the description I wonder if it's meant to replace the kernel boot option.

            I will now need to build in a bit of media version detection so I know what alterations to make for unsupported CPUs.

            Thanks for your help.

          • Franck says

            10/19/2022 at 7:27 am

            Hi again,

            Now I'm facing another issue: the second part of my kickstart (still working with 7) is apparently not taken into consideration ("post installation")

            This is an extract (I have more commands for the network part), but as the host doesn't have the second vSwitch nor is it in maintenance mode, I think this complete section of the ks is ignored.

            ##### Stage 01 - Pre installation:

            ### Accept the VMware End User License Agreement
            vmaccepteula

            ### Set the root password for the DCUI and Tech Support Mode
            rootpw mysecretpass

            ### The install media (priority: local / remote / USB)
            install --firstdisk=local --overwritevmfs --ignoreprereqwarnings --ignoreprereqerrors --forceunsupportedinstall

            ### Set the keyboard layout
            keyboard "Swiss German"

            ### Set the network to DHCP on the first network adapter
            network --bootproto=dhcp --device=vmnic0

            ### Reboot ESXi Host
            reboot --noeject

            ##### Stage 02 - Post installation:

            ### Open busybox and launch commands
            %firstboot --interpreter=busybox

            ### Enable maintenance mode
            esxcli system maintenanceMode set -e true

            ### Set Search Domain
            esxcli network ip dns search add --domain=mydomain.local

            ## Add second vSwitch & portgroup
            esxcli network vswitch standard add --vswitch-name=vSwitch1
            esxcli network vswitch standard portgroup add -v vSwitch1 -p "VSAN Network"

            ----- some more network settings----

            ### Disable IPv6 support (reboot is required)
            esxcli network ip set --ipv6-enabled=false

            ## register with vcenter
            esxcli network firewall ruleset set -e true -r httpClient
            wget --no-check-certificate -O vcenter80.py https://webserverip/VMware/vcenter80.py
            /bin/python vcenter80.py

            ### Reboot
            esxcli system shutdown reboot -d 15 -r "rebooting after ESXi 8.0 host configuration"

            Don't know if anything changed from a syntax point of view, but where can I start looking? (I looked at esxi_install.log but there is a lot in there) 😊

          • The Dot Source says

            10/21/2022 at 2:16 am

            If it's any use to anyone else, here is the function that takes a stock VMware ESXi ISO and builds you an unattended ISO. It configures the basics to get the host to a manageable state, hostname, DNS, management IP etc. It also does some useful stuff for nested labs, recreate VMK0 and support deprecated CPUs on 6.7/7.x/8.x

            https://github.com/TheDotSource/tds-vSphere/blob/main/Public/New-EsxiAutoIso.ps1

            It depends on the New-ISOFile function available here:

            https://github.com/TheDotSource/New-ISOFile

            All done natively in PowerShell which might be useful to some.

          • William Lam says

            10/22/2022 at 7:11 am

            It would be great if you can add a link to where you sourced the original parameters 🙂

          • The Dot Source says

            10/24/2022 at 12:32 am

            Oh dear, how remiss of me 🙁 Done.

          • Franck says

            10/25/2022 at 5:41 am

            Just in case, I've opened a post in VMware community about my Kickstart issue. Hopefully somebody can help me with this post deployment part ignored 😉

            https://communities.vmware.com/t5/vSphere-Upgrade-Install/vSphere-8-post-deployment-Kickstart-issue/m-p/2935168#M34218

          • Franck says

            10/26/2022 at 11:42 pm

            Hi there,

            Just wanted to share the solution: as mentioned by Jangari on VMware communities, the firstboot section WILL BE IGNORED if you have Secure Boot enabled. When I created the new VMs, I enabled it without knowing I was creating a new problem. I learned something today! 😉

            Official documentation here: https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-esxi-installation/GUID-51BD0186-50BF-4D0D-8410-79F165918B16.html#firstboot-21

          • William Lam says

            10/27/2022 at 2:29 am

            https://williamlam.com/2018/06/using-esxi-kickstart-firstboot-with-secure-boot.html

  12. Werner says

    10/14/2022 at 12:59 am

    Hi William, thanks! My test system HPE DL380 G9 (not on the HCL) hangs during installation at storage-path-claim. Anyone have an idea to work around this? Thanks, Werner

    Reply
    • Werner says

      10/14/2022 at 5:11 am

      One step further... I did a test installation on a DL380 G8 and the same hang occurred during installation (activating: storage-path-claim). I disabled the SAS controller and installed to a USB stick. The installation routine then continued. Will now do the same on the HPE DL380 G9 server with the USB stick.

      Reply
      • Greg says

        11/11/2022 at 10:08 pm

        @Werner & William, I've tried to disable the controller with no luck in continuing the install process. Any further recommendations? About to find my old ISO of 7.

        Thanks,
        GG

        Reply
  13. Andrew Silva says

    10/14/2022 at 7:09 am

    Hi William, I am setting up a home lab with the NUC 12 Extreme. I got ESXi loaded on it right now, but have been unable to get the USB Network Native Driver for ESXi loaded to fix the 100Mbps limitation. Any ideas or suggestions?

    Reply
    • Andrew says

      10/14/2022 at 7:22 am

      A couple of further notes on how I got things to load properly. I followed your tips on how to bypass the CPU mismatch error. I did, however, encounter an issue with configuration changes not persisting after reboot, both with the permanent CPU mismatch fix and with any other configuration settings made in ESXi.

      I did find a fix for this. The bootbank loads from a temp directory, so all of the changes being applied were destroyed on reboot. I found that if I followed the fix in this KB article https://kb.vmware.com/s/article/2149444 and then applied the permanent CPU mismatch fix, my issue was resolved.
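For anyone hitting the same symptom, the quickest way to tell whether you are in the non-persistent bootbank state from KB 2149444 is to see where /bootbank actually resolves; on an affected host it points into a ramdisk under /tmp. A small sketch (the helper name is hypothetical):

```shell
# check_bootbank: print "volatile" if the given bootbank path resolves
# into /tmp (a ramdisk, so changes are lost on reboot), else "persistent".
check_bootbank() {
  case "$(readlink -f "$1")" in
    /tmp/*) echo "volatile" ;;
    *)      echo "persistent" ;;
  esac
}

# On an ESXi host you would run: check_bootbank /bootbank
```

If it reports volatile, apply the KB 2149444 fix first, then re-apply the CPU mismatch workaround so it persists.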

      Reply
      • William Lam says

        10/14/2022 at 7:41 am

        re: persistency issue - You are most likely running into the following: https://www.reddit.com/r/vmware/comments/y3nj8t/comment/isaleqn/?utm_source=share&utm_medium=web2x&context=3

        Reply
  14. Anthony Kehoe says

    10/17/2022 at 11:01 am

    Successfully upgraded to ESXi 8 (allowLegacyCPU=true) on the following NUCs:
    5th Gen
    6th Gen
    8th Gen
    All using Haswell EVC.
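For anyone who wants to make the allowLegacyCPU=true option stick across reboots, one approach is to append it to the kernelopt line in boot.cfg (on a live host that file is /bootbank/boot.cfg; the sketch below works on a local copy with sample content, so adapt with care):

```shell
# Work in a scratch directory on a sample copy of boot.cfg
cd "$(mktemp -d)"
cat > boot.cfg <<'EOF'
kernelopt=autoPartition=FALSE
EOF

# Append allowLegacyCPU=true to the kernelopt line, idempotently
grep -q 'allowLegacyCPU=true' boot.cfg || \
  sed -i 's/^kernelopt=.*/& allowLegacyCPU=true/' boot.cfg

grep '^kernelopt=' boot.cfg
# -> kernelopt=autoPartition=FALSE allowLegacyCPU=true
```

For a one-off install or upgrade, pressing Shift+O at the installer boot screen and appending allowLegacyCPU=true achieves the same without editing any files.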

    Reply
    • William Lam says

      10/17/2022 at 11:40 am

      Thank you Anthony for confirming those NUCs! I'll update the article

      Reply
      • CLGZcomLLC says

        11/27/2022 at 9:38 am

        I just successfully upgraded my Intel NUC8i5BEH vSAN cluster from version 7.0 Update 3g to version 8.0.0 build 20513097 without using allowLegacyCPU.

        Reply
        • Jim Willsher says

          01/24/2023 at 10:21 am

          Thank you! I have that model NUC so this is reassuring.

          Reply
  15. Gabriel says

    10/20/2022 at 10:42 am

    Hi William, I installed ESXi 8 without problems on an Intel NUC10i7FNH, but on every restart/shutdown via vCenter 8 or via the console I get a purple screen with the following error message:

    VMware ESXi 8.0.0 [Releasebuild-20513097 x86_64]
    #PF Exception 14 in world 1052102: devlayer par IP 0x420026bfff6c addr 0x0
    Module(s) involved in panic: [vmkbsd] [vmksdhci 1.0.2-2vmw.800.1.0.20513097 (External)]

    (The remainder of the screen is a backtrace through the sdhci cleanup/detach routines of the vmksdhci SD host controller driver during device-layer teardown at shutdown.)

    Reply
    • Gabriel says

      10/24/2022 at 10:03 am

      Ignore the first message; the power adapter was the root cause.

      Successfully upgraded to ESXi 8 on the following NUCs:

      7th Gen - Intel NUC7i3BNH
      8th Gen - Intel NUC8i5BEH
      10th Gen - Intel NUC10i7FNH
      11th Gen - ASRock NUC 1100 Series (i7-1165G7)

      Reply
      • AbangBuncitJurong says

        12/03/2022 at 10:55 pm

        ESXi 8 installs fine on a Dell Precision 3571 running an i9-12900H, but as per the OP, shutting down/restarting gives me a PSOD.

        Reply
        • William Lam says

          12/04/2022 at 9:33 am

          The i9-12900H is an Intel 12th Gen consumer CPU, which introduces the new hybrid big.LITTLE architecture that ESXi doesn't understand, so a PSOD is expected. However, there is an ESXi kernel option to bypass this, which I've blogged about several times for a number of the 12th Gen platforms requiring the same workaround. See https://williamlam.com/2022/11/esxi-on-intel-nuc-12-enthusiast-serpent-canyon.html as an example
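For reference, the kernel option in question can be applied at boot and then persisted; a command sketch (run on the host itself, so treat it as a template rather than something to paste blindly):

```
# At the ESXi installer/boot screen: press Shift+O and append
cpuUniformityHardCheckPanic=FALSE

# Once booted, persist the setting across reboots
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
```

Note this only suppresses the CPU uniformity check; many homelabbers disable the E-cores in the BIOS instead as the more conservative option.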

          Reply
      • Abudef says

        01/29/2023 at 2:26 am

        Hi Gabriel, I'm dealing with the same problem - every time I reboot or shut down I get the error you mention above (Module(s) involved in panic: [vmkbsd] [vmksdhci 1.0.2-2vmw.800.1.0.20513097 (External)]). You write that the problem was the power adapter, but I don't really understand that. How did you finally solve it?

        Reply
        • Abudef says

          01/29/2023 at 3:01 am

          Disabling the SD card reader in the NUC10 BIOS seems to solve the issue...

          Reply
  16. sengork says

    10/26/2022 at 4:03 am

    Same issue here (it does not happen on v7); after disabling the SD Card 3.0 Controller in UEFI, the pink screen on reboot no longer occurs.

    Reply
    • erpomik says

      11/17/2022 at 7:21 am

      Just installed ESXi 8.0 on two new NUC10i7FNH and had the same issue (PSOD at shutdown/reboot). Thank you so much for this hint, sengork.

      Reply
    • virtualmystery says

      11/18/2022 at 5:17 pm

      Thank you so much for the hint!! I was having the same issue when installing ESXi 8 on my NUC 10s

      Reply
    • WBurns says

      12/14/2022 at 6:58 am

      This fixed my pink screen, too - thank you very much!

      Reply
    • Matt Heldstab says

      12/21/2022 at 9:07 am

      Thanks a bunch, sengork! This really helped.

      Reply
    • Shodan says

      12/23/2022 at 2:36 pm

      OMG THANK YOU! I had this problem with my NUC10 and just couldn't find the problem until I found your comment (hours later..)

      Reply
  17. Michael Williams says

    10/28/2022 at 3:05 pm

    When I installed vSphere 8 on an Ivy Bridge system, the message said it was not supported, but it gave me the option to continue with the installation anyway. I did, and everything is working, but this was in a nested environment. Would the install stop completely if I had installed on bare metal?

    Reply
    • William Lam says

      10/28/2022 at 4:03 pm

      There are CPUs that are NOT supported, and there are ones that may work now but won't in the future; you're in the latter scenario. It'll install the same on physical hardware.

      Reply
  18. Jeremy says

    11/03/2022 at 3:59 am

    I am using a NUC 11, and the community flings for network and NVMe are required.
    I am facing some issues when building a custom image with ESXi-8.0.0-20513097-standard.

    I'm using VMware PowerCLI for this and have no issues with ESXi 7.x.

    I downloaded the depot from the official VMware site: VMware-ESXi-8.0-20513097-depot.zip

    I get many "claimed by multiple non-overlay VIBs" errors when executing this command:
    New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -name "ESXi-8-NUC11" -Vendor "jc"

    Here's an example of the error
    New-EsxImageProfile : File path of '/lib64/python3.8/site-packages/hostprofiles/pyEngine/statusManager.pyc' is claimed by multiple
    non-overlay VIBs: {'VMware_bootbank_esxio-base_8.0.0-1.0.20513097', 'VMware_bootbank_esx-base_8.0.0-1.0.20513097'}
    At line:1 char:1
    + New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -nam ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidData: (VMware.ImageBuilder.Types.ImageProfile:ImageProfile) [New-EsxImageProfile], Exception
    + FullyQualifiedErrorId : EsxImageProfileValidationError,VMware.ImageBuilder.Commands.NewImageProfile

    New-EsxImageProfile : File path of '/lib64/python3.8/idlelib/debugger.pyc' is claimed by multiple non-overlay VIBs:
    {'VMware_bootbank_esxio-base_8.0.0-1.0.20513097', 'VMware_bootbank_esx-base_8.0.0-1.0.20513097'}
    At line:1 char:1
    + New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -nam ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidData: (VMware.ImageBuilder.Types.ImageProfile:ImageProfile) [New-EsxImageProfile], Exception
    + FullyQualifiedErrorId : EsxImageProfileValidationError,VMware.ImageBuilder.Commands.NewImageProfile

    New-EsxImageProfile : File path of '/lib64/python3.8/idlelib/filelist.pyc' is claimed by multiple non-overlay VIBs:
    {'VMware_bootbank_esxio-base_8.0.0-1.0.20513097', 'VMware_bootbank_esx-base_8.0.0-1.0.20513097'}
    At line:1 char:1
    + New-EsxImageProfile -CloneProfile "ESXi-8.0.0-20513097-standard" -nam ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidData: (VMware.ImageBuilder.Types.ImageProfile:ImageProfile) [New-EsxImageProfile], Exception
    + FullyQualifiedErrorId : EsxImageProfileValidationError,VMware.ImageBuilder.Commands.NewImageProfile

    Any comments will be greatly appreciated

    Reply
    • William Lam says

      11/03/2022 at 7:29 am

      There have been changes in vSphere 8, where a new version of PowerCLI (not released yet) is required for constructing a custom image for ESXi 8.0 Images OR you will need a vCenter Server 8.0 to build.

      Historically, it was simply assumed that prior versions of both vCenter and PowerCLI would be "forwards" compatible for image building, and that is actually an incorrect assumption (one I too had made). While it may have worked in prior versions, there have been major changes in vSphere 8 where this will no longer work, and a new version will be required as mentioned above.

      Reply
      • Jeremy says

        11/05/2022 at 4:40 am

        Thanks William for the prompt response. Really appreciate it.
        Initially I had an issue setting up the VCSA on Workstation. I found your article that resolved the issue.

        My ESXi 8 on the NUC 11 and vCenter on Workstation are up and running.

        I'm exploring ways to set up MFA for vCenter and ESXi with AAD instead of ADFS.

        Reply
  19. Chris Parker says

    11/13/2022 at 8:58 am

    Hey William, do we know if the Intel X520-DA2 is supported on ESXi 8?

    I have a couple of these NICs inside two NUC9VXQNX systems; 8.0 has been installed but it only lists the onboard NICs.

    Reply
    • William Lam says

      12/04/2022 at 9:29 am

      Check the VMware HCL for all device hardware compatibility

      Reply
  20. László says

    12/30/2022 at 12:05 pm

    Hello Will,

    I am running an i5-13600K with ESXi 8.0a, and for now I have disabled the E-cores as I was getting random PSODs once I started using it intensively with multiple VMs.

    Seems very stable now.

    I am only having an issue with which maybe somebody could help out.

    I am running an ASUS PRIME Z790-A WIFI motherboard with the integrated GPU plus an Intel i350-T2V2 NIC.

    I am having some issues with the NIC and the iGPU.

    The iGPU is only recognized as "Intel Corporation VGA compatible Card" and cannot be used for pass-through.

    Regarding the Intel i350-T2V2, all works fine except SR-IOV.
    I have enabled SR-IOV support and VT-d on the motherboard, and the card is on the HCL for this ESXi 8.0 version.

    When I enable SR-IOV it always just shows Enabled / Reboot needed.
    I can reboot as many times as I want; it won't make a difference.

    The VF count is always 0 out of a maximum of 8.

    Thanks in advance,
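One thing that may be worth checking from the ESXi Shell is whether the NIC's driver module has been asked to create any VFs at all. A hedged command sketch, assuming the i350 is claimed by the igbn driver and that the module accepts a max_vfs parameter (verify both against your host and the driver documentation):

```
# Confirm which driver has claimed the i350 ports (expected: igbn)
esxcli network nic list

# Request up to 8 VFs on each of the two ports, then reboot
esxcli system module parameters set -m igbn -p "max_vfs=8,8"

# After reboot, any created VFs should appear here
esxcli network sriovnic list
```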

    Reply
  21. labguy says

    12/31/2022 at 2:27 am

    awesome, thank you so much! allowLegacyCPU made my day! 😀

    Reply
  22. DaveSays1 says

    01/08/2023 at 11:52 am

    Hi Will,

    First, thank you for this article and all the other ones over the years.

    I wanted to ask if you (or anyone else) knows of a way to get the Skyline Health "Devices deprecated and unsupported" check to elaborate on its findings. It seems to check against ESXi 7.0, but also forward-looking from 6.7 to 7.0. Is there any way for it to evaluate against 8.0? Also, is there any way to have it provide a specific list of what it is complaining about?

    For reference, and because it might help others, I have an affordable 3-host lab running very well on 3x HP Z420s. Two hosts are running ESXi 6.7 and the third is running 7.0U3. They are managed by a virtual VCSA 7.0.3.

    Skyline Health calls out the Z420 running 7.0U3 as having deprecated and unsupported devices, but I have no actual issues running it.

    Planning my upgrade to 8.0 soon and trying to determine if I need to buy new hardware.

    Thank you Will and all,

    David

    Reply
  23. N8 says

    01/16/2023 at 7:07 am

    Hello William, I know I'm a bit late to the game, but I'm wondering if you can share some EVC details for your NUCs on ESXi 8.

    I recently upgraded my lab from 7.0 to 8.0a on my two i7-6700T hosts. Zero issues. A few days ago, I went to add a new i7-11700 host to the cluster. Naturally I went to flip on EVC... only to have the VCSA tell me that my Skylakes only support Broadwell... but my Rocket Lake only supports Haswell??
    What do you see on your 6th and 11th gen processors for MaxEVCMode?

    powershell> Get-VMHost -Name virtual* | Select-Object Name,MaxEVCMode,ProcessorType

    Name     MaxEVCMode      ProcessorType
    ----     ----------      -------------
    virtual3 intel-haswell   11th Gen Intel(R) Core(TM) i7-11700 @ 2.50GHz
    virtual2 intel-broadwell Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz
    virtual1 intel-broadwell Intel(R) Core(TM) i7-6700T CPU @ 2.80GHz

    Reply
  24. WhimsySpoon says

    02/11/2023 at 4:25 am

    No issues upgrading a NUC8i7HNK. Just had to disable the TPM in the BIOS to remove the warning post-upgrade.

    Reply
  25. PAwel says

    02/25/2023 at 4:35 am

    Hey, just wanted to report a successful upgrade:
    Qotom Q355G4 minipc
    i5-5250U
    8GB RAM
    upgraded to 8.0.0 build 21203431 successfully (after setting allowLegacyCPU=true and using --no-hardware-warning while doing the upgrade via the CLI)
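For anyone wanting to reproduce this, the CLI upgrade described above looks roughly like the following (the depot path and profile name are placeholders; substitute the ones for your environment and build):

```
esxcli software profile update \
  -d /vmfs/volumes/datastore1/VMware-ESXi-8.0-depot.zip \
  -p ESXi-8.0.0-21203431-standard \
  --no-hardware-warning
```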

    Reply
  26. dbutch1976 says

    03/09/2023 at 12:51 pm

    My Lenovo D20 with an Intel(R) Xeon(R) E5506 CPU is version-locked at ESXi 6.5. I've tried the allowLegacyCPU workaround and it allows me to boot into the installer, but after selecting upgrade I get the error that my CPU is not supported and I cannot continue past it. In any case, the drivers for my Broadcom NICs were dropped after ESXi 6.7, so I think it's just time I threw in the towel and got some new hardware.

    With that in mind I'm seriously considering Intel NUC's, but I'm not familiar with them and I find the options a little overwhelming and would appreciate any advice you might have.

    I'd like to play around with vSAN, so I'll need 3 nodes. I was thinking of getting 3 Intel NUCs with 32GB RAM and 1TB M.2 NVMe drives. If it all works out, I believe I should be able to use vSAN to replace my aging Iomega iSCSI NAS.

    My main question is: what would you recommend that won't break the bank but will hopefully still be supported as long as possible? I'd hate to be back in this situation again when vSphere 9 gets released in two years.

    Reply
    • William Lam says

      03/09/2023 at 1:35 pm

      See the Intel NUC section at https://williamlam.com/home-lab

      The 12th Gen NUCs are the latest, and you can always go back a generation if cost is a factor. As long as you get a system that can take 2 x SSDs, you'll be able to use it for vSAN. Starting with the 11th Gen (Tall), you can squeeze in 3 disks, which will future-proof you: ESXi should be installed on an SSD (including ESX-OSData), so while USB boot is still supported, it will eventually go away, and this is an easy way to be prepared. Depending on the generation of NUC, you can choose from the CPU options based on your needs.

      Reply
      • dbutch1976 says

        03/11/2023 at 5:03 am

        Thanks for the quick reply! Since this is a home lab I'd like to keep costs down as much as possible, so as you suggested I'm going to focus on the 11th Gen NUCs. When it comes to form factor, does the ultra-compact have space for an SSD drive? I realize that booting from USB is not recommended for ESXi 8, but since this is a home lab I'm willing to risk it. I'm also thinking I can boot from SAN in the event that booting from USB becomes impossible in a future release. This would allow me to install the OS on USB, use the M.2 cards for vSAN, and potentially install Windows on the SSD drives in the event that I want to dual-boot and use one of the NUCs as a dual-purpose media PC. Any thoughts?

        Reply
        • William Lam says

          03/11/2023 at 5:09 am

          Please see the detailed blog post for the 11th Gen; literally all of your questions and what you can do are covered there, as are details for every other generation of NUC.

          Reply
          • dbutch1976 says

            03/11/2023 at 7:29 am

            Will do, thanks again.

  27. kdoggz says

    03/22/2023 at 1:31 pm

    Worked for me on an HP MicroServer Gen8 with an older Intel G2020T CPU! Thanks

    Reply

Author

William Lam is a Senior Staff Solution Architect working in the VMware Cloud team within the Cloud Infrastructure Business Group (CIBG) at VMware. He focuses on Cloud Native technologies, Automation, Integration and Operation for the VMware Cloud based Software Defined Datacenters (SDDC)


Copyright WilliamLam.com © 2023

 
