With the vSphere 7 Launch Event just a few days away, I know many of you are eager to get your hands on this latest release of vSphere and start playing with it in your homelab. A number of folks in the VMware community have already started covering some of the amazing capabilities that will be introduced in vSphere and vSAN 7, and I expect to see that ramp up even more in the coming weeks.
One area that I have not seen much coverage on is homelab usage with vSphere 7. Given this is a pretty significant release, I think there are some things you should be aware of before you rush out and immediately upgrade your existing homelab environment. As with any vSphere release, you should always carefully review the release notes when they are made available and verify that the hardware and its underlying components are officially on the VMware HCL; this is the only way to ensure that you will have a good and working experience.
Having said that, here are just a few of the observations that I have made while running pre-GA builds of vSphere 7 in my own personal homelab. This is not an exhaustive list and I will try to update this article as more information is made available.
Disclaimer: The considerations below are based on my own personal homelab experience using a pre-GA build of vSphere 7 and do not reflect any official support or guidance from VMware. Please use these recommendations at your own risk.
Legacy VMKlinux Drivers
It should come as no surprise that in vSphere 7, the legacy VMKlinux drivers will no longer be supported. I suspect this will have the biggest impact on personal homelabs where unsupported devices such as network or storage adapters require custom drivers built by the community, such as the Realtek (RTL) PCIe-based NICs which are popular in many environments. Before installing or upgrading, you should check to see if you are currently using any VMKlinux drivers, which you can easily do with a PowerCLI script that I developed last year and which is referenced in this blog post by Niels Hagoort.
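If you prefer to check directly from the ESXi Shell rather than PowerCLI, here is a rough sketch. This is my own quick-and-dirty approach, not an official check: on my 6.x hosts, legacy drivers load on top of the vmklinux compatibility module, and legacy driver VIBs tend to carry prefixes like net-, scsi-, or ata- while Native Drivers do not. Module and VIB naming can vary, so treat any hits as a prompt for further investigation.

```
# Look for the legacy vmklinux compatibility module among loaded kernel modules
esxcli system module list | grep -i vmklinux

# Cross-check installed driver VIBs; legacy driver packages often use
# net-/scsi-/ata- style prefixes, unlike Native Drivers (e.g. ne1000, i40en)
esxcli software vib list | grep -i -E 'net-|scsi-|ata-'
```

If either command returns results, compare the driver names against your vendor's Native Driver availability before attempting an upgrade.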
You should also check with your hardware vendor to see if a new Native Driver is available, as many of our ecosystem partners have already finished porting to this new driver format over the past couple of years in preparation for this transition. Many folks will not be affected and are probably already using 100% Native Drivers, but if you are still relying on VMKlinux drivers, this is a good time to consider upgrading your hardware or talking to those vendors and asking why there is not a Native Driver for ESXi. From a networking standpoint, there are other alternatives such as the USB Network Native Driver for ESXi Fling, which I will be covering in the next section.
Here are some VMware KBs that may be useful to review:
- Devices deprecated and unsupported in ESXi 7.0 (77304)
- vmkapi Dependency error while Installing/upgrading to ESXi 7.0 (78389)
- Upgrade of ESXi from 6.0 to 6.5/7.0 fails with CONFLICTING_VIBS ERROR (49816)
USB Network Adapters
The USB Network Native Driver for ESXi Fling is a very popular solution that enables customers to add additional network adapters to their homelab platform, especially for systems like the Intel® NUC which only include a single built-in network adapter. For folks using this Fling who plan to upgrade to vSphere 7, a new version of the Fling is required, and you can download it from the Fling page here.
To install it, just run:
esxcli software vib install -d /ESXi700-VMKUSB-NIC-FLING-34491022-component-15873236.zip
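After a reboot, you can confirm the component installed cleanly and that the USB NIC is visible. The exact VIB name in the output will depend on the Fling version you downloaded; in my experience the USB adapters show up with names like vusb0, vusb1, and so on.

```
# Confirm the Fling VIB/component is installed
esxcli software vib list | grep -i vmkusb

# Verify the USB NIC is now detected (look for vusb0, vusb1, ...)
esxcli network nic list
```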
Aquantia/Marvell 10GbE NICs
If you are using either the 10GbE PCIe-based or Thunderbolt 3 to 10GbE network adapters, which use the Aquantia (now Marvell) chipset, you are in luck: Marvell has just released an official Native ESXi Driver for their AQtion-based network adapters, which you can find here. For the complete list of supported devices, please have a look here.
Intel NUC
I am happy to report that ESXi 7 runs fine on the latest generation of the Intel NUC 10 "Frost Canyon", as shown in the screenshot below.
One thing to note regarding the 10th Gen Intel® NUC is that the built-in NIC is not automatically detected because it uses a newer Intel NIC that the inbox driver does not recognize. Luckily, we have an updated ne1000 driver which is also compatible with ESXi 7; you just need to create a new ISO containing the updated ne1000 driver.
I am also happy to report that the Intel® NUC 9 Pro/Extreme also works out of the box with ESXi 7, and all built-in network adapters are automatically detected without any issues.
I know several other folks have also had success installing or upgrading to ESXi 7 on older generations of the Intel® NUC, but I do not have the full list. For folks who have had success, feel free to leave a comment and I will update this page as more details are shared.
CPU Support
vSphere 7 removes support for a number of CPUs that have been around for over 10 years, which may impact some folks. A workaround is possible, but I would certainly advise looking at upgrading your hardware before moving to the latest generation of vSphere to ensure you are future-proofing yourself. For more details on the workaround, please see this blog post.
New ESXi Storage Requirements
In ESXi 7.0, there are new storage requirements that you should be aware of, and I recommend you carefully read through the official documentation found here for more details. In addition, there are several new ESXi kernel boot options in ESXi 7.0 that can be used to control disk partitioning behavior and device selection related to these new storage requirements. I strongly recommend reviewing the following VMware KBs, as they may be beneficial if you run into issues. For resizing the new ESX-OSData volume, please have a look at this blog post.
- New Kernel options available on ESXi 7.0 (77009)
- Installing ESXi on a supported USB flash drive or SD flash card (2004784)
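To give a concrete example of how these boot options are used: at the ESXi installer boot screen you can press Shift+O to append kernel options. The sketch below uses the systemMediaSize option from KB 77009, which (as I understand it) caps how much of the boot device ESXi claims for its system partitions; the exact value names should be verified against the KB for your build.

```
# At the installer boot prompt (Shift+O), append the option to the boot line, e.g.:
#   runweasel systemMediaSize=min
#
# Per KB 77009, systemMediaSize accepts values such as:
#   min / small / default / max
# which trade off how much of the disk is consumed by the new
# system partition layout (boot banks + ESX-OSData).
```

This is particularly handy on small boot devices where the default ESXi 7.0 partitioning would otherwise consume most of the disk.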
NVMe PCIe SSD not showing up during Upgrade
It looks like several folks in the community ran into an issue where their NVMe SSDs no longer showed up after upgrading from ESXi 6.7 to ESXi 7.0 using ESXCLI. It turns out this was user error: they were using the incorrect command, which not only caused an incorrect upgrade but also left out ESXi 7.0 VIBs. Please have a look at this blog post for the correct command in case you are using ESXCLI.
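For illustration, here is the shape of the mistake as I understand it: "vib update" only updates VIBs that are already installed, so any VIBs that are new in 7.0 never get laid down, whereas "profile update" applies the full image profile from the depot. The depot filename and profile name below are placeholders; list the profiles in your depot first and use the exact name it reports.

```
# Incorrect: only updates existing VIBs, silently skipping VIBs new to 7.0
# esxcli software vib update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0.0-depot.zip

# First, see which image profiles the depot actually contains
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-7.0.0-depot.zip

# Correct: update to the full image profile so all 7.0 VIBs are included
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0.0-depot.zip -p ESXi-7.0.0-standard
```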
Supermicro
I am also happy to report that the Supermicro E200-8D, another popular platform in the VMware community, works out of the box with ESXi 7. I also expect other Supermicro variants to just work as well; I do not have confirmation yet, but given these systems are on VMware's HCL, you should not have any issues.
Other Hardware Platforms
If you are wondering whether ESXi 7 will work on other systems that have not been listed, you can easily verify it yourself without affecting your current installation. Simply obtain a new USB device and load the ESXi 7 installer onto it. You can then boot from this new USB device and install ESXi 7 onto the same device, which ensures you do not touch your existing installation. Many folks are still surprised to hear this is possible, and it is a safe way to "test" a new version of ESXi as long as you do not overwrite or upgrade the underlying VMFS volume format in case there is a new version. From here, you can verify that your system is operating as expected before attempting to upgrade your existing installation.
- Thanks to Michael White, Supermicro SYS-5028D-TN4T works with ESXi 7
- Thanks to vincenthan, Supermicro E300-8D works with ESXi 7
- Thanks to Trevor, Supermicro E300-9D works with ESXi 7
- Thanks to Laurens van Duijn, Intel® NUC 8th Gen works with ESXi 7
- Thanks to Patrick Kernstock, Intel® NUC 7th Gen works with ESXi 7
- Thanks to NG Techie, Intel® NUC 6th Gen works with ESXi 7
- Thanks to Florian, Intel® NUC 5th-10th Gen works with ESXi 7
- Thanks to Oliver Lis, iBASE IB918F-1605B works with ESXi 7
- Thanks to Jason, Supermicro E300-8D works with ESXi 7
- Thanks to topuli, Gigabyte x570 Ryzen 3700x works with ESXi 7
vCenter Server Appliance (VCSA) Memory
Memory is always a precious resource, and it is also usually the first constrained resource in homelabs. In vSphere 7, the VCSA deployment sizes have been updated to require additional resources to support the various new capabilities. One change that I have noticed when deploying a "Tiny" VCSA in my lab is that the memory footprint has increased to 12GB, up from 10GB previously.
For smaller homelabs, this can be a concern, and one approach that many folks have used in the past is to turn off vCenter Server services that you do not plan to use. If there are no adverse effects on your environment or usage, then this is usually a safe thing to do. Although I do not have any specific recommendations, you can use tools like vimtop to help determine your current memory usage. For example, below is a screenshot of vimtop running on a VCSA with 3 x ESXi hosts configured with vSAN and no workloads running. The default configured memory is 12GB, but actual usage is ~5.1GB, so you can probably disable some services and reduce the memory footprint. Again, this will require a bit of trial and error. If folks have any tips or tricks, feel free to share them in the comments.
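As a starting point, the service-control utility on the VCSA lets you see and stop services from the shell. The vmware-vcha service below is just an example of one that homelabs often do not need (vCenter HA); service names vary between releases, so check the output of --list on your own appliance first. Also note that a stopped service will come back after a reboot unless you also change its startup type.

```
# From the VCSA shell: list all services
service-control --list

# Check what is currently running
service-control --status

# Example only: stop a service you do not use (verify the name via --list first)
service-control --stop vmware-vcha
```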
Nested ESXi Memory and Storage
Running Nested ESXi is still one of the easiest ways to evaluate new releases of vSphere, especially with the Nested ESXi Virtual Appliance. As with previous releases, I plan to have an updated image to support the latest release. With that said, there are going to be a couple of resource changes to align with the latest ESXi 7 requirements. The default memory configuration will change from 6GB to 8GB, and the first VMDK, which is used for the ESXi installation, will also be increased from 2GB to 4GB. For those who want to know when the new Nested ESXi Appliance is available, you can always bookmark and check http://vmwa.re/nestedesxi
Nested ESXi on Unsupported Physical ESXi CPU
For those wanting to run Nested ESXi 7.0 on an older release of vSphere that may have an unsupported CPU, check out this blog post by Chris Hall, which uses a nice CPUID mask trick to work around this problem.
vSAN File Services
With vSAN 7, one of the really cool features that I think can really benefit homelabs is the new vSAN File Services, which Duncan covers in a great blog article here. If your physical storage is running vSAN, you can now easily create NFS v3/4.1 volumes and make them available to your homelab infrastructure. For me, I am constantly building out various configurations, and sometimes it is nice to be able to create storage volumes that contain VMs and/or other files which I can easily re-use without having to manage yet another VM. One example is having an NFS share that I can easily mount to my Nested ESXi VMs for testing purposes. I am only using this for homelab purposes, and I strongly recommend you do the same, as I am sure this is not only unsupported in production but could also violate the vSAN EULA.
In the screenshot below, I have 3 Nested ESXi VMs configured with vSAN running on top of my physical vSAN host (vSAN on vSAN) and I have enabled the new vSAN File Services to expose a new NFS volume called vsanfs-datastore. I then have a 4th Nested ESXi VM which has successfully mounted the NFS volume, pretty cool!
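For anyone wanting to reproduce the last step, mounting the File Services export on a Nested ESXi host is a one-liner with ESXCLI. The IP address and export path below are placeholders from my lab setup; substitute the file service endpoint and share path that the vSAN File Services UI reports for your volume.

```
# Mount the vSAN File Services export as an NFS v4.1 datastore on the Nested ESXi host
# (192.168.30.11 and /vsanfs-datastore are example values; use your own endpoint/share)
esxcli storage nfs41 add -H 192.168.30.11 -s /vsanfs-datastore -v vsanfs-datastore

# Or, if you created the share as NFS v3 instead:
# esxcli storage nfs add -H 192.168.30.11 -s /vsanfs-datastore -v vsanfs-datastore

# Confirm the datastore is mounted
esxcli storage nfs41 list
```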