With the vSphere 7 Launch Event just a few days away, I know many of you are eager to get your hands on this latest release of vSphere and start playing with it in your homelab. A number of folks in the VMware community have already started covering some of the amazing capabilities that will be introduced in vSphere and vSAN 7, and I expect to see that ramp up even more in the coming weeks.
One area that I have not seen much coverage on is homelab usage with vSphere 7. Given that this is a pretty significant release, I think there are some things you should be aware of before you rush out and immediately upgrade your existing homelab environment. As with any vSphere release, you should always carefully review the release notes when they are made available and verify that the hardware and its underlying components are officially on the VMware HCL; this is the only way to ensure that you will have a good and working experience.
Having said that, here are just a few of the observations that I have made while running pre-GA builds of vSphere 7 in my own personal homelab. This is not an exhaustive list and I will try to update this article as more information is made available.
Disclaimer: The following considerations are based on my own personal homelab experience using a pre-GA build of vSphere 7 and do not reflect any official support or guidance from VMware. Please use these recommendations at your own risk.
Legacy VMKlinux Drivers
It should come as no surprise that in vSphere 7, the legacy VMKlinux drivers are no longer supported. I suspect this will have the biggest impact on personal homelabs where unsupported devices such as network or storage adapters require custom drivers built by the community, such as the Realtek (RTL) PCIe-based NICs which are popular in many environments. Before installing or upgrading, you should check whether you are currently using any VMKlinux drivers, which you can easily do with a PowerCLI script that I developed last year and which is referenced in this blog post by Niels Hagoort.
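If you just want a quick check from the ESXi Shell rather than PowerCLI, listing any loaded vmklinux modules is a rough indicator; this is only a sketch, and the PowerCLI script referenced above is more thorough:
# an empty result is a good sign that you are only using Native Drivers
esxcli system module list | grep vmklinux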
You should also check with your hardware vendor to see if a new Native Driver is available, as many of our ecosystem partners have already finished porting to this new driver format over the past couple of years in preparation for this transition. For many folks, this will not be an issue and you are probably already using 100% Native Drivers, but if you are still relying on VMKlinux drivers, this is a good time to consider upgrading your hardware or asking those vendors why there is no Native Driver for ESXi. From a networking standpoint, there are other alternatives such as the USB Network Native Driver for ESXi Fling, which I will be covering in the next section.
Here are some VMware KBs that may be useful to review:
- Devices deprecated and unsupported in ESXi 7.0 (77304)
- vmkapi Dependency error while Installing/upgrading to ESXi 7.0 (78389)
- Upgrade of ESXi from 6.0 to 6.5/7.0 fails with CONFLICTING_VIBS ERROR (49816)
USB Network Adapters
The USB Network Native Driver for ESXi Fling is a very popular solution that enables customers to add additional network adapters to their homelab platform, especially for systems like the Intel® NUC which only include a single built-in network adapter. For folks using this Fling who plan to upgrade to vSphere 7, a new version of the Fling is required, which you can download from the Fling page here.
To install just run:
esxcli software vib install -d /ESXi700-VMKUSB-NIC-FLING-34491022-component-15873236.zip
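After installing and rebooting, you can confirm the driver is present with something along these lines (the exact VIB name can differ between Fling releases):
esxcli software vib list | grep vmkusb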
Aquantia/Marvell 10GbE NICs
If you are using either the 10GbE PCIe-based or Thunderbolt 3 to 10GbE network adapters, which use the Aquantia (now Marvell) chipset, Marvell has just released an official Native ESXi Driver for their AQtion-based network adapters. You can find the driver here, and for the complete list of supported devices, please have a look here.
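The driver ships as an offline bundle and, as far as I can tell, installs the same way as any other VIB; the filename below is just a placeholder for whatever the download page provides:
esxcli software vib install -d /path/to/Marvell-AQtion-ESXi-driver-offline-bundle.zip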
Intel NUC
I am happy to report that ESXi 7 runs fine on the latest generation of the Intel NUC 10 "Frost Canyon" as shown in the screenshot below.
One thing to note regarding the 10th Gen Intel® NUC is that the built-in NIC is not automatically detected because it uses a newer Intel chipset. Luckily, we have an updated ne1000 driver which is also compatible with ESXi 7; you just need to create a new ISO containing the updated ne1000 driver.
I am also happy to report that Intel® NUC 9 Pro/Extreme also work out of the box with ESXi 7 and all built-in network adapters are automatically detected without any issues.
I know several other folks have also had success installing or upgrading to ESXi 7 on older generations of the Intel® NUC, but I do not have that full list. For folks who have had success, feel free to leave a comment and I will update this page as more details are shared.
Unsupported CPUs
vSphere 7 removes support for a number of CPUs that have been around for over 10 years, which may impact some folks. A workaround is possible, but I would certainly advise looking at upgrading your hardware before going to the latest generation of vSphere to ensure you are future-proofing yourself. For more details on the workaround, please see this blog post.
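For reference only, the workaround that is commonly described in the community (again, at your own risk, and not something I am officially endorsing) is to append a kernel boot option when booting the installer:
# press Shift+O at the ESXi boot prompt and append:
allowLegacyCPU=true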
New ESXi Storage Requirements
In ESXi 7.0, there are new storage requirements that you should be aware of, and I recommend you carefully read through the official documentation found here for more details. In addition, there are several new ESXi kernel boot options in ESXi 7.0 that can be used to affect disk partitioning behavior and device selection related to these new storage requirements; a quick example is shown after the KB list below. I strongly recommend reviewing the following VMware KBs, as they may be beneficial if you are running into issues. For resizing the new ESX-OSData volume, please have a look at this blog post.
- New Kernel options available on ESXi 7.0 (77009)
- Installing ESXi on a supported USB flash drive or SD flash card (2004784)
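As a quick example of the kernel options mentioned above (based on KB 77009; treat the exact value as an illustration and confirm it against the KB for your build), you can cap the size of the ESX-OSData partition at install time:
# press Shift+O at the installer boot prompt and append, e.g. to cap ESX-OSData at roughly 8GB:
autoPartitionOSDataSize=8192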
NVMe PCIe SSD not showing up during Upgrade
It looks like several folks in the community ran into an issue where their NVMe SSDs were no longer showing up after upgrading from ESXi 6.7 to ESXi 7.0 using ESXCLI. It turns out this was user error: they were using the incorrect command, which not only caused an incorrect upgrade but also left ESXi 7.0 VIBs missing. Please have a look at this blog post for the correct command in case you are using ESXCLI.
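For reference, the form of the ESXCLI upgrade that folks generally want is the image profile update against the offline depot; the depot path and profile name below are examples for the GA build, so adjust them for your environment:
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-7.0.0-15843807-depot.zip -p ESXi-7.0.0-15843807-standard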
Supermicro
I am also happy to report that the E200-8D platform, another popular system in the VMware community, works out of the box with ESXi 7. I expect other Supermicro variants to just work as well; I do not have confirmation, but given these systems are on VMware's HCL, you should not have any issues.
Other Hardware Platforms
If you are wondering whether ESXi 7 will work on other systems that have not been listed, you can easily verify this yourself without affecting your current installation. Simply obtain a new USB device and load ESXi 7 onto it. You can then boot from this new USB device and install ESXi 7 onto that same device, which ensures you do not touch your existing installation. This is something many folks are still surprised to hear is possible, and it is a safe way to "test" a new version of ESXi as long as you do not overwrite or upgrade the underlying VMFS volume format in case there is a new version. From here, you can verify that your system is operating as expected before attempting to upgrade your existing installation.
- Thanks to Michael White, Supermicro SYS-5028D-TN4T works with ESXi 7
- Thanks to vincenthan, Supermicro E300-8D works with ESXi 7
- Thanks to Trevor, Supermicro E300-9D works with ESXi 7
- Thanks to Laurens van Duijn, Intel® NUC 8th Gen works with ESXi 7
- Thanks to Patrick Kernstock, Intel® NUC 7th Gen works with ESXi 7
- Thanks to NG Techie, Intel® NUC 6th Gen works with ESXi 7
- Thanks to Florian, Intel® NUC 5th-10th Gen works with ESXi 7
- Thanks to Oliver Lis, iBASE IB918F-1605B works with ESXi 7
- Thanks to Jason, Supermicro E300-8D works with ESXi 7
- Thanks to topuli, Gigabyte x570 Ryzen 3700x works with ESXi 7
vCenter Server Appliance (VCSA) Memory
Memory is always a precious resource, and it is also usually the first constrained resource in homelabs. In vSphere 7, the VCSA deployment sizes have been updated to require additional resources to support the various new capabilities. One change that I noticed when deploying a "Tiny" VCSA in my lab is that the memory footprint has increased to 12GB; it was previously 10GB.
For smaller homelabs, this can be a concern, and one approach that many folks have used in the past is to turn off vCenter Server services that you do not plan to use. If there are no adverse effects on your environment or usage, then this is usually a safe thing to do. Although I do not have any specific recommendations, you can use tools like vimtop to help determine your current memory usage. For example, below is a screenshot of vimtop running on a VCSA with 3 x ESXi hosts configured with vSAN and no workloads running. The default configured memory is 12GB but usage is ~5.1GB, so you can probably disable some services and reduce the memory footprint. Again, this is something that will require a bit of trial and error. If folks have any tips or tricks, feel free to share them in the comments.
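For those who want to experiment, the VCSA includes a service-control utility which you can run from an SSH session; the service name below is just a placeholder, and you should verify the impact of stopping any service in your own environment first:
service-control --list
service-control --stop <service-name>
service-control --status <service-name>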
Nested ESXi Memory and Storage
Running Nested ESXi is still one of the easiest ways to evaluate new releases of vSphere, especially with the Nested ESXi Virtual Appliance. As with previous releases, I plan to have an updated image to support the latest release. With that said, there are going to be a couple of resource changes to align with the latest ESXi 7 requirements. The default memory configuration will change from 6GB to 8GB, and the first VMDK, which is used for the ESXi installation, will also be increased from 2GB to 4GB. For those that want to know when the new Nested ESXi Appliance is available, you can always bookmark and check http://vmwa.re/nestedesxi
Nested ESXi on Unsupported Physical ESXi CPU
For those wanting to run Nested ESXi 7.0 on an older release of vSphere that may have an unsupported CPU, check out this blog post by Chris Hall, which uses a nice CPUID mask trick to work around the problem.
vSAN File Services
With vSAN 7, one of the really cool features that I think can really benefit homelabs is the new vSAN File Services, which Duncan covers in a great blog article here. If your physical storage is running vSAN, you can now easily create NFS v3/4.1 volumes and make them available to your homelab infrastructure. For me, I am constantly building out various configurations, and sometimes it is nice to be able to create various storage volumes that contain VMs and/or other files which I can easily re-use without having to manage yet another VM. One example is having an NFS share that I can easily mount to my Nested ESXi VMs for testing purposes. I am only using this for homelab purposes, and I strongly recommend you do the same, as I am sure this is not supported in Production and could also violate the vSAN EULA.
In the screenshot below, I have 3 Nested ESXi VMs configured with vSAN running on top of my physical vSAN host (vSAN on vSAN) and I have enabled the new vSAN File Services to expose a new NFS volume called vsanfs-datastore. I then have a 4th Nested ESXi VM which has successfully mounted the NFS volume, pretty cool!
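For those curious, mounting the share from a Nested ESXi VM is just a standard NFS mount; here is a minimal sketch where the file service IP and share path are placeholders for whatever vSAN File Services shows in the UI (use the nfs41 namespace instead if you created an NFS 4.1 share):
esxcli storage nfs add -H <file-service-ip> -s /vsanfs-datastore -v vsanfs-datastore
esxcli storage nfs list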
Hey! Thanks for all of the great information! I've not had any time to even read up on v7, and this prepares me for the speedbumps!
Is there also going to be a free version of ESXi 7?
Hi, great coverage. What about the passthrough function? Is it possible to pass through the onboard gfx? Thanks
What are the new memory/disk minimums, if any, for the VCSA or ESXi? Did you notice any increase in the base footprint?
I'm looking to build a new 2U, power efficient homelab server. Any reports of individual Intel based supermicro or asrock motherboards working with vSphere 7.0?
Everything seems to work well on my Asrock Rack EPYC3251D4I-2T but my IBM M1015 does not.
How well does the Asrock IPMI work? Is it pretty modern?
The HTML Console works better than many I've used from bigger vendors, the rest of the IPMI is relatively barebones in my opinion but works well and is pretty responsive.
@Jake, what doesn't work with your M1015? Is it not detected, not stable? (are you running the IBM fw or the LSI?)
It gives me an error on upgrade that it's not compatible, then the device itself is detected in ESXi but zero functionality it seems. Running LSI IR fw.
@Jake if ESXi itself detects the device you should be able to use passthrough to give the M1015 and its disks to a NAS VM like FreeNAS or Napp-IT and then create datastores you can give back to ESXi using NFS for example.
This is a beautifully terrifying idea, I will definitely have to give it a try.
Would you happen to know when NSX-T will be supported with ESXI 7.0?
NSX-T 3.0 will support ESXi 7.0 and should be out in the coming weeks
So, how do you get around having vmklinux drivers for vSphere 7 if native drivers aren't available? I am running a home lab with a Dell Optiplex using the Broadcom vmklinux driver. When I run your script, it says I have vmkapi_v2_3_0_0_vmklinux_ship, vmklinux_9, vmklinux_9_2_3_0 and vmklinux_9_2_2_0 as true.
Sorry, meant Realtek driver, not broadcom. I assume even upgrading from 6.7 to 7.0 will disable this driver, correct?
The workaround is to either stay on 6.7 OR get new hardware that works with 7.0. I don't know if our installer will allow you to perform a forced upgrade; I know they put in pre-checks such as these but wasn't sure if it's just a warning. You're really on your own if you attempt to take it forward, and no, the drivers aren't disabled, they literally DO NOT exist in 7.0 and will not even be carried into the upgraded system if that works.
Thanks William. I was able to find an Intel I340 card and installed that. I did load the AHCI driver in my customized 6.7 image, but I'm not sure if it's even needed.
I ran esxcli system module list | grep vmklinux after uninstalling the net55-r8168 vmklinux driver and it just went to the next command line. Does that mean the sata-xahci-1.42-1.x86_64.vib driver is fine and will run on 7.0 when available for home labs, aka free? When I run esxcli software vib list, it shows the vendor as VFrontDe and the acceptance level as CommunitySupported.
Not even sure I use the sata-xahci custom driver. Here is what my storage adapter shows in vSphere:
vmhba0
Model: Lynx Point AHCI Controller
Driver: vmw_ahci
Has anyone had luck installing 7.0 onto a Supermicro X10SDV-TLN4F?
Anyone know if any Dell R720 work with esxi 7? I know it’s not on the HCL list but curious if it will install and work at all.
I have it running on an R620 with H710 mini and Intel x520 nics
R720 upgrade via Lifecycle manager does not work for me - error of "Dell (LSI) H710P is not supported". I am able to install via ISO if I accept the deprecated CPU/unsupported RAID. Not clear what will not work.
May I ask about the raid controller model on your R620? And which ISO did you use for installation, the Dell OEM one or the VMware official one? Thanks
Did you have to use a special image? or did it just work for you?
ESXi 7.0 works with Dell Poweredge T30 (installed on USB 16GB)...even with HP P410 and a configured Dynamic DirectPath I/O with Intel HD Graphics P530.
Just successfully installed ESXi 7.0 on the Intel NUC NUC7i3BNH
[root@localhost:~] esxcli network nic list
Name PCI Device Driver Admin Status Link Status Speed Duplex MAC Address MTU Description
------ ------------ ------ ------------ ----------- ----- ------ ----------------- ---- -----------
vmnic0 0000:00:1f.6 ne1000 Up Up 1000 Full **:**:**:**:**:** 1500 Intel Corporation Ethernet Connection (4) I219-V
[root@localhost:~] vmware -vl
VMware ESXi 7.0.0 build-15843807
VMware ESXi 7.0 GA
[root@localhost:~] esxcli hardware platform get
Platform Information
UUID: 0x3e 0x97 0xfa 0xba 0xd4 0xa9 0x34 0xc4 0xea 0x8d 0x94 0xc6 0x91 0x16 0xb6 0xd6
Product Name: NUC7i3BNH
Vendor Name: Intel Corporation
Serial Number: **************
Enclosure Serial Number:
BIOS Asset Tag:
IPMI Supported: false
You say that "... there are going to be a couple of minor resource changes to align with the latest ESXi 7 requirements", but what about the 128GB storage footprint on the installation disk? The requirement is mentioned in the documentation. So far so good, but I'd like to understand why the OSDATA VMFSL partition requires up to 120GB of disk space. Can you please share any insight about this?
Yep, it's just used over half of my 240gb msata! Need to find a way to reduce that, even if it's using a USB memory stick boot drive... https://vinfrastructure.it/2020/04/vmware-esxi-7-0-partitions-layout/
An easy way to do this is to install e.g. ESXi 6.7, and then upgrade to ESXi 7.0 with the Preserve VMFS option.
Probably won't affect many but the Intel 82571 1GB NIC isn't supported by vSphere7 anymore - it was a popular onboard NIC for a few Supermicro servers. Mine (X9DRL-3F) dates back to 2012 so it's well overdue an upgrade but the rest of the box is still doing sterling duty so I guess it'll manage another year or two running v6.7 (or I'll buy new NICs).
I just tried a fresh install and my Intel 82574L Gigabit NIC isn't detected either on 7.0. What is weird is I see this listed on VMware's HCL. Maybe I am misreading it, but it seems to be plainly listed there.
After giving up on my 82571EB and assuming I had to live w/o it, today my ESXi7 decided to recognize it.
I guess after some update. Driver is ne1000:
# lspci
...
0000:00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-V [vmnic0]
0000:02:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic1]
0000:02:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller [vmnic2]
# esxcli software vib list | grep ne1000
ne1000 0.8.4-10vmw.700.1.0.15843807 VMW VMwareCertified 2020-04-16
I have been experimenting with it on a Mac Pro 6,1 6 core machine that I use in my home setup and so far so good. NICs work properly etc.
Upgraded the NUC5i5 successfully to vSphere 7.0 combined with fling USB NIC driver
Hello, I have updated my ESXi from 6.7u3 to 7.0. Like many users, I think two of my network cards used the VMKlinux drivers and no longer work.
Do you know which network cards with a PCI Express x1 bus are compatible with ESXi v7 natively? And if possible not too expensive. Thanks in advance.
Hey William, great article!
Like many people, I'm running a homelab on an AMD3600x with a motherboard with a Realtek NIC.
Up to now I've been running the net55-r8168 driver and it's been running perfectly under 6.7 Update 3 after injecting the VMKlinux community driver.
Can vmware not cut us homelab users some slack and provide an unsupported equivalent driver that works under vsphere7?
As you know, it's early adopter homelab users that often have significant input into VMware strategy, architecture and corporate spend, so it would be great if this specific driver could be refactored to work under vSphere 7.
thanks Ashley
Ashley,
It's not about VMware cutting homelab users or anyone else slack; these drivers are/should be written by the hardware vendors. In this case, Realtek should join VMware's development program so that they can build and provide drivers to their end-customers just like any other vendor in our ecosystem program.
Thanks for this article.
I am afraid everyone with an Aquantia 10GbE card will have to wait and see if Marvell does something, or just replace their card.
This sucks big time 🙂 I am not aware of any TB3 -> 10GbE network card that has a different chipset :/
As mentioned in the other thread - We have been working with Aquantia/Marvell to have them build a Native Driver for ESXi, they are currently going through the certification process and we hope to have some news to share on that front. Please stay tuned
I upgraded one of my homelab hosts to see if I could get a Mellanox ConnectX2 working but I'm failing so far. Any tips for a newb?
Just an update, a few other HomeLab'ers are having the same issue with this card despite it being listed as supported on VMWare's site. The exact model listed on the ESXi host is: Mellanox Technologies MT26448 [ConnectX EN 10GigE , PCIe 2.0 5GT/s]
Another redditor tried upgrading to the latest firmware and still gets the same message when installing esx 7 about unsupported PCI devices.
Not sure where to go from here, any advice please?
Thank you!
I'm glad I'm not the only one. Slightly salty at this point because my previous 10G cards weren't compatible, so I purchased three MT26448's to replace them. Now I have six useless cards. 🙁
I tuned a small script, ESXi7-enable-nmlx4_co.v00.sh, to fix support for the MT26448 [ConnectX-2 EN 10GigE, PCIe 2.0 5GT/s]. More info is here:
https://vdan.cz/vmware/esxi-7-0-and-mellanox-connectx-2-support-fix/
Daniel, I have to give a profound thanks. Your Mellanox fix saved me on a new install of 7.0 U2. Worked like a charm on the first try.
Just want to leave this here, as I struggled with this. Maybe someone will find this helpful in the future.
For me, Daniel's patching workaround worked, but only after I explicitly installed 7.0.2. It did not work on a fresh 7.0.3 install (the necessary file /bootbank/nmlx4_co.v00 was missing)
Do you know which network cards with a PCI Express x1 bus are compatible with ESXi v7 natively?
Hey William, as always thanks for the hard work.
What I thought might be a problem worked great: the upgrade of my SuperMicro X10SRH-CLN4F runs 7 just fine.
What I thought was going to be a breeze is the issue: the upgrade of my 4 nested ESXi 6.7 VMs to 7. Not seeing the hard drives? Really strange
Tom Miller
Here is the error on the Update Manager page in VCSA 7
Unsupported devices [1000:0030 15ad:1976] found on the host. The system image on the attached iso lacks a storage driver for the installed bootbank.
Hi, I am facing a similar issue in my homelab. Any pointers to look at ?
OK - got it, switch to the paravirtual controller and guest OS version 6.5 or greater
Thank you for pointing this out. I had burned a few hours before seeing this comment. On my nested ESXi 6.7u3 I was able to upgrade to 7.0.1 after changing the SCSI Controller to "VMware Paravirtual"
6.7 > 7.0 Upgrade on a X11SDV-8C-TP8F board went smoothly. 1G and 10G NICs, Geforce Quadro p2200 and LSI 9300 HBA passthrough work fine. Only issue I have is with my Aeotec z-wave USB stick currently.
I am having the same issue with my z-wave stick as well.
Anyone have a Thunderbolt 10 GBE solution working on ESXi 7? The aquantia driver I am using doesn't appear to be updated yet for ESXi 7, so I'm doubting it'll survive the upgrade.
Hello William and all, I have a NUC 10 "Frost Canyon" running vSphere 7 with the updated ne1000 driver and running virtual ESXi 7. But I have an issue where I cannot boot up a nested VM within the ESXi VM. Every time I power a VM on, the nested ESXi VM shuts down. Has anybody seen this before or have any ideas?
I have the same issue on Intel NUC7i5. Haven't been able to find a solution yet...
Yes, I actually had to use vSphere 6.7 U3 after many hours of trial/error with 7. With a 6.7 custom image (including new ne1000 driver) I am now able to run nested VMs on vSAN within the ESXi VMs. Now, I just need to get networking for those nested VMs fixed.
Thanks for the useful information, William. One of my three hosts has a Realtek NIC on the m/b - I'm not worried about losing it because I don't use it anyway, but will that cause ESXi to renumber all my other vmnics and screw up my DVUplinks?
Upgraded the E200-8D platform; looks like we need updated VIBs?
Prior to the upgrade, ping response was 1ms; now see the sample below.
Pinging 192.168.5.10 with 32 bytes of data:
Reply from 192.168.5.10: bytes=32 time=11ms TTL=128
Reply from 192.168.5.10: bytes=32 time=118ms TTL=128
Reply from 192.168.5.10: bytes=32 time=625ms TTL=128
Reply from 192.168.5.10: bytes=32 time=79ms TTL=128
Reply from 192.168.5.10: bytes=32 time=167ms TTL=128
Reply from 192.168.5.10: bytes=32 time=52ms TTL=128
Reply from 192.168.5.10: bytes=32 time=118ms TTL=128
Reply from 192.168.5.10: bytes=32 time=932ms TTL=128
Reply from 192.168.5.10: bytes=32 time=40ms TTL=128
I can confirm 7.0 installs and runs successfully on Supermicro X10SDV-TLN4F and X10SLM+-LN4F motherboards
Hi, I have an X10SDV-16C+-TLN4F and was able to install v7, but had some compatibility issues with my old raid controller card. Can anyone recommend one that will be suitable?
Ideally I'd like to have 2 arrays, one for SSD drives and another for spinning disks.
Thx in advance
Successfully installed the standard ESXi 7.0 image on an Intel NUC DC3217IYE that is now 8yrs old...I am amazed. Always needed to inject updated drivers into any previous image I had used.
Hello,
Do you know which network cards with a PCI Express 1x bus are compatible with esxi v7 ?
Thx
Does the HP Z840 (with E5-2690 v3) support ESXi 7.0? The VMware HCL seems to include the E5-2600 v3 family.
If not supported, are there any workarounds?
I'm running 6.7 U3 with the USB Fling but can't upgrade the profile because of the BOOT NIC:
[root@esxi:~] esxcli software profile update -d /vmfs/volumes/ESXI/VMware-ESXi-7.0.0-15843807-depot.zip -p ESXi-7.0.0-15843807-standard
[HardwareError]
Hardware precheck of profile ESXi-7.0.0-15843807-standard failed with errors:
Please refer to the log file for more details.
[root@esxi:/vmfs/volumes/56bcb324-9b1cf494-34a6-00012e6b1232] esxcli software vib list | grep fling
vmkusb-nic-fling 2.1-4vmw.670.2.48.33242987 VMW VMwareAccepted 2020-04-22
Installing the Fling also fails of course:
[root@esxi:/vmfs/volumes/56bcb324-9b1cf494-34a6-00012e6b1232] esxcli software vib install -d /vmfs/volumes/ESXI/ESXi700-VMKUSB-NIC-FLING-34491022-component-15873236.zip
[DependencyError]
VIB VMW_bootbank_vmkusb-nic-fling_0.1-4vmw.700.1.0.34491022 requires vmkapi_incompat_2_6_0_0, but the requirement cannot be satisfied within the ImageProfile.
VIB VMW_bootbank_vmkusb-nic-fling_0.1-4vmw.700.1.0.34491022 requires vmkapi_2_6_0_0, but the requirement cannot be satisfied within the ImageProfile.
Please refer to the log file for more detai
Why can't I update? I only have 1 NIC and that is the USB NIC...
There's an ESXi 7.0 version of the VIB that you'll need in this situation. Author a custom ISO image that contains the required VIB and you should be able to upgrade
Yeah, I was indeed busy with that as an alternative. Anyway, I've created the custom update package including the Fling and it still fails with the same error. Bummer!
PS C:\Users\XXX\Downloads> .\ESXi-Customizer-PS-v2.7.0.ps1 -v70 -pkgDir .\esx7\ -ozip
This is ESXi-Customizer-PS Version 2.7.0 (visit https://ESXi-Customizer-PS.v-front.de for more information!)
(Call with -help for instructions)
Logging to C:\Users\XXX\AppData\Local\Temp\ESXi-Customizer-PS-13924.log ...
Running with PowerShell version 5.1 and VMware PowerCLI version 12.0.0.15939655
Connecting the VMware ESXi Online depot ... [OK]
Getting Imageprofiles, please wait ... [OK]
Using Imageprofile ESXi-7.0.0-15843807-standard ...
(dated 03/16/2020 10:48:54, AcceptanceLevel: PartnerSupported,
The general availability release of VMware ESXi Server 7.0.0 brings whole new levels of virtualization performance to datacenters and enterprises.)
Loading Offline bundles and VIB files from .\esx7\ ...
Loading C:\Users\XXX\Downloads\esx7\ESXi700-VMKUSB-NIC-FLING-34491022-component-15873236.zip ... [OK]
Add VIB vmkusb-nic-fling 0.1-4vmw.700.1.0.34491022 [OK, added]
Exporting the Imageprofile to 'C:\Users\XXX\Downloads\ESXi-7.0.0-15843807-standard-customized.zip'. Please be patient ...
All done.
I don't have a clue anymore what I'm doing wrong. I do see the Fling package in the .zip...
Pay attention: you are using the 7.0 driver on a 6.7 system.
Are there any known issues with mounting NFS as datastores on the NUC10 in ESXi v7? I can mount, authenticate, and read/write data in the datastore browser, however ESXi only sees a 2TB share from a QNAP NAS as 16MB capacity with drive type unknown, so I can't create any VMs since the sysinfo is wrong. No error messages were shown during mount. iSCSI LUNs work fine, but I was hoping for more performance and to be able to share more easily with my management laptop for ISO modifications.
I have installed v7 on my Supermicro SuperServer 5028D-TN4T hosts and want to deploy a 2-node vSAN. I haven't decided what I will use to serve as the witness. I want an all flash deployment and am trying to figure out what I will use for the cache and capacity layers. I have assumed that I will want a PCI-E device for the cache and most likely a M.2 card. I'm not sure what the "best bets" are in this arena.
Has anyone had success with the Broadcom NetXtreme II BCM57711 and ESXi 7?
A good memory saver is to re-enable TPS. I have had pretty good results in my homelab. Just set the advanced settings Mem.ShareForceSalting=0 and Mem.AllocGuestLargePage=0
With that, on both physical and nested ESXi, I can run VCF 3.9.1 on 4 x 8-way / 128GB nested ESXi hosts with 36GB of used memory!
I run 3x Gigabyte X570 UD with vSAN and Ryzen 5 2600.
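For anyone who prefers to apply the TPS-related settings mentioned above from the ESXi Shell rather than the UI, here is a hedged sketch (the same caveats apply, since salting was introduced for security isolation between VMs):
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 0
esxcli system settings advanced set -o /Mem/AllocGuestLargePage -i 0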
Thanks for the great article William. I just installed vSphere 7 in my homelab on the E300-9D servers as well in case you want to add them to your list.
Trevor, I"m looking at a home lab upgrade as my servers are now about 5-6 years old. I"m intrigued by the E300-9D. Can I ask what your experiences have been and what made you pick this make/model?
Hello William,
How are you?
I have a few desktop class machines with Intel I217-LM NICs and want to check if ESXi 7 supports them?
Thank you
Hi William, your content continues to be invaluable for us playing along at home. I can confirm v7 is running on my macmini6,2s (Late 2012, i7 2.6). Your autoPartitionOSDataSize tip was useful for a fresh install on a 128GB SSD. However, upgrading from 6.7u3 to 7.0 on an 8GB USB drive didn't work well for me. The upgrade completes as expected, the host boots (7.0) and reconnects to vSphere but never survives a reboot (Hypervisor Recovery can't find a boot image). I can reproduce it, and strangely, even when connected to vSphere there are no storage devices mounted (esxcli storage filesystems list) and bootbank/altbootbank are empty. Cheers.
Hello William,
I confirm that the ESXi 7 (VMware-VMvisor-Installer-7.0.0-15843807) install was successful for me on the ASRock X399 Taichi sTR4 (AMD X399) with an AMD 2nd Gen Ryzen Threadripper 2950X 16-Core CPU. The motherboard BIOS UEFI version is X399 Taichi P3.90. This motherboard has two Intel I211-AT Gbit NICs that are not listed in the VMware Compatibility Guide for ESXi 7 but seem to also work. See: https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=37601
Regards
I have been running the Taichi for some time with 1st Gen Threadrippers. Upgraded to 7.0 some time ago. I had some issues with distributed switches during the upgrade, but I have seen similar issues with other setups, so I don't think it was related to the hardware. I ended up doing a fresh install and that went cleanly without any special handling. I am curious if anyone has tried these MBs with more than 128GB of RAM. ASRock has tested them with 32GB DIMMs and they worked, but they were not able to get enough free demos to test above 128GB and apparently didn't want to spend the money to do it, so they have not certified it above 128GB.
I've had lots of trouble doing host and vCenter migrations with distributed switches of late. I think I have discovered the way to do it; at least this has worked for me and eliminated the need to reinstall ESXi due to a network configuration that was not repairable.
Before migrating your host attached to a dvSwitch, remove all the port groups from that host other than Management.
From the DCUI System Customization screen (F2 from the home screen), choose Network Restore Options. From that screen, choose Restore Standard Switch. That will move the Management network to a new standard switch.
Now you can clean up the old dvSwitch from the host and create/join a new one.
I've found that trying to Restore a standard switch with multiple port groups or VMK port groups attached can leave the network in an unusable, unfixable condition and, in turn, force a reinstallation of ESXi. (An upgrade install works to preserve host configuration.)
Hi William and anyone else who might be interested... I have finally been able to upgrade my home lab from ESXi 6.7 to ESXi 7. The lack of native drivers for my NIC was holding me back, but I saw there were some i350-T2 chipsets on the HCL, so I took a chance on a card from AliExpress running the i350AM2 chipset (https://www.aliexpress.com/item/4000314048950.html) and it worked a treat - the fact it's a PCIe x1 card suits my homelab server, as that is the only slot left! The only quirk I picked up was a bug in the toggle of GPU pass-through in ESXi 7 (mainly for homelabs also being used as game servers like mine): https://tinkertry.com/vmware-vsphere-esxi-7-gpu-passthrough-ui-bug-workaround
New file system layout on install device sucks.
Previously I would use the leftover empty space on the USB install with a datastore that only contained ISOs... now I can't even do that, and it wastes a ton of space.
Stupid.
No problems using an old 2012 Mac Mini (macmini6,2 - i7-3720QM). Networks fine. Running with a second NIC (USB, TP-LINK RTL8153) as a backup server until I can get my Supermicro E200-8D fixed/replaced. Interestingly, I use the second NIC to connect to an iSCSI-only physical network (Jumbo Frames), and I couldn't get the USB driver to work when MTU is set to 9000. Had to swap the roles (iSCSI management/VM) with the onboard NIC. Aside from that, it runs smoothly.
Interestingly, the installer flagged the processor (Ivy Bridge) as being deprecated and warned that future versions will not support it.
William: I just acquired my first of four nodes for a vSAN configuration. It's an HP EliteDesk 800 G5 Mini with 64GB RAM and two M.2 SSDs, a 250GB for cache and a 2TB for capacity. The machine also has a Thunderbolt 3 port.
I also acquired an OWC (Aquantia/Marvell 107 chipset) Thunderbolt to 10GbE "NIC." Your link, above, led me to the driver for ESXi 6.7. I am running ESXi 7.
The Compatibility Matrix lists the driver as being applicable to all major versions of 6.7 as well as 7, but the driver itself is clearly 6.7 only. Is this a mistake in the Matrix? Is there a way to force-install the 6.7 driver (I used VUM)? Am I missing something?
Any idea when a driver specifically targeted at ESXi 7 will be out?
Thanks for your help!
Hi Jeff,
Thanks for doing your homework on the AQC/Marvell. The driver is indeed compatible with both 6.7/7.0; there's literally no code change, and hence its support for 7.0 was simply "carried over" from 6.7 from a certification point of view, as I had the same questions for them. I know this has led to slight confusion as it's only available for download under 6.7, but I've had multiple confirmations from the Marvell folks that it's fully supported, and myself and others have used it with 7.0, so you can safely install 🙂
Manually installed. Working! Thank you, William!
Just a quick question - Does anyone know if I can run a nested ESXi 7.0 lab using 10th Gen i7-10700 CPU? I would suspect it would be supported but just looking for someone with experience/knowledge.
Did anyone manage to get the BCM5751 working with ESXi 7?
While I have used ESXi for 10 years, I finally got around to building a 3 host environment, 1 primarily to run vSphere plus 2 hosts. Using 7.0 VMUG, I have been running 7.0 stably on a Lenovo Tiny M900 for months. I have now added a 2nd M900 and an M920. Works natively with no additional drivers. Highly recommend these boxes. Also, with vPro, you can remotely power them on/off if required.
Derrick, I agree. Same here. The Lenovo M900 Tiny is perfect as well. Do you have any special configuration for the HW monitoring? I assume it is not out of the box.
Thank you.
Thanks to William and to Derrick for the helpful info. I picked up a very inexpensive Lenovo M910q Tiny. SSD prices are also low, so I got an NVMe M.2 boot drive and a SATA SSD storage drive. Installing ESXi 7.0.3 required no additional drivers. Thinking about getting another one...
Thank you for the great information! I want to ask if WOL is working now with the NUC 10i7FNH.
I am deciding between a Supermicro build and a NUC, and working WOL is important for me.
I have Broadcom QLogic 57711 cards in all my hosts (production 6.7). The cards seem to be on the compatibility list, but I am not sure what brand the cards are without pulling one of them; i.e. several 57711 cards are on the list using the qfle3 drivers. My issue is that my ESXi hosts are using the VMKLinux bnx2x drivers, not the qfle3 ones. I have downloaded the latest qfle3 drivers from VMware and installed them, but the hosts did not swap to them. The notes on the download page specify that only the 57712 and above are supported by the drivers, and the same is said about the 7.0 drivers as well, yet the compatibility list says they should work on the 57711. So I guess my question is: would the ESXi hosts have swapped to the native drivers automagically, or do I need to do something to force it, assuming they are compatible (which is in question)?
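Not an authoritative answer, but one common approach when a legacy driver is still claiming a device is to disable that module so the native driver gets a chance to claim the NIC on the next boot; try this only as an experiment, with another way into the host in case you lose networking, and only if qfle3 really does support the 57711:
esxcli system module set -m bnx2x -e false
# reboot, then check which driver claimed the NICs:
esxcli network nic list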
Just updated my 7 year old i3 4010U NUC to a Shuttle DS10U5 (http://www.shuttle.eu/products/slim/ds10u5/) on ESXI 7.0b and it works out of the box using a Samsung nvme drive.
I have Dell R720s and added x520 (dual SR) NICs but they don't seem to be fully recognized. The host shows them in the hardware list (GUI and CLI) and I can configure SR-IOV and pass-through on them but they never show up in the list of physical NICs.
I checked the existing VIB and it's the latest version out there but I tried to update it anyway and there was no change. I tried a fresh install of the non-OEM ESXi as well as the Dell version (7.0.0 Build 16324942), swapped cards and confirmed PCI slots and still no change.
For those who have the x520s running, was there something additional you had to do to make these functional?
Our test cluster is all Dell R720, and two were upgraded from 6.5 -> 7.0GA and both the onboard (dual X520 + dual 1G card) and a PCI X520 were fine.
Checking a third R720 that was fresh-install 7.0, it also saw the cards fine.
Now ok for us is we can pass data through, add to vDS, vSS, etc. We do not do SR-IOV/passthrough.
Thanks for the reply Jason. Trying a fresh install of 6.7 now to see if that makes a difference.
I finally figured out the problem, which could affect other home labbers. It turns out the Intel cards only allow a specific couple of SFP+ modules to be used and will basically shutdown if an unsupported SFP+ is installed. I found a guide to reprogram the EEPROM to accept unsupported SFP+ modules and now everything is working.
If anybody wants the guide to do this themselves, it's here: https://forums.servethehome.com/index.php?threads/patching-intel-x520-eeprom-to-unlock-all-sfp-transceivers.24634/
As a further FYI Dell had at one time listed SFP+ that were not compatible - we ordered with SFP and they were not able to work (Dell swapped in our case), so this may apply with vendor provided optics too.
Hi, I created an ESXi 7.0 ISO with ESXi-Customizer and bundled it with ESXi700-VMKUSB-NIC-FLING-39035884-component-16770668.zip. The installation completed successfully and it actually works. However, I got the following error when I tried to update from build 16321839 to build 16324942. The error suggests the driver is not a native one. Is there any way to update, or can I not update ESXi 7.0?
[HardwareError]
Hardware precheck of profile ESXi-7.0b-16324942-standard failed with errors:
Hi! I confirm ESXi 7.0 works flawlessly on the SuperMicro SYS-6018R-MTR and SuperMicro SYS-1029TP-DC1R. Both out of the box.
Hi,
On ESXi 6.7 U3, the I340-T4 NIC does not give the option of Hardware iSCSI; is it supported or do I have to make do with Software iSCSI only?
Are there any drivers which can make Hardware iSCSI work with it?
Thanks,
Would you have any suggestions on getting PCIe passthrough working on my Dell PowerEdge C4140 with quad V100 running CentOS 7 (qemu 5.2 compiled with e1000e and vmxnet3 support), attempting to pass the V100 devices to a VMware ESXi 7.0.3 guest? I get to the point where the devices are listed in the VMware GUI but passthrough is greyed out.
I have an HP Z640 workstation and I tried to install ESXi 6.5, 6.7 and 7. Every time the boot / installation process stops at:
"Shutting down firmware service...
Using 'simple offset' UEFI RTS mapping policy
Relocating modules and starting up the kernel..."
I tried this workaround but does not work:
Press Shift+O during startup of an ESXi installation.
Append the boot option
ignoreHeadless=TRUE
I noticed several guys managed to install and run ESXi on the HP Z640. Do you or anyone else know what needs to be done to successfully install any ESXi version?
I managed to install ESXi 6.5 finally. I had to disable the "UEFI All" ROM Launch Policy and change it to Legacy ALL
Here is a screenshot of ESXi 6.5 running on the HP Z640 workstation: https://postimg.cc/G8YvJbGs. It has been installed via USB onto a Samsung 870 EVO 2TB 2.5 Inch SATA III Internal SSD. The next test is to install it on an NVMe M.2.
2 x Intel Xeon CPU E5-2695 v4 @ 2.1GHz and 256 GB of Memory.
Key Advanced BIOS Settings that worked for me:
1) Configure Legacy Support and Secure Boot: Enable Legacy Mode and Disable Secure Boot
2) Configure Option ROM Launch Policy: Legacy ALL
https://h30434.www3.hp.com/t5/Desktop-Boot-and-Lockup/Option-ROM-Launch-Policy-set-All-Legacy-in-BIOS-What-means/td-p/8462533
https://h30434.www3.hp.com/t5/image/serverpage/image-id/295832iA126D1EA5B05DB91/image-size/large?v=v2&px=999
Additional Settings that are needed but not relevant to boot issue:
System Security:
Virtualization Technology (VT-x): Enable
Intel VT For Directed I/O (VT-D)(VT-x): Enable
Intel TXT(LT) Support: Enable
I hope the above will help HP Z640 users that are having boot issues
Update: ESXi 6.5 also works with the "UEFI All Except Video" BIOS setting as well. However, I wasn't able to upgrade to 7. The upgrade process stops at the same 3 messages shared above and then it reboots back to 6.5. The ignoreHeadless=TRUE workaround during upgrade didn't work.