To deploy VMware Cloud Foundation (VCF), your physical or virtual ESXi host must have at least two network adaptors to be able to migrate to a Distributed Virtual Switch (VDS), which is configured as part of the VCF Bringup process. While you can technically migrate to a VDS with just a single network adaptor using this trick, it is definitely easier if you have a system that meets this basic requirement.
Earlier this year, I demonstrated that you can deploy VCF using just an Intel NUC with only 64GB of memory, which is the minimum needed to run a single-node VCF Management Domain; however, it does not leave you with much room for running other workloads since it pushes the memory limits.
The ASUS PN64-E1 is currently one of my top favorite small form factor kits, especially being able to support up to 96GB of memory using the new non-binary DDR5 memory modules. After the release of VCF 5.1, I wanted to use the ASUS PN64-E1 for a VCF deployment, but there was only one problem ... my particular configuration of the PN64-E1 only had a single network adaptor!
I thought I could outsmart the VCF Bringup pre-check by using a USB network adaptor and installing the popular USB Network Native Driver for ESXi 😉
However, it turns out the pre-check is looking for PCIe-based network adaptors, so while the system does have two network adaptors, it still failed the pre-check and prevented the deployment from continuing. I ended up reaching out to some of the VCF Engineers to see if there were any workarounds, and one of them was kind enough to provide me with a nice workaround that would benefit users looking to play with and explore VCF in a lab environment.
Disclaimer: This is not officially supported by Broadcom, use at your own risk.
To work around the PCIe-based network adaptor pre-check, a set of modified JAR files is required to replace the default ones in VMware Cloud Builder. Below are the requirements along with a simplified shell script that will optimize the deployment of VCF for small form factor systems such as the ASUS PN64-E1 or Intel NUC.
Requirements:
- VMware Cloud Builder 5.1 OVA (Build 22688368) or VMware Cloud Builder 5.0 OVA (Build 21822418)
- VCF 5.1 or 5.1 Licenses
- ASUS PN64-E1 configured with
- 64GB of memory or more
- Additional supported USB NIC
- 2 x empty SSDs for the vSAN bootstrap (500GB+ for capacity)
- ESXi 8.0 Update 2 or ESXi 8.0 Update 1a installed on the system using a USB device
- USB Network Native Driver for ESXi installed and recognizing the USB NIC (1GbE+), as shown in the example after the note below
- Ability to deploy and run the VMware Cloud Builder (CB) Appliance in a separate environment (ESXi/Fusion/Workstation)
Note: While my setup used an ASUS PN64-E1, any system that meets the basic requirements will also work.
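If you still need to install the USB Network Native Driver for ESXi, here is a minimal sketch of the install; the component zip filename below is a placeholder, so substitute the download that matches your ESXi 8.0 build.

# Run on the ESXi host via SSH. The zip filename is a placeholder for the
# USB Network Native Driver for ESXi component that matches your ESXi build.
esxcli software component apply -d /vmfs/volumes/datastore1/ESXi-USB-NIC-Fling-component.zip

# A reboot is required before the USB NIC shows up as vusb0
reboot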
Step 1 - Download the two required JAR files for your desired version of VCF below and then transfer those files to your Cloud Builder appliance using the admin account:
VCF 5.0
VCF 5.1
Step 2 - Download the setup_vmware_cloud_builder_for_one_node_management_domain_with_usb_nic.sh shell script and transfer it to the Cloud Builder appliance using the admin account (see the scp example below).
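If you are not sure how to get the files over, here is a minimal example using scp from the machine where you downloaded them; the IP address below matches the Cloud Builder example used later in this post, so substitute your own.

# Copy the two JAR files and the setup script into /home/admin on the Cloud Builder appliance
scp vsphere-plugin-1.0.0.jar vsphere-sdk-wrapper-1.0.0.jar admin@192.168.30.190:/home/admin/
scp setup_vmware_cloud_builder_for_one_node_management_domain_with_usb_nic.sh admin@192.168.30.190:/home/admin/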
Step 3 - Switch to the root user, make the script executable, and run it as shown below:
su -
cd /home/admin
chmod +x setup_vmware_cloud_builder_for_one_node_management_domain_with_usb_nic.sh
./setup_vmware_cloud_builder_for_one_node_management_domain_with_usb_nic.sh
Note: The script will automatically back up the original vsphere-plugin-1.0.0.jar and vsphere-sdk-wrapper-1.0.0.jar files before replacing them, if they exist in the current directory.
The script will take some time, especially as it converts the NSX OVA->OVF->OVA, and if everything was configured successfully, you should see the same output as in the screenshot above.
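If you want to double-check the result, a quick sanity check is to look for the backup copies the script creates before it swaps in the modified JARs; the exact location can vary by Cloud Builder build, so a filesystem search is the safest bet.

# Run as root on the Cloud Builder appliance; confirms the modified JARs and any
# backups of the originals are present (paths may vary by build)
find / -name "vsphere-plugin-1.0.0.jar*" 2>/dev/null
find / -name "vsphere-sdk-wrapper-1.0.0.jar*" 2>/dev/null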
Step 4 - Download either the vcf50-management-domain-vsan-osa-example-USB-NIC.json or vcf51-management-domain-vsan-osa-example-USB-NIC.json example JSON file and adjust the values based on your environment. In addition to changing the hostnames/IP addresses, you will also need to replace all of the FILL_ME_IN_VCF_*_LICENSE_KEY placeholders with valid VCF 5.0/5.1 license keys.
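To make sure you have not missed any placeholders, you can quickly grep the JSON file before kicking off the deployment; any remaining FILL_ME_IN entries still need to be replaced.

# Run from the directory containing your edited deployment spec; no output means
# all license key placeholders have been filled in
grep -n "FILL_ME_IN" vcf51-management-domain-vsan-osa-example-USB-NIC.json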
Step 5 - As shared in this blog post HERE, we need to use the VMware Cloud Builder API to kick off the deployment to work around the new 10GbE networking pre-check. The following PowerShell snippet can be used (replace the values with those from your environment) to deploy VCF 5.0/5.1 using the VMware Cloud Builder API, providing the same VCF JSON deployment spec that you would use with the VMware Cloud Builder UI.
$cloudBuilderIP = "192.168.30.190"
$cloudBuilderUser = "admin"
$cloudBuilderPass = "VMware123!"
$mgmtDomainJson = "vcf51-management-domain-vsan-osa-example-USB-NIC.json"

#### DO NOT EDIT BEYOND HERE ####

$inputJson = Get-Content -Raw $mgmtDomainJson
$pwd = ConvertTo-SecureString $cloudBuilderPass -AsPlainText -Force
$cred = New-Object Management.Automation.PSCredential ($cloudBuilderUser,$pwd)

$bringupAPIParms = @{
    Uri = "https://${cloudBuilderIP}/v1/sddcs"
    Method = 'POST'
    Body = $inputJson
    ContentType = 'application/json'
    Credential = $cred
}
$bringupAPIReturn = Invoke-RestMethod @bringupAPIParms -SkipCertificateCheck

Write-Host "Open browser to the VMware Cloud Builder UI to monitor deployment progress ..."
Alternatively, you can call the VMware Cloud Builder API directly from within the VMware Cloud Builder VM using the shell script below, if you prefer to run it locally rather than remotely.
#!/bin/bash
cloudBuilderIP="192.168.30.190"
cloudBuilderUser="admin"
cloudBuilderPass="VMware123!"
mgmtDomainJson="vcf51-management-domain-vsan-osa-example-USB-NIC.json"

#### DO NOT EDIT BEYOND HERE ####

inputJson=$(<$mgmtDomainJson)

# Submit the deployment spec to the Cloud Builder API (double quotes are required so $inputJson expands)
curl "https://$cloudBuilderIP/v1/sddcs" -i -u "$cloudBuilderUser:$cloudBuilderPass" -k -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$inputJson"

echo "Open browser to the VMware Cloud Builder UI to monitor deployment progress ..."
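In addition to watching the VMware Cloud Builder UI, you can also poll the bring-up task from the shell. The sketch below is not part of the original scripts and assumes the POST /v1/sddcs response returns a task id that can be queried via GET /v1/sddcs/{id}.

#!/bin/bash
# Hypothetical monitoring sketch: polls the bring-up task via the Cloud Builder API.
# Replace sddcTaskId with the "id" value returned by the POST /v1/sddcs call above.
cloudBuilderIP="192.168.30.190"
cloudBuilderUser="admin"
cloudBuilderPass="VMware123!"
sddcTaskId="REPLACE_WITH_ID_FROM_POST_RESPONSE"

while true; do
    # Print the status field from the task JSON (press Ctrl+C to stop polling)
    curl -s -k -u "$cloudBuilderUser:$cloudBuilderPass" "https://$cloudBuilderIP/v1/sddcs/$sddcTaskId" | grep -o '"status":"[^"]*"'
    sleep 60
done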
If everything was configured correctly, the deployment of the VCF Management Domain should complete without any issues. As you can see below, I have VCF 5.1 running on the PN64-E1, which only has a single PCIe-based network adaptor along with a USB-based network adaptor provided by the USB Network Native Driver for ESXi 😀
If we log into the SDDC Manager, we can see that the physical network adaptors consist of vmnic0 (PCIe-based) and vusb0 (USB-based).
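You can also confirm the same thing directly on the ESXi host; a quick check over SSH, assuming SSH is enabled on the host.

# Run on the ESXi host via SSH; lists all physical NICs, including the vusb0
# adaptor surfaced by the USB Network Native Driver for ESXi
esxcli network nic list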
Steve Walker says
Thanks William - I'm struggling to find the files which are linked above since the
Steve Walker says
*since the changeover to Broadcom recently. Is it possible to re-link to them?
William Lam says
I've updated the links Steve, they're now hosted on a GitHub repo
Steve Walker says
Gratefully received! Thank you for turning this around so quickly, it should get my 4 x Intel NUC with dual dvswitch config up and running with just the onboard/USB single uplinks.
stevewalker508 says
Just a quick report back, I achieved a successful 'bringup'!
Points to note personally which I'm happy to share here were:
Firstly, I hit a problem with NTP during the second configuration phase of vCenter appliance because it just would not communicate with time.google.com or uk.pool.ntp.org (even though the appliance could successfully ping the names). In the end the only solution was a local Windows Server 2016 domain controller with the w32time service configured to advertise its time based upon an upstream peer server (you've guessed it, pool.ntp.org!).
Error: Failed to set the time via NTP. Details: Failed to sync to NTP servers.. Code: com.VMware.applmgmt.err_ntp_sync_failed
Secondly, my original plan to build two dvSwitches with a single uplink each failed due to the requirement for an even number of interfaces on each switch. When I based my config around William's example I had better success, but then hit a different problem - caused by having 3 NICs, one of which was disconnected:
vmnic0 (2,500 Mbit/s)
vmnic1 (down)
vusb0 (1,000 Mbit/s)
ERROR [c.v.e.s.o.model.error.ErrorFactory,pool-3-thread-4] [NJ0LK2] VSPHERE_ADD_HOST_TO_DVS_FAILED Failed to add host esximgmt1.domainname.internal to DVS
Caused by: java.lang.NullPointerException: Cannot invoke "com.vmware.vim.binding.vim.host.PhysicalNic$LinkSpeedDuplex.getSpeedMb()" because the return value of "com.vmware.vim.binding.vim.host.PhysicalNic.getLinkSpeed()" is null
Even though I connected the vmnic0 and vusb0 to the same switch I had to temporarily provide connectivity to vmnic1 in order for the error to not occur whilst I bypassed this ordinarily disconnected NIC. I think that this could be related to the code in the .jar files provided above not taking into account adapters in a down state.
Now that I have this all figured out, I will endeavour to remember that the second NIC (in my case vusb0) should not be attached to any vSwitch initially, as it needs to be spare in order for the dvSwitch to claim it and migrate the vmk ports over once completely built.
PS - it appears that the MTU parameter in the JSON file is the vswitch frame size which you want to encapsulate, not the frame size on the physical switch.
Therefore, "mtu": "1600" shown below 'magically' became 1700 in the global NSX configuration. Now it makes sense why the example in the Git repo is 8900, i.e. 9000 minus 100 byte header etc.
"dvsSpecs": [
{
"dvsName": "sbc-m01-cl01-vds01",
"mtu": "1600",
"networks": [
"MANAGEMENT",
"VMOTION",
"VSAN"
],
Next steps are to try to scale out my single NUC installation to incorporate another 2/3 hosts in order to create an acceptable management cluster out of this hybrid single-NIC configuration.