Interested in trying out the latest release of VMware Cloud Foundation (VCF) 5.0? Don't have beefy hardware that meets all the requirements? Not to worry! Did you know you can actually deploy a VCF Management Domain using just a single Intel NUC or similar small form factor system? This is exactly how I kicked the tires on the latest VCF 5.0 release 😎
Disclaimer: This is not officially supported by VMware, please use at your own risk.
Requirements:
- VMware Cloud Builder 5.0 OVA (Build 21822418)
- VCF 5.0 Licenses
- Intel NUC configured with
- 64GB of memory or more
- Dual onboard networking ("Tall" NUC like Intel NUC 11 Pro, which is what I used) OR add additional NICs with these Thunderbolt 3 Networking options (no USB NIC)
- 2 x SSD that are empty for use for vSAN bootstrap (500GB+ for capacity)
- ESXi 8.0 Update 1a installed on the Intel NUC using a USB device
- Ability to deploy and run the VMware Cloud Builder (CB) Appliance in a separate environment (ESXi/Fusion/Workstation)
Note: While my experiment used an Intel NUC, any system that meets the basic requirements above should also work.
Procedure:
Step 1 - Boot the ESXi installer from USB and then perform a standard ESXi installation onto the same USB device it booted from.
Step 2 - Once ESXi is up and running, you will need to minimally configure networking along with an FQDN (ensure proper DNS resolution), NTP, and specify which SSD should be used for the vSAN capacity drive. You can use the DCUI to set up the initial networking, but I recommend switching to ESXi Shell afterwards and finishing the required preparation steps as demonstrated in the following ESXCLI commands:
esxcli system ntp set -e true -s pool.ntp.org
esxcli system hostname set --fqdn vcf-m01-esx01.primp-industries.local
esxcli vsan storage tag add -d [DISK_ID] -t capacityFlash
Note: Use the vdq -q command to query the disks available for use with vSAN, and ensure there are no partitions residing on the disks.
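If you want to double-check which disks qualify before tagging one, the eligibility state in the vdq -q output can be filtered with a short pipeline. This is only a sketch with made-up sample output (the disk names below are illustrative, not from a real host); on the host itself you would pipe vdq -q directly instead of the sample variable:

```shell
#!/bin/sh
# Hedged sketch: filter `vdq -q` output down to disks reported as eligible
# for vSAN. On the ESXi host itself you would run:
#   vdq -q | grep -B1 '"Eligible for use by VSAN"' | grep '"Name"'
# The sample below is illustrative only, not captured from a real host.
sample_vdq='   "Name"  : "t10.NVMe____Samsung_SSD_970_EVO_500GB",
   "State" : "Eligible for use by VSAN",
   "Name"  : "mpx.vmhba32:C0:T0:L0",
   "State" : "Ineligible for use by VSAN",'
eligible=$(printf '%s\n' "$sample_vdq" | grep -B1 '"Eligible for use by VSAN"' | grep '"Name"')
echo "$eligible"
```

The disk ID printed by the pipeline is what you would pass to the esxcli vsan storage tag add command above.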
To ensure that the self-signed TLS certificate ESXi generates matches the FQDN you configured, regenerate the certificate and restart hostd for the changes to take effect by running the following commands within ESXi Shell:
/bin/generate-certificates
/etc/init.d/hostd restart
Step 3 - Deploy the VMware Cloud Builder appliance in a separate environment and wait for it to become accessible in the browser. Once CB is online, download the setup_vmware_cloud_builder_for_one_node_management_domain.sh setup script and transfer it to the CB system using the admin user account (root is disabled by default).
Step 4 - Switch to the root user, make the script executable, and run it as shown below:
su -
chmod +x setup_vmware_cloud_builder_for_one_node_management_domain.sh
./setup_vmware_cloud_builder_for_one_node_management_domain.sh
The script will take some time, especially as it converts the NSX OVA->OVF->OVA. If everything was configured successfully, you should see the same output as the screenshot above.
Step 5 - Download the example JSON deployment file vcf50-management-domain-example.json and adjust the values based on your environment. In addition to changing the hostnames/IP addresses, you will also need to replace all the FILL_ME_IN_VCF_*_LICENSE_KEY values with valid VCF 5.0 license keys.
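It is easy to miss one of the placeholders when editing the spec by hand. Here is a minimal pre-flight check, assuming the placeholders all share the FILL_ME_IN prefix; the file contents written out below (including the esxLicense field name) are a stand-in created purely for illustration:

```shell
#!/bin/sh
# Hedged sketch: sanity-check that no FILL_ME_IN_* license placeholders
# remain in the edited deployment spec before handing it to Cloud Builder.
# The JSON written here is a stand-in for illustration only; point spec
# at your real, edited file instead.
spec="vcf50-management-domain-example.json"
printf '%s\n' '{ "esxLicense": "FILL_ME_IN_VCF_ESX_LICENSE_KEY" }' > "$spec"
if grep -q 'FILL_ME_IN' "$spec"; then
  result="placeholders remain - edit $spec before bringup"
else
  result="no placeholders found - spec ready for bringup"
fi
echo "$result"
```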
Step 6 - As shared in this blog post HERE, we need to use the VMware Cloud Builder API to kick off the deployment to work around the new 10GbE networking pre-check. The following PowerShell snippet (replace the values with those from your environment) will deploy VCF 5.0 using the VMware Cloud Builder API, providing the same VCF JSON deployment spec that you would use with the VMware Cloud Builder UI.
$cloudBuilderIP = "192.168.30.190"
$cloudBuilderUser = "admin"
$cloudBuilderPass = "VMware123!"
$mgmtDomainJson = "vcf50-management-domain-example.json"

#### DO NOT EDIT BEYOND HERE ####

$inputJson = Get-Content -Raw $mgmtDomainJson
$pwd = ConvertTo-SecureString $cloudBuilderPass -AsPlainText -Force
$cred = New-Object Management.Automation.PSCredential ($cloudBuilderUser,$pwd)

$bringupAPIParms = @{
    Uri = "https://${cloudBuilderIP}/v1/sddcs"
    Method = 'POST'
    Body = $inputJson
    ContentType = 'application/json'
    Credential = $cred
}
$bringupAPIReturn = Invoke-RestMethod @bringupAPIParms -SkipCertificateCheck

Write-Host "Open browser to the VMware Cloud Builder UI to monitor deployment progress ..."
Thanks to a nice tip, we can simplify the setup by calling the VMware Cloud Builder API directly from within the VMware Cloud Builder VM using the following shell script.
#!/bin/bash
cloudBuilderIP="192.168.30.190"
cloudBuilderUser="admin"
cloudBuilderPass="VMware123!"
mgmtDomainJson="vcf50-management-domain-example.json"

#### DO NOT EDIT BEYOND HERE ####

inputJson=$(<"$mgmtDomainJson")

curl "https://$cloudBuilderIP/v1/sddcs" -i -u "$cloudBuilderUser:$cloudBuilderPass" -k -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$inputJson"

echo "Open browser to the VMware Cloud Builder UI to monitor deployment progress ..."
Here is a screenshot of running the code snippet above. Once the VMware Cloud Builder API has accepted the request, you can log in to the VMware Cloud Builder UI to monitor the rest of the deployment progress.
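If you prefer the CLI over the UI for monitoring, the same API can also be polled for status. This is only a sketch under assumptions: the curl call is shown commented out and replaced with a canned response so the snippet is self-contained, the task id placeholder is hypothetical, and the status string mirrors the COMPLETED_WITH_SUCCESS / IN_PROGRESS style values the Cloud Builder API reports:

```shell
#!/bin/sh
# Hedged sketch: poll bringup progress via the Cloud Builder API rather than
# the UI. In a real run, get_status would call the API using the SDDC task id
# returned by the POST /v1/sddcs request above; here a canned value keeps the
# sketch self-contained and runnable.
get_status() {
  # Real call (assumption, with a hypothetical task id placeholder):
  # curl -sk -u "admin:VMware123!" "https://192.168.30.190/v1/sddcs/<task-id>"
  echo "COMPLETED_WITH_SUCCESS"   # canned value for illustration only
}
status=$(get_status)
echo "Bringup status: $status"
```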
Your deployment time will vary based on your physical resources, but it should eventually complete with everything showing success, as shown in the screenshot below.
Here are a few more screenshots of the final VCF 5.0 deployment running on an Intel NUC from vSphere point of view as well as logging into SDDC Manager:
Depending on your usage of the VCF Management Domain, you could reduce some of the resources. In my very limited testing, I was able to shut down SDDC Manager and NSX Manager, tune them down to 14GB of memory each, and they seemed to run fine. You can probably do the same for vCenter Server, but make sure vCenter Server is operational before you start up SDDC Manager, since it uses vCenter SSO authentication.
Finally, if you want to also deploy a VCF Workload Domain, you can do so with another Intel NUC or similar system by following this blog post HERE! 😁
Plamen Iliev says
Hi, with regards to this line ...
`2 x SSD that are empty for use for vSAN bootstrap (500GB+ for capacity)`
Currently I have one 500GB SATA and one 1TB NVMe in my NUC 13. After installing ESXi on the 500GB SATA, I now have ~300GB of it available for a datastore; the 1TB NVMe is fully available. Would that work, or would I need a bigger SATA drive, or would I actually need a third drive to host the ESXi OS?
William Lam says
As mentioned, you need two empty drives for vSAN. You can use a 3rd drive OR use a USB device for the ESXi install
Plamen Iliev says
Also curious what is driving the hard requirement for 2 hardware NICs. I wonder if it is possible to spread it across a couple of NUCs - would that help circumvent this?
Plamen Iliev says
Actually, looking at the management-domain-example.json file, is it safe to say that 1 NIC is required for the Management Network, and the other one for vSAN & vMotion?
William Lam says
Please read the VCF documentation for details. It uses VDS and can’t support a single NIC … at least through its workflows
Mike says
For the 2nd NIC, do you have suggestions for a 1GbE Thunderbolt NIC that works natively? I know you gave options for 10GbE, but I’m hoping not to spend that much money.
William Lam says
I’m not aware of any 1GbE or even 2.5GbE TB/USB4 NICs … it seems the market only catered to 10GbE
I may have a nice alternative solution next week … stay tuned on the blog 😉