By default, VMware Cloud Foundation (VCF) requires a minimum of 4 ESXi hosts to construct the Management Domain, which is fine for a production environment, but it can be a challenge for those interested in exploring VCF in a homelab setting.
I recently came to learn about a really cool tidbit from one of our VCF Engineers on how you can actually deploy a VCF Management Domain using just a single ESXi host, ideal for a homelab setup! 😍
Not only could this benefit users in deploying a physical VCF setup but it would also benefit anyone using my Automated Lab Deployment Script for VCF, which makes it super easy by leveraging my Nested ESXi Virtual Appliance VMs.
In fact, that was how I quickly verified this trick works using my VCF automation script 😀
The way this works is a configuration change to Cloud Builder that tells it to allow a single ESXi host to be used, in which case it simply sets up a single-node vSAN cluster, which is typically how you would bootstrap a greenfield deployment. The only difference here is that instead of adding an additional 3 x ESXi hosts to provide redundancy for the Management Domain, Cloud Builder simply relaxes that requirement and allows a single ESXi host. vSAN is still a requirement for the VCF Management Domain, so ensure you can still meet those requirements.
Disclaimer: This is not supported by VMware, use at your own risk. As of writing this blog post, this trick is functional with the latest VCF 4.5 release.
Step 1 - Before starting a VCF deployment, SSH to the Cloud Builder VM and run the following commands:
echo "bringup.mgmt.cluster.minimum.size=1" >> /etc/vmware/vcf/bringup/application.properties systemctl restart vcf-bringup.service
It will take a minute or so for the VCF Bringup service to restart, and if you are logged into the Cloud Builder UI, you may see a blue notification banner asking you to refresh.
Note: You will also know that the setting has been applied correctly when you specify only a single ESXi host, as it should no longer complain that you are not meeting the minimum 4-host requirement.
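If you want to double-check things from the same SSH session before heading back to the UI, a quick sanity check might look like the following (a minimal sketch; the property file path is the one used above):

# Confirm the property was appended to the bring-up configuration
grep bringup.mgmt.cluster.minimum.size /etc/vmware/vcf/bringup/application.properties

# Confirm the bring-up service came back up after the restart
systemctl status vcf-bringup.service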
Step 2 - We need to generate a VCF deployment using the JSON format. If you are using my VCF automation script, it will automatically generate the required JSON deployment file upon completing the deployment and you can go straight to Step 3. If you are starting with the VCF Deployment XLSX workbook, then please see this blog post on how to convert the file from XLSX to JSON before proceeding to Step 3.
Step 3 - Edit the VCF JSON configuration file and ensure it only contains a single ESXi host entry as shown in the example below:
"hostSpecs": [{ "hostname": "vcf-m01-esx01", "vSwitch": "vSwitch0", "association": "sfo-m01-dc01", "credentials": { "username": "root", "password": "VMware1!" }, "ipAddressPrivate": { "subnet": "255.255.255.0", "ipAddress": "192.168.30.182", "gateway": "192.168.30.1" } }]
Next, append the following entry "hostFailuresToTolerate": 0 within the clusterSpec section as shown in the example below:
"clusterSpec": { "vmFolders": { "MANAGEMENT": "sfo-m01-fd-mgmt", "NETWORKING": "sfo-m01-fd-nsx", "EDGENODES": "sfo-m01-fd-edge" }, "clusterName": "sfo-m01-cl01", "clusterEvcMode": "", "hostFailuresToTolerate": 0 },
Step 4 - Finally, we are now ready to begin our VCF deployment and provide our modified VCF JSON configuration file as input to the VCF deployment wizard.
Once the VCF pre-checks have successfully completed without errors, you can begin the deployment by clicking next. Deployment times will vary based on the available resources and hardware configuration.
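If you want to follow along outside of the Cloud Builder UI, you can also keep an eye on the bring-up logs over SSH. This is just an optional sketch; the directory shown here is where Cloud Builder typically writes the bring-up log, but the exact path may differ between VCF versions:

# Follow the VCF bring-up log on the Cloud Builder VM (path may vary by VCF version)
tail -f /var/log/vmware/vcf/bringup/vcf-bringup.log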
As you can see, while I can get away with just 54GB of memory for my single ESXi VM, it was definitely tight on resources and you may want to provision at least 60-64GB of memory. If you plan to also deploy a VCF Workload Domain, you will need to assign additional resources to accommodate those requirements.
Hopefully this makes deploying VCF a bit easier for folks in a homelab environment. I have a few more articles planned this week on making it easier to deploy VCF in the homelab, so stay tuned!
Nice! Gonna try it!
Very informative .. thanks
Nice, thanks for the share. This will definitely help for some lab work.
Thanks for the great post William!
I have 2 physical hosts I want to try this out on. Do you think I can just adjust the "minimum size" to 2 in the app properties?
Yes
Excellent, Thanks!
Hi Mr. Lam,
Thanks for sharing, this is amazing for me.
I tried this lab and am facing an error at the task "Enable vSAN monitoring". I SSH'd to Cloud Builder and checked the vcf-bringup log, where I see the status "Failed to enable vSAN performance". Can you advise me on this issue?
Many thanks!
Hi all,
My problem is solved: vSAN's default storage policy is RAID-1 (mirroring) and this environment has only one node, so this task fails. You can get around it by logging into vCenter, editing the vSAN default policy to RAID-0 (no redundancy), and retrying.
Have a nice weekend!
When I am trying on one host I get this: vSAN cache/capacity disks ratio must be more or equal to 1:7
Is anyone aware if this still works on VCF 5.1? When I set bringup.mgmt.cluster.minimum.size=1 the bringup service will no longer start.
Never mind, it just takes 3 to 5 minutes for the service to restart.
Has anyone been able to run this in VCF 5.2? It seems like no matter what I do, I can't log in to Cloud Builder with the root account to make the change.
Looks like I posted this in the wrong spot. Oops. By "this", I meant VMware Cloud Foundation with a single ESXi host for the Management Domain. I can't log in to Cloud Builder with the root account to run
"echo "bringup.mgmt.cluster.minimum.size=1" >> /etc/vmware/vcf/bringup/application.properties
systemctl restart vcf-bringup.service"
Deployed manually a few times without any luck.
Validated this works with VCF 5.1.1
Hello Lam,
Thanks for this tip. I wanted to use the nested ESXi appliance, but when validating the vSAN section with Cloud Builder I get the following error: has 4 different SSD disk sizes, only 2 are allowed. Indeed, the nested appliance has 3 local disks (4GB, 8GB and 16GB), and I added two disks of 40GB and 400GB.
I have the impression that it does not differentiate between the system disk and the available disks. Do you have a recommendation for me? Thank you for your feedback.
Just increase 8/16 to 40/400, they’re empty and there for vSAN, no need to complicate things by adding more disks 🙂
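For anyone who would rather script that resize than do it in the UI, here is a rough sketch using govc against the vCenter or host where the nested ESXi VM runs. The VM name and disk labels below are assumptions based on the example hostname used earlier in the post, so adjust them for your own environment:

# Grow the two empty vSAN disks on the nested ESXi VM before bring-up
# (VM name and disk labels are examples - check yours with: govc device.info -vm vcf-m01-esx01)
govc vm.disk.change -vm vcf-m01-esx01 -disk.label "Hard disk 2" -size 40G
govc vm.disk.change -vm vcf-m01-esx01 -disk.label "Hard disk 3" -size 400G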