By default, the VMware Cloud Foundation (VCF) 9.0 Installer requires a minimum of 3 ESXi hosts when you select vSAN (OSA or ESA) for storage, or 2 ESXi hosts when you choose external storage (Fibre Channel VMFS or NFS). Compared to VCF 5.x, where vSAN was the only storage option and a minimum of 4 ESXi hosts was required, this is certainly a welcome change for our users.
With VCF 5.x, it was possible to deploy using just a single ESXi host for the VCF Management Domain, and this question has certainly come up a few times since the GA of VCF 9.0 ...
So what happens when you only enter a single ESXi host into the VCF Installer?

As expected, we see a validation error stating that we have not met the minimum number of ESXi hosts ...
However, this is just the default behavior, and of course this was something I had looked into early on with VCF Engineering, since it would be an extremely useful capability: allowing our users to easily explore VCF 9 using the smallest number of ESXi hosts, for lab purposes only, assuming you can meet the Minimal Resources to Deploy VCF 9.0 in a Lab.
Enough teasing ... let's get to the goods! 🙂
We do have a configuration override that can be added to the VCF 9.0 Installer, allowing you to use either a single or even dual hosts, depending on the amount of available resources on each host and the capabilities you plan to use, since additional hosts may still be needed for setting up a VCF Workload Domain or deploying additional VCF components.
Disclaimer: This is not officially supported by Broadcom, please use at your own risk.
Step 1 - Deploy the VCF 9.0 Installer and ensure that it is up and running
Step 2 - Run the following command to append the required configuration for VCF 9.0.1 deployment:
echo "feature.vcf.vgl-29121.single.host.domain=true" >> /home/vcf/feature.properties
For VCF 9.0.0 deployment, please run the following command:
echo "feature.vcf.internal.single.host.domain=true" >> /home/vcf/feature.properties
Step 3 - Restart the VCF Installer services for the change to go into effect:
echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
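Before restarting the services, it can be worth sanity-checking that the flag actually landed in the properties file. The sketch below is a slightly more defensive variant of Step 2: it only appends the flag if it is not already present, so re-running it will not duplicate the line. It defaults to a demo file path so it can be tried anywhere; on the actual VCF Installer appliance you would point `PROPS` at `/home/vcf/feature.properties`:

```shell
# Sketch: append the single-host override only if it is missing, then confirm it.
# PROPS defaults to a demo path; on the appliance, use /home/vcf/feature.properties.
PROPS="${PROPS:-/tmp/feature.properties.demo}"
FLAG="feature.vcf.vgl-29121.single.host.domain=true"   # use the 9.0.0 variant of the flag if applicable

touch "$PROPS"
grep -qxF "$FLAG" "$PROPS" || echo "$FLAG" >> "$PROPS"

# Confirm exactly one copy of the flag exists (stays 1 even if you re-run this)
grep -c "single.host.domain" "$PROPS"
```

Because the append is guarded by the `grep -qxF` check, running the snippet multiple times still leaves a single copy of the flag, which keeps the properties file clean across retries.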
Note: This setting is applicable to both the VCF Management and Workload Domains. Since the VCF Installer is converted into SDDC Manager, SDDC Manager will carry over (or be deployed with) whatever settings the source VCF Installer has configured.
Step 4 - While this workaround allows you to use fewer than the required number of ESXi hosts, the VCF Installer UI still enforces the minimum host check; I suspect this is due to the UI hardcoding the validation checks.
The workaround is to skip the interactive workflow, which will not allow you to proceed until you enter three valid ESXi hosts, and use the JSON method of deployment instead. You can still use the VCF Installer UI, but rather than the interactive wizard, simply upload your JSON that contains the single or dual ESXi host reference and proceed directly to the validation, which will pass, and you can begin your deployment.
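To make the JSON route a bit more concrete, here is a minimal, hypothetical excerpt showing a host list trimmed down to a single entry. The field names (`hostSpecs`, `hostname`, `credentials`) and values here are illustrative assumptions, NOT the exact VCF 9.0 installer schema; you should start from a full JSON spec that the installer itself exports or validates, and reduce the host list in that file:

```shell
# Hypothetical single-host excerpt -- field names and values are illustrative
# assumptions, NOT the exact VCF 9.0 installer schema. Base your real spec on
# one produced or validated by the installer and trim the host list down.
cat > /tmp/single-host-excerpt.json <<'EOF'
{
  "hostSpecs": [
    {
      "hostname": "esxi-01.lab.local",
      "credentials": { "username": "root", "password": "VMware1!" }
    }
  ]
}
EOF

# Sanity-check that the JSON parses and contains exactly one host entry
python3 -c 'import json; print(len(json.load(open("/tmp/single-host-excerpt.json"))["hostSpecs"]))'
```

A quick parse check like this is cheap insurance before uploading: a malformed spec will fail in the installer anyway, but catching it locally saves a round trip through the validation screen.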

So what are some ways in which you can deploy VCF 9 using this new trick?
If you have a physical host that can meet the Minimal Resources to Deploy VCF 9.0 in a Lab, then a single ESXi host can certainly allow you to easily deploy the full VCF 9 solution. There is plenty of powerful workstation- to server-grade hardware that can already do this, no different than, say, VCF 5.x. If you are looking at consumer-grade hardware, however, there are definitely resource constraints that will require you to use more than a single host.
For example, I have been playing with the Minisforum MS-A2, which I will have a thorough write-up on. After countless hours of testing and debugging ... I have found that even though it meets the CPU core requirement, a single host is not sufficient due to the memory demands of VCFA, and this is with NVMe Tiering enabled! While I am still not giving up, I would say that to get the best possible experience, you will need at least two MS-A2 units, not only to deploy VCF 9 but to do something useful with it afterwards, which ultimately is my goal!
The GMKtec K11 or K8+ is another popular set of kits; however, they do not meet the minimum CPU core requirement, which would prevent you from deploying VCF Automation (VCFA) out of the box. The rest of the VCF 9 components deploy without any issues, and I have it running with vSAN ESA. Since these systems only support two NVMe devices, you also need to apply this workaround, or you can simply use external NFS storage as an alternative.

Currently, I do NOT have 2 x MS-A2, but thanks to this recent workaround, which I had just published last week, I was able to deploy VCF 9 using both a Minisforum MS-A2 and a GMKtec K11, both of which support 128GB of memory but are from two different hardware vendors.
The benefit here is that the VCF Automation (VCFA) appliance can run on the MS-A2, which can support the higher vCPU requirement, while the other appliances can be spread across the MS-A2 and another system that only has 8 cores / 16 threads, which can help from a budgeting standpoint.

While you can mix and match your hardware using this additional workaround, if you ask my personal opinion, I would say invest in a uniform hardware setup, as it will only give you a better experience in the long run. And remember, this is an investment in yourself, not necessarily the physical thing that you just bought and need to justify to your significant other 🙂
Lastly, I also want to share that I will have a VMware Explore session that will deep dive into deploying VCF 9.0 for a lab environment, including more tips and tricks, so be sure to sign up for #CLOB1201LV Deploying Minimal VMware Cloud Foundation 9.0 Lab
Your first paragraph mentions Fiber Channel VMFS or NFS, but not iSCSI. Does VCF 9 not support iSCSI?
iSCSI continues to be supported, just not as the principal storage for the VCF Management Domain. For a VCF Workload Domain, you can use any of the supported vSphere storage options.
Is NFS supported as principal storage for VCF Management Domain? We have existing diskless blades with NetApp iSCSI/NFS array.
This is answered directly in the blog post 🙂
Ok, given that humorous reply, I'm going to assume that:
"By default, the VMware Cloud Foundation (VCF) 9.0 Installer requires a minimum of 3 ESXi hosts when you select vSAN (OSA or ESA) for storage or 2 ESXi hosts when you choose to use external storage (Fibre Channel VMFS or NFS)"
means NFS is covered as principal storage for the Management Domain that the "Installer" would be building.
I'm just looking for clarity before arguing for a specific hardware refresh option, as we need to replace some old Cisco B200 M4 blades that only support ESX 7.x and had been considering some B200 M5s (still 'old' hardware, but newer, supporting ESX 8.x and listed as still supported for ESX 9.x). We don't currently use VCF but have the license, as we had to upgrade to it on the last renewal with Broadcom. Just trying to verify whether our external iSCSI array or external NFS array could be used as principal storage for the Management Domain with VCF 9, or whether we would need to consider other options, either external FC storage for LUNs or onboard storage for vSAN.
Thanks,
David G
Correct. vSAN, FC (VMFS) & NFS are supported as principal storage for the VCF Mgmt Domain; this is new in 9.0
This is also called out in the official documentation
Good morning, I am curious how iSCSI comes into play if you use the option in the VCF installer to use existing components. For instance if we already have vCenter, VCF Operations, and NSX deployed on iSCSI backed storage. Thanks!
Found this: https://cormachogan.com/2025/08/21/support-for-iscsi-in-vmware-cloud-foundation-9-0/
Is it not possible to run VCF 9 on the MS-A2 with 128GB RAM and using memory tiering up to almost 630GB RAM?
While NVMe Tiering is extremely powerful and allows you to push the boundaries of your physical DRAM, there are limits when you do indeed use up physical memory pages, and you'll need at least two MS-A2 units to deploy and do useful things 🙂
Stay tuned for a more thorough write-up on the MS-A2 tomorrow
Hi William,
Thank you for the post !
Can I use this trick to import a 2-node vSAN 8u3 cluster into VCF 9?
Did you ever get this working? I cannot get this workaround to work for importing my environment with 2 single-node clusters
How do you deploy VCF 9 with an existing vCenter and NSX in a single-host cluster? The error I got: "There were no conforming clusters detected; ensure at least one cluster has more than 1 host"
I am trying to test this using a single node. I can't get past the step "Enable the default vSAN Storage Policies". Should I use NFS versus vSAN to test?
I am also running into this and hoping @William might have an answer.
Hi William,
Thanks for starting this project. For me this is the only setup possible in the home lab, because of the high prerequisites of the VCF Automation instance.
I still have some weird issues during the last steps of the deployment phase of VCF Automation. After the template deployment it runs into a time-out error...
One question about running VCF 9 on a single ESXi host: what is the best shutdown/startup order for all of the appliances? Especially during ESXi maintenance or when I don't use the lab (during holidays, for example).
It has happened a couple of times now that my NSX appliance breaks, and probably because of that my vCenter networking breaks and reports that the VDS status is DOWN.
Thank you in advance for the help.
Hi William
First of all, thank you for your posts.
I have tried to deploy on a single host but:
2026-01-24 23:36:39,212 - vCSACliInstallLogger - ERROR - Error:
Problem Id: None
Component key: setnet
Detail:
Failed to set the time via NTP. Details: Failed to sync to NTP servers.. Code: com.vmware.applmgmt.err_ntp_sync_failed
Could not set up time synchronization.
Resolution: Verify that provided ntp servers are valid.
Checked and it looks fine.
2026-01-24 23:33:29,190 - vCSACliInstallLogger - ERROR - Failed while trying the connection with certificate validation. Exception: HTTPSConnectionPool(host='192.168.39.11', port=5480): Max retries exceeded with url: /rest/vcenter/deployment (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host'))
Could you please let me know how I can fix it?
Thanks
Jawad Ehsani
Hey William, thanks for the wonderful resources. Do you know if the single node deployment override changed again for 9.0.2.0?
I tried both your suggestions but neither worked. I still get the pesky "At least 3 ESX Hosts required. Found 1" error 🙂
Hey, just wanted to say, it works fine. It just didn't work with only restarting the SDDC Manager services. After a full reboot it's now OK.
Unable to make this workaround work with 9.0.2.0. Did something change?
William, thanks so much for your blogs; they really helped me in terms of equipment/BoM and tips and tricks. I've managed to get VCF 9.0.1 working (nested) across 2 x MS-A2s (2-node vSAN cluster). Smoke is coming out of my boxes, but it's running as a 3-node nested vSAN cluster inside a 2-node vSAN cluster. I skipped the Automation install; that might be pushing it, and at the moment with memory prices it would be too expensive to buy a 3rd MS-A2. Luckily, I'd bought the other 2 before prices shot up.
In your screenshot, I can see two ESXi hosts.
It's not entirely clear to me whether you are running both the Management Domain and a Workload Domain across those two hosts in the same Cluster.
Based on the standard VCF architecture, each domain requires its own vCenter instance.
So if you are running both Management and Workload Domains, does that mean you are deploying two separate vCenter Servers in this setup?
Would appreciate some clarification.
You're mixing up several concepts ...
1) The reason for two hosts is already explained in the blog post. Search for "Currently, I do NOT have 2 x MS-A2"
2) I'm only running the Management Domain, but as already mentioned in the blog post, you can do this for BOTH the Mgmt and Workload Domain, which means that would be 1 host for Mgmt and 1 host for Workload, assuming each of your physical hosts has enough resources. I don't know what you mean by same cluster ...
Hi William,
Let me clarify my question.
After completing the Management Domain deployment, I would like to deploy vRA (VCF Automation).
In order to do that, the wizard requires creating a Workload Domain.
During the process, it asks for a dedicated vCenter and NSX instance, and it does not allow selecting the same vCenter that is used for the Management Domain.
So my question is very simple:
Is there an option in VCF 9.x to deploy a Consolidated Domain from the beginning (similar to what was possible in VMware Cloud Foundation 5.2), where Management and Workload components share the same vCenter/NSX infrastructure?
Appreciate your clarification.
The use of "Consolidated Architecture" isn't being used in VCF 9 as it created confusion amongst our users. VCF 9.0 supports combining the management and workload into single vSphere Cluster OR you can separate that out by having just Management Domain and additional Workload Domain. So that's there and again, there's no use of the word "Consolidated" anymore
Secondly, VCF 9 can be deployed with VCF Automation as part of the initial deployment when the Management Cluster/Domain is formed. While you can defer that during the initial deployment, it can still reside in the same cluster; if you're doing it for the first time, you might as well deploy it all together. There's no expectation that you need a Workload Domain for anything if you're looking for a minimal footprint.
It might be good for you to review https://github.com/lamw/vcf-9x-in-box as this is all covered in gory detail 🙂