By now, I am sure you have heard about VMware Virtual SAN (VSAN) and you are probably anxious to give it a spin once the beta becomes publicly available in the very near future. I have been doing some testing in my lab with VSAN, not Nested VSAN, but on actual physical hardware. While getting started, I hit an interesting challenge given my physical hardware configuration and also this being a greenfield deployment.
Let me explain what I mean by this. In my lab, I have three physical hosts, each containing a single SSD and a single SATA drive. Each host has been provisioned with a small 5GB iSCSI boot LUN that is used to install ESXi (this could have also been another local disk or even a USB/SD card). Though VSAN itself is built into the VMkernel, the management of the VSAN cluster, its configuration and its policies is all performed through vCenter Server. So for a greenfield deployment, you would need to first deploy a vCenter Server, which would then require you to consume at least one of the local disks. This is the good ol' chicken-and-egg problem!
In my environment this was a problem: with only a single SSD and a single SATA disk per host, I would not be able to set up a VSAN datastore for all three hosts at once. This meant I had to do the following:
- Create a local VMFS volume on the first ESXi host
- Deploy vCenter Server and then create a VSAN Cluster
- Add the two other ESXi hosts to the VSAN Cluster
- Storage vMotion the vCenter Server to the VSAN Datastore
- Destroy the local VMFS datastore on the first ESXi host (existing VMFS partitions will not work with VSAN) and delete its partitions
- Add the first ESXi host to the VSAN Cluster
As you can see, this can get a bit complicated and is potentially error prone, since it requires destroying VMFS volumes by hand ...
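To make the risk concrete: VSAN will not claim disks that still carry a VMFS partition, so the old datastore's partitions have to be wiped by hand from the ESXi Shell. A minimal sketch with partedUtil, using a hypothetical device ID (double-check the device before deleting anything):

```shell
# List the partition table of the disk that held the temporary VMFS
# datastore (the naa.* device ID below is a placeholder)
partedUtil getptbl /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX

# Delete partition 1 -- destructive, make sure nothing lives on it anymore
partedUtil delete /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX 1
```

Fat-finger the wrong device ID here and you wipe a disk that was still in use, which is exactly why this manual workflow is unappealing.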
I figured there had to be a better way, and I was probably not going to be the only one hitting this scenario for a greenfield, or even potentially a brownfield, deployment. In talking to Christian Dickmann, a Tech Lead for the VSAN project, I learned about a really cool feature of VSAN in which you can actually bootstrap vCenter Server onto a single VSAN node! This is possible due to the tight integration of VSAN within the VMkernel, and the best part about this solution is that it is fully SUPPORTED by VMware. From an operational perspective, this deployment workflow is much easier and more intuitive than the process listed above. It also allows you to maximize the use of your hardware investment by running both your core infrastructure VMs and your regular workloads on the VSAN datastore, which is great for small or ROBO (remote office/branch office) deployments.
In my environment, I start out with a single ESXi 5.5 host that contains a single SSD and a single SATA disk. I create a single VSAN node from that ESXi host and contribute its storage to the VSAN datastore. I then deploy a vCenter Server, using the VCSA (vCenter Server Appliance) for a quick and easy deployment. The default VSAN policy automatically ensures there is at least one additional replica of the VM as new ESXi compute nodes join the VSAN cluster.
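For a rough idea of what this single-node bootstrap looks like under the hood (the full walkthrough is in part two), here is a sketch of the ESXi Shell commands involved; the device IDs are placeholders, and the exact syntax may differ between ESXi builds:

```shell
# Relax the default VSAN policy so objects can be provisioned while
# only one node exists (force provisioning despite FTT=1)
esxcli vsan policy setdefault -c vdisk \
  -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace \
  -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"

# Form a one-node VSAN cluster with a freshly generated cluster UUID
esxcli vsan cluster join -u $(python -c 'import uuid; print(str(uuid.uuid4()))')

# Contribute the local SSD (cache) and SATA disk (capacity) -- placeholder IDs
esxcli vsan storage add -s naa.SSSSSSSSSSSSSSSS -d naa.HHHHHHHHHHHHHHHH

# Sanity check: the host should now report itself as a cluster member
esxcli vsan cluster get
```

With the vsanDatastore up on that one host, the VCSA can be deployed straight onto it, which is the whole point of the bootstrap.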
Once the vCenter Server is online, I can then create a vSphere Cluster, enable VSAN on it, and add all three ESXi 5.5 hosts to the vSphere Cluster. This contributes all of their storage to the VSAN datastore, all while the vCenter Server is happily running. Once the other ESXi hosts join the VSAN cluster, we automatically get replication across the nodes to ensure our vCenter Server is protected, and of course you can change this policy.
As you can see, this is a much simpler setup than having to start out with an existing VMFS or even NFS datastore to initially store the vCenter Server, then create the VSAN datastore and migrate the vCenter Server over. I also like how I can start deploying my infrastructure with a single ESXi host and then slowly bring in additional ESXi hosts (just make sure you do it in a timely fashion, as you have a single point of failure until then). In part two of this article, I will go into more detail on how to configure the single VSAN node and bootstrap vCenter Server. In the meantime, if you have not checked out these awesome articles by some of my VMware colleagues, I would highly recommend you give them a read, especially Cormac's awesome VSAN series!
Here is How to bootstrap vCenter Server onto a single VSAN node Part 2.
If you are interested in testing out VSAN, be sure to sign up for the beta here!
- VSAN Part 1 – A first look at VSAN
- VSAN Part 2 – What do you need to get started?
- VSAN Part 3 – It is not a Virtual Storage Appliance
- VSAN Part 4 – Understanding Objects and Components
- VSAN Part 5 – The role of VASA
- Introduction to VMware vSphere Virtual SAN
- How do you know where an object is located with Virtual SAN?
Avri Roth says
For home lab testing, is it possible to have VSAN with only 2 ESXi servers?
William Lam says
If you care about data being protected and the ability to recover, you'll need a minimum of 3 nodes. If not, then you can even run VSAN on a single node; just don't expect data to be available in the case of a hardware failure.
Hello, I have 4 nodes with VSAN. Is it possible to bootstrap with 4, 5, or 6 nodes?
Thanks very much and congratulations! Very good blog!
William Lam says
Not sure I understand the question? The "bootstrap" is just to get the first node up and running. Sure, you can use the same process to add additional nodes, but the hope is that once you have VC up and running, it would be much easier and more straightforward to go through the vSphere APIs provided by VC.
Paul Sheard says
The problem with bootstrapping VSAN is this: I've just used the method outlined in the VMware document "https://www.vmware.com/files/pdf/products/vsan/VMware-TechNote-Bootstrapping-VSAN-without-vCenter.pdf", but after I deploy the PSC and VCSA I would then like to migrate my standard switches, created during the bootstrapping process, over to distributed switches. The problem lies in the fact that I have the VCSA sat on the VSAN datastore, so when I try to migrate the VSAN standard switch over to the distributed switch I have created, it all goes pear shaped: the VSAN datastore becomes unavailable, the VCSA becomes unavailable, and it all becomes very messy. Why does the document above state that you can migrate to distributed switches once the VCSA is up and running?
You can obviously do this migration, but only if the VCSA isn't running on the VSAN datastore. But that is the whole point of the document: to get the VCSA onto the VSAN datastore.
Am I missing something here?
I'm experiencing the same issue as Paul. Once I get my single-host VSAN set up, I can install the vCenter Appliance, but once I reboot my ESXi 6.0U1 host, I can't access vCenter and I seem to be locked out of my VSAN datastore.
Paul Sheard says
Before you reboot the ESXi host, do you put it into maintenance mode? And do you opt for Full data migration, Ensure accessibility, or No data migration?
In the end I built 1 of my 4 VSAN nodes with a local VMFS and installed the PSC/VCSA on to this local datastore, then I created a 3 node VSAN cluster with my remaining nodes. I created all my networking with distributed switches etc on the 3 node VSAN. And then I simply storage vMotioned my PSC/VCSA over to the 3 node VSAN datastore..
Then I cleared the disk config on the 1 node I initially used to create the PSC/VCSA, and then introduced this as the 4th node in to the 3 node VSAN cluster I earlier created.
I could not for the life of me migrate my standard switches to distributed switches after I created the 4-node VSAN (using the bootstrapping method). For whatever reason, because my bootstrapped VSAN datastore had the PSC/VCSA sat on it, the migration procedure to distributed switches caused the VSAN datastore to fall over, taking the PSC/VCSA with it.
I have migrated from standard switches to distributed switches so many times in the past, but I couldn't get it to work when a bootstrapped VSAN was in the equation.
All the best
William Lam says
I just went through the workflow I laid out above and bootstrapped the latest vSphere 6.0 Update 1 environment, which includes the VCSA running on the VSAN datastore, and migrated to a VDS without any connectivity issues whatsoever. In fact, I even recall doing this a while back, but figured I'd go through it once more to ensure nothing has changed.
A couple of things to note in case this might help.
* If you look at the 2nd part of the article, in Step 2 we had to change the default VM Storage Policy on the ESXi host; this is what allows us to set up the single-node VSAN datastore. Once you have VC up and running and have added the remaining nodes to form your full VSAN cluster, I also applied the default VSAN Storage Policy to the VCSA to ensure it's fully protected (FTT=1) and modified the default VM Storage Policy on the ESXi host back to its original configuration, which is FTT=1
* Once everything is configured, I then go through the process of creating a new VDS and Distributed Portgroups. I assume you have at least 2 pNICs, else you'll definitely run into issues trying to migrate from VSS to VDS. I first migrate only 1 of the pNICs over to the VDS; this gives all the ESXi hosts access to the VDS and allows you to then move the VMs over
* Once I verify that ESXi hosts now have access to BOTH the VSS/VDS, I then just reconfigure my VCSA using the vSphere Web Client to change from standard portgroup to distributed portgroup. In just a few seconds, the reconfiguration is successful and I'm still fully connected to the vSphere Web Client and VCSA is running perfectly fine on the VSAN Datastore
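If it helps to see the uplink shuffle concretely, here is a hedged sketch from the ESXi Shell; the names vSwitch0 and vmnic1 are assumptions about your environment, and the VDS side of the move is done from the vSphere Web Client:

```shell
# Confirm which pNICs currently back the standard switch
esxcli network vswitch standard list -v vSwitch0

# Detach one uplink from the VSS, leaving the other pNIC in place so
# VSAN and management traffic stay up during the migration
esxcli network vswitch standard uplink remove -u vmnic1 -v vSwitch0

# vmnic1 can now be assigned as an uplink on the VDS via the Web Client,
# after which the VCSA's portgroup can be switched over
```

The key design point is that one pNIC always stays attached to a working switch, so the host never loses its path to the VSAN datastore mid-migration.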
Steffen Oezcan says
Hi William, any idea what happened to the link to the TechNote-PDF? Not working anymore, doc has been deleted or moved, cannot be found anywhere else.
William Lam says
I don't know, to be honest. The exact process is documented in Part 2 of the article if you're interested.
Steffen Oezcan says
Thanks, I know; at least the information is still available somewhere. Was just wondering, b/c a customer wants to have an "official document". Anyways, if it's really important, we'll reach out via the official paths ;).