The other day I needed to increase the capacity on one of my Nested VSAN Datastores, as one of our users required a larger VSAN datastore than it was initially configured for. I was expecting to be able to just increase the size of the underlying VMDKs, as I would for a traditional Nested ESXi environment, and then rescan in ESXi to pick up the new capacity without any downtime. It turns out this is not exactly the case for a Nested VSAN environment.
Disclaimer: Nested Virtualization is not officially supported by VMware
When you first set up VSAN, regardless of how the disks were claimed, VSAN will consume the entire device (SSD or MD). The capacity that VSAN initially detects is then used to create the necessary partitions as part of the VSAN Disk Group creation. VSAN assumes the capacity of the underlying devices will never change, which is a valid assumption in the "real" world, where disks do not auto-magically get larger 🙂. In a Nested ESXi environment, however, they can auto-magically get larger, but VSAN was not built for this use case. What ends up happening is that the underlying devices can be "hot-extended", but the existing VSAN Disk Group cannot detect the new capacity.
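You can see this for yourself from the ESXi Shell of one of the nested hosts. A quick sketch below, where the device name is just a placeholder for one of your own claimed disks:

```
# List the devices VSAN has claimed and the disk group each belongs to
esxcli vsan storage list

# Inspect the partition table VSAN created on a claimed device
# (replace the device name with one from the output above)
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba1:C0:T1:L0
```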
Having said that, there are two ways you can increase the capacity of your VSAN Datastore:
Option 1 - If you wish to preserve your VSAN Datastore, you can hot-add additional VMDK(s) to your existing VSAN Disk Group, or if the disk group is full, you can create a new disk group and add the additional VMDK(s) to it. This will change your setup slightly if you wanted a particular layout of disk groups, but it will allow you to preserve your data.
Option 2 - This option requires deleting and re-creating the VSAN Datastore, which is not ideal if you already have data on it. You will need to increase the capacity of the underlying VMDKs (see the sketch after this list) and then re-create your VSAN Datastore, but this way you can keep the same number of disks and disk groups that you initially created your Nested ESXi environment with.
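For Option 2, the underlying VMDKs can be grown from the physical ESXi host using vmkfstools (or through the vSphere Client). A minimal sketch, assuming a hypothetical datastore path for one of the nested host's VMDKs and with the Nested ESXi VM powered off:

```
# On the physical ESXi host, extend the VMDK backing the nested "MD"
# to a new total size of 30GB (the path below is a placeholder)
vmkfstools -X 30g /vmfs/volumes/datastore1/Nested-ESXi-01/Nested-ESXi-01_2.vmdk
```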
In my scenario, I could not destroy the VSAN Datastore since someone was actively using it, so I opted for option #1. Here is what my configuration looked like before, which was a single VSAN Disk Group with 1xSSD and 1xMD:
I then added an additional 10GB VMDK to each of my Nested ESXi hosts and issued a rescan so the ESXi hosts would pick up the new device:
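If you prefer the command line, the rescan can also be issued from the ESXi Shell of each nested host, and vdq can confirm that the new device is visible and eligible for VSAN:

```
# Rescan all HBAs so the nested ESXi host detects the newly added VMDK
esxcli storage core adapter rescan --all

# Query all disks and their eligibility for use by VSAN
vdq -q
```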
In just a few seconds, I can see my new storage device. I can now head over to the VSAN management page, which is located on the vSphere Cluster object, and once I refresh, I can see that VSAN has automatically added the new "MD" to the existing disk group and my storage capacity has expanded!
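This automatic behavior is because my VSAN cluster was configured with automatic disk claiming. If your cluster is set to manual mode, the new device will not be pulled in by itself; in that case you could claim it by hand from the ESXi Shell, something like the sketch below (the device identifier is a placeholder taken from your own rescan):

```
# Confirm whether the new MD was claimed into the existing disk group
esxcli vsan storage list

# In manual claim mode, add the new device to the disk group yourself
esxcli vsan storage add -d mpx.vmhba1:C0:T2:L0
```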
Sara says
quick tip:
1) increase the size of the nested ESXi hosts' VMDKs
2) one host at a time, enter maintenance mode, remove its disk group from vSAN, and then re-add it. After that, of course, exit maintenance mode
This procedure takes longer than destroying and recreating the entire vSAN datastore, but at least you can keep everything working and use the same VMDKs. HTH!
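For those who want to follow Sara's tip from the command line, her per-host steps map roughly onto the ESXCLI commands below. This is only a sketch with placeholder device names; note that removing a disk group's SSD is what tears down the entire disk group, and double-check that the --vsanmode option is available on your build:

```
# 1) Enter maintenance mode while keeping vSAN objects accessible
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

# 2) Remove the disk group (removing its SSD removes the whole group),
#    then re-add it so vSAN re-partitions the now-larger devices
esxcli vsan storage remove -s mpx.vmhba1:C0:T1:L0
esxcli vsan storage add -s mpx.vmhba1:C0:T1:L0 -d mpx.vmhba1:C0:T2:L0

# 3) Exit maintenance mode before moving on to the next host
esxcli system maintenanceMode set --enable false
```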
Alfonso Lopez says
Thank you sir. I just ran into the same issue and had to go with option 1 because I did not want to lose my existing data on the VSAN datastore.
I also happened to learn that you cannot mix Flash with non-Flash disks for the Capacity tier.