Something that I noticed while working with VSAN in my lab is that when you disable VSAN on your vSphere Cluster, the disks that were used for VSAN in each of the ESXi hosts are no longer available for use afterwards. If you want to use one of the disks to create a regular VMFS volume or even use it for vSphere Flash Read Cache, the disk will not show up as an available device. The reason this occurs is that the disks still contain a VSAN partition, which is not automatically removed when disabling VSAN.
You can view the partition details by using partedUtil and specifying the "getptbl" option along with the device.
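For example, to check one of the disks (the device ID below is just the SSD from my environment, so substitute your own), you would run something like this:

partedUtil getptbl /vmfs/devices/disks/naa.6000c29c581358c23dcd2ca6284eec79

The output will show the partition table type along with any VSAN partitions that are still sitting on the disk.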
Now I could use partedUtil to clear the partition, but there is actually a nice ESXCLI command that can be used to remove the disks used in a VSAN disk group and this will automatically clear the VSAN partition. The ESXCLI command is:
esxcli vsan storage remove -s [SSD-DEVICE-ID]
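For example, with the SSD backing my disk group, the command looked like this:

esxcli vsan storage remove -s naa.6000c29c581358c23dcd2ca6284eec79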
When I tried to run the command, I was surprised to get the following error message:
Unable to remove device: Can not destroy disk group for SSD naa.6000c29c581358c23dcd2ca6284eec79 : storage auto claim mode is enabled
It turns out that when you use "Automatic" claiming mode while enabling VSAN on your vSphere Cluster, that configuration is left enabled on the ESXi hosts even after VSAN is disabled. This then prevents you from destroying the disk group. So there is an extra step required if you chose automatic mode: you will need to run the following ESXCLI command to disable it:
esxcli vsan storage automode set --enabled false
Once that has been disabled, you will be able to destroy the disk group by running the original remove command above. If you are not sure whether automatic claim mode is still enabled, you can always perform a "get" operation to check.
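The check is simply the companion "get" command to the "set" command above:

esxcli vsan storage automode get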
The remove operation only requires the SSD device that is front-ending the VSAN disk group, and you can identify the SSD by running "esxcli vsan storage list". I did find it odd that disabling VSAN on your vSphere Cluster did not completely disable the automatic claiming mode on the ESXi hosts, and I have already filed a bug request to get that fixed.
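To recap, the manual cleanup on each host looks something like this (the SSD device ID is just the one from my environment, so substitute your own):

esxcli vsan storage list
esxcli vsan storage automode set --enabled false
esxcli vsan storage remove -s naa.6000c29c581358c23dcd2ca6284eec79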