There has been a great deal of interest from customers and partners in an All-Flash VSAN configuration, especially as enterprise-grade SSDs (eMLC) continue to drop in price and the endurance of these devices has proven to last much longer than originally expected, as mentioned in this article by Duncan Epping. In fact, at VMworld last year the folks over at Micron and SanDisk built and demoed an All-Flash VSAN configuration, proving this was not only cost effective but also quite performant. You can read more about the details here and here. With the announcement of vSphere 6 this week and VMware Partner Exchange taking place the same week, there was a lot of excitement about what VSAN 6.0 might bring.
One of the coolest features in VSAN 6.0 is support for an All-Flash configuration. The folks over at SanDisk gave a sneak peek at VMware Partner Exchange a couple of weeks back at what they were able to accomplish with VSAN 6.0 using an All-Flash configuration: an impressive 2 million IOPS. For more details, take a look here. I am pretty sure there are going to be plenty more partner announcements as we get closer to the GA of vSphere 6, and there will be a list of supported vendors and devices on the VMware VSAN HCL, so stay tuned.
To easily demonstrate this new feature, I will be using Nested ESXi, but the process for configuring an All-Flash VSAN setup is exactly the same on real physical hardware. Nested ESXi is a great learning tool for understanding and walking through the exact process, but it should not be a substitute for actual hardware testing. You will need a minimum of three Nested ESXi hosts, and they should be configured with at least 6GB of memory when working with VSAN 6.0.
Disclaimer: Nested ESXi is not officially supported by VMware, please use at your own risk.
In VSAN 1.0, an All-Flash configuration was not officially supported; the only way to get it working was by "tricking" ESXi into thinking the SSDs used for the capacity tier were MDs (magnetic disks) by creating claim rules using ESXCLI. Though this method worked, VSAN itself still assumed the capacity tier consisted of regular magnetic disks, and hence its operations were not really optimized for anything but magnetic disks. With VSAN 6.0 this is now different, and VSAN will optimize based on whether you are using a hybrid or an All-Flash configuration. VSAN 6.0 also exposes a new property called IsCapacityFlash, which allows a user to specify whether an SSD is used for the write buffer or for capacity purposes.
Step 1 - We can easily view the IsCapacityFlash property by using our handy vdq VSAN utility, which has now been enhanced to include a few more properties. Run the following command to view your disks:
vdq -q
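For reference, here is roughly what the output looks like on one of my Nested ESXi hosts for one of the eligible devices (illustrative; your device names and values will of course differ, and the second eligible SSD shows the same fields):

{
   "Name"     : "naa.6000c295be1e7ac4370e6512a0003edf",
   "VSANUUID" : "",
   "State"    : "Eligible for use by VSAN",
   "Reason"   : "None",
   "IsSSD"    : "1",
   "IsCapacityFlash": "0",
   "IsPDL"    : "0",
},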
From the output above, we can see that we have two disks eligible for VSAN and that both are SSDs. We can also see the new IsCapacityFlash property, which is currently set to 0 for both. We will want to select one of the disks and set this property to 1 to enable it for capacity use within VSAN.
Step 2 - Identify the SSD device(s) you wish to use for your capacity tier. A very simple way to do this is with the following ESXCLI snippet:
esxcli storage core device list | grep -iE '( Display Name: | Size: )'
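On my nested host, this returns something like the following for the device I care about (illustrative; the Size value is reported in MB):

   Display Name: Local VMware Disk (naa.6000c295be1e7ac4370e6512a0003edf)
   Size: 8192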
We can quickly get a list of the devices and their IDs along with their capacities. In the example above, I will be using the 8GB device for SSD capacity.
Step 3 - Once you have identified the device(s) from the previous step, we now need to add a new option called enable_capacity_flash to these device(s) using ESXCLI. There are actually three methods of assigning the capacity flash tag to a device, and all of them provide the same end result. Personally, I would go with Option 2, as it is much simpler to remember than the syntax for claim rules 🙂 If you have the ESXi hosts connected to your vCenter Server, then Option 3 would be great, as you can perform this step from a single location.
Option 1: ESXCLI Claim Rules
Run the following two ESXCLI commands for each device you wish to mark for SSD capacity; the first adds a SATP claim rule carrying the enable_capacity_flash option, and the second reclaims the device so the new rule takes effect:
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.6000c295be1e7ac4370e6512a0003edf -o enable_capacity_flash
esxcli storage core claiming reclaim -d naa.6000c295be1e7ac4370e6512a0003edf
Option 2: ESXCLI using new VSAN tagging command
esxcli vsan storage tag add -d naa.6000c295be1e7ac4370e6512a0003edf -t capacityFlash
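Should you ever need to revert this, there is a matching remove operation in the same namespace, for example:

esxcli vsan storage tag remove -d naa.6000c295be1e7ac4370e6512a0003edf -t capacityFlash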
Option 3: RVC using new vsan.host_claim_disks_differently command
vsan.host_claim_disks_differently --disk naa.6000c295be1e7ac4370e6512a0003edf --claim-type capacity_flash
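Note that RVC commands run against inventory objects, so you will also point the command at the host(s) to modify. The invocation looks something like the following, where the inventory path is just an assumption based on a typical RVC layout and your own cluster name:

vsan.host_claim_disks_differently ~/computers/VSAN-Cluster/hosts/* --disk naa.6000c295be1e7ac4370e6512a0003edf --claim-type capacity_flash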
Step 4 - To verify the changes took effect, we can re-run the vdq -q command and we should now see our device(s) marked for SSD capacity.
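For the device we tagged earlier, the relevant portion of the vdq -q output should now read along these lines (illustrative):

   "IsSSD"    : "1",
   "IsCapacityFlash": "1",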
Step 5 - You can now create your VSAN Cluster using the vSphere Web Client as you normally would and add the ESXi hosts into the cluster, or you can bootstrap it using ESXCLI if you are trying to run vCenter Server on top of VSAN; for more details, take a look here.
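For those curious about the ESXCLI route, the gist is to form a single-node VSAN cluster and then build the disk group by hand. A minimal sketch follows; the device IDs are placeholders you would substitute with your own cache and capacity SSDs, and the linked post covers the full bootstrap procedure:

# Form a single-node VSAN cluster on this host
esxcli vsan cluster new
# Create the disk group: -s is the caching SSD, -d is the capacity device we tagged earlier
esxcli vsan storage add -s naa.<cache-ssd-id> -d naa.<capacity-ssd-id>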
One thing that I found interesting is that in the vSphere Web Client, when setting up an All-Flash VSAN configuration, the SSD(s) used for capacity will still show up as "HDD". I am not sure if this is what the final UI will look like before vSphere 6.0 GAs.
If you want to check the actual device type, you can always go to a specific ESXi host under Manage->Storage->Storage Devices to get more details. If we look at our NAA* device IDs, we can see that both devices are in fact SSDs.
Hopefully those of you interested in an All-Flash VSAN configuration can now quickly get a feel for it by running VSAN 6.0 in a Nested ESXi environment. I will be publishing updated OVF templates for various types of VSAN 6.0 testing in the coming weeks, so stay tuned.
Cihan says
Yet another great article William. Thank you.
Thinking of going nested VSAN for testing, how much of a performance hit does nested have? The system will be made up of a 5960X 8C CPU with an NVMe Intel P3700 SSD and 64GB of DDR4 memory.
dedwards says
Yes, thank you. Great article, now I don't have to write one 🙂
Jubish says
Thanks William!
Can't we do this via RVC? Would you mind adding those commands so that the story is complete?
best,
Jubish
William Lam says
I've updated the article to reflect the comment from Ken
Ken Werneburg says
The RVC command vsan.host_claim_disks_differently is the thing you're looking for, Jubish.
Will need to pass it a device type or model, or a specific set of devices, and --claim-type capacity_flash.
Something forthcoming on that, though it will also be in the admin guide.
-K
William Lam says
Thanks for helping answer the question Ken, I've updated the article with the new RVC command
jasper9 says
Hey William, are you working on any scripted way of doing this? Seems kind of painful at scale.
William Lam says
Josh,
You should know me better than that 🙂 Of course there's a way to automate this, though it really depends on how you want to install ESXi. This can be via Auto Deploy or kickstart; with the latter you can just embed the commands you need, set up a DHCP reservation, power it up, and let the cluster build itself out, which is what I do. Take a look here: http://www.virtuallyghetto.com/2014/07/esxi-5-5-kickstart-script-for-setting-up-vsan.html
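For example, something along these lines in the kickstart's %firstboot section would handle the tagging (a rough sketch, not the exact script from that post; the device IDs are placeholders):

%firstboot --interpreter=busybox
# Tag each capacity SSD for the All-Flash disk group (substitute your own device IDs)
for DEVICE in naa.XXXX naa.YYYY; do
   esxcli vsan storage tag add -d ${DEVICE} -t capacityFlash
done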
Raman says
Hey William, do you know if this process still works on 6.0 U1a, which got released on 10/06/15? I'm running into a few issues (it worked fine in 6.0), where it's complaining that it cannot create an all-flash disk group.
Johan Brems says
Hi,
So I followed your guides and I have 2x nested ESXi hosts and want to bootstrap a vCenter.
I'm trying to test all-flash VSAN and I have created 2 volumes tagged as SSD.
But now when I want to add the disks to the vsandatastore I get a no-license error:
Unable to add device: Can not create all-flash disk group: current Virtual SAN license does not support all-flash
But since I don't have any vCenter running, how am I able to add a license?
Sany says
Can I have a script to mark the disks as capacityFlash as per the vendor?