How to configure an All-Flash VSAN 6.0 Configuration using Nested ESXi?

02.11.2015 by William Lam // 11 Comments

There has been a great deal of interest from customers and partners in an All-Flash VSAN configuration, especially as consumer grade SSDs (eMLC) continue to drop in price and the endurance of these devices has proven to last much longer than originally expected, as mentioned in this article by Duncan Epping. In fact, at VMworld last year the folks over at Micron and SanDisk built and demoed an All-Flash VSAN configuration, proving this was not only cost effective but also quite performant. You can read more about the details here and here. With the announcement of vSphere 6 this week and VMware Partner Exchange taking place the same week, there was a lot of excitement about what VSAN 6.0 might bring.

One of the coolest features in VSAN 6.0 is support for an All-Flash configuration. The folks over at SanDisk gave a sneak peek a couple of weeks back at VMware Partner Exchange of what they were able to accomplish with VSAN 6.0 using an All-Flash configuration: they achieved an impressive 2 million IOPS. For more details, take a look here. I am pretty sure there are going to be plenty more partner announcements as we get closer to the GA of vSphere 6, and there will be a list of supported vendors and devices on the VMware VSAN HCL, so stay tuned.

To easily demonstrate this new feature, I will be using Nested ESXi, but the process to configure an All-Flash VSAN configuration is exactly the same on real physical hardware. Nested ESXi is a great learning tool for understanding and walking through the exact process, but it should not be a substitute for actual hardware testing. You will need a minimum of three Nested ESXi hosts, and each should be configured with at least 6GB of memory when working with VSAN 6.0.

Disclaimer: Nested ESXi is not officially supported by VMware; please use at your own risk.

In VSAN 1.0, an All-Flash configuration was not officially supported; the only way to get it working was by "tricking" ESXi into thinking the SSDs used for the capacity tier were MDs (magnetic disks) by creating claim rules using ESXCLI. Though this method worked, VSAN itself assumed the capacity tier consisted of regular magnetic disks, and hence its operations were not optimized for anything but magnetic disks. With VSAN 6.0, this is now different: VSAN will optimize based on whether you are using a hybrid or an All-Flash configuration. VSAN 6.0 also exposes a new property called IsCapacityFlash, which allows a user to specify whether an SSD is used for the write buffer or for capacity purposes.

Step 1 - We can easily view the IsCapacityFlash property by using our handy vdq VSAN utility, which has now been enhanced to include a few more properties. Run the following command to view your disks:

vdq -q

[Screenshot: all-flash-vsan-6]
From the screenshot above, we can see that we have two disks eligible for VSAN and that they are both SSDs. We can also see the new IsCapacityFlash property, which is currently set to 0 for both. We will want to select one of the disks and set this property to 1 to enable it for capacity use within VSAN.
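For reference, here is roughly the shape of the vdq -q output; the device name is the one used later in this article, and the exact fields and values should be treated as illustrative:

[
   {
      "Name"     : "naa.6000c295be1e7ac4370e6512a0003edf",
      "VSANUUID" : "",
      "State"    : "Eligible for use by VSAN",
      "Reason"   : "None",
      "IsSSD"    : "1",
      "IsCapacityFlash" : "0",
      "IsPDL"    : "0",
   },
]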

Step 2 - Identify the SSD device(s) you wish to use for your capacity tier. A very simple way to do this is with the following ESXCLI snippet:

esxcli storage core device list  | grep -iE '(   Display Name: |   Size: )'

[Screenshot: all-flash-vsan-1]
We can quickly get a list of the devices and their IDs along with their capacities. In the example above, I will be using the 8GB device for SSD capacity.

Step 3 - Once you have identified the device(s) from the previous step, we now need to add a new option called enable_capacity_flash to these device(s) using ESXCLI. There are actually three methods of assigning the capacity flash tag to a device, and all three provide the same end result. Personally, I would go with Option 2, as it is much simpler to remember than the syntax for claim rules 🙂 If you have the ESXi hosts connected to your vCenter Server, then Option 3 is great, as you can perform this step from a single location.

Option 1: ESXCLI Claim Rules

Run the following two ESXCLI commands for each device you wish to mark for SSD capacity:

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.6000c295be1e7ac4370e6512a0003edf -o enable_capacity_flash
esxcli storage core claiming reclaim -d naa.6000c295be1e7ac4370e6512a0003edf

[Screenshot: all-flash-vsan-2]
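If you have multiple capacity devices to tag, a small loop in the ESXi Shell saves some typing. A minimal sketch; the second device ID below is a made-up placeholder, so substitute your own:

# Tag each capacity device and reclaim it (replace the example IDs with your own)
for DEVICE in naa.6000c295be1e7ac4370e6512a0003edf naa.EXAMPLE_SECOND_DEVICE_ID; do
   esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d ${DEVICE} -o enable_capacity_flash
   esxcli storage core claiming reclaim -d ${DEVICE}
done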
Option 2: ESXCLI using new VSAN tagging command

esxcli vsan storage tag add -d naa.6000c295be1e7ac4370e6512a0003edf -t capacityFlash
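Should you need to undo the tagging later, there is a matching remove operation that mirrors the add syntax:

esxcli vsan storage tag remove -d naa.6000c295be1e7ac4370e6512a0003edf -t capacityFlash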

Option 3: RVC using new vsan.host_claim_disks_differently command

vsan.host_claim_disks_differently --disk naa.6000c295be1e7ac4370e6512a0003edf --claim-type capacity_flash
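Keep in mind that RVC commands are run against inventory objects, so in practice you would also pass the host's inventory path to the command; a sketch, where the path below is a hypothetical example:

vsan.host_claim_disks_differently /localhost/Datacenter/computers/VSAN-Cluster/hosts/esxi-01 --disk naa.6000c295be1e7ac4370e6512a0003edf --claim-type capacity_flash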

Step 4 - To verify the changes took effect, we can re-run the vdq -q command, and we should now see our device(s) marked for SSD capacity.

[Screenshot: all-flash-vsan-3]
Step 5 - You can now create your VSAN Cluster using the vSphere Web Client as you normally would and add the ESXi hosts into the cluster, or you can bootstrap it using ESXCLI if you are trying to run vCenter Server on top of VSAN; for more details, take a look here.
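For reference, the ESXCLI bootstrap route looks roughly like the following sketch; naa.EXAMPLE_CACHE_SSD_ID is a placeholder for your cache-tier SSD:

# Create a single-node VSAN cluster on this host
esxcli vsan cluster new
# -s specifies the cache-tier SSD, -d the capacity device(s) tagged earlier
esxcli vsan storage add -s naa.EXAMPLE_CACHE_SSD_ID -d naa.6000c295be1e7ac4370e6512a0003edf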

One thing I found interesting is that when setting up an All-Flash VSAN configuration in the vSphere Web Client, the SSD(s) used for capacity still show up as "HDD". I am not sure if this is what the final UI will look like before vSphere 6.0 goes GA.

[Screenshot: all-flash-vsan-4]
If you want to check the actual device type, you can always go to a specific ESXi host under Manage->Storage->Storage Devices to get more details. If we look at our NAA* device IDs, we can see that both devices are in fact SSDs.

[Screenshot: all-flash-vsan-5]
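You can also double-check this from the ESXi Shell, since the device details include an "Is SSD" field:

esxcli storage core device list -d naa.6000c295be1e7ac4370e6512a0003edf | grep "Is SSD"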
Hopefully those of you interested in an All-Flash VSAN configuration can now quickly get a feel for it by running VSAN 6.0 in a Nested ESXi environment. I will be publishing updated OVF templates for various types of VSAN 6.0 testing in the coming weeks, so stay tuned.

Categories // ESXi, Nested Virtualization, VSAN, vSphere 6.0 Tags // enable_capacity_flash, esxcli, IsCapacityFlash, Virtual SAN, VSAN, vSphere 6.0

Increasing disk capacity simplified with VCSA 6.0 using LVM autogrow

02.10.2015 by William Lam // 20 Comments

With previous releases of the VCSA, increasing disk capacity was not a very straightforward process. Even though you could easily increase the size of the underlying VMDK while the VCSA was running, increasing the guest OS filesystem was not as seamless. In fact, the process was to add a new VMDK, format it, and then copy the contents from the old disk to the new disk, as detailed in VMware KB 2056764. This meant that with previous releases of VCSA 5.x, you would need to incur downtime in your environment, and it could also be quite significant depending on your familiarity with the steps mentioned in the KB, not to mention the time it took to copy the data.

UPDATE (12/06/16) - For VCSA 6.5 deployments, please refer to the article here as the instructions have changed since VCSA 6.0.

The reason for this unnecessary complexity is that the VCSA did not take advantage of a Logical Volume Manager (LVM) for managing its disks. In VCSA 6.0, LVM is now used, making it extremely easy to increase disk capacity while the VCSA is running. VCSA 6.0 further simplifies this by separating out the various functions into their own disk partitions, comprised of 11 VMDKs, compared to the monolithic design of previous VCSA releases. This not only allows you to increase capacity for a specific partition, but you can also now attach specific storage SLAs using VM Storage Policies to specific VMDKs, such as the Database or Log VMDK, for example.

In the example below, I will walk through the process of increasing the DB VMDK from the existing 10GB to 20GB while the vCenter Server is still running.

Step 1 - Verify the existing disk capacity using "df -h"

[Screenshot: increase-vmdk-in-vcsa-01]
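Since the DB partition on the VCSA 6.0 appliance is mounted at /storage/db, you can also zero in on just that filesystem:

df -h /storage/db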
Step 2 - Increase the capacity on VMDK 6, which represents the DB partition, using the vSphere Web/C# Client.

Step 3 - Once the VMDK has been increased, you will need to run the following command in the VCSA, which will automatically expand any Logical Volumes that have had their Physical Volumes increased:

vpxd_servicecfg storage lvm autogrow

[Screenshot: increase-vmdk-in-vcsa-02]
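If the expansion succeeds, the command should return a zero result code, along the lines of:

VC_CFG_RESULT=0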
Step 4 - Confirm the newly added capacity has been consumed

[Screenshot: increase-vmdk-in-vcsa-03]
If you would like to learn more about the different VMDK structure in the new VCSA 6.0, I will be sharing more details in a future article.

Categories // Automation, VCSA, vSphere 6.0 Tags // autogrow, lvm, VCSA, vcva, vpxd_servicecfg, vSphere 6.0

Ultimate automation guide to deploying VCSA 6.0 Part 0

02.09.2015 by William Lam // 8 Comments

With vSphere 6.0, there is a new deployment model for vCenter Server, which is comprised of the following two core components:

  • Platform Services Controller (PSC) Node - Provides VMware infrastructure services such as vCenter Single Sign-On, vSphere Licensing, and VMware Certificate Authority (VMCA) management
  • vCenter Server Management Node - Provides vCenter Server Service, Inventory Service, vSphere Web Client, vPostgres DB, vSphere Syslog Collector, vSphere Auto Deploy, and vSphere Dump Collector Services

From these two components, there are three deployment types (also shown in the diagrams below):

  1. Embedded Node - Both the Platform Services Controller and the vCenter Server Management Node reside on a single system; this is true for both the Windows vCenter Server and the VCSA
  2. External Platform Services Controller Node - You can deploy multiple PSCs and configure them with independent SSO Domains, or have them all joined to a single SSO Domain, replicating between each other
  3. vCenter Server Management Node - This requires that you have deployed an external PSC which the vCenter Server can point to

[Diagram: vcsa-6.0-deployment-options-new-2]

There are currently two supported methods of deploying the VCSA 6.0 Appliance: the new HTML-based UI (supported only on Windows) or the new scripted installer method (supports Windows, Mac & Linux). Both of these methods today require direct access to an ESXi host for deployment, which may not work for everyone. What if you want to deploy the new VCSA 6.0 using an existing vCenter Server, or run it on top of VMware Fusion or Workstation? Luckily, I spent quite a bit of time going through all these "alternative" deployment methods and documenting the process so that you have a choice in how you test and evaluate vSphere 6 and the new VCSA in your environment.

These alternative methods use the VCSA OVA, which is actually included in the VCSA ISO. You will need to extract the contents of the VCSA ISO; you can find the OVA in the following path after extraction: VMware-VCSA-all-6.0.0-2562643->vcsa->vmware-vcsa, where vmware-vcsa is the VCSA OVA file. Depending on the deployment method you are using, you may only need to extract the contents of the ISO, or possibly rename the vmware-vcsa file with a .ova extension to deploy. Please refer to the articles below for more details.
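As a rough sketch of that extraction step on a Linux system (the mount point and destination path are arbitrary examples):

# Mount the VCSA ISO and copy out the OVA, giving it a .ova extension
mkdir -p /mnt/vcsa-iso
mount -o loop VMware-VCSA-all-6.0.0-2562643.iso /mnt/vcsa-iso
cp /mnt/vcsa-iso/vcsa/vmware-vcsa /tmp/vmware-vcsa.ova
umount /mnt/vcsa-iso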

Disclaimer: Though these alternative deployment options work, they are not officially supported by VMware. Please use at your own risk.
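To give a flavor of what the ovftool-based deployments look like, here is a minimal sketch that pushes the renamed OVA straight to an ESXi host; the VM name, datastore, network, and host below are placeholders, and the guestinfo properties needed to actually configure the appliance are covered in the individual parts:

# Minimal ovftool push of the VCSA OVA (placeholder values throughout)
ovftool --acceptAllEulas --noSSLVerify \
  --name=vcsa-6.0 \
  --datastore=datastore1 \
  --network="VM Network" \
  /tmp/vmware-vcsa.ova \
  'vi://root@esxi-01.example.com/'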

In the upcoming days, I will be sharing a 4-part blog series for automating the deployment of the new VCSA 6.0 with the following deployment options:

  • Part 0: Introduction
  • Part 1: Embedded Node
  • Part 2: Platform Services Controller Node
  • Part 3: Replicated Platform Services Controller Node
  • Part 4: vCenter Server Management Node

In each article, I will provide resources on how to deploy to an existing vCenter Server or directly to an ESXi host using ovftool via a shell script as well as using PowerCLI, deploying to VMware Fusion, and deploying to VMware Workstation. Stay tuned for Part 1 ...

Categories // Automation, Fusion, OVFTool, VCSA, vSphere 6.0, Workstation Tags // fusion, vcenter server appliance, VCSA, vcva, vSphere 6.0, workstation
