Quick Tip - Steps to shutdown/startup VSAN Cluster w/vCenter running on VSAN Datastore

07.08.2014 by William Lam // 11 Comments

I know Cormac Hogan already wrote about this topic a while ago, but a question recently came up with a slight twist, and I thought it would be useful to share some additional details. The question that was raised: how do you properly shut down an entire VSAN Cluster when the vCenter Server itself is also running on the VSAN Datastore? One great use case for VSAN, in my opinion, is a vSphere Management Cluster that contains all of your basic infrastructure VMs, including a vCenter Server that has been bootstrapped onto a VSAN Datastore. In the event that you need to shut down the entire VSAN Cluster, which may also include your vCenter Server, what is the exact procedure?

To help answer this question, I decided to perform this operation in my own lab, which contains a 3-node (physical) VSAN Cluster with several VMs running on the VSAN Datastore, including the vCenter Server VM that manages the VSAN Cluster.

Below are the steps that I took to properly shut down a VSAN Cluster, as well as to power everything back on.

UPDATE (4/27) - Added instructions for shutting down a VSAN 6.0 Cluster when vCenter Server is running on top of VSAN.

Shutdown VSAN Cluster (VSAN 6.0)

Step 1 - Shut down all virtual machines running on the VSAN Cluster except for the vCenter Server VM, which will be the last VM you shut down.

Step 2 - To help simplify the startup process, I recommend migrating the vCenter Server VM to the first ESXi host so you can easily find the VM when powering your VSAN Cluster back on.

Step 3 - Ensure that there are no vSAN components being resynced before proceeding to the next step. You can find this information by selecting the vSAN Cluster and navigating to Monitor->vSAN->Resyncing Components.
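
If you prefer to check this from the CLI, the Ruby vSphere Console (RVC) bundled with vCenter Server has a resync dashboard; here is a quick sketch, where the datacenter and cluster names are placeholders for your own environment:

# from an RVC session on the vCenter Server
vsan.resync_dashboard /localhost/Datacenter/computers/VSAN-Cluster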

Step 4 - Shut down the vCenter Server VM, which will make the vSphere Web Client unavailable.

Step 5 - Next, you will need to place ALL ESXi hosts into Maintenance Mode. However, you must perform this operation using one of the CLI methods that supports setting the VSAN mode when entering Maintenance Mode. You can either log directly into the ESXi Shell and run ESXCLI locally, or you can invoke the operation from a remote system using ESXCLI.

Here is the ESXCLI command you will need to run; it ensures the "No Action" option is used when entering Maintenance Mode:

esxcli system maintenanceMode set -e true -m noAction
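
If you are invoking this from a remote system with the vCLI package installed, the same operation might look like the following; the hostname is a placeholder for each of your ESXi hosts, and you will be prompted for the password if one is not supplied:

esxcli --server esxi-01.lab.local --username root system maintenanceMode set -e true -m noAction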

Step 6 - Finally, you can now shut down all of the ESXi hosts. You can log in to each ESXi host using either the vSphere C# Client or the ESXi Shell, or you can perform this operation remotely using the vSphere API, for example with PowerCLI.
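
Since each host is already in Maintenance Mode at this point, one option is the following ESXCLI command in the ESXi Shell; the reason string is just an illustrative label:

esxcli system shutdown poweroff --reason "VSAN cluster shutdown"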

Shutdown VSAN Cluster (VSAN 1.0)

Step 1 - Shut down all virtual machines running on the VSAN Cluster except for the vCenter Server VM.

Step 2 - To help simplify the startup process, I recommend migrating the vCenter Server VM to the first ESXi host so you can easily find the VM when powering your VSAN Cluster back on.

Step 3 - Place all ESXi hosts into Maintenance Mode except for the ESXi host that is currently running the vCenter Server VM. Ensure you de-select "Move powered-off and suspended virtual machines to other hosts in the Cluster" and select the "No data migration" option, since we do not want any data to be migrated as we are shutting down the entire VSAN Cluster.

Note: Make sure you do not shut down any of the ESXi hosts during this step, because the vCenter Server's VSAN components are distributed across multiple hosts. If you do, you will be unable to properly shut down the vCenter Server VM because its VSAN components will not be available.

Step 4 - Shut down the vCenter Server VM, which will make the vSphere Web Client unavailable.

Step 5 - Place the remaining ESXi host (the one that was running the vCenter Server VM) into Maintenance Mode as well.

Step 6 - Finally, you can now shut down all of the ESXi hosts. You can log in to each ESXi host using either the vSphere C# Client or the ESXi Shell, or you can perform this operation remotely using the vSphere API, for example with PowerCLI.

Startup VSAN Cluster

Step 1 - Power on all of the ESXi hosts that are part of the VSAN Cluster.

Step 2 - Once all of the ESXi hosts have been powered on, you can log in to the ESXi host that contains your vCenter Server. If you took my advice earlier in the shutdown procedure, you can log in to the first ESXi host and power on your vCenter Server VM.

Note: You can perform steps 2-4 using the vSphere C# Client, but you can also do this using either the API or by simply calling vim-cmd from the ESXi Shell. To use vim-cmd, you first need to search for the vCenter Server VM by running the following command:

vim-cmd vmsvc/getallvms

You will need to make a note of the Vmid; in this example, our vCenter Server has a Vmid of 6.

Step 3 - To power on the VM, you can run the following command and specify the Vmid:

vim-cmd vmsvc/power.on [VMID]

Step 4 - If you would like to know when the vCenter Server is ready, you can check the status of VMware Tools, which should give you an indication that the system is up and running. To do so, run the following command and look for the VMware Tools status:

vim-cmd vmsvc/get.guest [VMID]
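
If you would rather not keep re-running that command by hand, here is a minimal polling sketch for the ESXi Shell, assuming a Vmid of 6 and that VMware Tools reports "guestToolsRunning" in the output once the guest OS is up:

# poll every 10 seconds until VMware Tools reports the guest is running
while ! vim-cmd vmsvc/get.guest 6 | grep -q guestToolsRunning; do
  sleep 10
done
echo "vCenter Server guest OS is up"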

Step 5 - At this point, you can log in to the vSphere Web Client, take all of your ESXi hosts out of Maintenance Mode, and then power on the rest of your VMs.

As you can see, the process of shutting down an entire VSAN Cluster, even with vCenter Server running on the VSAN Datastore, is fairly straightforward. Once you are comfortable with the procedure, you can even automate the entire process using the vSphere API/CLI, so you do not need a GUI to perform these steps. This can be a good idea if you are monitoring a UPS and have an automated way of sending remote commands to shut down your infrastructure.
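
To give a rough idea of what that automation could look like, here is a minimal sketch of the VSAN 6.0 procedure, assuming SSH is enabled on each host, key-based authentication is configured, and the hostnames and Vmid are placeholders for your own environment:

#!/bin/sh
# Hypothetical UPS-triggered shutdown of a 3-node VSAN cluster (a sketch,
# not a hardened implementation - error handling is omitted)
HOSTS="esxi-01 esxi-02 esxi-03"   # placeholder hostnames for the VSAN nodes
VC_HOST="esxi-01"                 # host running the vCenter Server VM
VC_VMID=6                         # Vmid from 'vim-cmd vmsvc/getallvms'

# gracefully shut down the vCenter Server guest first
ssh root@$VC_HOST "vim-cmd vmsvc/power.shutdown $VC_VMID"
sleep 120  # crude wait; polling 'vim-cmd vmsvc/get.guest' would be more robust

for h in $HOSTS; do
  # enter Maintenance Mode with the VSAN "No Action" option, then power off
  ssh root@$h "esxcli system maintenanceMode set -e true -m noAction"
  ssh root@$h "esxcli system shutdown poweroff -r 'UPS shutdown'"
done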

Categories // ESXi, VSAN, vSphere 5.5, vSphere 6.0 Tags // ESXi 5.5, vCenter Server, VSAN, vsanDatastore, vSphere 5.5, vSphere 6.0

Quick stats for the VSAN HCL

06.13.2014 by William Lam // 3 Comments

I noticed there was a new blog post this morning from Wade Holmes on an update to the VSAN HCL, and I thought it might be useful to provide some quick stats on all the partners who have supported components listed on the VSAN HCL, such as storage controllers, SSDs and MDs (magnetic disks). As of today (06/13/14), the information below reflects the latest VSAN HCL. I will adjust the Google doc as updates are made to the VSAN HCL.

Disclaimer: The VMware VSAN HCL should still be used as the official source when selecting components for your VSAN environment.

Total VSAN Storage Controllers: 89
GDoc for All VSAN Controllers - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ

Vendor Controllers
Cisco 2
Dell 5
Fujitsu 11
HP 7
IBM 6
Intel 18
LSI 37
SuperMicro 3

Note: If you would like to help contribute to the "Community" VSAN storage controller queue depth list, please take a look at this article for more details.

Total VSAN SSDs: 110
GDoc for All VSAN SSDs - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ/edit#gid=858526558

Vendor SSDs
Cisco 5
Dell 15
EMC 5
Fujitsu 4
Fusion-IO 15
Hitachi 9
HP 15
IBM 9
Intel 12
Micron 7
Samsung 3
SanDisk 6
Virident Systems 5

Total VSAN MDs: 97
GDoc for All VSAN MDs - https://docs.google.com/spreadsheets/d/1FHnGAHdQdCbmNJMyze-bmpTZ3cMjKrwLtda1Ry32bAQ/edit#gid=1993745998 

Vendor MDs
Cisco 8
Dell 20
Fujitsu 13
Hitachi 1
HP 19
IBM 20
Lenovo 3
Seagate 13

Categories // VSAN, vSphere 5.5 Tags // ESXi 5.5, hdd, md, ssd, storage controller, VSAN, vSphere 5.5

"Community" VSAN Storage Controller Queue Depth List

06.08.2014 by William Lam // 13 Comments

After reading this Reddit thread about a customer's recent experience with VSAN, I have been thinking about how customers can actually tell what the queue depth is for a particular storage controller. Currently, the VSAN HCL for storage controllers does not provide any queue depth information, and from my understanding this information may not always be easy to find or well documented.

I know Duncan Epping has even crowd-sourced some of this information, and his list currently seems to be the best available. However, if you look through his list carefully, you will see that it only contains a small subset of the supported storage controllers found on the VSAN HCL, as it also includes non-supported storage controllers. I was thinking about how we could build a more comprehensive list, and more importantly, one that includes ALL the storage controllers found on the VSAN HCL, to help our customers.

It then hit me: why not build on top of the effort Duncan has started and create a comprehensive list that includes all storage controllers found within the VSAN HCL and their corresponding queue depths? For this effort, I decided to take a slightly different approach to how I gather the information. Right now, a user must manually run through a series of commands in ESXTOP and then report back the vendor, make and queue depth of a particular storage controller that may or may not be on the VSAN HCL. My goal was to make the process as simple as possible by automating the data collection, but also adding some intelligence into the script, which you will see as you read further.
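
As a point of reference, a rough manual check can also be done non-interactively instead of stepping through ESXTOP; a sketch from the ESXi Shell, assuming the queue depth fields appear in esxcfg-info output the way they do on my ESXi 5.5 hosts:

# dump the storage subsystem info and look for adapter queue depths
esxcfg-info -s | grep -i "queue depth"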

If you look at the VSAN HCL for storage controllers today (as of 07/18/14), there are currently 73 supported storage controllers:

Vendor Controllers
Cisco 1
Dell 4
Fujitsu 6
HP 4
IBM 6
Intel 16
LSI 33
SuperMicro 3

Instead of asking a user to identify the proper storage controller, the make/model and the queue depth to submit, I created a very simple Python script that runs inside the ESXi Shell (this information is not available in the API) to help collect this information. The interesting thing about the script is not the collection itself, as mentioned, but how it performs the collection. I have embedded the entire list of supported storage controllers found in the VSAN HCL, and as the script scans through the storage controllers within an ESXi host, it compares them against the list of supported controllers. If a supported controller is found, it will display some basic information about the storage controller along with the currently supported queue depth. The nice thing about this list, once completed, is that when selecting a particular storage controller in the VSAN HCL, you can easily map that same device to the VSAN storage controller queue depth list and have confidence it is the same device!

To use the script, follow these 3 simple instructions:

Step 1 - Download the script here: find_vsan_storage_ctrl_queue_depth.py

Disclaimer: Please excuse my poor Python script; as a Python beginner, I am sure it can be written better, and I am open to any fixes/suggestions.

Step 2 - SCP it to your ESXi host and make sure you set the execute permission on the script before running it (chmod +x find_vsan_storage_ctrl_queue_depth.py).
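
For example (the hostname and /tmp location are just placeholders):

# copy the script from your workstation to the ESXi host
scp find_vsan_storage_ctrl_queue_depth.py root@esxi-01.lab.local:/tmp/
# then, in the ESXi Shell on that host
chmod +x /tmp/find_vsan_storage_ctrl_queue_depth.py
/tmp/find_vsan_storage_ctrl_queue_depth.py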

Here is an example of the script running on an ESXi host with a supported VSAN storage controller:

In my lab, for example, the script identified an Intel controller with a supported queue depth of 600.

Step 3 - Submit the results to the "Community" VSAN Storage Controller Queue Depth List, which is hosted on Google Docs and available for everyone to contribute to.

The easiest way to map the output to the Google document is to find the "Identifier" ID, which is made up of the Vendor ID (VID), Device ID (DID), Sub-Vendor ID (SVID) and Sub-Device ID (SDID), within the Google document. Once you have found the match in the document, and if no one has submitted the queue depth yet, go ahead and edit the document with the queue depth from the script.
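
If you want to double-check those PCI identifiers by hand, one option is vmkchdev in the ESXi Shell; a rough example, assuming its output lists devices as "vid:did svid:sdid" pairs the way it does on my 5.5 hosts:

# list PCI devices; storage controllers appear alongside their vid:did svid:sdid
vmkchdev -l | grep vmhba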

For those of you who would like to contribute non-supported VSAN storage controllers, there is a variable in the script called show_non_vsan_hcl_ctr that can be toggled from False to True; this will produce a much longer list of controllers and their queue depths.
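
A quick way to flip that toggle in place, assuming the assignment appears in the script exactly as "show_non_vsan_hcl_ctr = False":

# flip the toggle with BusyBox sed on the ESXi host
sed -i 's/show_non_vsan_hcl_ctr = False/show_non_vsan_hcl_ctr = True/' /tmp/find_vsan_storage_ctrl_queue_depth.py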

In addition to assistance from the community, I also hope to see some of the storage controller vendors participate in this effort to help build a complete list of supported queue depths for every storage controller found on the VSAN HCL. I think this will benefit everyone, and I look forward to seeing the collaboration from the community! Let's see how fast we can complete the list; I have faith in our powerful community!

Categories // Automation, ESXi, VSAN, vSphere 5.5 Tags // esxcfg-info, ESXi 5.5, queue depth, storage controller, VSAN, vSphere 5.5
