Quick Tip - Steps to shutdown/startup VSAN Cluster w/vCenter running on VSAN Datastore

07.08.2014 by William Lam // 11 Comments

I know Cormac Hogan already wrote about this topic a while ago, but a question recently came up with a slight twist, and I thought it would be useful to share some additional details. The question that was raised: How do you properly shut down an entire VSAN Cluster when vCenter Server itself is also running on the VSAN Datastore? One great use case for VSAN, in my opinion, is a vSphere Management Cluster containing all of your basic infrastructure VMs, including a vCenter Server that has been bootstrapped onto a VSAN Datastore. In the event that you need to shut down the entire VSAN Cluster, which may also include your vCenter Server, what is the exact procedure?

To help answer this question, I decided to perform this operation in my own lab, which contains a 3-node (physical) VSAN Cluster with several VMs running on the VSAN Datastore, including the vCenter Server VM that manages the VSAN Cluster.

Below are the steps that I took to properly shut down a VSAN Cluster as well as to power everything back on.

UPDATE (4/27) - Added instructions for shutting down a VSAN 6.0 Cluster when vCenter Server is running on top of VSAN.

Shutdown VSAN Cluster (VSAN 6.0)

Step 1 - Shut down all Virtual Machines running on the VSAN Cluster except for the vCenter Server VM; that will be the last VM you shut down.

Step 2 - To help simplify the startup process, I recommend migrating the vCenter Server VM to the first ESXi host so you can easily find the VM when powering back on your VSAN Cluster.

Step 3 - Ensure that there are no vSAN Components being resynced before proceeding to the next step. You can find this information by going to the vSAN Cluster and navigating to Monitor->vSAN->Resyncing Components.

Step 4 - Shut down the vCenter Server VM, which will make the vSphere Web Client unavailable.

Step 5 - Next, you will need to place ALL ESXi hosts into Maintenance Mode. However, you must perform this operation through one of the CLI methods that supports setting the VSAN mode when entering Maintenance Mode. You can either do this by logging directly into the ESXi Shell and running ESXCLI locally, or you can invoke the operation from a remote system using ESXCLI.

Here is the ESXCLI command that you will need to run; make sure the "No Action" option is specified when entering Maintenance Mode:

esxcli system maintenanceMode set -e true -m noAction
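
If you prefer to drive this step from a remote machine rather than the ESXi Shell, newer PowerCLI releases can invoke the same ESXCLI call through Get-EsxCli. Here is a minimal sketch; since vCenter Server is already down at this point, it connects to each host directly, and the hostnames and credentials are placeholders for your environment:

# Minimal sketch: invoke the same ESXCLI call remotely via PowerCLI
# Hostnames and credentials below are placeholders for your environment
foreach ($esx in "esxi-01","esxi-02","esxi-03") {
    Connect-VIServer -Server $esx -User root -Password 'VMware1!' | Out-Null
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name $esx) -V2
    # Equivalent of: esxcli system maintenanceMode set -e true -m noAction
    $esxcli.system.maintenanceMode.set.Invoke(@{enable = $true; vsanmode = "noAction"})
    Disconnect-VIServer -Server $esx -Confirm:$false
}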

Step 6 - Finally, you can now shut down all ESXi hosts. You can log in to each ESXi host using either the vSphere C# Client or the ESXi Shell, or you can perform this operation remotely through the vSphere API, for example by leveraging PowerCLI.
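
To sketch the remote approach with PowerCLI (hostnames and credentials are again placeholders), note that the hosts are already in Maintenance Mode at this point, so Stop-VMHost can proceed without -Force:

# Minimal sketch: shut down each ESXi host remotely with PowerCLI
foreach ($esx in "esxi-01","esxi-02","esxi-03") {
    Connect-VIServer -Server $esx -User root -Password 'VMware1!' | Out-Null
    # Hosts are already in Maintenance Mode, so no -Force is needed
    Get-VMHost -Name $esx | Stop-VMHost -Confirm:$false
    Disconnect-VIServer -Server $esx -Confirm:$false
}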

Shutdown VSAN Cluster (VSAN 1.0)

Step 1 - Shut down all Virtual Machines running on the VSAN Cluster except for the vCenter Server VM.

Step 2 - To help simplify the startup process, I recommend migrating the vCenter Server VM to the first ESXi host so you can easily find the VM when powering back on your VSAN Cluster.

Step 3 - Place all ESXi hosts into Maintenance Mode except for the ESXi host that is currently running the vCenter Server. Ensure you de-select "Move powered-off and suspended virtual machines to other hosts in the cluster" and select the "No Data Migration" option, since we do not want any data to be migrated as we are shutting down the entire VSAN Cluster.

Note: Make sure you do not shut down any of the ESXi hosts during this step, because the vCenter Server's VSAN Components are distributed across multiple hosts. If you do, you will be unable to properly shut down the vCenter Server VM because its VSAN components will not be available.
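
Since vCenter Server is still online at this step, you can also script it with PowerCLI. Newer PowerCLI releases expose a -VsanDataMigrationMode parameter on Set-VMHost for exactly this purpose; here is a minimal sketch, where the host name is a placeholder:

# Minimal sketch: enter Maintenance Mode with No Data Migration via PowerCLI
# "esxi-02" is a placeholder; repeat for every host not running vCenter
Get-VMHost -Name "esxi-02" | Set-VMHost -State Maintenance -VsanDataMigrationMode NoDataMigration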

Step 4 - Shut down the vCenter Server VM, which will make the vSphere Web Client unavailable.

Step 5 - Finally, you can now shut down all ESXi hosts. You can log in to each ESXi host using either the vSphere C# Client or the ESXi Shell, or you can perform this operation remotely through the vSphere API, for example by leveraging PowerCLI.

Startup VSAN Cluster

Step 1 - Power on all the ESXi hosts that are part of the VSAN Cluster.

Step 2 - Once all the ESXi hosts have been powered on, log in to the ESXi host that contains your vCenter Server. If you took my advice earlier in the shutdown procedure, you can log in to the first ESXi host and power on your vCenter Server VM.

Note: You can perform steps 2-4 using the vSphere C# Client, but you can also do this using either the API or simply calling vim-cmd from the ESXi Shell. To use vim-cmd, first search for the vCenter Server VM by running the following command:

vim-cmd vmsvc/getallvms

You will need to make a note of the Vmid; in this example, our vCenter Server has a Vmid of 6.

Step 3 - To power on the VM, you can run the following command and specify the Vmid:

vim-cmd vmsvc/power.on [VMID]

Step 4 - If you would like to know when the vCenter Server is ready, you can check the status of VMware Tools, as that should give you an indication that the system is up and running. To do so, run the following command and look for the VMware Tools status:

vim-cmd vmsvc/get.guest [VMID]
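
If you would rather script steps 2 through 4 than type vim-cmd by hand, here is a minimal PowerCLI sketch that connects directly to the first ESXi host; the host name, credentials, and VM name ("VCSA") are placeholders for your environment. Wait-Tools blocks until VMware Tools responds, which serves the same readiness check as polling get.guest:

# Minimal sketch: power on the vCenter Server VM and wait for VMware Tools
Connect-VIServer -Server esxi-01 -User root -Password 'VMware1!' | Out-Null
$vcVM = Get-VM -Name "VCSA"          # placeholder vCenter Server VM name
Start-VM -VM $vcVM -Confirm:$false
Wait-Tools -VM $vcVM -TimeoutSeconds 600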

Step 5 - At this point, you can log in to the vSphere Web Client, take all of your ESXi hosts out of Maintenance Mode, and then power on the rest of your VMs.
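
This last step can also be scripted once you are connected to vCenter with PowerCLI; a minimal sketch:

# Minimal sketch: exit Maintenance Mode on all hosts, then power on the rest
Get-VMHost | Set-VMHost -State Connected | Out-Null
Get-VM | Where-Object { $_.PowerState -eq "PoweredOff" } | Start-VM -Confirm:$false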

As you can see, the process to shut down an entire VSAN Cluster, even with vCenter Server running on the VSAN Datastore, is fairly straightforward. Once you are comfortable with the procedure, you can even automate the entire process using the vSphere API/CLI, so you do not need a GUI to perform these steps. This might even be a good idea if you are monitoring a UPS and have an automated way of sending remote commands to shut down your infrastructure.

Categories // ESXi, VSAN, vSphere 5.5, vSphere 6.0 Tags // esxi 5.5, vcenter, vCenter Server, VSAN, vsanDatastore, vSphere 5.5, vSphere 6.0

Considerations when migrating VMs between vCenter Servers

05.02.2014 by William Lam // 19 Comments

Something I really enjoy, when I get the chance, is speaking with our field folks and learning a bit more about our customer environments and some of the challenges they are facing. Last week I had a quick call with one of our TAMs (Technical Account Managers) regarding the topic of Virtual Machine migration between vCenter Servers. The process of migrating Virtual Machines between two vCenter Servers is not particularly difficult: you simply disconnect the ESXi hosts from one vCenter Server and re-connect them to the new vCenter Server. This is something I performed on several occasions when I was a customer, and with some planning it works effortlessly.

However, there are certain scenarios and configurations when migrating VMs between vCenter Servers that could potentially cause Virtual Machine MAC Address collisions. Before we jump into these scenarios, here is some background. By default, a Virtual Machine MAC Address is automatically generated by vCenter Server, and the algorithm is based on the vCenter Server's unique ID (0-63), among a few other parameters, which is documented here. If you have more than one vCenter Server, a best practice is to ensure that these VC IDs are different, especially if they are in the same broadcast domain.
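
To make the dependence on the VC ID concrete, here is a rough PowerShell illustration of the generated-address pattern; the derivation of the fourth octet below (0x80 + VC ID) is my reading of the referenced documentation, not code taken from the product:

# Rough illustration (assumed): vCenter-generated MACs use the VMware OUI
# (00:50:56) with a fourth octet derived as 0x80 + the VC unique ID
$vcId = 2                                  # example VC ID; valid range is 0-63
$fourthOctet = "{0:X2}" -f (0x80 + $vcId)  # VC ID 2 -> "82"
"Generated MACs look like 00:50:56:{0}:YY:ZZ" -f $fourthOctet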

As you can imagine, if you have two vCenter Servers that are configured with the same VC ID, there is a possibility that a duplicate MAC Address could be generated. You might think this is a rare event given the roughly 65,000 possible MAC Address combinations. However, it actually happens more frequently than you would think, especially in very large scale environments and/or Dev/Test continuous build/integration environments like the ones I have worked in previously, where I have personally seen these issues.

Going back to our vCenter Server migration topic, there are currently two main scenarios that I see occurring in customer environments, and we can explore each in a bit more detail along with its implications:

  • Migrate ALL Virtual Machines from old vCenter Server to new vCenter Server
  • Migrate portion of Virtual Machines from old vCenter Server to new vCenter Server

Migrate all Virtual Machines:

[Diagram: migrating ALL Virtual Machines from vCenter Server 1 to vCenter Server 2]
In the diagram above, we have vCenter Server 1 and vCenter Server 2 providing a before/after view. To make things easy, let's say they have VC IDs 1 and 2. If we migrate ALL Virtual Machines across, their original MAC Addresses will be preserved, as we expect. For any new Virtual Machines being created, the 4th octet of the MAC Address will differ as expected, and the vCenter Server will guarantee it is unique. If you want to ensure that new Virtual Machines keep a similar MAC Address scheme, you could change the vCenter Server ID to 1. No issues here, and the migration is very straightforward.

Migrate a portion of Virtual Machines:

[Diagram: migrating a portion of Virtual Machines from vCenter Server 1 to vCenter Server 2]
In the second diagram, we still have vCenter Server 1 and vCenter Server 2 with unique VC IDs. However, in this scenario we are only migrating a portion of the Virtual Machines from vCenter Server 1 to vCenter Server 2. By migrating VM2 off of vCenter Server 1, the MAC Address of VM2 is no longer registered with that vCenter Server. This means that vCenter Server 1 can potentially re-use that MAC Address when it generates a new one. As you can see from the above diagram, this is a concern because VM2 is still using that MAC Address in vCenter Server 2, but vCenter Server 1 is no longer aware of its existence.

The scenario above is what the TAM was seeing at his customer's site and after understanding the challenge, there are a couple of potential solutions:

  1. Range-Based MAC Address allocation - Allows you to specify a range of MAC Addresses to allocate from, which may or may not be helpful if the migrated MAC Addresses are truly random
  2. Prefix-Based MAC Address allocation - Allows you to modify the default VMware OUI (00:50:56), which would ensure no conflicts with previously assigned MAC Addresses. Though this could solve the problem, you could potentially run into collisions with other OUIs within your environment
  3. Leave VMs in a disconnected state - This was actually a solution provided by another TAM on an internal thread, and it ended up working for his customer. The idea is that instead of disconnecting and removing the ESXi host when migrating a set of Virtual Machines, you just leave it disconnected in vCenter Server 1. You would still be able to connect the ESXi host and Virtual Machines to vCenter Server 2, but from vCenter Server 1's point of view, the MAC Addresses for those Virtual Machines are still in use and will not be reallocated.

I thought option #3 was a pretty interesting and out-of-the-box solution that the customer came up with. The use case that caused them to see this problem in the first place is the way they provision remote environments. The customer has a centralized build environment in which everything is built and then shipped off to the remote sites, which is a fairly common practice. Since the centralized vCenter Server is not moving, you can see how previously used MAC Addresses could be re-allocated.

Although option #3 would be the easiest to implement, from an operational perspective I am not a fan of seeing so many disconnected systems, as it is hard to tell whether an ESXi host and its Virtual Machines have an actual issue or have simply been migrated. One way to help with that is to create a folder called "Migrated" and move all disconnected ESXi hosts into it, which would mask them away, and to disable any alarms for those hosts.

Some additional pre-requisite checks that you can perform prior to a partial Virtual Machine migration:

Ensure that the destination vCenter Server is not configured with the same VC ID; otherwise you can potentially run into duplicate MAC Address conflicts. You can do this either manually through the vSphere Web/C# Client or by leveraging our CLI/API.

Here is an example using PowerCLI to retrieve the vCenter Server ID:

Get-AdvancedSetting -Entity $server -Name instance.id
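
Run it against both vCenter Server connections and confirm the two values differ; for example (assuming $vc1 and $vc2 are existing Connect-VIServer sessions):

# Both instance IDs should come back different
Get-AdvancedSetting -Entity $vc1 -Name instance.id
Get-AdvancedSetting -Entity $vc2 -Name instance.id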

Ensure no duplicate MAC Addresses exist by comparing the MAC Addresses of the Virtual Machines to be migrated against the Virtual Machines in the new environment. Again, you can either do this by hand (which I would not recommend) or leverage our CLI/API to extract this information.

Here is an example using PowerCLI to retrieve the MAC Addresses for a Virtual Machine:

Get-VM | Select-Object -Property Name, PowerState, @{"Name"="MAC";"Expression"={($_ | Get-NetworkAdapter).MacAddress}}
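
Building on that one-liner, a quick way to flag potential collisions before migrating is to pull the MAC Address lists from both vCenter Servers and diff them. A minimal sketch, where the server names and credentials are placeholders and $vc1/$vc2 are the resulting connections:

# Minimal sketch: detect MAC Addresses present in both inventories
$vc1 = Connect-VIServer -Server vcenter-01 -User administrator -Password 'VMware1!'
$vc2 = Connect-VIServer -Server vcenter-02 -User administrator -Password 'VMware1!'
$macs1 = Get-VM -Server $vc1 | Get-NetworkAdapter | Select-Object -ExpandProperty MacAddress
$macs2 = Get-VM -Server $vc2 | Get-NetworkAdapter | Select-Object -ExpandProperty MacAddress
# Any MAC that appears in both lists is a potential conflict
Compare-Object $macs1 $macs2 -IncludeEqual -ExcludeDifferent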

If there are other scenarios or solutions that you have seen with Virtual Machine migrations between vCenter Servers, feel free to leave a comment. I am sure others can benefit from past experiences or any other lessons learned.

Categories // vSphere Tags // mac address, migration, vCenter Server, vSphere
