I recently caught an interesting VMTN thread where a user wanted to move an existing VSAN Cluster from one vCenter Server to another vCenter Server with minimal impact to the ESXi hosts and running Virtual Machines. The great news is that this can be done without any impact to your ESXi hosts and, more importantly, without any impact to your workloads. I have personally performed this operation on several occasions without any problems, and the process is actually quite straightforward, so I thought I would walk you through it since it is literally just a couple of steps.
The main reason this is not a challenge is that VSAN has been architected not to rely on vCenter Server for its normal operations. It is true that vCenter Server is required for the configuration and management of the VSAN Cluster and VM Storage Policies, but once those configurations have been applied, vCenter Server is no longer in the picture from an operational point of view. This means that if you need to move your VSAN Cluster from a development vCenter Server to a production vCenter Server, or if you accidentally destroyed your original vCenter Server, the VSAN Cluster can easily be re-created on a new vCenter Server.
To demonstrate the process, I have a 3 Node VSAN Cluster with a running Virtual Machine on vCenter Server (vcenter55-1) and I have built a new vCenter Server (vcenter55-3) which I would like to move the existing VSAN Cluster over to.
UPDATE2 (11/02/17) - There was a question a couple of weeks back on whether the procedure outlined below could also apply to a vSAN Stretched Cluster. I did not see any technical reasons preventing this, and one of our GSS Engineers recently validated it with a customer, successfully moving a vSAN Stretched Cluster. I asked if he could share the modified instructions in case others were interested.
- Copy all VDS settings to new cluster
- Enable vSAN on new cluster (follow Step 2 below)
- Disable stretched cluster
- Move each host
- Move witness
- Re-enable stretched cluster (follow Step 4 below)
Step 1 - Deploy a new vCenter Server and create a vSphere Cluster with VSAN Enabled.
UPDATE1 (05/02/17) - Updated to include vSAN 6.6-specific instructions.
Step 2 - Pre-vSAN 6.6: Disconnect one of the ESXi hosts from your existing VSAN Cluster and then add it to the VSAN Cluster in your new vCenter Server.
Note: Technically, you do not even have to disconnect the ESXi hosts from the old vCenter Server. You could just add the ESXi hosts to the new vCenter Server, and once you confirm that you wish to move an ESXi host, it will automatically be disconnected from the old vCenter Server once added. This actually saves you an extra step.
vSAN 6.6: An additional configuration needs to be applied to all ESXi hosts PRIOR to disconnecting them from the original vCenter Server and adding them to the new vCenter Server. Below are a few examples of how to apply the ESXi Advanced Setting, which should be set to a value of 1:
Here is an example using ESXCLI (locally or remotely) on an individual ESXi host:
esxcli system settings advanced set -o /VSAN/IgnoreClusterMemberListUpdates -i 1
Here is an example of using PowerCLI to apply the setting across all ESXi hosts if the original vCenter Server is still available:
Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
}
Here is an example of using PowerCLI to apply the setting directly to an ESXi host if the original vCenter Server is no longer available:
Get-VMHost -Name 192.168.1.100 | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
Once you have successfully added the ESXi host, you should see a warning within the VSAN Configuration page stating "Misconfiguration detected", which is expected. What is happening is that this ESXi host was configured in an existing VSAN Cluster, and the ESXi hosts it expects to be able to communicate with are not part of this VSAN Cluster. Once we add the remaining ESXi hosts, the VSAN Cluster will be happy and this error will go away.
Note: If you try to add all of the ESXi hosts from the existing VSAN Cluster to the new VSAN Cluster at once, you will see an error regarding a UUID mismatch. The trick is to add one host first; once that has been done, you can bulk add the remaining ESXi hosts without any issue. This is handy if you are trying to automate this process, as shown in the sketch below.
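Here is a minimal PowerCLI sketch of that ordering; the cluster name, host addresses and credentials are all placeholders you would substitute for your own environment:
# First add a single ESXi host on its own to avoid the UUID mismatch error
$newCluster = Get-Cluster -Name VSAN-Cluster
Add-VMHost -Name "192.168.1.100" -Location $newCluster -User "root" -Password "VMware1!" -Force
# Once the first host has been added, the remaining ESXi hosts can be bulk added
Foreach ($esx in @("192.168.1.101","192.168.1.102")) {
    Add-VMHost -Name $esx -Location $newCluster -User "root" -Password "VMware1!" -Force
}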
Step 3 - Add the remaining ESXi hosts to the VSAN Cluster in the new vCenter Server. Once all hosts have been added to the new VSAN Cluster, you will see the warning icons disappear, and your VSAN Cluster is now fully managed by the new vCenter Server. We can also confirm that there are no network partitions, as all of the original VSAN configurations have been retained on the ESXi hosts.
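If you wish to double-check this from the command-line as well, the following can be run on any of the ESXi hosts (a quick sketch; the expected count depends on your cluster size):
# The Sub-Cluster Member Count in the output should equal the total number of hosts
esxcli vsan cluster get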
UPDATE1 (05/02/17)
Step 4 - This last step is ONLY applicable to vSAN 6.6 hosts. Once all hosts have been successfully added to the new vCenter Server and you have verified that the cluster status is healthy and there are no network partitions, we need to update the ESXi Advanced Setting we had set earlier from a value of 1 back to a value of 0.
Here is a PowerCLI snippet which, given a vSAN Cluster, will automatically go through all ESXi hosts and update the setting:
Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 0 -Confirm:$false
}
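To verify the change took effect across the cluster, here is a quick read-back sketch using the same placeholder cluster name:
# Confirm the Advanced Setting is back to 0 on every host
Get-Cluster -Name VSAN-Cluster | Get-VMHost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Select-Object Entity, Value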
Disclaimer: As mentioned, there is no impact to the ESXi hosts (other than not being able to manage them while you disconnect and re-connect on the new vCenter Server), and there is no impact to the running Virtual Machines; any VM Storage Policies that have been applied to the VMs will still be enforced by each of the ESXi hosts. However, one thing to be aware of is that the VM Storage Policies in your original vCenter Server will not be available in the new vCenter Server. You will need to re-create each of the VM Storage Policies and re-attach them to the existing Virtual Machines. This can of course be automated by using the vSphere API or by leveraging the new PowerCLI 5.8 R1 release, which includes VM Storage Policy cmdlets.
Here is an example of exporting a VM Storage Policy named "FTT=1" to a file called policy.xml on your desktop:
Export-SpbmStoragePolicy -StoragePolicy (Get-SpbmStoragePolicy -Name FTT=1) -FilePath C:\Users\Administrator\Desktop\policy.xml
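For completeness, here is a sketch of importing that policy into the new vCenter Server and re-associating it with an existing Virtual Machine and its disks; the VM name MyVM is a placeholder:
# Import the previously exported policy into the new vCenter Server
$policy = Import-SpbmStoragePolicy -Name FTT=1 -FilePath C:\Users\Administrator\Desktop\policy.xml
# Re-attach the policy to the VM home object and each of its hard disks
Get-VM -Name MyVM | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-VM -Name MyVM | Get-HardDisk | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy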
Currently, this is the only impact of moving a VSAN Cluster from one vCenter Server to another, and of course this assumes you have created VM Storage Policies aside from the default policies.
I received a couple of questions regarding the networking setup for my VSAN Cluster. In the above example I was using a VSS (Virtual Standard Switch). I did, however, retest this scenario completely on a VDS (Virtual Distributed Switch) and the results were the same. When all ESXi hosts have been added to the new vCenter Server, you will see a warning about a proxy host switch. The key to properly migrating the networks (VMkernel & VM Portgroup) is to add each ESXi host to a new VDS that you will need to create. If your original vCenter Server is still available, you can export and import the VDS configuration. If it is not available, then you will need to manually re-create the Distributed Portgroups before proceeding.
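If you prefer to handle the VDS piece with PowerCLI instead of the UI, something along these lines should work, assuming the original vCenter Server is still reachable (the switch and datacenter names are placeholders):
# On the original vCenter Server: export the VDS configuration, including portgroups
Get-VDSwitch -Name VSAN-VDS | Export-VDSwitch -Destination C:\Users\Administrator\Desktop\vds-backup.zip
# On the new vCenter Server: re-create the VDS and its portgroups from the backup
New-VDSwitch -Name VSAN-VDS -Location (Get-Datacenter -Name Datacenter) -BackupPath C:\Users\Administrator\Desktop\vds-backup.zip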
The first step is to go to the Networking view, then right-click and select "Add and Manage Hosts".
Go ahead and walk through the guided wizard, making sure you only add one host at a time, as I saw issues when trying to add multiple hosts at once. Once the ESXi host has been added to the new VDS and its uplinks, VMkernel interfaces and VM Portgroups are all connected, you should see two VDS under the Networking view of the ESXi host under "Manage".
This can be seen clearly using the vSphere C# Client, as it allows you to view both on the same screen. Once you have confirmed that everything looks good, you can go ahead and remove the old VDS switch as shown in the screenshot above. At this point, your ESXi host's networking is now running on the new VDS. You will continue this same workflow for the remaining ESXi hosts until they have all been migrated over to the new VDS.
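If you would rather script the per-host migration than use the wizard, here is a hedged PowerCLI sketch of the same operation; the switch, portgroup, host and NIC names are all placeholders, and you should keep at least one uplink on the old switch until the migration is verified:
# Attach the host to the new VDS
$vds = Get-VDSwitch -Name VSAN-VDS
$esx = Get-VMHost -Name 192.168.1.101
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $esx
# Move one uplink together with the VMkernel interfaces in a single operation to avoid losing connectivity
$pnic = Get-VMHostNetworkAdapter -VMHost $esx -Physical -Name vmnic1
$vmks = Get-VMHostNetworkAdapter -VMHost $esx -VMKernel -Name vmk0,vmk1
$pgs = Get-VDPortgroup -VDSwitch $vds -Name "Management Network","VSAN Network"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnic -VMHostVirtualNic $vmks -VirtualNicPortgroup $pgs -Confirm:$false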
Victor Chen says
Hi William,
Thanks for your blog but I have a question. What type of virtual switch do you use? VSS or VDS?
According to my testing, there is a problem if you use a vDS. Of course, VSAN runs well in the new vCenter. But since the vDS was created in the old vCenter and the VSAN vmk was created on that vDS, you can't do any operation related to the vDS in the new vCenter.
For example, you cannot change the VSAN vmk IP address, cannot change the uplink NIC, cannot set traffic shaping, etc.
William Lam says
Victor,
My setup was using VSS; however, I just re-tested the scenario using VDS (VMkernel + VM Portgroup) and it does work. I've updated my article to provide the procedure that I used to migrate over. If you need to change the VSAN VMkernel IP, you should wait until the ESXi hosts have been migrated to the new VDS, or else change the IP prior to the move, but I wouldn't recommend performing that many changes on top of migrating to a new VDS.
shruthi says
Hello William,
Thank you for all the articles you write, which make most troubleshooting steps easier.
I wanted to know what would happen if we wanted to change the VLAN IDs of the vSAN and VM networks on the new vCenter.
Disconnecting and reconnecting a host to the new VC will have minimal downtime for the VMs. I wanted to know how changing the VLAN ID would impact the vSAN Datastores.
K. Chris Nakagaki (@Zsoldier) says
OMG... Did I see PowerCLI code in this post!?
Victor Prylipko says
Hi, William!
Thanks for your blog!
I have a question about moving hosts to a new vCenter server.
I have a vSAN cluster of 5 hosts. 3 of them contributed their disk groups to vSAN.
I want to move only these 3 hosts to the new vCenter.
I did as you wrote.
BUT!
On each host I got this error:
Found host(s) participating in the Virtual SAN service which is not a member of this host's vCenter cluster.
My question is: how do I remove the hosts that I am not moving to the new vCenter from the cluster configuration?
William Lam says
Hi Victor,
You cannot just migrate a subset of the 5 hosts that originally contributed storage to the VSAN Datastore. If you only wish to migrate 3, then you first need to ensure ALL data from the other two has been completely migrated off. This also assumes that the 3 hosts can run all of the workloads that were initially provisioned across the 5 hosts.
This is the reason you're seeing the message: VSAN knows that you've only migrated a portion of its capacity. If you delete or destroy the remaining two hosts, then you will negatively impact your environment.
Victor Prylipko says
William, yes, I already migrated all workloads.
I just need to reconfigure vSAN to FORGET about 2 of my UNNECESSARY hosts.
William Lam says
When you say "migrated all workloads", it's not just the VMs; ensure you have migrated the actual data off of the disks on those 2 hosts as well. Remember, data in VSAN is distributed across all hosts contributing storage. You can do this by placing each host into maintenance mode using the vSphere Web Client; you will have several options for the VSAN mode, one of which is full data migration. Once that's been done, you can then remove the hosts from the VSAN Cluster.
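For reference, here is a hedged PowerCLI one-liner that performs the same full evacuation while entering maintenance mode; the host name is a placeholder and the -VsanDataMigrationMode parameter requires a more recent PowerCLI release:
# Enter maintenance mode and perform a full evacuation of VSAN data from the host
Get-VMHost -Name 192.168.1.103 | Set-VMHost -State Maintenance -VsanDataMigrationMode Full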
Victor Prylipko says
Yes, I migrated ALL.
But, as I understand it, all 3 of the hosts that I am trying to connect to the NEW vCenter still REMEMBER the 2 other hosts, which were left on the OLD vCenter.
pnlucio says
Hi William,
I have the following scenario:
One vCenter Server managing several Clusters, including a vSAN-enabled Cluster. On that vSAN Cluster (3 hosts) I have deployed a new vCenter Server (VCSA + External Oracle DB). The objective is to migrate all hosts from all the clusters to this new vCenter Server and ditch the old vCenter Server.
The difference between my scenario and yours is that the new vCenter Server is actually residing on the vSAN Cluster that I intend to migrate first.
It seems to me that the operation will be peaceful, but I just want to make sure that I am not overlooking any detail.
Thank you for your comments,
Regards,
Pedro.
William Lam says
Pedro,
Yep, this will work perfectly fine. Once VSAN has been configured, it'll continue to work with or without VC (similar to HA); of course, you won't be able to make any configuration changes until you have access back to VC.
Govind says
Hi William,
Is it advisable to change the vSAN VMkernel IP address in an existing configuration? Currently we are using the management & VSAN networks on different vSwitches with the same VLAN. Planning to change the vSAN network to a different VLAN.
HPsenicka says
Hi William!
Would there be any additional considerations when moving a VSAN cluster from vCenter 5.5 to vCenter 6?
I am imagining that any additional complications should only surface post-move, when I decide to upgrade to VSAN 6. Is that correct?
Paul Sheard says
Hi William, how easy/hard is it to change an existing set of VSAN VMkernel IP addresses? I have a 6-node cluster and want to change the last octet of each vSAN VMkernel to a slightly lower number. The subnet, gateway and VLAN all stay the same. Have you tried this? Thanks, Paul.
Joe Peterson says
Greetings William,
Great and very informative site!! So far, I've learned a ton - Thank you!!
I have a few questions about migrating VMs from one datacenter to another. We're relocating our datacenter to another site that we already have. One little twist: we're planning on upgrading our current vSphere 5.1 to 5.5 U3 on the new hardware. Also, we're not using vSAN. We're strictly a Fibre Channel shop.
Should I move our current vSphere cluster first and then do the upgrade? The pipe between the locations is only 100Mb.
Please let me know what else you need on my end before you can give me a recommendation.
Thank you,
Joe Peterson
Nick Stefanisko says
I've got an interesting problem WRT VSAN. I have a host that was in a VSAN cluster; it did not contribute any disk space to the cluster, it was just a consumer of the VSAN. I moved that host to a different vCenter, but the host still thinks it should have access to the VSAN in its old cluster. How do I tell it that the VSAN is no longer available?
Daniel Langenhan says
Awesome work. Here are some updates concerning Stretched Clusters (vSphere 6):
Move Stretched Cluster
• Unconfigure Stretched Cluster
• Move hosts one after another
• Move Witness
• (remove the Witness from the cluster manually if it didn't work)
esxcli vsan cluster leave
• Remove disk groups from the Witness
esxcli vsan storage remove -s mpx.vmhba1:C0:T1:L0
• Reconfigure Stretched Cluster
Joe says
Can this be done going from vCenter 5.5 to vCenter 6.5?
Shaheer Shamsi says
This procedure doesn't work for ESXi 6.0 servers connected to a LAG on the source vCenter. The ESXi server goes into a PSOD state when attempting to re-add it to the LAG on the destination vCenter's ESXi Management Port Group. BTW, the DVS was exported from the source --> destination vCenter with "Preserve Port Group Settings" prior to connecting the hypervisors to the destination vCenter.
William Lam says
Can you try again but disable "Preserve Port Group Settings" when importing the VDS to new VC and see if that helps?
Shamz says
It did not PSOD this time. However, it did not work as expected and left one of the two vmnics in a "Link Down" state. I had to recover from the situation and connect back to the old vCenter by following the steps in this article --> http://sostechblog.com/2014/01/31/moving-a-vmnic-from-vds-to-vss-at-the-host-commandline/
then reset the Standard Switch, reboot, and reconfigure the VMkernel interfaces for vMotion and VSAN. I believe LACP is causing this behaviour. Have you tested with LACP in your lab setup?
bigboss77 says
I have also tested using a vDS and LACP, and on the first try I got a PSOD. On the second try I followed the same suggestion, but afterwards I had to manually configure the uplinks again, which caused a vDS downtime (and everything connected to it) of about 10 minutes.
Shamz says
So, I believe the safest method is to migrate the uplinks from LACP to a VSS on the source vCenter > add the host to the destination vCenter > migrate to the DVS on the destination vCenter? I couldn't figure out any other possible way. If you have, please chime in.
Shamz says
Instead of exporting and importing the DVS from the source to the destination vCenter, create a new DVS with the same port groups as the source vCenter and connect your hypervisors to the destination VC. This is applicable for hosts configured with LACP on a DVS.
reddy08 says
Hi Shamz,
How did you get your ESXi hosts on a VDS with LACP over to a VSS? Curious to know, as I am in the same situation in my current setup.
Motive: to migrate the ESXi hosts running on a VDS with LACP to a new vCenter.
Martin Meinhardt says
Hi William,
is it also possible to move a vSAN 6U3 stretched cluster (4 hosts) from an existing vCSA 6U3 to a newly deployed vCSA 6.5.0b (which is configured with new external load-balanced PSCs on 6.5.0b)? Long story short, we can't repoint the existing vCSA to the new PSCs...
Best regards,
Martin
Matt Z says
I was able to move a 6.0U3 two-node cluster from a 6.0 vCenter to a 6.5 vCenter just now, without disrupting my running VMs. I hit a snag with the witness, however. Working on that right now. Might have to deploy a new one.
Matt Z says
Replying to my own comment: I missed Daniel Langenhan's comment above. That would have avoided the snag I hit. For now I will just deploy a new witness.
Rommel Humarang, VCAP5-DCD, VCP-Cloud, Linux+, Security+ says
Thanks William! I used this method with my customer last night. We migrated a production vSAN cluster to a new vCenter Server and it worked like a charm. All VMkernels and VMs are on a VDS, and there was no need to perform the classic VSS-to-VDS migration. The VMs never lost connectivity. There could be a bug with vSAN VMkernel migration to the vDS, though: after each ESXi host migration, the vSAN VMkernel just gets partitioned. The workaround is to disable and immediately re-enable the vSAN VMkernel after each move.
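For reference, the esxcli equivalent of that disable/re-enable workaround would be something along these lines; vmk1 is a placeholder for the vSAN VMkernel interface, and the exact sub-commands may vary by release:
# Remove vSAN traffic from the VMkernel interface, then immediately re-enable it
esxcli vsan network remove -i vmk1
esxcli vsan network ip add -i vmk1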
vasil says
I have a similar problem with vSAN 6.6. After reconnecting all cluster nodes with the IgnoreClusterMemberListUpdates = 1 parameter to the new cluster on the new vCenter, everything looks fine: the esxcli vsan cluster unicastagent list command displays the correct node list on all nodes. After switching the parameter back to IgnoreClusterMemberListUpdates = 0, everything also looks good. But after any vSAN configuration operation through vCenter (adding disks or nodes, reconfiguring or adding a vSAN VMkernel), the unicastagent list is lost. The output of the esxcli vsan cluster unicastagent list command on all nodes becomes empty, all nodes (disk groups) become partitioned, and all cluster nodes get MASTER status.
Only by manually entering commands like esxcli vsan cluster unicastagent add -a 172.16.90.234 -i vmk1 -u 58fb605d-1415-bcaf-8cd7-fc15b40bb348 -U 1 -t node ... on all nodes, and by switching the IgnoreClusterMemberListUpdates parameter back to 1, can you bring the cluster back to life. However, any changes regarding the network or the addition/removal of VSAN cluster nodes must then be performed manually (esxcli/PowerCLI).
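For example, a recovery sketch along these lines has to be repeated on every node, with the IPs and UUIDs of the other members (the second entry here uses placeholder values):
# Re-add a unicast agent entry for each of the other cluster members on this node
esxcli vsan cluster unicastagent add -a 172.16.90.234 -i vmk1 -u 58fb605d-1415-bcaf-8cd7-fc15b40bb348 -U 1 -t node
esxcli vsan cluster unicastagent add -a <ip-of-next-node> -i vmk1 -u <uuid-of-next-node> -U 1 -t node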
Doc McGee says
Did your customer use LAGs?
locca says
Hi William,
Any insight if deduplication and encryption are enabled on the vSAN cluster to be migrated? Deduplication and encryption need to be re-enabled on the new cluster in the new VC; will the deduplication and encryption rolling reformat happen again?
Thanks.
Bill Dossett says
This is exactly the article I was looking for... however, I am having issues. I am on vSphere 6.5. There is a problem with my vCenter appliance; it is still working, but I cannot storage migrate it, clone it, or back it up and restore it. I am out of ideas, so I want to create a new VCSA and migrate to it. I am using all VDS. When I try to deploy to a host and get to configuring the network settings, there are no networks listed. The help says that when ESXi is used as the deployment target, non-ephemeral distributed virtual portgroups are not supported, and after filling out all the network info and pressing next, I get "No Networks on the host, cannot proceed". I guess I will try to deploy to the vCenter, but from the article I thought I would be able to do this without the vCenter. I would greatly appreciate any help; I have been struggling to get this all resolved for over a week now, with many sleepless nights!
kees says
Hi, is moving the management vmk from the DVS to a VSS needed?
Shamz says
If you are using LACP for the physical uplinks, then it's required to migrate the uplinks off the DVS to a VSS on the source vCenter before joining the new vCenter and re-migrating to the DVS there.
Jignesh says
Hi,
I am moving a vSAN cluster with LACP; I know everyone has tried different methods here. I would like to know if this is what's required: move from LACP to VSS, reconfigure the physical switch ports before moving the physical NICs to the VSS, then connect to the new VC, move from VSS to VDS, and then re-create the LACP configuration. So I assume I need to work closely with the network team when doing this? Or has anyone tested another method? Please help.
Regards
Jignesh
Bryan says
I have referenced this article several times over the years when migrating my clusters to new vCenters. One thing I wanted to note is what happened when I did not select "Preserve Port Group Settings" while exporting the VDS: when I went to add the hosts to the new VDS and migrated the VMkernel ports, it would drop my VMs off the network unless I also selected migrate virtual machine networking. Looking at a VM's NIC would show an invalid backing.
By preserving the port group settings, I was able to bypass the step of migrating the virtual machines' networking; adding a host to the VDS then just migrated the VMkernel ports. This saves a lot of time when migrating several hundred VMs to a new vCenter.
renen says
Hi William, do you have a step-by-step guide for changing the management IP address of an ESXi host and the VCSA in a VSAN-enabled cluster? Would appreciate a guide... thanks
William Dossett says
Sorry, I do not... to be honest, I can't even remember if I did it or not; my VSAN cluster has been running under the current VCSA for nearly two years now... and my memory is bad!
Jeff Creek says
Will this work for moving hosts from vCenter 6.7 to 7.0?
William Lam says
yup, concept should apply but do test 🙂
Shalom Freitas says
Hi William,
Thanks for the awesome article. My vCenter 6.7 stopped working and I had to deploy a new one, so the old one is not available. I have deployed a new vCenter and a new Datacenter cluster with vSAN enabled.
It is not clear to me whether I have to copy my Virtual Standard Switch over to the new cluster before I migrate each ESXi host, or if that only applies to environments using a VDS (Virtual Distributed Switch).
Also, is there a way for me to get the current Storage Policies considering the old vCenter is unavailable? I am almost sure they are default policies, but I am not 100% sure. If the default policies on the new cluster do not match the existing ones, will that be a problem?
Thank you in advance for your help.