For those of you who have attempted a vMotion, whether within a vCenter Server or between different vCenter Servers (including across SSO Domains): if the VM is running on a Distributed Virtual Switch (VDS) and the VDS version differs between the source and destination (e.g. VDS 6.5 to VDS 6.7), the operation will fail with the following error message (in both the UI and API):
Currently connected network interface 'Network adapter 1' cannot use network 'DVPG-VM-Network (VDS-67)', because the destination distributed switch has a different version or vendor than the source distributed switch.
This behavior is no different on VMware Cloud on AWS (VMC), or at least I thought it was, until I recently learned about a really neat feature that was introduced in the VMC 1.4p2 release. Here is a snippet from the release notes:
Cross VDS version vMotion Compatibility
With this advanced configuration option enabled, bi-directional vMotion between on-premises and VMware Cloud on AWS can be achieved across different virtual distributed switch (VDS) versions (greater than or equal to version 6.0). This must be enabled on the on-premises vCenter.
It turns out there is actually a way to allow vMotions across different VDS versions. This is important for VMC because the software stack will always be using a newer version than what we ship to our onPrem customers. However, due to this limitation, we could not take advantage of the latest VDS version and instead had to default to VDS 6.0 to ensure that customers could migrate their workloads. The advanced setting mentioned in the release notes disables the strict compatibility check that is performed on the destination vCenter Server when a vMotion is initiated. This setting is now enabled by default on the VMC vCenter Server, which is why you can perform migrations across different VDS versions without having to do anything special on your onPrem vCenter Server.
UPDATE (11/07/21) - Thanks to Robert Cranedonk, it looks like you can now also vMotion across different NSX-T Logical Switches by adding a vCenter advanced setting called config.vpxd.network.allowVmotionBetweenLogicalSwitches and setting the value to true.
UPDATE (01/02/21) - If you are running vSphere 7.x, an additional advanced setting called config.vmprov.enableHybridMode must be configured with the value set to true. For more details, refer to VMware KB 79446. Thanks to reader Marc Alumbaugh for sharing this finding!
UPDATE (10/16/18) - With the release of vSphere 6.7 Update 1, customers can now also vMotion VMs running on a VDS from on-prem to VMC with NSX-T N-VDS.
To allow vMotions to go the other direction (VMC to onPrem), the onPrem vCenter Server needs to be configured with an advanced setting, which can be applied non-disruptively. What the release notes do not go into detail about is the actual process. If you wish to enable migrations back to onPrem, simply log in to your vCenter Server and go to the Advanced Settings as shown in the screenshot below. You will then need to add the following property: config.migrate.test.NetworksCompatibleOption.AllowMismatchedDVSwitchConfig, set the value to true, and click OK to save.
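For those who prefer to script it, below is a minimal PowerCLI sketch that applies the same setting. The New-AdvancedSetting cmdlet is standard PowerCLI; the server name and credentials are placeholders you would substitute for your own environment.

# Connect to the DESTINATION vCenter Server (placeholder hostname and credentials)
$vc = Connect-VIServer -Server vcsa.yourdomain.local -User administrator@vsphere.local -Password VMware1!

# Create the vCenter Server level advanced setting that relaxes the
# strict VDS compatibility check performed during vMotion
New-AdvancedSetting -Entity $vc -Name "config.migrate.test.NetworksCompatibleOption.AllowMismatchedDVSwitchConfig" -Value "true" -Confirm:$false

# The same pattern should apply to the other settings mentioned in the updates above:
#   config.vpxd.network.allowVmotionBetweenLogicalSwitches = true  (NSX-T Logical Switches)
#   config.vmprov.enableHybridMode = true                          (vSphere 7.x, see KB 79446)

If the setting already exists, you can pipe Get-AdvancedSetting to Set-AdvancedSetting to update the value instead.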
Once this is done, you will be able to perform vMotions bi-directionally between VMC and your onPrem environment. Pretty cool!? This actually got me thinking: if this setting allows the destination vCenter Server to relax this compatibility check, could it also apply to vMotions in a pure onPrem to onPrem deployment?
It turns out the answer is yes. In fact, the screenshot above is actually a failed vMotion of a VM from a source vCenter Server running VDS 6.5 to a vCenter Server running VDS 6.7. After applying the setting on my destination vCenter Server (6.7), I was able to successfully perform the vMotion using this quick PowerCLI snippet (more details here), and as you can see from the screenshot below, the migration was successful!
$sourceVC = Connect-VIServer -Server mgmt-vcsa-03 -User *protected email* -Password VMware1!
$targetVC = Connect-VIServer -Server mgmt-vcsa-04 -User *protected email* -Password VMware1!

$targetVmhost = "vesxi-05.cpbu.corp"
$targetDatastore = "vsanDatastore"
$targetNetwork = "DVPG-VM-Network"
$sourceVM = "TinyVM-01"

Move-VM -VM (Get-VM -Server $sourceVC -Name $sourceVM) -VMotionPriority High `
    -Destination (Get-VMHost -Server $targetVC -Name $targetVmhost) `
    -Datastore (Get-Datastore -Server $targetVC -Name $targetDatastore) `
    -Portgroup (Get-VDPortgroup -Server $targetVC -Name $targetNetwork)
Before you go and start enabling this feature, there are a few things to be aware of:
- Your onPrem vCenter Server (including the ESXi host version) must be running vSphere 6.0 Update 3, vSphere 6.5 Update 2 or vSphere 6.7+. Customers with NSX-V will need to be running at least NSX-V 6.3.6 or greater
- When performing a vMotion across different VDS versions, we are only migrating the VM's port, not the switch configuration of the VDS the VM resides on. This means switch-level configurations like NIOC and IPFIX, or NSX-V features like the Distributed Firewall (DFW), will NOT be migrated to the destination vCenter Server. If you rely on these features, you will need to ensure they have been configured on the destination vCenter Server prior to migration. This behavior is the same whether you are going from onPrem to VMC or from onPrem to onPrem
- For vSphere 7.x environments, you will also need to configure config.vmprov.enableHybridMode = true, as mentioned in the update note above
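Before enabling the setting, it can also be useful to confirm exactly which VDS version (and vendor) each side is running. Here is a quick PowerCLI sketch, assuming the $sourceVC and $targetVC connections from the earlier snippet:

# List each distributed switch with its version and vendor on both vCenter Servers
Get-VDSwitch -Server $sourceVC | Select-Object Name, Version, Vendor
Get-VDSwitch -Server $targetVC | Select-Object Name, Version, Vendor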
In general, we recommend that customers vMotion between the same VDS version. I think the biggest benefit of this capability is for customers who wish to migrate their workloads from older vSphere deployments (6.0 and 6.5) directly to a fresh vCenter Server, which probably has refreshed hardware. After the workloads have been migrated, they can simply decommission the old vCenter Server and not have to worry about upgrades, which is something I heard quite a few times during my conversations at VMworld.
It's really nice to see that we were able to solve a particular challenge in VMC and that the results also directly benefit our onPrem customers!
amir54 says
like.
Jonas says
Thanks!
Any idea if this setting will be or is officially supported in a pure on-premise environment?
Would like to use this in a pure on-premise environment when refreshing hardware, but assume it is not currently supported as it is not available in any documentation or KB.
William Lam says
Jonas, my understanding is that this is supported. This is the same configuration we use on VMC's vCenter Server to allow customers to migrate between different VDS versions from onPrem to VMC; the only difference is that VMC's vCenter Server has this enabled by default.
Jonas says
Thank you, will keep this in mind during future hardware refresh projects in client environments.
Kyle McDonald says
I've just used this as an alternative to the options presented in https://kb.vmware.com/s/article/2126851 while migrating VMs to a new on-prem VCSA. Thanks!
Dag Kvello says
How I wish I had known this a year ago 🙁
Florin Mircea says
Great!!
Question: Is it supported for vDS 5.5 under VCSA 6.5 U2? We migrated to VCSA 6.5 (new deployment), where we imported the old VDS (5.5), which also had a different Vendor ID (VMware instead of VMware, Inc.). We are building a new infrastructure (new clusters, new hosts, in a new VDS), but had issues with the online migration due to the different vDS version + different Vendor ID. We applied workarounds (migrating first to vSS, then to VDS, as we don't really want to risk upgrading the current vDS 5.5).
Thank you!
Jason Chen says
This is a great article.
One question: can the vMotion network run on a VSS instead of a VDS?
Derek Charleston says
I have used the suggested settings in the Advanced configuration on my on-prem vSphere 6.5 (vDS 6.5) and vSphere 6.7 (vDS 6.6) and it works without any problem. I must thank you for this article, which allowed me to forgo migrating my vDS configuration to a downgraded version. You are the MAN.
Mathiau says
Great info. Can this only be done via PowerCLI once enabled? Or can the GUI be used?
Rob Dowling says
Hey William. I'm migrating a lot of VMs from a Nexus 1000v switch to a VCF on VxRail cluster and needed to use this cheat to allow the cross vCenter vMotions to work. The VMs migrate almost perfectly except that the vNIC is disconnected. Once I edit settings and click the checkbox to connect the NIC, all is fine.
I have a case open with Dell/VMware but waiting to hear from them.
Do I have a RARP issue that is being masked by me using this setting and ignoring the incompatibility between vDS and N1000v?
Marc Alumbaugh says
For vSphere 7, use this:
I was trying to vMotion from one cluster running 7.x with NSX-T on vDS 7.0 to a host running 6.5.x with vDS 6.5.
config.vmprov.enableHybridMode = true
See https://kb.vmware.com/s/article/79446 for more details.
William Lam says
Thanks Marc, I was not aware of this new setting and I have just updated the article with a note about this setting and the KB.
Jeff Kowalenko says
These advanced settings appear to no longer work with the 7.0.3 vDS and vCenter 7 Update 3. I am trying to migrate between a 7.0.3 vDS and a 6.5.0 vDS and the vMotion shows as incompatible. I created support case 22321446104 and VMware has been able to replicate it.
Toert ChenKid says
Same here: migrating a running VM from VCSA 7.0.1 Build 17491160 with a DVS 6.6.0 to VCSA 7.0.1 Build 19717403 with a DVS 7.0.0 is not working. I got the same message, and the advanced settings from https://kb.vmware.com/s/article/79446 do not have any impact.
Toert ChenKid says
Sorry, that should be 7.0.3 Build 19717403.
Rob Dowling says
I'm hitting this issue now too and putting a call into VMware, but am I wasting my time? Did you find any solution? Is it even possible to upgrade the vDS version without upgrading the nodes too? The vDS is a vCenter component after all.
Toert ChenKid says
After a call with a VMware tech guy, I will upgrade DVS 6.6.0 to 7.0.0.
There is a bug in DVS 7, something with a string that is published to the lower DVS version 6.6.0 as a wrong value. It will be fixed with VCSA 8.
Toert ChenKid says
Hm, now it's working. I've added config.vmprov.enableHybridMode on both(!) VCSAs and now it's migrating.
I think VMware should have written that in their article.
Rob Dowling says
Yeah, I have reported it as working also. The KB is written very badly. In the reference section it says that migrating from 7.0.3 back to an older release is the problem.
Rob Dowling says
Actually, they seem to have changed the KB, as the original message that it does not work with 7.0.3 has been removed, and the reference section now lists the issue as moving from 7.0.3 to an older release. I can confirm that this is an issue.