
vMotion across different VDS version between onPrem and VMC

09.19.2018 by William Lam // 21 Comments

For those of you who have attempted a vMotion, whether within a vCenter Server or between different vCenter Servers (including across SSO Domains): if the VM is running on a Distributed Virtual Switch (VDS) and the version of the VDS is not the same between the source and destination (e.g. VDS 6.5 to VDS 6.7), the operation will fail with the following error message (in both the UI and API):

Currently connected network interface 'Network adapter 1' cannot use network 'DVPG-VM-Network (VDS-67)', because the destination distributed switch has a different version or vendor than the source distributed switch.
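
If you want to quickly confirm whether the source and destination switch versions (or vendors) differ, a minimal PowerCLI sketch like the following can help; the hostname and credentials are placeholders for your own environment:

# List each Distributed Virtual Switch along with its version and vendor
Connect-VIServer -Server mgmt-vcsa-03 -User administrator@vsphere.local -Password VMware1!
Get-VDSwitch | Select-Object Name, Version, @{N="Vendor";E={$_.ExtensionData.Summary.ProductInfo.Vendor}}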


This behavior is no different on VMware Cloud on AWS (VMC), or at least I thought it was, until I recently learned about a really neat feature that was introduced in the VMC 1.4p2 release. Here is a snippet from the release notes:

Cross VDS version vMotion Compatibility
With this advanced configuration option enabled, bi-directional vMotion between on-premises and VMware Cloud on AWS can be achieved across different virtual distributed switch (VDS) versions (greater than or equal to version 6.0). This must be enabled on the on-premises vCenter.

It turns out there is actually a way to allow vMotions across different VDS versions. This is important for VMC because the software stack will always be using a newer version than what we ship to our onPrem customers. However, due to this limitation, we could not benefit from the latest VDS version and had to default to VDS 6.0 to ensure that customers could migrate their workloads. The advanced setting mentioned in the release notes disables the strict compatibility check that is performed on the destination vCenter Server when a vMotion is initiated. This setting is now enabled by default on the VMC vCenter Server, which is why you can perform migrations across different VDS versions without having to do anything special on your onPrem vCenter Server.

UPDATE (11/07/21) - Thanks to Robert Cranedonk, it looks like you can now also vMotion across different NSX-T Logical Switches by adding a vCenter advanced setting called config.vpxd.network.allowVmotionBetweenLogicalSwitches and setting the value to true.

UPDATE (01/02/21) - If you are running vSphere 7.x, an additional advanced setting called config.vmprov.enableHybridMode must be configured with the value set to true. For more details, you can refer to VMware KB 79446. Thanks to reader Marc Alumbaugh for sharing this finding!

UPDATE (10/16/18) - With the release of vSphere 6.7 Update 1, customers can now also vMotion VMs from on-prem running on a VDS to VMC with NSX-T N-VDS.

To allow vMotions to go in the other direction (VMC to onPrem), the onPrem vCenter Server needs to be configured with an advanced setting, which can be applied non-disruptively. What the release notes do not go into is the actual process. If you wish to enable migrations back to onPrem, simply log in to your vCenter Server and go to Advanced Settings as shown in the screenshot below. You will then need to add the property config.migrate.test.NetworksCompatibleOption.AllowMismatchedDVSwitchConfig, set the value to true and click OK to save.
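
If you prefer to apply this setting with PowerCLI instead of the UI, here is a minimal sketch; the hostname and credentials are placeholders for your own onPrem vCenter Server:

# Connect to the onPrem (destination) vCenter Server
$vc = Connect-VIServer -Server mgmt-vcsa-04 -User administrator@vsphere.local -Password VMware1!

# Add the advanced setting which relaxes the strict VDS compatibility check
New-AdvancedSetting -Entity $vc -Name "config.migrate.test.NetworksCompatibleOption.AllowMismatchedDVSwitchConfig" -Value "true" -Confirm:$false

# The NSX-T Logical Switch setting from the update above follows the same pattern:
# New-AdvancedSetting -Entity $vc -Name "config.vpxd.network.allowVmotionBetweenLogicalSwitches" -Value "true" -Confirm:$false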


Once this is done, you will be able to perform vMotions bi-directionally between VMC and your onPrem environment. Pretty cool!? This actually got me thinking: if this setting allows the destination vCenter Server to relax this compatibility check, could it also apply to vMotions in a pure onPrem to onPrem deployment?

It turns out the answer is yes. In fact, the screenshot above is actually a failed vMotion of a VM from a source vCenter Server running VDS 6.5 to a destination vCenter Server running VDS 6.7. After applying the setting on my destination vCenter Server (6.7), I was able to successfully perform the vMotion using this quick PowerCLI snippet (more details here), and as you can see from the screenshot below, the migration was successful!

# Connect to both the source and destination vCenter Servers
$sourceVC = Connect-VIServer -Server mgmt-vcsa-03 -User *protected email* -Password VMware1!
$targetVC = Connect-VIServer -Server mgmt-vcsa-04 -User *protected email* -Password VMware1!

# Destination host, datastore and portgroup, plus the VM to migrate
$targetVmhost = "vesxi-05.cpbu.corp"
$targetDatastore = "vsanDatastore"
$targetNetwork = "DVPG-VM-Network"
$sourceVM = "TinyVM-01"

# Perform the Cross vCenter vMotion
Move-VM -VM (Get-VM -Server $sourceVC -Name $sourceVM) -VMotionPriority High `
    -Destination (Get-VMHost -Server $targetVC -Name $targetVmhost) `
    -Datastore (Get-Datastore -Server $targetVC -Name $targetDatastore) `
    -Portgroup (Get-VDPortgroup -Server $targetVC -Name $targetNetwork)


Before you go and start enabling this feature, there are a few things to be aware of:

  • Your onPrem environment must be running vSphere 6.0 Update 3, vSphere 6.5 Update 2, or vSphere 6.7+ (this includes the ESXi host version). Customers with NSX-V will need to be running at least NSX-V 6.3.6 or greater.
  • When performing a vMotion across different VDS versions, we are only migrating the port of the VM and not the actual switch configuration where the VM resides. This means switch-level configurations like NIOC and IPFIX, or NSX-V features like the Distributed Firewall (DFW), will NOT be migrated to the destination vCenter Server. If you are relying on these features, you will need to ensure they have been configured on the destination vCenter Server prior to migration. This behavior is the same whether you are going from onPrem to VMC or onPrem to onPrem.
  • For vSphere 7.x environments, you will also need to configure config.vmprov.enableHybridMode = true as mentioned in the note above; see the sketch after this list.
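
For vSphere 7.x, here is a minimal sketch of enabling hybrid mode via PowerCLI; the server names and credentials are placeholders, and the setting is applied to both vCenter Servers since readers in the comments report it is needed on both sides:

# Enable hybrid mode (VMware KB 79446) on both source and destination vCenter Servers
foreach ($server in @("mgmt-vcsa-03", "mgmt-vcsa-04")) {
    $vc = Connect-VIServer -Server $server -User administrator@vsphere.local -Password VMware1!
    New-AdvancedSetting -Entity $vc -Name "config.vmprov.enableHybridMode" -Value "true" -Confirm:$false
}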

In general, we recommend customers vMotion between the same VDS version. I think the biggest benefit of this capability is for customers who wish to migrate their workloads from older vSphere deployments (6.0 and 6.5) directly to a fresh vCenter Server, which probably has refreshed hardware. After the workloads have been migrated, they can simply decommission the old vCenter Server and not have to worry about upgrades, which is something I heard quite a few times during my conversations at VMworld.

It's really nice to see that we were able to solve a particular challenge in VMC and that the results also directly benefit our onPrem customers!


Categories // Automation, NSX, VMware Cloud on AWS, vSphere Tags // Cross vMotion, ExVC-vMotion, NSX, vmotion, VMware Cloud on AWS, xVC-vMotion

Comments

  1. amir54 says

    09/19/2018 at 1:57 pm

    like.

  2. Jonas says

    09/19/2018 at 8:54 pm

    Thanks!

    Any idea if this setting will be or is officially supported in a pure on-premise environment?
    Would like to use this in a pure on-premise environment when refreshing hardware, but assume it is not currently supported as it is not available in any documentation or KB.

    • William Lam says

      09/20/2018 at 4:39 am

      Jonas, my understanding is that this is supported. This is the same configuration we use on the VMC vCenter Server to allow customers to migrate across different VDS versions from onPrem to VMC; the only difference is that the VMC vCenter Server has this enabled by default.

      • Jonas says

        09/20/2018 at 5:24 pm

        Thank you, will keep this in mind during future hardware refresh projects in client environments.

  3. Kyle McDonald says

    09/21/2018 at 1:15 am

    I've just used this as an alternative to the options presented in https://kb.vmware.com/s/article/2126851 while migrating VMs to a new on-prem VCSA. Thanks!

  4. Dag Kvello says

    09/21/2018 at 1:53 am

    How I wish I had known this a year ago 🙁

  5. Florin Mircea says

    10/24/2018 at 2:09 pm

    Great!!

    Question: Is it supported for vDS 5.5 under VCSA 6.5 U2? We migrated to vCSA 6.5 (new deployment), but we imported the old VDS (5.5), which also had a different Vendor ID (VMware instead of VMware, Inc.). We are building a new infrastructure (new clusters, new hosts, on a new VDS), but had issues with the online migration due to the different vDS version + different Vendor ID. We applied workarounds (migrating first to a vSS, then to the VDS, as we don't really want to risk the upgrade of the current vDS 5.5).

    Thank you !

  6. Jason Chen says

    04/11/2019 at 2:43 am

    This is a great article.

    One question: can the vMotion network run on a VSS instead of a VDS?

  7. Derek Charleston says

    06/27/2019 at 6:48 am

    I have used the suggested Advanced configuration settings on my on-prem vSphere 6.5 with vDS 6.5 and vSphere 6.7 with vDS version 6.6, and it works without any problem. I must thank you for this article, which allowed me to forgo migrating my vDS configuration to a downgraded version. You are the MAN.

  8. Mathiau says

    11/08/2019 at 9:29 am

    Great info. Can this only be done via PowerCLI once enabled? Or can the GUI be used?

  9. Rob Dowling says

    04/24/2020 at 5:25 am

    Hey William. I'm migrating a lot of VMs from a Nexus 1000v switch to a VCF on VxRail cluster and needed to use this cheat to allow the Cross vCenter vMotions to work. The VMs migrate almost perfectly, except that the vNIC is disconnected. Once I edit settings and click the checkbox to connect the NIC, all is fine.
    I have a case open with Dell/VMware but am waiting to hear from them.
    Do I have a RARP issue that is being masked by me using this setting and ignoring the incompatibility between vDS and N1000v?

  10. Marc Alumbaugh says

    01/02/2021 at 10:02 am

    For vSphere 7 use this:
    I was trying to vMotion from one cluster running 7.x with NSX-T on vDS 7.0 to a host running 6.5.x with vDS 6.5.
    config.vmprov.enableHybridMode = true
    See https://kb.vmware.com/s/article/79446 for more details.

    • William Lam says

      01/02/2021 at 2:38 pm

      Thanks Marc, I was not aware of this new setting and have just updated the article with a note about this setting and the KB.

  11. Jeff Kowalenko says

    04/15/2022 at 3:30 pm

    These advanced settings appear to no longer work with the 7.0.3 vDS and vCenter 7 Update 3. I am trying to migrate between a 7.0.3 vDS and a 6.5.0 vDS and the vMotion shows as incompatible. I created a support case 22321446104 and VMware has been able to replicate it.

    • Toert ChenKid says

      06/15/2022 at 4:04 am

      Same here. Migrating an online VM from vCSA 7.0.1 Build 17491160 with a DVS 6.6.0 to vCSA 7.0.1 Build 19717403 with a DVS 7.0.0 is not working. I got the same message, and the advanced settings from https://kb.vmware.com/s/article/79446 do not have any impact.

      • Toert ChenKid says

        06/15/2022 at 4:05 am

        sorry, 7.0.3 Build:19717403

        • Rob Dowling says

          06/21/2022 at 3:12 am

          I'm hitting this issue now too and am putting a call into VMware, but am I wasting my time? Did you find any solution? Is it even possible to upgrade the vDS version without upgrading the nodes too? The vDS is a vCenter component, after all.

          • Toert ChenKid says

            06/23/2022 at 1:07 am

            After a call with a VMware tech guy, I will upgrade DVS 6.6.0 to 7.0.0.
            There is a bug in DVS 7, something with a string that is published to the lower DVS version 6.6.0 as a wrong value. It will be fixed with vCSA 8.

  12. Toert ChenKid says

    06/23/2022 at 4:17 am

    Hm, now it's working. I've added
    config.vmprov.enableHybridMode on both! vCSAs and now it's migrating.
    I think VMware should write that in their article.

    • Rob Dowling says

      06/23/2022 at 5:48 am

      Yeah, I have reported it as working also. The KB is written very badly. In the reference section it says that migrating from 7.0.3 back to an older release is the problem.

      • Rob Dowling says

        06/23/2022 at 5:51 am

        Actually, they seem to have changed the KB, as the original message that it does not work with 7.0.3 has been removed, and the reference section now lists the issue as moving from 7.0.3 to an older release. I can confirm that this is an issue.

