Moving ESXi hosts with LACP/LAG between vCenter Servers?

11.09.2017 by William Lam // 11 Comments

At VMworld this year, I received several questions from customers asking whether it is possible to move an ESXi host configured with LACP/LAG from one vCenter Server to another, similar to the workflows outlined here or here. Not having spent much time with LACP/LAG, I reached out to a fellow colleague who I knew would have the answer, Anthony Burke, whom you may know as one of the co-creators of the popular automation tool PowerNSX.

Anthony not only confirmed that there is indeed a workflow for this scenario, he was also kind enough to test and verify it in his lab. Below is the procedure he shared with me; I merely "prettified" the graphics he initially drafted 🙂

At a high level, the workflow is similar to the ones shared earlier. The main difference is that for an LACP/LAG-based configuration, you must convert from VDS to VSS before disconnecting from one vCenter Server and connecting to the other; you cannot simply disconnect and "swing" the ESXi host as you could with a non-LACP/LAG configuration, or you will run into issues. Once you have re-added the ESXi host to the new vCenter Server, you simply reverse the procedure, going from VSS back to VDS, and re-create your LACP/LAG configuration.

Step 1 - Here is an example of a starting point, where we have an ESXi host with two pNICs (vmnic0 and vmnic1) connected to an LACP bundle, which is in turn associated with a physical switch.


Step 2 - Remove vmnic1 from the LACP/LAG configuration and then create a new VSS to associate it with. To allow existing connections to drain gracefully, place the pNIC into standby first rather than removing it outright, which would terminate all existing flows.
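For those who prefer to script this, here is a minimal PowerCLI sketch of this step. The vCenter, host, and switch names are all assumptions for illustration, and the standby change is performed beforehand in the teaming policy as described above. In this sketch the VSS is created empty; vmnic1 is attached in the next step's combined migration call.

```powershell
# Connect to the source vCenter Server (all names below are assumptions)
Connect-VIServer -Server "vcenter-source.lab.local"

$vmhost = Get-VMHost -Name "esxi-01.lab.local"
$vmnic1 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"

# Pull vmnic1 out of the VDS (and therefore out of the LAG)
Remove-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmnic1 -Confirm:$false

# Stand up the temporary VSS; vmnic1 is attached in the next step's combined call
$vss = New-VirtualSwitch -VMHost $vmhost -Name "vSwitchMigrate"
```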


Step 3 - Now that we have a pNIC on both the VDS and the VSS, we can migrate all VMkernel and VM networking interfaces from the VDS over to the VSS.
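A sketch of this step, borrowing the combined pNIC/VMkernel migration pattern from the PowerCLI 5.5 scripts linked at the end of this post; the portgroup names, VLAN IDs, and DVPortgroup name are assumptions:

```powershell
# Re-create the required portgroups on the VSS (names and VLAN IDs are assumptions)
$mgmtPg = New-VirtualPortGroup -VirtualSwitch $vss -Name "Management Network" -VLanId 10
$vmPg   = New-VirtualPortGroup -VirtualSwitch $vss -Name "VM Network" -VLanId 20

# Migrate vmnic1 and the Management VMkernel interface (vmk0) in a single
# transaction, mapping vmk0 onto its new VSS portgroup
$vmk0 = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name "vmk0"
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $vmnic1 `
    -VMHostVirtualNic $vmk0 -VirtualNicPortgroup $mgmtPg -Confirm:$false

# Re-point VM networking from the DVPortgroup (name assumed) to the VSS portgroup
Get-VM -Location $vmhost | Get-NetworkAdapter |
    Where-Object { $_.NetworkName -eq "DVPG-VM-Network" } |
    Set-NetworkAdapter -Portgroup $vmPg -Confirm:$false
```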


Step 4 - Once you have completed all VMkernel and VM networking migrations, you can remove vmnic0 from the VDS and associate it with the VSS.
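Continuing the sketch, the last uplink moves over the same way:

```powershell
# Move the remaining uplink: remove vmnic0 from the VDS, then attach it to the VSS
$vmnic0 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
Remove-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $vmnic0 -Confirm:$false
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $vmnic0 -Confirm:$false
```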


Step 5 - At this point, you can safely disconnect the ESXi host from the current vCenter Server and add it to the new vCenter Server.
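A sketch of the host move itself; the destination vCenter, datacenter name, and root credentials are assumptions:

```powershell
# Disconnect the host from the source vCenter and remove it from the inventory
Set-VMHost -VMHost $vmhost -State Disconnected
Remove-VMHost -VMHost $vmhost -Confirm:$false

# Add the host to the destination vCenter (names and credentials are assumptions)
Connect-VIServer -Server "vcenter-destination.lab.local"
Add-VMHost -Name "esxi-01.lab.local" -Location (Get-Datacenter -Name "DC01") `
    -User "root" -Password "VMware1!" -Force
```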

Step 6 - Now we simply perform Steps 2-4 in reverse, going from VSS back to VDS and re-creating our LACP bundle on the new vCenter Server.
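The VSS-to-VDS portion can be scripted just like the steps above; note that, to my knowledge, creating the LACP LAG itself has no native PowerCLI cmdlet and is done in the vSphere Web Client (or via the vSphere API). A rough sketch, with all names assumed:

```powershell
# On the destination vCenter: re-create the VDS and add the host to it
$vmhost = Get-VMHost -Name "esxi-01.lab.local"
$vds = New-VDSwitch -Name "LAB-VDS" -Location (Get-Datacenter -Name "DC01")
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Re-create the LAG in the vSphere Web Client, then reverse Steps 2-4:
# move one uplink over first, migrate the vmk/VM networking, then the other
$vmnic1 = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic1"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic1 -Confirm:$false
```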

More from my site

  • Automate the reverse, migrating from vSphere Distributed Switch to Virtual Standard Switch using PowerCLI 5.5
  • Automate the migration from Virtual Standard Switch to vSphere Distributed Switch using PowerCLI 5.5
  • How to manually clean up a Distributed Virtual Switch (VDS) on an ESXi host?
  • Quick Tip - Retrieving vSphere Distributed Switch (VDS) DVPort ID & Stats using PowerCLI
  • Migrating ESXi to a Distributed Virtual Switch with a single NIC running vCenter Server

Categories // vSphere Tags // distributed virtual switch, LACP, LAG, vds, vss

Comments

  1. Scott Elliott says

    11/09/2017 at 10:41 am

    We ran into this a while back, and the way we handled it was to export the VDS to a file and restore it to the new vCenter. Next we moved one host over to the new vCenter (without VMs) and configured it on the VDS. Once that was complete, we took a host from the old vCenter, added it to the new vCenter, and used vMotion to move VMs to the first host we had added. Then we configured the new host on the VDS and repeated for the remainder of the hosts. Worked like a champ, but it isn't a supported method. I can say we moved over 500 VMs this way, and although tedious, it worked.
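    A minimal PowerCLI sketch of the export/restore approach Scott describes (switch, datacenter, and file names are assumptions):

    ```powershell
    # Export the VDS configuration from the source vCenter to a backup file
    Get-VDSwitch -Name "LAB-VDS" | Export-VDSwitch -Destination "C:\backup\LAB-VDS.zip"

    # On the destination vCenter, restore the VDS from that backup
    New-VDSwitch -Name "LAB-VDS" -Location (Get-Datacenter -Name "DC01") `
        -BackupPath "C:\backup\LAB-VDS.zip"
    ```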

    • William Lam says

      11/09/2017 at 10:46 am

      Thanks for sharing your story, Scott! Glad to hear you got it working.

  2. Michael says

    11/09/2017 at 11:44 am

    Thanks for sharing. Actually, Step 4 is unnecessary, since you will remove the vmnic from the VSS in Step 7.

    • Chandan says

      07/05/2019 at 4:15 am

      But if the pNIC isn't removed from the VDS as mentioned in Step 4, then it might still be associated with a VDS portgroup in the source vCenter and may not be able to be added to the target vCenter's VDS?

  3. Michael Rottlander says

    11/09/2017 at 10:29 pm

    In Step 2, before adding the pNIC to the VSS, you'll have to reconfigure the physical switch to remove the port from LACP. Otherwise the traffic will be blocked.
    Before re-adding the NIC to the VDS in Step 6, you'll have to reconfigure the switch again.
    For the move, you'll have to work closely with the networking team, as timing is critical.

  4. Thomas Staeck says

    11/14/2017 at 10:07 am

    Hello William,

    we also had to move several clusters from one vCenter to another. In principle the procedure works without a problem, but there are several issues to consider.

    - Templates are "special". In our scenario the cluster had a datastore cluster, and when we moved the ESXi servers to the new vCenter we found during tests that we lost the templates on the datastore cluster from the inventory. I am not sure whether this also occurs on normal datastores. We had to convert them to VMs before the migration and back to templates once the hosts were registered at the new vCenter (see the sketch after this comment).

    - Templates are really "special". In our scenario the templates were connected to portgroups on the distributed switch. You must also migrate the templates to the "migration switch"; otherwise you cannot remove the ESXi server from the distributed switch. Interestingly, a template remains connected to the port it was connected to as a VM before it was converted to a template. This means that even if the template is no longer on the ESXi server, it still blocks removing the host from the distributed switch.

    - If you have per-VM overrides for the compute and datastore clusters, they get lost during the migration. RVTools is a great tool for documenting them and provides the data for mass changes afterwards.

    - Another point to consider is the network interruption when moving from VDS to VSS and back. During our PoC we saw network timeouts of 0-6 seconds. RDP sessions reconnected successfully, and luckily the applications inside the RDP sessions were not affected either.

    All in all, vSphere proved again that in principle (mostly) every (mis)configuration can be changed without an outage.

    Best Regards

    Thomas
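    For the template issue Thomas describes, a short PowerCLI sketch of the convert-before/convert-after workaround (the template name is an assumption):

    ```powershell
    # Before the migration: convert each template to a VM so it survives the move
    Get-Template -Name "win2016-template" | Set-Template -ToVM

    # After the hosts are registered in the new vCenter: convert it back
    Get-VM -Name "win2016-template" | Set-VM -ToTemplate -Confirm:$false
    ```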

  5. Jignesh says

    11/01/2018 at 10:08 pm

    Hi William,

    Thanks for the info. I would like to know: in Step 2, before moving the pNIC to the VSS, do I need to reconfigure the physical switch to remove that port from LACP, and then do the reverse when re-creating the LACP configuration on the new vCenter?

    Regards
    Jignesh

  6. Sachin says

    12/22/2018 at 7:14 pm

    Hi Jignesh,

    Have you gotten an answer? I am also in the same situation.

    Regards,
    Sachin

  7. Jignesh says

    03/10/2019 at 5:37 pm

    Hi Sachin

    No, I haven't received any response.

    Regards
    Jignesh

  8. Chandan says

    07/05/2019 at 4:18 am

    I think we may have to break the LAG before we move the pNIC out of the LAG bundle. I am in the same situation as well.

  9. manu says

    10/03/2019 at 6:53 am

    Hello,

    We have tested live migrating hosts and clusters with LAG/LACP between vCenters. We did a 6.0 to 6.7 U2 vCenter migration with LAG and had only a couple of lost pings at most. You need to export the DVS and import the same DVS into the new vCenter, set the LACP configuration to LACP fallback on the core switch side, and change the LACP timeout. That way, even if the switch doesn't receive LACP frames, it will still have connectivity.

    Once you connect the hosts to the new cluster, move all hosts to the new DVS with the same configuration.

    Thanks

