WilliamLam.com

Cross vCenter Server operations (clone / migrate) between versions of vSphere 6.x

02.27.2017 by William Lam // 7 Comments

When cross vCenter Server operations such as clone and migrate were first introduced in vSphere 6.0, they required that both the source and destination vCenter Server (including the ESXi hosts) be running the same vSphere version. With the release of vSphere 6.5, this base requirement still holds true (e.g. vSphere 6.5 for both source and destination), especially when performing these operations using the vSphere Web Client, where mixed vSphere versions are not supported outside of a rolling upgrade.

Having said that, it is possible and supported to clone or migrate a VM across different versions of vSphere 6.x, for example between a vSphere 6.5 and a vSphere 6.0 Update 3 environment. This can be accomplished by performing an xVC-vMotion or xVC-Clone operation using the vSphere API. For the xVC-vMotion use case, I have written about it extensively here and here, and with PowerCLI 6.5 R1, the Move-VM cmdlet has even been updated based on my feedback to support this capability natively. Furthermore, you can even perform these operations across completely different vCenter Single Sign-On Domains, which enables a new level of mobility for your VMs and access to resources of independently deployed vCenter Server instances.
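To give a concrete sense of how the API-driven operation looks, below is a minimal PowerCLI 6.5 R1 sketch of a Cross vCenter vMotion using the updated Move-VM cmdlet. The vCenter names, credentials, host, datastore and portgroup are placeholders for your own environment:

# Connect to both the source and destination vCenter Servers (they can be in different SSO Domains)
$sourceVC = Connect-VIServer -Server "sourcevc.primp-industries.com" -User "administrator@vsphere.local" -Password "VMware1!"
$destVC = Connect-VIServer -Server "destvc.primp-industries.com" -User "administrator@vsphere.local" -Password "VMware1!"

# Look up the VM on the source vCenter and the target resources on the destination vCenter
$vm = Get-VM -Server $sourceVC -Name "TestVM-1"
$destHost = Get-VMHost -Server $destVC -Name "esxi65-1.primp-industries.com"
$destDatastore = Get-Datastore -Server $destVC -Name "vsanDatastore"
$destPortGroup = Get-VDPortgroup -Server $destVC -Name "VM-Network"

# Perform the Cross vCenter vMotion, mapping the VM's network adapter(s) to the destination portgroup
Move-VM -VM $vm -Destination $destHost -Datastore $destDatastore `
    -NetworkAdapter (Get-NetworkAdapter -VM $vm) -PortGroup $destPortGroup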

UPDATE (11/01/17) - The following VMware KB 2106952 has just been updated to reflect what is officially supported in terms of Cross vCenter Operations (Clone / Migrate) across different versions of vSphere. The matrix in the KB reflects what has been tested by Engineering, and one thing you may notice is that Cross vCenter vMotion/Clone from vSphere 6.x to vSphere 6.5 is only supported when running at least vSphere 6.0 Update 3. After speaking with the PM, the reason for this change is that pre-vSphere 6.0 Update 3, there were no pre-checks in the code to prevent Cross vCenter Operations against unsupported target hosts such as ESXi 5.5, which could lead to a poor user experience as well as undefined failure scenarios. In addition, vSphere 6.0 Update 3 also includes enhancements to properly clean up failed provisioning operations, which makes Cross vCenter Operations much more robust. For these reasons, although it is still possible to perform Cross vCenter vMotion from earlier versions, it is not officially supported. I have also updated my summarized tables below to reflect what is in the VMware KB, but please use the KB as your official source of truth for what VMware supports.

To help make sense of the different combinations of vMotion and cloning operations, below are a few tables outlining what is possible and supported today.

vMotion

Source vCenter Server | Destination vCenter Server | Supported | UI or API
vSphere 6.0 | vSphere 6.0 | Yes | UI and API
vSphere 6.x (pre 6.0 Update 3) | vSphere 6.5 | Possible but Not Supported | N/A
vSphere 6.0 Update 3 | vSphere 6.5 | Yes | API
vSphere 6.5 | vSphere 6.5+ | Yes | UI and API
vSphere 6.5 | vSphere 6.x | No | No
vSphere 6.5+ | VMware Cloud on AWS | Yes | UI and API
VMware Cloud on AWS | vSphere 6.5+ | Yes | UI and API

Cold Migrate

Source vCenter Server | Destination vCenter Server | Supported | UI or API
vSphere 6.0 | vSphere 6.0 | Yes | UI and API
vSphere 6.x (pre 6.0 Update 3) | vSphere 6.5 | Possible but Not Supported | API
vSphere 6.0 Update 3 | vSphere 6.5 | Yes | API
vSphere 6.5 | vSphere 6.5 | Yes | UI and API
vSphere 6.5 | vSphere 6.x | No | No
vSphere 6.5+ | VMware Cloud on AWS | Yes | UI and API
VMware Cloud on AWS | vSphere 6.5+ | Yes | UI and API

Clone

Source vCenter Server | Destination vCenter Server | Supported | UI or API
vSphere 6.0 | vSphere 6.0 | Yes | UI and API
vSphere 6.x (pre 6.0 Update 3) | vSphere 6.5 | No | N/A
vSphere 6.0 Update 3 | vSphere 6.5 | No | N/A
vSphere 6.5 | vSphere 6.5+ | Yes | UI and API
vSphere 6.5 | vSphere 6.x | No | N/A
vSphere 6.5+ | VMware Cloud on AWS | Yes | UI and API
VMware Cloud on AWS | vSphere 6.5+ | Yes | UI and API

Virtual Networking Migration

Source Type | Destination Type | Supported
VDS | VDS | Yes
VDS | VSS | No
VSS | VSS | Yes
VSS | VDS | Yes

Note1: vMotioning and/or cloning of VMs which use the new VM Encryption feature introduced in vSphere 6.5 is not supported.

Note2: The "Compute-only" xVC-vMotion insufficient space issue has now been resolved with vSphere 6.0 Update 3; see this post here for more details.

Note3: xVC-vMotion is not supported on 3rd party switches as we cannot checkpoint the switching state.
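Related to Note1, one quick way to check whether a VM is using the vSphere 6.5 VM Encryption feature (and would therefore be blocked from these operations) is to inspect its KeyId property with PowerCLI; this is just an illustrative sketch with a placeholder VM name:

# An encrypted VM carries a non-null KeyId in its config; unencrypted VMs return nothing here
$vm = Get-VM -Name "TestVM-1"
if ($vm.ExtensionData.Config.KeyId) {
    Write-Host "$($vm.Name) is encrypted and is not a candidate for Cross vCenter operations"
} else {
    Write-Host "$($vm.Name) is not encrypted"
}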

Here are some additional xVC-vMotion and vMotion articles that may also be useful to be aware of:

  • Are Affinity/Anti-Affinity rules preserved during Cross vCenter vMotion (xVC-vMotion)?
  • Duplicate MAC Address concerns with xVC-vMotion in vSphere 6.0
  • Network Compatibility Checks During vMotion Between vCenter Server Instances
  • Auditing vMotion Migrations

Categories // Automation, vSphere 6.0, vSphere 6.5 Tags // Cross vMotion, ExVC-vMotion, vSphere 6.0, vSphere 6.5, vSphere API, xVC-vMotion

Potential ESXi Host Preparation issues with NSX 6.3

02.17.2017 by William Lam // 16 Comments

While working on updating my vGhetto Automated vSphere Lab Deployment script to add support for NSX 6.3 with vSphere 6.5, I ran into an issue with the Host Preparation step. Although the resolution turned out to be quite simple, the problem was very difficult to diagnose. I suspect this scenario could easily be encountered by others, so I wanted to make folks aware of what I ran into. There is also another potential gotcha for host preparation that I did not encounter myself, but which was brought to my attention and is also worth sharing.

Scenario 1 - Attempted Host Preparation and all "Install agent" tasks fail with "Cannot complete the operation. See the event log for details"; below is a screenshot of the error. There was nothing useful when looking at the event logs for either NSX or ESXi using the vSphere Web Client.


There was also nothing useful in the ESXi log /var/log/esxupdate.log that gave insight into why the NSX VIBs failed to install:

2017-02-16T12:38:53Z esxupdate: 73899: Transaction: DEBUG: Populating VIB list from all VIBs in metadata https://vcenter65-1.primp-industries.com:443/eam/vib?id=d4917629-51d1-4da9-82d6-8da54815447d; depots:
2017-02-16T12:38:54Z esxupdate: 73899: downloader: DEBUG: Downloading https://vcenter65-1.primp-industries.com:443/eam/vib?id=d4917629-51d1-4da9-82d6-8da54815447d to /tmp/tmpdfcbr23q...
2017-02-16T12:38:54Z esxupdate: 73899: Metadata.pyc: INFO: Unrecognized file vendor-index.xml in Metadata file
2017-02-16T12:38:54Z esxupdate: 73899: imageprofile: INFO: Adding VIB VMware_locker_tools-light_6.5.0-0.0.4564106 to ImageProfile (Updated) ESXi-6.5.0-4564106-standard
2017-02-16T12:38:54Z esxupdate: 73899: imageprofile: INFO: Adding VIB VMware_bootbank_esx-vsip_6.5.0-0.0.4987428 to ImageProfile (Updated) ESXi-6.5.0-4564106-standard
2017-02-16T12:38:54Z esxupdate: 73899: imageprofile: INFO: Adding VIB VMware_bootbank_esx-vxlan_6.5.0-0.0.4987428 to ImageProfile (Updated) ESXi-6.5.0-4564106-standard
2017-02-16T12:38:54Z esxupdate: 73899: vmware.runcommand: INFO: runcommand called with: args = '['/bin/localcli', 'system', 'maintenanceMode', 'get']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2017-02-16T12:38:54Z esxupdate: 73899: HostInfo: INFO: localcli system returned status (0) Output: Disabled Error:

[Read more...]

Categories // NSX, vSphere 6.5 Tags // eam, ESXi 6.5, NSX, vSphere 6.5, vum

Update on Intel NUC 7th Gen (Kaby Lake) & ESXi 6.x

02.15.2017 by William Lam // 28 Comments

Intel just started shipping their i3 7th Gen (Kaby Lake) Intel NUCs and, no surprise, Florian of virten.net has already gotten his hands on a unit for testing. The Intel NUC is a very popular platform for running vSphere/vSAN-based home labs, especially given its price point and footprint. Last week, Florian discovered from his testing that the built-in network adapter on the 7th Gen NUC was not being detected by any of the ESXi installers and published his findings here.

Since the Intel NUC is not an officially supported platform for ESXi, it does not surprise me that these sorts of things happen, even if the NUC has had a pretty good track record going back to the 5th Gen releases. Nonetheless, I reached out to Florian to see if he could provide me with a vm-support bundle so I could see if there was anything I could do to help.

A couple of Engineers took a look and quickly identified the issue with the Kaby Lake NIC (8086:15d8), but before getting to the solution, I did want to clarify something. The Kaby Lake NIC is actually NOT an officially supported NIC for ESXi, and although it currently shows up in the VMware HCL, that is a mistake. I have been told the VMware HCL will be updated shortly to reflect this; apologies for any confusion this may have caused.

Ok, so the great news is that we do have a solution for getting ESXi to recognize the built-in NIC on the 7th Gen Intel NUCs. The semi-bad news is that we do not have a short-term solution for any released version of ESXi, as the fix requires an updated version of the e1000e Native Driver which will only be available in a future update of ESXi. I cannot provide any timelines, but keep an eye on this blog and I will publish more details once they are available.

In the meantime, if you already own a 7th Gen NUC, there is a workaround, which Florian has already blogged about here, that uses the USB Ethernet Adapter VIB for the initial ESXi installation. If you are planning to purchase the 7th Gen NUC and would like to wait for folks to confirm the fix, then I would recommend holding off, or potentially looking at the 6th Gen if you cannot wait. Thanks to Florian and others who shared their experiences with the 7th Gen NUC and also to the VMware Engineers who found a quick resolution to the problem.

UPDATE (07/27/17) - The updated e1000e Native Driver that was included in ESXi 6.0 Update 3 is now included in ESXi 6.5 Update 1, which just GA'ed. You should be able to install ESXi on the latest Intel NUCs without requiring any additional modifications.

UPDATE (02/24/17) - An updated e1000e driver which contains a fix for the 7th Gen NUC is now available as part of ESXi 6.0 Update 3; below are several options for how you can consume the driver.
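As a generic illustration (not necessarily one of the exact options covered in the full post), a driver VIB or offline bundle can be installed with esxcli, driven here from PowerCLI via Get-EsxCli; the host name and bundle path are placeholders and the host will need a reboot afterwards:

# Get an esxcli (V2) handle for the host, e.g. while connected over the USB NIC workaround
$vmhost = Get-VMHost -Name "nuc7.primp-industries.com"
$esxcli = Get-EsxCli -VMHost $vmhost -V2

# Place the host into maintenance mode before installing the driver
Set-VMHost -VMHost $vmhost -State Maintenance | Out-Null

# Install the updated driver from an offline bundle copied to a local datastore (placeholder path)
$installArgs = $esxcli.software.vib.install.CreateArgs()
$installArgs.depot = "/vmfs/volumes/datastore1/e1000e-driver-offline-bundle.zip"
$esxcli.software.vib.install.Invoke($installArgs)

# Reboot the host for the new driver to take effect
Restart-VMHost -VMHost $vmhost -Confirm:$false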

[Read more...]

Categories // ESXi, Home Lab, Not Supported, vSphere 6.0, vSphere 6.5 Tags // Intel NUC, Kaby Lake, ne1000, ne1000e, vSphere 6.5
