Automating Cross vCenter vMotion (xVC-vMotion) between the same & different SSO Domain

05.26.2016 by William Lam // 79 Comments

In the last couple of months, I have noticed an increase in customer interest in using the Cross vCenter vMotion (xVC-vMotion) capability that was introduced back in vSphere 6.0. In my opinion, this is still probably one of the coolest features of that release. Virtual Machine mobility is no longer restricted to a single vCenter Server; you can now live migrate a running VM across different vCenter Servers.

The primary method to start a xVC-vMotion is the vSphere Web Client, which requires your vCenter Servers to be part of the same SSO Domain and automatically enables the new Enhanced Linked Mode (ELM) feature. ELM allows you to easily manage and view all of your vCenter Servers from within the vSphere Web Client as shown in the example screenshot below.

[Screenshot: Enhanced Linked Mode view of multiple vCenter Servers in the vSphere Web Client]
However, the vSphere Web Client is not the only way to start a xVC-vMotion; you can also automate it through the vSphere API. In fact, there is an "Extended" capability of xVC-vMotion, which is not very well known and which I have written about here, that allows you to live migrate a running VM across two different vCenter Servers that are NOT part of the same SSO Domain. This Extended xVC-vMotion (unofficially, I am calling it ExVC-vMotion) is only available when using the vSphere API, as the vSphere Web Client is unable to display vCenter Servers that are part of another SSO Domain. Below is a quick diagram to help illustrate the point, in which VM1 can be seamlessly migrated between different vCenter Servers within the same SSO Domain as well as between different vCenter Servers that are not part of the same SSO Domain.

[Diagram: xVC-vMotion between vCenter Servers in the same and in different SSO Domains]
Note: For additional details and requirements for Cross vCenter vMotion, please have a look at this VMware KB 210695 and this blog post here for more information.

UPDATE (06/15/17) - I have added a few minor enhancements to the script to support migrating a VM given a vSphere Resource Pool, which enables the ability to migrate to and from VMware's upcoming VMware Cloud on AWS (VMC). There is also an additional UppercaseUUID parameter, which seems to be required for some xVC-vMotions where the vCenter Server's InstanceUUID must be provided in all upper case or the operation will fail. I still have not identified why this is needed for some migrations, but for now there is a flag that can be used to enable this behavior if you are hitting this problem.

UPDATE (04/08/17) - In vSphere 6.0 Update 2, there is a known limitation which prevents a VM that has multiple VMDKs stored across different datastores from being xVC-vMotioned (compute only) using the vSphere Web Client. This limitation no longer exists in vSphere 6.0 Update 3 but does require customers to upgrade. If you need to perform a compute-only xVC-vMotion where the VM has multiple VMDKs across different datastores, the vSphere API does not have this limitation and you do not necessarily need to upgrade to be able to perform this operation. Huge thanks to Askar Kopayev, who discovered this and also submitted an enhancement to my xMove-VM PowerCLI script to support this functionality.

Given the amount of interest recently and some of the feedback on my original ExVC-vMotion script, which I had written about here, I figured it was time to refactor my code so that it could easily support both ExVC-vMotion as well as standard xVC-vMotion. In addition, I have also added support for migrating to and from a Distributed Virtual Switch (VDS), whereas previously the example only supported a Virtual Standard Switch (VSS). Lastly, the script now also supports migrating a VM that is configured with multiple vNICs.

The new script is now called xMove-VM.ps1 and is even simpler than my original script. You will need to edit the script and update the following variables:

Variable Description
vmname Name of the VM to migrate
sourceVC The hostname or IP Address of the source vCenter Server in which the VM currently resides
sourceVCUsername Username for the source vCenter Server
sourceVCPassword Password for the source vCenter Server
destVC The hostname or IP Address of the destination vCenter Server to migrate the VM to
destVCUsername Username for the destination vCenter Server
destVCpassword Password for the destination vCenter Server
datastore Name of the vSphere Datastore to migrate the VM to
cluster Name of the vSphere Cluster to migrate the VM to
resourcepool Name of the vSphere Resource Pool to migrate the VM to
vmhost Name of the ESXi host to migrate the VM to
vmnetworks Name of the vSphere Network(s) to migrate the VM to, comma separated and ordered by vNIC interface
switch Name of the vSphere Switch to migrate the VM to, comma separated and ordered by vNIC
switchtype The type of vSphere Switch (vss or vds)
xvctype Whether this is a Compute-only Cross VC-vMotion (1 = true or 0 = false)
UppercaseUUID Whether the vCenter Server InstanceUUID must be provided in all upper case ($true or $false)
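
To give a sense of what the script does with these variables under the hood, below is a minimal, simplified sketch of a compute-and-storage ExVC-vMotion using the RelocateVM_Task() vSphere API from PowerCLI. This is not the full xMove-VM.ps1 script; any variable not listed in the table above (such as $destVCThumbprint) is purely illustrative, and the per-vNIC network re-mapping is omitted here.

# Connect to both the source and destination vCenter Servers
$sourceVCConn = Connect-VIServer -Server $sourceVC -User $sourceVCUsername -Password $sourceVCPassword
$destVCConn = Connect-VIServer -Server $destVC -User $destVCUsername -Password $destVCpassword

# The VM to migrate lives in the source vCenter Server
$vm = Get-VM -Server $sourceVCConn -Name $vmname

# Destination objects are looked up against the destination vCenter Server
$destDatastore = Get-Datastore -Server $destVCConn -Name $datastore
$destHost = Get-VMHost -Server $destVCConn -Name $vmhost
$destRP = Get-ResourcePool -Server $destVCConn -Name $resourcepool

# The ServiceLocator tells the source vCenter Server how to reach the destination
# vCenter Server, which is what allows migrating across SSO Domains (ExVC-vMotion)
$credential = New-Object VMware.Vim.ServiceLocatorNamePassword
$credential.Username = $destVCUsername
$credential.Password = $destVCpassword

$service = New-Object VMware.Vim.ServiceLocator
$service.Credential = $credential
$service.InstanceUuid = ($destVCConn.InstanceUuid).ToUpper() # see the UppercaseUUID note above
$service.Url = "https://$destVC"
$service.SslThumbprint = $destVCThumbprint # SHA1 thumbprint of the destination vCenter Server certificate (illustrative variable)

$spec = New-Object VMware.Vim.VirtualMachineRelocateSpec
$spec.Datastore = $destDatastore.ExtensionData.MoRef
$spec.Host = $destHost.ExtensionData.MoRef
$spec.Pool = $destRP.ExtensionData.MoRef
$spec.Service = $service
# For a real migration, $spec.DeviceChange would also remap each vNIC's network backing

# Kick off the relocation and wait for the task to complete
$task = $vm.ExtensionData.RelocateVM_Task($spec, "defaultPriority")
Get-Task -Server $sourceVCConn -Id ("Task-$($task.Value)") | Wait-Task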

Here is a screenshot of running the script:

[Screenshot: Running the xMove-VM.ps1 script]
Note: When changing the type of vSphere Switch, the following combinations are supported by the script as well as by the vSphere Web Client: VDS to VDS, VSS to VSS and VSS to VDS. VDS to VSS is not supported using either the UI or the API, nor are 3rd party switches supported.
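
For those curious how the per-vNIC network re-mapping works for the two switch types, here is a minimal sketch of building the device changes that would be assigned to the relocate spec's DeviceChange property before calling RelocateVM_Task(). It assumes the $vm, $destVCConn and $spec objects from the sketch above, and that $vmnetworks is a comma separated string as described in the table; it is illustrative rather than a copy of the script's exact code.

# Remap each vNIC's network backing, in vNIC order, to the destination networks
$nics = $vm.ExtensionData.Config.Hardware.Device | Where-Object { $_ -is [VMware.Vim.VirtualEthernetCard] }
$networkList = $vmnetworks -split ","
$deviceChanges = @()
$i = 0
foreach ($nic in $nics) {
    $change = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $change.Operation = "edit"
    $change.Device = $nic

    if ($switchtype -eq "vds") {
        # Distributed Virtual Portgroups are referenced by switch UUID + portgroup key
        $dvpg = Get-VDPortgroup -Server $destVCConn -Name $networkList[$i]
        $backing = New-Object VMware.Vim.VirtualEthernetCardDistributedVirtualPortBackingInfo
        $backing.Port = New-Object VMware.Vim.DistributedVirtualSwitchPortConnection
        $backing.Port.SwitchUuid = $dvpg.VDSwitch.ExtensionData.Uuid
        $backing.Port.PortgroupKey = $dvpg.ExtensionData.Key
    } else {
        # Standard switch portgroups are referenced simply by name
        $backing = New-Object VMware.Vim.VirtualEthernetCardNetworkBackingInfo
        $backing.DeviceName = $networkList[$i]
    }
    $change.Device.Backing = $backing
    $deviceChanges += $change
    $i++
}
# $spec.DeviceChange = $deviceChanges (assigned before calling RelocateVM_Task in the sketch above)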

Here are some additional xVC-vMotion and vMotion articles that may also be useful to be aware of:

  • Are Affinity/Anti-Affinity rules preserved during Cross vCenter vMotion (xVC-vMotion)?
  • Duplicate MAC Address concerns with xVC-vMotion in vSphere 6.0
  • Auditing vMotion Migrations

Categories // Automation, vSphere 6.0 Tags // Cross vMotion, ExVC-vMotion, PowerCLI, RelocateVM_Task, sso, vSphere 6.0, vSphere API, vsphere web client, xVC-vMotion

Customizing the ESXi DCUI to show number of VMs

05.24.2016 by William Lam // 7 Comments

Last week there was a question posted internally asking if it was possible to customize the ESXi DCUI screen to include the number of Virtual Machines. Although there is nothing out of the box, you can in fact add almost anything to the DCUI screen by modifying the /etc/vmware/welcome configuration file, which I had blogged about several years back when adding a splash of color to the ESXi DCUI. There was even a recent VMware Fling that provides a VIB which applies a variety of DoD STIG implementations, one of which updates the DCUI screen with some specific text.

However, instead of having to manually edit the file directly on the ESXi host, we also provide an API in the form of an ESXi Advanced Setting called Annotations.WelcomeMessage, which can be updated remotely using any of the vSphere SDKs/CLIs that you are familiar with.

Here is an example PowerCLI snippet to connect to an ESXi host (you can also do this by connecting to vCenter Server), extract all Virtual Machines residing on the host and then update the DCUI screen with the total number of VMs as well as the names of each VM. Obviously, if you have more than 10 or so VMs, it may not make much sense to actually list them as they will just run off the screen, but this gives you an example of some of the things you can do leveraging the vSphere API or any other data that you might have at your disposal.

# Connect directly to the ESXi host (or to vCenter Server)
Connect-VIServer -Server 192.168.1.50 -User root -Password vmware123

# Retrieve all VMs on the host, fetching only the Name property for efficiency
$vms = Get-View -ViewType VirtualMachine -Property Name

# Build the DCUI welcome message using the supported {color}/{variable} tokens
$DCUIString = "{color:yellow}
                {esxproduct} {esxversion}

                Hostname: {hostname}
                IP: {ip}

                Virtual Machines: " + $vms.Count + "
{/color}
{color:white}
"

# Append each VM name on its own line
foreach ($vm in $vms) {
    $DCUIString += "`t`t" + $vm.Name + "`n"
}

# Push the new welcome message to the DCUI via the Annotations.WelcomeMessage advanced setting
Set-VMHostAdvancedConfiguration -Name Annotations.WelcomeMessage -Value $DCUIString

Disconnect-VIServer -Confirm:$false

Note: If you wish to only include powered on Virtual Machines, you can add the following to the end of the Get-View command: -Filter @{"Runtime.PowerState" = "PoweredOn"}
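
For example, the inventory query above would then look like this (a minimal variation of the snippet above):

# Limit the inventory query to powered-on VMs only
$vms = Get-View -ViewType VirtualMachine -Property Name -Filter @{"Runtime.PowerState" = "PoweredOn"}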

Here is a quick screenshot of what this would look like on the DCUI screen of your ESXi host:

[Screenshot: VM count and names displayed on the ESXi DCUI]
Note: You can actually run the DCUI over SSH, which can be useful for testing purposes without having to log in to the console directly. Take a look at this blog post here for more details.

One final thing to be aware of when editing the welcome message on your ESXi host is that it will also be displayed on the ESXi Embedded Host Client login page, as shown in the screenshot below. Keep this in mind if you plan to include any sensitive information, as it can be seen without needing to log in to the ESXi host (just like the DCUI interface).

[Screenshot: Welcome message displayed on the ESXi Embedded Host Client login page]

Categories // Automation, ESXi Tags // dcui, ESXi, PowerCLI, vSphere API

ESXi on the new Intel NUC Skull Canyon

05.21.2016 by William Lam // 62 Comments

Earlier this week I found out the new Intel NUC "Skull Canyon" (NUC6i7KYK) has been released and has been shipping for a couple of weeks now. Although this platform is mainly targeted at gaming enthusiasts, there has also been a lot of anticipation from the VMware community on leveraging the NUC for a vSphere-based home lab. Similar to the 6th Gen Intel NUC, which is a great platform to run vSphere as well as VSAN, the new NUC includes several new enhancements beyond the new aesthetics. In addition to the Core i7 CPU, it also includes dual M.2 slots (no SATA support), Thunderbolt 3 and, most importantly, an Intel Iris Pro GPU and a Thunderbolt 3 Controller. I will get to why this is important ...
[Photo: Intel NUC Skull Canyon (NUC6i7KYK)]
UPDATE (05/26/16) - With some further investigation from folks like Erik and Florian, it turns out the *only* device that needs to be disabled for ESXi to successfully boot and install is the Thunderbolt Controller. Once ESXi has been installed, you can re-enable the Thunderbolt Controller. Florian has also written a nice blog post here with instructions as well as screenshots for those not familiar with the Intel NUC BIOS.

UPDATE (05/23/16) - Shortly after sharing this article internally, Jason Joy, a VMware employee, shared the great news that he has figured out how to get ESXi to properly boot and install. Jason found that disabling unnecessary hardware devices, like the Consumer IR, in the BIOS allowed the ESXi installer to properly boot up. Jason was going to dig a bit further to see if he can identify the minimal list of devices that need to be disabled to boot ESXi. In the meantime, community blogger Erik Bussink has shared the list of settings he has applied to his Skull Canyon to successfully boot and install the latest ESXi 6.0 Update 2 based on the feedback from Jason. Huge thanks to Jason for quickly identifying the workaround and sharing it with the VMware community and thanks to Erik for publishing his list. For all those that were considering the new Intel NUC Skull Canyon for a vSphere-based home lab, you can now get your ordering on! 😀

Below is an excerpt from his blog post Intel NUC Skull Canyon (NUC6I7KYK) and ESXi 6.0 on the settings he has disabled:

BIOS\Devices\USB

  • disabled - USB Legacy (Default: On)
  • disabled - Portable Device Charging Mode (Default: Charging Only)
  • not change - USB Ports (Port 01-08 enabled)

BIOS\Devices\SATA

  • disabled - Chipset SATA (Default AHCI & SMART Enabled)
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000
  • disabled - HDD Activity LED (Default: On)
  • disabled - M.2 PCIe SSD LED (Default: On)

BIOS\Devices\Video

  • IGD Minimum Memory - 64MB (Default)
  • IGD Aperture Size - 256 (Default)
  • IGD Primary Video Port - Auto (Default)

BIOS\Devices\Onboard Devices

  • disabled - Audio (Default: On)
  • LAN (Default)
  • disabled - Thunderbolt Controller (Default is Enabled)
  • disabled - WLAN (Default: On)
  • disabled - Bluetooth (Default: On)
  • Near Field Communication - Disabled (Default is Disabled)
  • SD Card - Read/Write (Default was Read)
  • Legacy Device Configuration
  • disabled - Enhanced Consumer IR (Default: On)
  • disabled - High Precision Event Timers (Default: On)
  • disabled - Num Lock (Default: On)

BIOS\PCI

  • M.2 Slot 1 - Enabled
  • M.2 Slot 2 - Enabled
  • M.2 Slot 1 NVMe SSD: Samsung MZVPV256HDGL-00000
  • M.2 Slot 2 NVMe SSD: Samsung MZVPV512HDGL-00000

Cooling

  • CPU Fan Header
  • Fan Control Mode : Cool (I toyed with Full fan, but it does make a lot of noise)

Performance\Processor

  • disabled Real-Time Performance Tuning (Default: On)

Power

  • Select Max Performance Enabled (Default: Balanced Enabled)
  • Secondary Power Settings
  • disabled - Intel Ready Mode Technology (Default: On)
  • disabled - Power Sense (Default: On)
  • After Power Failure: Power On (Default was stay off)

Over the weekend, I received several emails from folks including Olli from nucblog.net (highly recommend a follow if you do not already), Florian from virten.net (another awesome blog which I follow & recommend) and a few others who have gotten their hands on the "Skull Canyon" system. They had all tried to install the latest release of ESXi 6.0 Update 2, as well as earlier versions, but all ran into a problem while booting up the ESXi installer.

The following error message was encountered:

Error loading /tools.t00
Compressed MD5: 39916ab4eb3b835daec309b235fcbc3b
Decompressed MD5: 000000000000000000000000000000
Fatal error: 10 (Out of resources)

[Screenshot: ESXi installer failing with "Fatal error: 10 (Out of resources)" on the Intel NUC Skull Canyon]
Raymond Huh was the first individual who reached out to me regarding this issue, and shortly after, I started to get the same confirmations from others as well. Raymond's suspicion was that this is related to the amount of Memory-Mapped I/O resources being consumed by the Intel Iris Pro GPU, which does not leave enough resources for the ESXi installer to boot up. Even a quick Google search on this particular error message leads to several solutions here and here where the recommendation was to either disable or reduce the amount of memory for MMIO within the system BIOS.

Unfortunately, it does not look like the Intel NUC BIOS provides any options for disabling or modifying the MMIO settings; Raymond had looked into this, including tweaking some of the video settings. He currently has a support case filed with Intel to see if there is another option. In the meantime, I had also reached out to some folks internally to see if they had any thoughts, and they too came to the same conclusion that without being able to modify or disable MMIO, there is not much more that can be done. There is a chance that I might be able to get access to a unit from another VMware employee and perhaps we can see if there is any workaround from our side, but there are no guarantees, especially as this is not an officially supported platform for ESXi. I want to thank Raymond, Olli & Florian for going through the early testing and sharing their findings thus far. I know many folks are anxiously waiting and I know they really appreciate it!

For now, if you are considering purchasing or have purchased the latest Intel NUC Skull Canyon with the intention to run ESXi, I would recommend holding off or not opening up the system. I will provide any new updates as they become available. I am still hopeful that we will find a solution for the VMware community, so crossing fingers.

Categories // ESXi, Home Lab, Not Supported Tags // ESXi, Intel NUC, Skull Canyon

