
Quick Tip - How to enable vGPU vMotion in vSphere 6.7 Update 1

10.19.2018 by William Lam // 10 Comments

Since this question has come up a few times this week, I thought it was worth a quick blog post on how to enable the new vGPU vMotion feature, which is now available in the latest vSphere 6.7 Update 1 release. If you try to vMotion a VM that has been configured with a vGPU, you will see the following message stating that vGPU hot migration is not enabled.

To enable vGPU vMotion, you just need to update the vCenter Server Advanced Setting vgpu.hotmigrate.enabled to true using the vSphere UI. The change goes into effect immediately, and you will then be able to vMotion a VM configured with a vGPU. This setting is actually documented in the official vSphere documentation here, but from the folks I spoke with, it looks like it either never came up or was simply missed.


In addition to vMotion support, you can also perform Storage vMotion & Cross vMotion (Compute & Storage) for vGPU-enabled VMs. Make sure that both your vCenter Server and ESXi hosts have been upgraded to vSphere 6.7 Update 1 and that you have NVIDIA GRID hardware and the NVIDIA VIB installed on your ESXi hosts. For folks interested in learning more about the new vMotion features in vSphere 6.7 Update 1, be sure to check out the VMworld 2018 session What's New in vMotion Technical Deep Dive.
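If you want to quickly sanity-check those prerequisites from PowerCLI, something along these lines can help. This is just a sketch, assuming an active Connect-VIServer session; the "NVIDIA" name match for the VIB is an assumption and may vary depending on the driver package you installed:

# List ESXi host versions and builds to confirm they are on vSphere 6.7 Update 1
Get-VMHost | Select-Object Name, Version, Build

# Check each host for an NVIDIA vGPU VIB
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -match "NVIDIA" } | Select-Object @{N="Host";E={$vmhost.Name}}, Name, Version
}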

Lastly, for those that prefer to automate this configuration change, here is a quick PowerCLI snippet for enabling vGPU vMotion:

Get-AdvancedSetting -Entity $global:DefaultVIServer -Name vgpu.hotmigrate.enabled | Set-AdvancedSetting -Value $true -Confirm:$false
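To confirm the change took effect, the same setting can simply be read back (again assuming an active Connect-VIServer session to the vCenter Server):

Get-AdvancedSetting -Entity $global:DefaultVIServer -Name vgpu.hotmigrate.enabled | Select-Object Name, Value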

Categories // ESXi, vSphere Tags // vGPU, vgpu.hotmigrate.enabled, vmotion, vSphere 6.7 Update 1

vMotion across different VDS version between onPrem and VMC

09.19.2018 by William Lam // 21 Comments

For those of you who have attempted a vMotion (whether within a single vCenter Server or between different vCenter Servers, including across SSO Domains): if the VM is running on a Distributed Virtual Switch (VDS) and the VDS version is not the same between the source and destination (e.g. VDS 6.5 to VDS 6.7), the operation will fail with the following error message (UI and API):

Currently connected network interface 'Network adapter 1' cannot use network 'DVPG-VM-Network (VDS-67)', because the destination distributed switch has a different version or vendor than the source distributed switch.


This behavior is no different on VMware Cloud on AWS (VMC), or so I thought, until I recently learned about a really neat feature that was introduced in the VMC 1.4p2 release. Here is a snippet from the release notes:

Cross VDS version vMotion Compatibility
With this advanced configuration option enabled, bi-directional vMotion between on-premises and VMware Cloud on AWS can be achieved across different virtual distributed switch (VDS) versions (greater than or equal to version 6.0). This must be enabled on the on-premises vCenter.

It turns out there is actually a way to allow vMotions across different VDS versions. This is important for VMC because the software stack will always be using a newer version than what we ship to our on-prem customers. However, due to this limitation, we could not benefit from the latest VDS version and instead had to default to VDS 6.0 to ensure that customers could migrate their workloads. The advanced setting mentioned in the release notes disables the strict compatibility check that is performed on the destination vCenter Server when a vMotion is initiated. This setting is now enabled by default on the VMC vCenter Server, which is why you can perform migrations across different VDS versions without having to do anything special on your on-prem vCenter Server.

UPDATE (11/07/21) - Thanks to Robert Cranedonk, it looks like you can now also vMotion across different NSX-T Logical Switches by adding a vCenter advanced setting called config.vpxd.network.allowVmotionBetweenLogicalSwitches and setting the value to true.

UPDATE (01/02/21) - If you are running vSphere 7.x, an additional advanced setting called config.vmprov.enableHybridMode must be configured and set to true. For more details, refer to VMware KB 79446. Thanks to reader Marc Alumbaugh for sharing this finding!

UPDATE (10/16/18) - With the release of vSphere 6.7 Update 1, customers can now also vMotion VMs from on-prem running on a VDS to VMC with NSX-T N-VDS.
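For those who prefer to apply the advanced settings mentioned in the updates above using PowerCLI rather than the vSphere UI, here is a rough sketch. It assumes an active Connect-VIServer session to the on-prem vCenter Server and that the settings do not already exist; adjust as needed for your environment:

# Allow vMotion across different NSX-T Logical Switches (see the 11/07/21 update)
New-AdvancedSetting -Entity $global:DefaultVIServer -Name config.vpxd.network.allowVmotionBetweenLogicalSwitches -Value true -Force -Confirm:$false

# Additional setting required for vSphere 7.x (see the 01/02/21 update and VMware KB 79446)
New-AdvancedSetting -Entity $global:DefaultVIServer -Name config.vmprov.enableHybridMode -Value true -Force -Confirm:$false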

[Read more...]

Categories // Automation, NSX, VMware Cloud on AWS, vSphere Tags // Cross vMotion, ExVC-vMotion, NSX, vmotion, VMware Cloud on AWS, xVC-vMotion

Quick Tip - Requirements for using Guest Operation APIs (Invoke-VMScript & Copy-VMGuestFile) in VMC

08.02.2018 by William Lam // 1 Comment

Since this question came up again today, I figured it was worth sharing in case others also had trouble using the vSphere Guest Operations API in VMware Cloud on AWS (VMC), which includes PowerCLI's Invoke-VMScript and Copy-VMGuestFile cmdlets. There are a couple of requirements that you must satisfy, both within the GuestOS as well as between your on-prem vSphere environment and VMC.

  1. VMware Tools installed and running. It may seem obvious, but I have had customers trying to use various scripts without realizing this was a requirement. You should also ensure that you are running the latest version of VMware Tools, especially as there are bugfixes that may impact the Guest Operations APIs (a quick PowerCLI check is shown after this list).
  2. VPN or Direct Connect (DX) configured between your on-prem vSphere environment and VMC. This is required because you will need access to the ESXi hosts, which is only available through a VPN or DX.
  3. A VMC firewall rule that allows access from your on-prem network to VMC's ESXi hosts on port 443, which is used for Guest Operations access, including transferring files to and from the GuestOS.
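For requirement #1, here is a quick way to spot-check VMware Tools status across your VMs from PowerCLI (a sketch only, assuming an active Connect-VIServer session):

# Report VMware Tools running status and version for each VM
Get-VM | Select-Object Name, @{N="ToolsRunningStatus";E={$_.ExtensionData.Guest.ToolsRunningStatus}}, @{N="ToolsVersion";E={$_.ExtensionData.Guest.ToolsVersion}}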


The VMC firewall rule is usually the thing that most folks forget about, and this is simply because, in most on-prem environments, access to ESXi over 443 is just sort of a default.

Once you have configured the VMC firewall to allow 443 to the ESXi hosts, you will be able to use the Guest Operations APIs, including Invoke-VMScript and Copy-VMGuestFile, against a VM running in VMC.
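With those requirements in place, a typical workflow looks something like the following. This is a sketch only; the VM name, guest credentials, and file paths below are placeholders:

$vm = Get-VM -Name "MyVMCWorkload"
$guestPassword = Read-Host -AsSecureString -Prompt "Guest OS password"

# Copy a local file into the GuestOS (Guest Operations API over 443 to the ESXi host)
Copy-VMGuestFile -Source "C:\scripts\hello.ps1" -Destination "C:\Temp\" -VM $vm -LocalToGuest -GuestUser "Administrator" -GuestPassword $guestPassword

# Run a command inside the GuestOS and return its output
Invoke-VMScript -VM $vm -ScriptText "ipconfig /all" -ScriptType PowerShell -GuestUser "Administrator" -GuestPassword $guestPassword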

Categories // Automation, PowerCLI, VMware Cloud on AWS, vSphere Tags // copy-vmguestfile, guest operations, invoke-vmscript, PowerCLI, VMC, VMware Cloud on AWS
