Considerations when migrating VMs between vCenter Servers

05.02.2014 by William Lam // 19 Comments

Something I really enjoy, when I get the chance, is speaking with our field folks and learning a bit more about our customer environments and some of the challenges they are facing. Last week I had a quick call with one of our TAMs (Technical Account Managers) regarding the topic of Virtual Machine migration between vCenter Servers. The process of migrating Virtual Machines between two vCenter Servers is not particularly difficult: you simply disconnect the ESXi hosts from one vCenter Server and re-connect them to the new vCenter Server. This is something I performed on several occasions when I was a customer, and with some planning it works effortlessly.

However, there are certain scenarios and configurations when migrating VMs between vCenter Servers that could potentially cause Virtual Machine MAC Address collisions. Before we jump into these scenarios, here is some background. By default, a Virtual Machine MAC Address is automatically generated by vCenter Server, and the algorithm is based on vCenter Server's unique ID (0-63), among a few other parameters, which is documented here. If you have more than one vCenter Server, a best practice is to ensure that these VC IDs are different, especially if they are in the same broadcast domain.
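To make the collision risk easier to picture, here is a rough PowerShell sketch of the generation scheme as I understand it from the documentation: generated addresses use the VMware OUI (00:50:56), the fourth octet is derived from the VC ID, and only the last two octets vary per Virtual Machine. The 0x80 offset below is my own reading of the docs, not VMware's actual code.

# Illustrative sketch - assumption: 4th octet = 0x80 + VC ID, per the documented scheme
$vcInstanceId = 1                      # vCenter Server unique ID (0-63)
$fourthOctet  = 0x80 + $vcInstanceId   # only the last two octets then vary per VM (65,536 combinations)
"Generated MACs: 00:50:56:{0:X2}:00:00 through 00:50:56:{0:X2}:FF:FF" -f $fourthOctet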

As you can imagine, if you have two vCenter Servers that are configured with the same VC ID, there is a possibility that a duplicate MAC Address could be generated. You might think this is a rare event given the ~65,000 possible MAC Address combinations. However, it actually happens more frequently than you might think, especially in very large scale environments and/or Dev/Test continuous build/integration environments, which I have worked in previously and where I have personally seen these issues.

Going back to our vCenter Server migration topic, there are currently two main scenarios that I see occurring in customer environments, and we can explore each in a bit more detail along with their implications:

  • Migrate ALL Virtual Machines from old vCenter Server to new vCenter Server
  • Migrate portion of Virtual Machines from old vCenter Server to new vCenter Server

Migrate all Virtual Machines:

[Diagram: vm-migration-between-vcenter-0]
In the diagram above, we have vCenter Server 1 and vCenter Server 2 providing a before/after view. To make things easy, let's say they have VC IDs 1 and 2. If we migrate ALL Virtual Machines across, we can see their original MAC Addresses will be preserved, as we expect. For any new Virtual Machines being created, the 4th octet of the MAC Address will differ as expected and the vCenter Server will guarantee it is unique. If you want to ensure that new Virtual Machines keep a similar algorithm, you could change the vCenter Server ID to 1. No issues here and the migration is very straightforward.

Migrate a portion of Virtual Machines:

[Diagram: vm-migration-between-vcenter-1]
In the second diagram, we still have vCenter Server 1 and vCenter Server 2 with unique VC IDs. However, in this scenario we are only migrating a portion of the Virtual Machines from vCenter Server 1 to vCenter Server 2. By migrating VM2 off of vCenter Server 1, the MAC Address of VM2 is no longer registered with that vCenter Server. What this means is that vCenter Server 1 can potentially re-use that MAC Address when it generates a new one. As you can see from the above diagram, this is a concern because VM2 is still using that MAC Address in vCenter Server 2, but vCenter Server 1 is no longer aware of its existence.

The scenario above is what the TAM was seeing at his customer's site, and after understanding the challenge, there are a few potential solutions:

  1. Range-Based MAC Address allocation - Allows you to specify a range of MAC Addresses to allocate from, which may or may not be helpful if the migrated MAC Addresses are truly random
  2. Prefix-Based MAC Address allocation - Allows you to modify the default VMware OUI (00:50:56), which would then ensure no conflicts are created with previously assigned MAC Addresses. Though this could solve the problem, you could potentially run into collisions with other OUIs within your environment
  3. Leave VMs in a disconnected state - This was actually a solution provided by another TAM on an internal thread, which ended up working for his customer. The idea is that instead of disconnecting and removing the ESXi host when migrating a set of Virtual Machines, you just leave it disconnected in vCenter Server 1. You would still be able to connect the ESXi host and Virtual Machines to vCenter Server 2, but from vCenter Server 1's point of view, the MAC Addresses for those Virtual Machines are still seen as in use and will not be reallocated.

I thought option #3 was a pretty interesting and out-of-the-box solution that the customer came up with. The use case that caused them to see this problem in the first place stems from the way they provision remote environments. The customer has a centralized build environment in which everything is built and then shipped off to the remote sites, which is a fairly common practice. Since the centralized vCenter Server is not moving, you can see how previously used MAC Addresses could be re-allocated.

Although option #3 would be the easiest to implement, I am not a fan of seeing so many disconnected systems from an operational perspective, as it is hard to tell whether an ESXi host and its Virtual Machines actually have an issue or have simply been migrated. I guess one way to help with that is to create a Folder called "Migrated", move all disconnected ESXi hosts into that folder to mask them away, and disable any alarms for those hosts.
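If you go down that path, something along these lines in PowerCLI could do the bulk of the work. This is just a rough sketch: the datacenter name is a placeholder, it assumes the disconnected hosts are standalone (not still inside a cluster) and that Move-VMHost accepts a host folder as the destination, and I have not validated it against this exact scenario.

# Rough sketch - placeholder names; assumes standalone, disconnected hosts
$dc         = Get-Datacenter -Name 'Primp-Skunkworks'
$hostFolder = Get-Folder -Name 'host' -Location $dc          # the datacenter's root host folder
$migrated   = New-Folder -Name 'Migrated' -Location $hostFolder
Get-VMHost -Location $dc | Where-Object { $_.ConnectionState -eq 'Disconnected' } |
    Move-VMHost -Destination $migrated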

Here are some additional pre-requisite checks that you can perform prior to the partial Virtual Machine migration:

Ensure that the destination vCenter Server is not configured with the same VC ID, else you can potentially run into duplicate MAC Address conflicts. You can do this either manually through the vSphere Web/C# Client or by leveraging our CLI/API.

Here is an example using PowerCLI to retrieve the vCenter Server ID:

Get-AdvancedSetting -Entity $server -Name instance.id
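For example, to compare the source and destination vCenter Servers side by side (the server names below are placeholders):

# Connect to both vCenter Servers and compare their instance IDs
$vc1 = Connect-VIServer -Server 'vcenter01.mydomain.com'
$vc2 = Connect-VIServer -Server 'vcenter02.mydomain.com'
(Get-AdvancedSetting -Entity $vc1 -Name instance.id).Value
(Get-AdvancedSetting -Entity $vc2 -Name instance.id).Value

If both return the same value, you will want to change one of them before migrating.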

Ensure no duplicate MAC Addresses exist by comparing the MAC Addresses of the Virtual Machines to be migrated with those of the Virtual Machines in the new environment. Again, you can either do this by hand (which I would not recommend) or by leveraging our CLI/API to extract this information.

Here is an example using PowerCLI to retrieve the MAC Addresses for a Virtual Machine:

Get-VM |  Select-Object -Property Name, PowerState, @{"Name"="MAC";"Expression"={($_ | Get-NetworkAdapter).MacAddress}}
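Building on that, here is a quick sketch that flags any MAC Addresses already present in both environments, re-using the $vc1/$vc2 connections from the earlier example:

# Any output from Compare-Object indicates a MAC Address present in BOTH vCenter Servers
$sourceMacs = Get-VM -Server $vc1 | Get-NetworkAdapter | Select-Object -ExpandProperty MacAddress
$destMacs   = Get-VM -Server $vc2 | Get-NetworkAdapter | Select-Object -ExpandProperty MacAddress
Compare-Object -ReferenceObject $sourceMacs -DifferenceObject $destMacs -IncludeEqual -ExcludeDifferent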

If there are other scenarios or solutions that you have seen with Virtual Machine migrations between vCenter Servers, feel free to leave a comment. I am sure others can benefit from past experiences or any other lessons learned.

Categories // vSphere Tags // mac address, migration, vCenter Server, vSphere

Blocking vSphere C# Client Logins

12.10.2012 by William Lam // 8 Comments

I recently picked up on this neat little tidbit from Mr. Not Supported, aka Randy Keener, where you can block a user from logging into the vCenter Server using the vSphere C# Client. Other than playing a prank on your co-workers, you might be wondering whether there is a use case for this. Surprisingly, this is a request I have heard from a few customers in the past who would like to block their users from using the vSphere C# Client in favor of leveraging only the vSphere APIs for routine tasks.

Since the vSphere C# Client itself uses the vSphere API, a user with proper credentials to the vSphere environment can easily download the client from an alternative source and still log in. Of course, there are ways of preventing this, such as restricting application installation on end users' desktops, but there is some amount of management overhead in identifying those existing and new users, especially if access is delegated out to other teams.

There is a very simple solution if you choose to block ALL users from using the vSphere C# Client: it requires only a tiny modification on the vCenter Server itself and takes effect immediately with no service restarts.

Disclaimer: This is probably not officially supported by VMware, use at your own risk.

Log in to your vCenter Server and locate a file called version.txt:

Windows: C:\ProgramData\VMware\VMware VirtualCenter\docRoot\client
VCSA: /etc/vmware-vpx/docRoot/client

There is a parameter called exactVersion which will be set to the current supported version of the vSphere C# Client, which should also match the version of your vCenter Server. You just need to change this to some other value that you know will not exist in your environment, such as 9.0.0. Once you have made this change, when a user tries to connect and there is a mismatch in the version, the vCenter Server will offer a download of the vSphere C# Client located on the server, just as it normally would if you did not have the latest client.
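For a Windows-based vCenter Server, a one-liner along these lines would do it. This assumes the value is stored as a simple exactVersion=x.y.z line, so eyeball the file (and keep a backup) before making the change:

# Hypothetical illustration - verify the file format in your environment first
$versionFile = 'C:\ProgramData\VMware\VMware VirtualCenter\docRoot\client\version.txt'
Copy-Item $versionFile "$versionFile.bak"
(Get-Content $versionFile) -replace '^exactVersion.*$', 'exactVersion=9.0.0' | Set-Content $versionFile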

What the user will find out shortly is that this continues even after installing the proper vSphere C# Client. The reason is that the value in version.txt will never match the vSphere C# Client version, so vCenter Server will just keep serving the installer in an infinite loop. I also looked into this trick for a standalone ESXi host: you can do the same by editing a file called clients.xml, which is located in /usr/lib/vmware/hostd/docroot/client, and users will not be able to login to the ESXi host using the vSphere C# Client.

Now, even though this prevents users from logging in with the vSphere C# Client, users will still be able to connect using the vSphere API, which includes the use of vCLI/ESXCLI, PowerCLI, vCO, SDKs, etc., and the use of the vSphere Web Client for either vSphere 5.0 or 5.1 will continue to work. Ideally, it would be nice to be able to control this access on a per user/group basis and perhaps even specify how a user can connect, whether that is through the use of the APIs or the UI only. Is this even useful to have at all? Would love to hear your comments.

For now, if you want users to get familiar with the new vSphere Web Client 5.1 ... this is one way of "encouraging" them 😉

Categories // ESXi, vSphere Tags // ESXi, vCenter Server, vsphere C# client, vsphere client

How to automatically add ESX(i) host to vCenter in Kickstart

03.29.2011 by William Lam // 65 Comments

While recently updating my Automating Active Directory Domain Join in ESX(i) Kickstart article, it reminded me of an old blog post by Justin Guidroz, who initially identified a way to add an ESXi host to vCenter using python and the vSphere MOB. The approach was very neat but was not 100% automated, as it required some user interaction with the vSphere MOB to identify certain API properties before one could script it within a kickstart installation.

I decided to revisit this problem as it was something I had investigated awhile back. There are numerous ways of getting something like this to work in your environment, but it all boils down to your constraints, naming convention and provisioning process. If you have a well defined environment, utilize a good naming structure, and can easily identify which vCenter a given ESX(i) host should be managed by, then this can easily be integrated into your existing kickstart with minor tweaks. This script was tested on vCenter 4.1 Update 1 and ESXi 4.1 and 4.1 Update 1.

UPDATE (03/29/2011): Updated the IP Address extraction to use gethostbyname and added proper logout logic after joining vCenter.

UPDATE (02/01/2013): I have provided a download link to the joinvCenter.py script below, as there have been some funky formatting issues when displaying the script. For ESXi 5.x hosts, you will need to ensure the httpClient ruleset is enabled (it is disabled by default) on the ESXi firewall, else the script will not be able to connect to your vCenter Server. Please refer below for the instructions.

There are a few steps that are necessary before we get started, as well as a recommended one for those who have security concerns around this solution.

Step 1 - You will need to extract some information from the vCenter Server which you would like your ESX(i) hosts to join. You will need to generate an inventory path to the vCenter cluster, which takes the form [datacenter-name]/host/[cluster-name]; this is used to automatically locate the managed object ID of your vCenter cluster, which is required as part of the host add process. This was a manual step in Justin's original solution.

In this example, I have a datacenter called "Primp-Skunkworks" and a cluster under that datacenter called "Primp-Skunkworks-Cluster", so the inventory path will look like the following:

"Primp-Skunkworks/host/Primp-Skunkworks-Cluster"

You will need this value to populate a variable in the script, which will be described a little bit later.

Step 2 - As you may have guessed, to add an ESX(i) host to vCenter, you will need to connect to the vCenter Server with an account that has the permission to add a host. It is recommended that you do not use or expose any administrative accounts for this, as the credentials are stored within the script unencrypted. A workaround is to create a service account, whether that is a local account or an Active Directory account, with only the permission to add an ESX(i) host to a vCenter cluster. You will create a new role, in this example called "JoinvCenter", and you just need to grant it the Host->Inventory->Add host to cluster privilege.

Once you have created the role, you will need to assign it to the service account user, either globally in vCenter if you want to add hosts to multiple clusters, or at a given datacenter/cluster.
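If you prefer to script this part as well, a minimal PowerCLI sketch might look like the following. The privilege ID (Host.Inventory.AddHostToCluster) and the service account name are my assumptions, so double-check the privilege name in your vCenter before relying on it:

# Minimal sketch - privilege ID and account name are assumptions, adjust for your environment
$priv = Get-VIPrivilege -Id 'Host.Inventory.AddHostToCluster'
$role = New-VIRole -Name 'JoinvCenter' -Privilege $priv
New-VIPermission -Entity (Get-Cluster -Name 'Primp-Skunkworks-Cluster') -Principal 'PRIMP\svc-joinvc' -Role $role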

Now that we have the pre-requisites satisfied, we will need to populate a few variables within the script, which will be used in the %post section of your ESX(i) kickstart configuration file.

This variable defines the name of your vCenter Server; please provide the FQDN:

This variable defines the vCenter cluster path which was generated earlier:

These variables define the service account credentials used to add an ESX(i) host to vCenter. You will need to encode the selected password by running the following command on a system with a python interpreter:

python -c "import base64; 
print base64.b64encode('MySuperDuperSecretPasswordYo')"

Note: This does not encrypt your password; it only obfuscates it slightly so that you are not storing the password in plain text. If a user has access to the encoded hash, it is trivial to decode it.
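To underline that point, here is the same round trip in PowerShell; anyone who can read the kickstart file can reverse the value just as easily:

# Base64 is reversible - this is obfuscation, not encryption
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('MySuperDuperSecretPasswordYo'))
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))   # returns the original password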

These variables define the ESX(i) root credentials, which are required as part of the vCenter add process. If you do not want to store these in plain text, you will also need to encode them using the command in the previous section:

We are now all done and ready to move forward with the actual script which will be included in your kickstart configuration. As a sanity check, you can run this script manually on an existing ESX(i) host to ensure that the process works before testing it in kickstart. For ESXi 5.x hosts, ensure the httpClient firewall ruleset is enabled with the following ESXCLI command:

esxcli network firewall ruleset set -e true -r httpClient

You should also ensure this is the very last script to execute, as I ran into a race condition while the root password was being updated automatically by the default 999.* scripts. To ensure this is the very last script, set the --level to something like 9999 in your %firstboot stanza.

Download: joinvCenter.py

To aid in troubleshooting, the script also outputs its details to syslog; on ESX(i) these are stored in /var/log/messages and you can just search for the string "GHETTO-JOIN-VC". If everything is successful after the %firstboot section has completed, you should see the ESX(i) host join vCenter, along with the corresponding entries in the logs.

Tips: You should only see "Success" messages; if you see any "Failed" messages, something went wrong. If you are still running into issues, make sure your ESX(i) host has its hostname configured with a FQDN, and check your vCenter Server for an error indicating whether the failure was due to the hostname and/or credentials. You can also redirect the output of the script to a local VMFS volume for troubleshooting after the fact.

Depending on your provisioning process and how you determine which ESX(i) host should join which vCenter/cluster, you can easily add logic to the main kickstart configuration file to determine this automatically or extract it from a configuration file, and dynamically update the joinvCenter.py script prior to execution.

I would like to thank Justin Guidroz and VMTN user klich for their contributions on the python snippets that were used in the script. 

FYI - I am sure the python code could be cleaner, but I will leave that as an exercise for those more adept at python. My python-fu is not very strong 😉

Categories // Uncategorized Tags // ESXi 4.1, kickstart, vCenter Server
