
How to automate VM deployment from large USB keys using ESXi Kickstart?

10.08.2014 by William Lam // 8 Comments

During VMworld US, I had the opportunity to speak with several customers to learn about their VMware environment and some of the challenges they were facing. In some scenarios, I was able to offer a solution or a different way of solving the problem. For others, it was primarily feedback on how we can improve some of our capabilities/features, or specific feature requests they would like to see added.

One interesting challenge came from a class of customers who manage hundreds of remote sites and need to fully automate the provisioning of an ESXi host as well as a set of Virtual Machines as part of the initial deployment. The provisioning is all done through Kickstart (unattended installation of ESXi), usually from a USB device, though it could also be from a custom ISO. One ask that kept coming up was support for larger USB keys within ESXi so that they could be used to include additional payload.

As some of you may or may not know, ESXi can only access USB devices within the ESXi Shell if they are formatted with the FAT16 filesystem, which limits each partition to a maximum of 2GB. However, this limitation only applies to the ESXi Shell itself, and for the size of the ESXi installation media it is more than sufficient. If you wish to leverage larger USB keys, whose capacities have increased significantly in recent years to 32GB, 64GB and even 128GB, you can pass the device directly into any guest OS through the USB Arbitrator Service (enabled by default) and consume the entire capacity of the USB device there. The challenge is how to bootstrap ESXi as well as the initial set of Virtual Machines within these limitations, completely automated using an ESXi Kickstart.

Over the years I have seen some really creative solutions to this problem and, funnily enough, right before VMworld I had several folks reach out asking similar questions. I decided to take a look and also build upon some earlier work done by a fellow VMware SE (Tim S) to come up with a completely automated solution that would scale to any size USB device and hopefully be easy to extend if needed.

For this project, I used a 64GB USB key which I received from the folks over at Micron who I visited in the Solution Exchange during VMworld US (these guys are doing some really awesome stuff with VSAN and an All-Flash array, be sure to check them out).

[Image: automate-esxi-kickstart-and-servicevm-usb-3]
Here is a diagram of the partition structure for the 64GB USB key which I will explain further:

[Image: esxi-usb-partition]
The first partition is 2GB using a FAT16 filesystem and this is used to store the actual ESXi media along with an embedded ESXi Kickstart configuration file. You can easily reference a remote Kickstart if you wish, but for simplicity purposes and to support some of the requested use cases from customers, I have embedded it.

The second partition is also 2GB using a FAT16 filesystem and is used to store a tiny VM which I am calling a "Service VM". This VM needs to be small enough to fit within the partition and will be used to read the remaining capacity of the USB device, which uses a more capable filesystem type. I have decided to store a pre-configured vMA appliance which is tarred up to reduce the disk footprint.

The third and final partition consumes the remaining capacity of the USB device, in this case roughly 60GB, and uses FAT32, which supports up to 2TB for a single volume. This is where the additional Virtual Machines are stored and accessed by the "Service VM".
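For reference, here is a rough sketch of how such a partition layout could be created ahead of time from a Linux workstation using parted and mkfs.vfat. The device name /dev/sdX, the exact partition boundaries and the ESXI/PAYLOAD labels are assumptions you would adjust for your own USB key (the SERVICEVM label matches the volume name referenced in the kickstart below); making the first partition bootable with the ESXi installer is a separate step not shown here:

# WARNING: destructive - double check the device name before running
DEV=/dev/sdX   # assumed device name of the 64GB USB key

# MBR partition table: 2GB + 2GB + remainder
sudo parted -s ${DEV} mklabel msdos
sudo parted -s ${DEV} mkpart primary fat16 1MiB 2GiB    # ESXi media + embedded ks.cfg
sudo parted -s ${DEV} mkpart primary fat16 2GiB 4GiB    # "Service VM" (vMA tarball)
sudo parted -s ${DEV} mkpart primary fat32 4GiB 100%    # additional VMs / OVAs

# Format and label each partition
sudo mkfs.vfat -F 16 -n ESXI ${DEV}1
sudo mkfs.vfat -F 16 -n SERVICEVM ${DEV}2
sudo mkfs.vfat -F 32 -n PAYLOAD ${DEV}3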

As you can probably guess, the idea is to install ESXi as you normally would to a local disk, or directly onto the USB device, in which case an additional partition would be required. As part of the installation, the "Service VM" is bootstrapped since it is visible within the ESXi Shell, and it is registered and powered on during the first bootup. A firstboot script can then be included in the guestOS which receives some details about the ESXi deployment; these could be hard coded (not recommended) or dynamically discovered, as I have implemented it. The USB device is then passed directly to this "Service VM" to mount, and from there it is able to deploy the remaining Virtual Machines stored in the larger partition.

Here is the complete ESXi Kickstart which implements what has been discussed so far, along with a breakdown of the kickstart below:

vmaccepteula
install --firstdisk --overwritevmfs
rootpw vmware123
reboot

network --bootproto=static --ip=192.168.1.200 --netmask=255.255.255.0 --gateway=192.168.1.1 --hostname=mini.primp-industries.com --nameserver=192.168.1.1 --addvmportgroup=1

%post --interpreter=busybox

# stop USB Arbitrator service to access USB device in ESXi Shell
/etc/init.d/usbarbitrator stop

# copy service VM to local VMFS datastore
cp /vmfs/volumes/SERVICEVM/vMA.tar.gz /vmfs/volumes/datastore1
tar -zvxC /vmfs/volumes/datastore1 -f /vmfs/volumes/datastore1/vMA.tar.gz
rm -f /vmfs/volumes/datastore1/vMA.tar.gz

# add guestinfo property for ESXi IP Address for adv. VM deployment
ESXI_IP=$(localcli network ip interface ipv4 get | grep vmk0 | awk '{print $2}')
echo "guestinfo.esxi_ip = ${ESXI_IP}" >> /vmfs/volumes/datastore1/vMA/vMA.vmx

%firstboot --interpreter=busybox

# Ensure hostd is ready
while ! vim-cmd hostsvc/runtimeinfo; do
  sleep 10
done

# enable & start SSH
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# enable & start ESXi Shell
vim-cmd hostsvc/enable_esx_shell
vim-cmd hostsvc/start_esx_shell

# Suppress ESXi Shell warning
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

# rename datastore1
vim-cmd hostsvc/datastore/rename datastore1 mini-local-datastore-1

# Register VM
vim-cmd solo/registervm /vmfs/volumes/mini-local-datastore-1/vMA/vMA.vmx

# connect USB device via passthrough
USB_DEV_NAME="Alcor Micro Corp"
USB_DEV_BUSID=$(lsusb | grep "${USB_DEV_NAME}" | awk '{print $2}' | cut -c 2)
vim-cmd vmsvc/device.connusbdev 1 "path:${USB_DEV_BUSID}/0/1 version:2"

# power on VM
vim-cmd vmsvc/power.on 1

usbarbitrator stop - The USB Arbitrator Service needs to be stopped so the USB device is visible within the ESXi Shell, since by default it is claimed so it can be exposed to a VM. The service is automatically re-enabled after the installation of ESXi, which allows the "Service VM" to connect to the USB device.

cp / tar / rm - Copies the "Service VM" from the USB device to the local VMFS datastore1 and extracts it. In this example, I have pre-configured the vMA appliance and tarred up its VMX and respective VMDK.

ESXI_IP / guestinfo - Extracts the ESXi IP Address and sets a custom guestInfo property so the "Service VM" knows where to deploy the additional VMs.

while loop - Checks to ensure hostd is up and running before continuing on.

vim-cmd solo/registervm - Registers the "Service VM" within ESXi.

USB_DEV_NAME / USB_DEV_BUSID - Identifies the USB device ID which is required to connect the device to the "Service VM". You will need to update USB_DEV_NAME based on the USB device you are using (see the lsusb example right after this breakdown).

vim-cmd vmsvc/device.connusbdev - Connects the USB device to the "Service VM".

vim-cmd vmsvc/power.on - Powers on the "Service VM".
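If you are unsure what value to use for USB_DEV_NAME, you can check the vendor string of your key directly from the ESXi Shell once the USB Arbitrator Service has been stopped; the sample output below is purely illustrative and will differ for your device:

# List USB devices visible to the ESXi host
lsusb

# Example output line (vendor/product will differ):
# Bus 001 Device 002: ID 058f:6387 Alcor Micro Corp. Flash Drive
#
# Use a unique portion of the vendor string (e.g. "Alcor Micro Corp")
# as the value of USB_DEV_NAME in the kickstart above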

At this point, you should be able to access the USB device from within the "Service VM". We can easily verify this by running the following command:

sudo fdisk -l

[Image: automate-esxi-kickstart-and-servicevm-usb-0]
As seen in the screenshot above, we can see our three partitions, and the third is the one with our FAT32 partition which contains a couple of Virtual Machines that I want to deploy. Of course, this partition can contain anything you wish to store, so the sky is the limit!

To mount the specific partition of the USB device, we will create a temporary directory and issue the mount command by running these two commands:

sudo mkdir -p /mnt/USB; sudo mount /dev/sdb3 /mnt/USB

[Image: automate-esxi-kickstart-and-servicevm-usb-1]
For my USB key, I have stored both the VCSA and NSX Manager OVA, which can then be deployed using ovftool. The last piece needed to make this as seamless and automated as possible is identifying the ESXi host information. If you recall, earlier we set a custom guestInfo property on our "Service VM". This custom property can be read by the guestOS leveraging VMware Tools and provides the IP Address of the ESXi host to the guest. You can easily set other metadata as well, but to deploy these additional OVAs we need to know the IP Address of the ESXi host, and this way you do not need to hard code anything (except perhaps the ESXi host credentials).

To retrieve this custom property, you will need to run the following command:

vmtoolsd --cmd "info-get guestinfo.esxi_ip"

[Image: automate-esxi-kickstart-and-servicevm-usb-2]
With these last few guestOS commands, you will be able to create a firstboot script which automatically mounts the appropriate USB partition and deploys these additional Virtual Machines. This is just one of the many possibilities for deploying additional VMs as part of your ESXi Kickstart deployment. Hopefully this solution provides a base which you can easily customize based on your own requirements.
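To give you an idea of what that could look like, here is a minimal sketch of such a firstboot script running inside the vMA "Service VM". The OVA filename, network name and ovftool options are assumptions to adjust for your environment; the partition device, datastore name and root password simply follow the examples in this post:

#!/bin/bash
# Sketch of a "Service VM" firstboot script (assumptions noted inline)

# Retrieve the ESXi IP Address injected via guestinfo in the %post section
ESXI_IP=$(vmtoolsd --cmd "info-get guestinfo.esxi_ip")

# Mount the large FAT32 partition of the USB device passed through to this VM
# (assumes the USB key shows up as /dev/sdb inside the guest)
sudo mkdir -p /mnt/USB
sudo mount /dev/sdb3 /mnt/USB

# Deploy the additional Virtual Machines stored on the USB key using ovftool
# (hypothetical OVA filename and network; credentials match the kickstart rootpw)
ovftool --acceptAllEulas --noSSLVerify --skipManifestCheck \
  --datastore=mini-local-datastore-1 --network="VM Network" \
  /mnt/USB/VMware-vCenter-Server-Appliance.ova \
  "vi://root:vmware123@${ESXI_IP}/"

# Clean up
sudo umount /mnt/USB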

Categories // Automation, ESXi, vSphere Tags // ESXi, fat16, fat32, kickstart, ks.cfg, sd, usb, vSphere

How to move a VSAN Cluster from one vCenter Server to another?

09.26.2014 by William Lam // 42 Comments

I recently caught an interesting VMTN thread where a user wanted to move an existing VSAN Cluster from one vCenter Server to another vCenter Server with minimal impact to the ESXi hosts and running Virtual Machines. The great news is that this can be done without any impact to your ESXi hosts and, more importantly, no impact to your workloads. I have personally performed this operation on several occasions without any problems, and the process is actually quite straightforward, so I thought I would walk you through it because it is literally just a couple of steps.

The main reason this is not a challenge is that VSAN has been architected not to rely on vCenter Server for its normal operations. It is true that vCenter Server is required for the configuration and management of the VSAN Cluster and VM Storage Policies, but once those configurations have been applied, the vCenter Server is no longer in the picture from an operational point of view. This means that if you need to move your VSAN Cluster from a development vCenter Server to a production vCenter Server, or if you accidentally destroyed your original vCenter Server, the VSAN Cluster can easily be re-created on a new vCenter Server.

To demonstrate the process, I have a 3 Node VSAN Cluster with a running Virtual Machine on vCenter Server (vcenter55-1) and I have built a new vCenter Server (vcenter55-3) which I would like to move the existing VSAN Cluster over to.

UPDATE2 (11/02/17) - There was a question a couple of weeks back on whether the procedure outlined below could also apply to a vSAN Stretched Cluster. I did not see any technical reasons preventing this and one of our GSS Engineers had recently validated this with a customer and successfully moved a vSAN Stretched Cluster. I asked if he could share the modified instructions in case others were interested.

  1. Copy all VDS settings to new cluster
  2. Enable vSAN on new cluster (follow Step 2 below)
  3. Disable stretched cluster
  4. Move each host
  5. Move witness
  6. Re-enable stretched cluster (follow Step 4 below)

Step 1 - Deploy a new vCenter Server and create a vSphere Cluster with VSAN Enabled.

[Image: migrate-vsan-cluster-from-one-vcenter-to-another-0]
Step 2 -

UPDATE1 (05/02/17) - Updated to include vSAN 6.6 specific instructions.

Pre-vSAN 6.6 - Disconnect one of the ESXi hosts from your existing VSAN Cluster and then add that to the VSAN Cluster in your new vCenter Server.

Note: Technically, you do not even have to disconnect the ESXi hosts from the old vCenter Server. You could just add the ESXi hosts to the new vCenter Server and once you have confirmed you wish to move the ESXi host, it will automatically be disconnected once added. This would actually save you an extra step.

vSAN 6.6 - An additional configuration needs to be applied to all ESXi hosts PRIOR to disconnecting them from the original vCenter Server and adding them into the new vCenter Server. Below are a few examples of how to apply the ESXi Advanced Setting, which should be set to a value of 1:

Here is an example using ESXCLI (local or remotely) on an individual ESXi host:

esxcli system settings advanced set -o /VSAN/IgnoreClusterMemberListUpdates -i 1

Here is an example of using PowerCLI to apply the setting across all ESXi hosts if the original vCenter Server is still available:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
}

Here is an example of using PowerCLI to apply the setting directly to an ESXi host if the original vCenter Server is no longer available:

Get-VMHost -Name 192.168.1.100 | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
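To confirm the setting actually took effect on a given host, you can query it back from the ESXi Shell (or over SSH); a quick sanity check:

# Should report an Int Value of 1 after the change (and 0 again after Step 4 below)
esxcli system settings advanced list -o /VSAN/IgnoreClusterMemberListUpdates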

[Image: migrate-vsan-cluster-from-one-vcenter-to-another-1]
Once you have successfully added the ESXi host, you should see a warning within the VSAN Configuration page stating "Misconfiguration detected", which is expected. What is happening is that this ESXi host has been configured in an existing VSAN Cluster and the ESXi hosts that it expects to be able to communicate with are not yet part of this VSAN Cluster. Once we add the remaining ESXi hosts, the VSAN Cluster will be happy and this error will go away.

Note: If you try to add all of the ESXi hosts from the existing VSAN Cluster to the new VSAN Cluster at once, you will see an error regarding a UUID mismatch. The trick is to add one host first; once that has been done, you can then bulk add the remaining ESXi hosts without any issue. This is handy if you are trying to automate this process.

Step 3 - Add the remaining ESXi hosts to the VSAN Cluster in the new vCenter Server. Once all hosts have been added to the new VSAN Cluster, you will see the warning icons disappear and your VSAN Cluster is now fully managed by the new vCenter Server. We can also confirm that there are no network partitions, as all of the original VSAN configurations have been retained on the ESXi hosts.
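One quick way to double-check this from each ESXi host, independent of vCenter Server, is ESXCLI; a minimal example, assuming you have SSH or ESXi Shell access to the hosts:

# Shows the vSAN cluster UUID, the local node state and the Sub-Cluster Member Count,
# which should equal the number of hosts in your VSAN Cluster (3 in this example)
esxcli vsan cluster get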

UPDATE1 (05/02/17)

Step 4 - This last step is ONLY applicable to vSAN 6.6 hosts. Once all hosts have been successfully added to the new vCenter Server and you have verified the cluster status is healthy and there are no network partitions, we need to change the ESXi Advanced Setting we set earlier from a value of 1 back to a value of 0.

Here is a PowerCLI snippet which, given a vSAN Cluster, will automatically go through all ESXi hosts and update the setting:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 0 -Confirm:$false
}

[Image: migrate-vsan-cluster-from-one-vcenter-to-another-2]
Disclaimer: As mentioned, there is no impact to the ESXi hosts (other than not being able to manage them while you disconnect and re-connect them on the new vCenter Server) and there is no impact to the running Virtual Machines; any VM Storage Policies that have been applied to the VMs will still be enforced by each of the ESXi hosts. However, one thing to be aware of is that the VM Storage Policies in your original vCenter Server will not be available in the new vCenter Server. You will need to re-create each of the VM Storage Policies and re-attach them to the existing Virtual Machines. This can of course be automated by using the vSphere API or by leveraging the new PowerCLI 5.8 R1 release, which includes VM Storage Policy cmdlets.

Here is an example of exporting a VM Storage Policy named "FTT=1" to a file called policy.xml on your desktop:

Export-SpbmStoragePolicy -StoragePolicy (Get-SpbmStoragePolicy -Name FTT=1) -FilePath C:\Users\Administrator\Desktop\policy.xml

Currently this is the only impact of moving a VSAN Cluster from one vCenter Server to another, and of course this assumes you have created VM Storage Policies aside from the default policies.

I received a couple of questions regarding the networking setup for my VSAN Cluster. In the above example I was using a VSS (Virtual Standard Switch). I did, however, retest this scenario completely on a VDS (Virtual Distributed Switch) and the results were the same. When all ESXi hosts have been added to the new vCenter Server, you will see a warning about a proxy host switch. The key to properly migrating the networks (VMkernel & VM Portgroups) is to add each ESXi host to the new VDS that you will need to create. If your original vCenter Server is still available, you can export and import the VDS configuration. If it is not available, then you will need to manually re-create the Distributed Portgroups before proceeding.

The first step is to go to the Networking view, right click and select "Add and Manage Hosts".

[Image: migrate-vsan-cluster-from-one-vcenter-to-another-3]
Go ahead and walk through the guided wizard and make sure you only add one host at a time, as I saw issues when trying to add multiple hosts at once. Once the ESXi host has been added to the new VDS and its uplinks, VMkernel interfaces and VM Portgroups are all connected, you should see two VDS under the Networking view of the ESXi host under "Manage".

[Image: migrate-vsan-cluster-from-one-vcenter-to-another-4]
This can be seen clearly using the vSphere C# Client, as it allows you to view both on the same screen. Once you have confirmed that everything looks good, you can go ahead and remove the old VDS as shown in the screenshot above. At this point, your ESXi host's networking is running on the new VDS. You then continue this same workflow for the remaining ESXi hosts until they have all been migrated over to the new VDS.

Categories // ESXi, VSAN, vSphere, vSphere 5.5 Tags // ESXi, Virtual SAN, VSAN, VSAN 6.6

Community stories of VMware & Apple OS X in Production: Part 8

09.25.2014 by William Lam // 3 Comments

Company: Mid-Pacific Institute (Private School in Hawaii)
Software: VMware vSphere and Fusion
Hardware: Apple Mac Pro

[William] - Hi Derick, I appreciate you taking some time out of your busy schedule to talk with us all the way from Hawaii 🙂 Before we get started, can you quickly introduce yourself?

[Derick] - Sure William. My name is Derick Okihara, and I work for Mid-Pacific Institute. We are a private K-12 institution with about 1600 students. My role here is general IT and server administration. I've been working with computers since I was in high school. I have been a long time Apple user (since //gs), but really started working with them professionally about 10 years ago. We currently have a 1:1 iPad program for the students, and 2:1 iPad+laptops for faculty, so there's a lot of technology to support.

[William] - That’s awesome Derick. Look forward to hearing more about your environment. Speaking of which, I hear you are currently managing some Apple hardware running on VMware? Could you tell us a little bit about the hardware configuration and the VMware software you are currently using?

[Derick] - We are currently using vCenter with ESXi 5.5; we have 2 Mac Pro 5,1s in a cluster with a Synology 1813+ for shared storage. The network storage is connected via the iSCSI software initiator using round-robin. We also use VMware Fusion for the Mac. The Mac Pros have 24GB of RAM and 4-port Intel Gigabit Ethernet cards for a total of 6 ports each.

[William] - What made you decide on using a Synology for shared storage and what configuration/capacity did you go with? Were there any other options you were looking at?

[Derick] - A lot of the decisions for this setup were made on price. How this all started was that I was asked to create an ESXi server to host a VM Appliance to run our campus-wide Informacast speaker system. I had already been planning an ESXi deployment on the Mac, testing on a Mac Mini. Instead of building a PC server just for this appliance, I was given the OK to build on an existing Mac Pro, so it could serve multiple purposes.

Being forward thinking, I knew we needed redundancy, so I opted to go for network storage. With a tight budget, and being able to use CPUs we already had, I decided on the Synology 1813+ for its 4 gigabit ports. This allowed me to later add the 2nd host to our vCenter cluster when we expanded.

[William] - Can you talk a little bit about the type of workloads and applications you are currently virtualizing on the Mac Pro’s and are these all OSX VMs?

[Derick] - Since this is still version 1.0, we aren't heavily taxing our cluster. Right now it hosts 5 VMs (2 Virtual Appliances, 2 OS X Servers, 1 Windows Server). I'd want to add more RAM as well; OS X VMs are very RAM hungry.

The OS X servers are a student file server (AFP/SMB) and an Apple Caching server / Munki repository. The Windows server is mostly a test bed; the Virtual Appliances are the aforementioned Informacast manager and vCenter Appliance.

Here is a picture of Derick's two Apple Mac Pros:

[Image: derick-mac-pro]
[William] - The Mac Pro has a maximum amount of memory that it supports; do you plan on expanding the infrastructure to accommodate additional workloads, or would you be looking at upgrading to the latest generation of Mac Pro (black)?

[Derick] - Honestly, with our current needs and budget, I think I would be looking at the next generation Mac Mini combined with some sort of PCI-E enclosure, like the Sonnet XMac server. I know the Mini will likely not be fully supported, but I like what I've seen on virtuallyGhetto with the current generation 🙂 That is, if the next gen Mac Mini supports 32GB of RAM!

[William] - Very cool! So from your point of view, you would rather have more Mac Minis than a couple of Mac Pros? It sounds like cost plays a huge factor, but what other constraints or requirements are making you lean more towards Mac Minis instead of going to a new Mac Pro, which can get up to 64GB of memory and a 6-Core CPU?

[Derick] - Footprint - the Mac Pros we currently have take up a large amount of rack space. Even the new Mac Pros would not be rack mountable without an additional enclosure. For us, having 3 x Mac Mini with 32GB of RAM would be the ideal price/performance ratio (we have a 3 CPU license). Eventually our Mac Pro 5,1s will die, so I'm already thinking about what's next. Having 3 x Mac Mini servers in a cluster that takes up only 3U would be pretty sweet!

[William] - Speaking of support, did you purchase any type of extended contracts with Apple on the hardware or you going to deal with them on a case by case basis? Have you had any issues with failing hardware on the Mac Pro’s?

[Derick] - We only had the initial Apple Care (now since expired). We have 1 spare Mac Pro currently running other loads but that could be migrated in the event that we have a hardware failure. We have not had many issues on the Mac Pro 5,1s other than internal hard drive failure. They've been rock solid.

[William] - Has there been any interesting issues or challenges you had faced while setting up this infrastructure? Either the hardware, software or managing the VMs?

[Derick] - This whole process was a learning experience for me. At a previous job, I had inherited an ESXi server running multiple CentOS and Ubuntu VMs, but I had never set it up myself, let alone on a Mac. Thanks to the multiple resources on the web (Rich Trouton's blog, VirtuallyGhetto, and a P2V script from Alan Gordon at MacSysAdmin), the process and gotchas were mostly worked out before me.

The biggest challenge for me was configuring the Synology for iSCSI-round robin. In my research I found that one could utilize multiple gigabit connections with Multi-path IO for higher bandwidth. After lots of configuring and back and forth with Synology / VMware support, I finally found the proper settings that allowed me to utilize more than 1 gigabit link.
However, after I updated to ESXi 5.5, it broke.

I was stuck, because I needed to upgrade to 5.5 in order to run an OS X caching server (12-character serial number fix in 5.5). But Synology said the 1813+ was not compatible with 5.5 and would not help me. Long story short, one of my hosts is running 5.5 (with OS X Caching server) and the other host is running 5.1 (file services) because it needs the greater throughput.

[William] - Derick, I want to thank you for taking the time and sharing with us your experiences with managing VMware and Apple hardware. Before I let you go, do you have any tips for our readers that may be in a similar environment (academic) and need to build out an infrastructure to support their end users? Any gotchas or things you would recommend if you had to do this all over again?

[Derick] - Anyone looking to reduce their machine footprint should definitely look into virtualization. VMware has very attractive pricing for the EDU market if you're looking to build a cluster with high availability, or you can run a single host for free. The best piece of advice I can give is just to test thoroughly. Virtualization is very complicated, and combines a multitude of areas of expertise (storage, networks, workloads, and the ESXi platform itself). It can be daunting but it's very rewarding. If you get stuck, just ask William on Twitter @lamw, jk

If you are interested in sharing your story with the community (can be completely anonymous) on how you use VMware and Mac OS X in Production, you can reach out to me here.

  • Community stories of VMware & Apple OS X in Production: Part 1
  • Community stories of VMware & Apple OS X in Production: Part 2
  • Community stories of VMware & Apple OS X in Production: Part 3
  • Community stories of VMware & Apple OS X in Production: Part 4
  • Community stories of VMware & Apple OS X in Production: Part 5
  • Community stories of VMware & Apple OS X in Production: Part 6
  • Community stories of VMware & Apple OS X in Production: Part 7
  • Community stories of VMware & Apple OS X in Production: Part 8
  • Community stories of VMware & Apple OS X in Production: Part 9
  • Community stories of VMware & Apple OS X in Production: Part 10

 

Categories // Apple, ESXi, vSphere Tags // apple, ESXi, fusion, mac pro, osx, vSphere

