WilliamLam.com

How to VMFork aka Instant Clone Nested ESXi?

08.03.2015 by William Lam // 15 Comments

The VMware Flings team recently released an update to the existing PowerCLI Extensions Fling which now exposes the new VMFork aka Instant Clone capability that was introduced in vSphere 6.0. The Fling contains a set of PowerCLI Extension Modules which in turn provide new PowerCLI cmdlets for accessing the Instant Clone feature. The idea behind the Fling is to help VMware understand how customers would like to consume the Instant Clone feature, not only from a CLI point of view but also from an API and UI standpoint. Prior to this, Instant Clone was only available through the use of either Horizon View or the Big Data Extensions product. I think this is a great opportunity for customers and partners to help shape how Instant Clone should be consumed more generally.

One of the use cases I had in mind when I first heard about the Instant Clone feature was to be able to quickly instantiate new Nested ESXi VMs. When I got the opportunity to test out early prototypes of the Instant Clone cmdlets and provide feedback on usability improvements, I knew I had to give Nested ESXi a try!

Requirements:

  • Fresh installation of Nested ESXi 6.0 in a VM (unconfigured)
  • PowerCLI 6.0 Release 1
  • Instant Clone PowerCLI Extensions Fling
  • Nested ESXi 6.0 Instant Clone Scripts

High level process:

  1. A "preparation" script will be manually uploaded & executed within the Nested ESXi VM (Parent VM) to prep the system for Instant Cloning
  2. As the Parent VM is quiesce, both the pre/post customization script will be uploaded to the Parent VM automatically. The "pre-customization" is also then executed within the Parent VM which properly setups the library path to the VMware Tools binary (applicable to ESXi 6.0 only) and is then placed in a ready state for creating Instant Clones
  3. As new Instant Clone (Child VMs) are spun up, the "post-customization" script is automatically executed to add additional configurations and most importantly ensure newly created Instant Cloned Nested ESXi VMs have unique network identities

Note: For Instant Cloning regular OSes, only steps 2 and 3 are really needed. Due to a known issue with VMware Tools for Nested ESXi, I have found that it is easier to prepare the Nested ESXi VM prior to quiescing and creating Instant Clones from the Parent VM.

Instructions:

Step 1 - Download and install both PowerCLI 6.0 Release 1 & Instant Clone PowerCLI Extensions Fling.

Step 2 - Perform a fresh Nested ESXi 6.0 installation in a VM; do not configure any additional settings outside of enabling ESXi Shell and SSH.

Step 3 - Download the Nested ESXi 6.0 Instant Clone Scripts which contains the following four files:

  • prep-esxi60.sh - Prepares the Nested ESXi VM and ensures that new Child VMs will not retain the Parent VM's MAC Address, which is baked into several places
  • pre-esxi60.sh - Pre-customization script which is used to properly set up the library paths so the VMware Tools daemon can retrieve guest properties from the PowerCLI script
  • post-esxi60.sh - Post-customization script which is used to apply the networking configuration and hostname, for example
  • vmfork-esxi60.ps1 - An example PowerCLI script which issues the Instant Clone cmdlets

Note: For out-of-the-box use, the only script that needs to be modified is the PowerCLI "vmfork-esxi60.ps1" script; the rest of the scripts should work with little to no modification, assuming you have followed the instructions thus far.

Step 4 - Upload prep-esxi60.sh to the Nested ESXi 6.0 VM (Parent VM) and then execute it using either the ESXi Shell over SSH or through a VMRC session. If you use SSH, you will notice that the script hangs; that is because the VMkernel interface is deleted as part of the script.

Step 5 - Next, we need to make a few edits to the vmfork-esxi60.ps1 script to update the name of your ESXi VM, along with its credentials and the full paths to both the pre- and post-customization scripts. Below is an example of the variables that you will need to edit:

$parentvm = 'vESXi6'
$parentvm_username = 'root'
$parentvm_password = 'vmware123'
$precust_script = 'C:\Users\lamw\Desktop\vmfork\esxi60\pre-esxi60.sh'
$postcust_script = 'C:\Users\lamw\Desktop\vmfork\esxi60\post-esxi60.sh'

The section shown below will also need to be edited; it contains the customization properties which are passed down to the guestOS for configuration as part of the Instant Clone process.

$configSettings = @{
    'hostname'  = "$vmname.primp-industries";
    'ipaddress' = "192.168.1.$_";
    'netmask'   = '255.255.255.0';
    'gateway'   = '192.168.1.1';
}
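To illustrate where the "$_" placeholder comes from, here is a purely hypothetical sketch (not part of the Fling or the provided scripts) of how per-clone settings could be generated in a loop so that each Child VM receives a unique hostname and IP address; the actual Instant Clone cmdlet call is omitted and only indicated by a comment:

# Hypothetical loop generating unique settings for three Child VMs;
# the pipeline variable $_ feeds the last octet of the IP address.
1..3 | ForEach-Object {
    $vmname = "vesxi60-0$_"
    $configSettings = @{
        'hostname'  = "$vmname.primp-industries";
        'ipaddress' = "192.168.1.$_";
        'netmask'   = '255.255.255.0';
        'gateway'   = '192.168.1.1';
    }
    # ... the Instant Clone cmdlet from the Fling would be invoked here,
    #     passing $configSettings down to the Child VM ...
}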

Step 6 - Lastly, it is time to run the script by issuing the following command:

.\vmfork-esxi60.ps1

If everything was successful, you should see a couple of new powered-on Instant Cloned Nested ESXi VMs that have been fully customized and are ready for use!
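As a quick sanity check from the same PowerCLI session, you can list the newly created Child VMs. A minimal sketch, assuming the Child VM names begin with the Parent VM name (adjust the filter to match your own naming scheme):

# List powered-on Child VMs whose names start with the Parent VM name
Get-VM -Name 'vESXi6*' | Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Select-Object Name, PowerState, NumCpu, MemoryGB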

Note: There have been a couple of times where newly Instant Cloned VMs were not properly customized; when looking in the Instant Clone logs under /var/tmp/quiesce.log you may find an "Unable to fork" error message. I usually have to re-quiesce the Parent VM, which I do by reverting back to a snapshot that captures the state after Step 4. Once I re-run the PowerCLI script, I am able to successfully deploy N-number of Instant Cloned Nested ESXi VMs. For additional best practices and tips/tricks, be sure to check out this blog post here.
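Reverting the Parent VM back to that post-Step 4 snapshot can also be done from PowerCLI. A minimal sketch, assuming a snapshot named 'AfterPrep' (the snapshot name is a placeholder):

# Revert the Parent VM to the snapshot taken after running prep-esxi60.sh
$parent = Get-VM -Name 'vESXi6'
Set-VM -VM $parent -Snapshot (Get-Snapshot -VM $parent -Name 'AfterPrep') -Confirm:$false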

Big thanks to Jim Mattson for some of his earlier research and work on this topic which made implementing these scripts much easier.

Categories // Automation, ESXi, Nested Virtualization, vSphere 6.0 Tags // ESXi 6.0, fling, instant clone, nested, nested virtualization, PowerCLI, vmfork

Using PowerCLI to invoke Guest Operations API to a Nested ESXi VM

07.14.2015 by William Lam // 1 Comment

In my opinion, the Guest Operations API in vSphere is still one of the most powerful Virtual Machine capabilities available to vSphere Administrators and anyone else who integrates with the vSphere platform. The Guest Operations API allows users to perform guest operations (running commands, transferring files, etc.) directly within the guestOS as if they were logged in. Valid guest credentials are still required and, once authenticated, the operations are proxied through VMware Tools. Networking is not even required, which makes this a handy feature for troubleshooting and can even extend into application-level provisioning through a single API.

Obviously, I am a huge fan of this capability and have used it myself on more than one occasion. However, one limitation that I discovered a while back when using the Guest Operations API with Nested ESXi VMs is that it threw some very strange memory-related errors. It was only recently that I found out there is a known issue with the VMware Tools for Nested ESXi, both the installable VIB and the pre-installed version in ESXi 6.0, in how the guest operations are executed. The good news is that, for now, there is a simple workaround that can be applied when using the Guest Operations API.

You will need to add the following option, which runs the command under a specific resource group in the ESXi Shell:

'++group=host/vim/tmp'

Here is an example if I were to run the 'echo' command:

/bin/echo '++group=host/vim/tmp' "hello world"

A more interesting example would be to issue ESXCLI commands using the Guest Operations API, perhaps configuring the welcome message?

/bin/python '++group=host/vim/tmp' '/bin/esxcli.py system welcomemsg set -m "vGhetto Was Here"'

Notice that we need to pass the resource group option to the "python" binary rather than the ESXCLI binary. The reason for this is that /bin/esxcli is really just a symlink to /bin/esxcli.py, which is just a Python wrapper. The actual command being launched is the Python interpreter, and the arguments to the command are /bin/esxcli.py plus the ESXCLI arguments themselves.

For those who prefer to consume the Guest Operations API without having to directly use the vSphere API, you can use PowerCLI and the Invoke-VMScript cmdlet. The problem is that, due to the way the cmdlet has been abstracted, the necessary underlying API details cannot be accessed because certain assumed defaults cannot be overridden or extended. This is a general problem that I have seen on more than one occasion: the abstraction is meant to make consumption of the API simpler, but in certain cases it can also inhibit the use of underlying API features.

In this case, we will actually need to call into the vSphere API directly. Using PowerCLI as an example, I have created a script called runGuestOpsInNestedESXiVM.ps1 which implements the specific Guest Operations APIs to issue commands to a Nested ESXi VM.
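Without reproducing the full script, the general pattern it relies on is to call the GuestOperationsManager's ProcessManager directly instead of Invoke-VMScript. Below is a minimal sketch of that pattern only, not the script itself; the VM name and guest credentials are placeholders and the echo command mirrors the earlier example:

# Retrieve the Guest Operations ProcessManager from the vSphere API
$si = Get-View ServiceInstance
$guestOpsMgr = Get-View $si.Content.GuestOperationsManager
$procMgr = Get-View $guestOpsMgr.ProcessManager

# Guest credentials for the Nested ESXi VM (placeholders)
$auth = New-Object VMware.Vim.NamePasswordAuthentication
$auth.Username = 'root'
$auth.Password = 'VMware1!'
$auth.InteractiveSession = $false

# Run the command under the host/vim/tmp resource group workaround
$spec = New-Object VMware.Vim.GuestProgramSpec
$spec.ProgramPath = '/bin/echo'
$spec.Arguments = "'++group=host/vim/tmp' 'hello world'"

$vm = Get-VM -Name 'Nested-ESXi-VM'
$guestPid = $procMgr.StartProgramInGuest($vm.ExtensionData.MoRef, $auth, $spec)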

Here is an example of running the script and configuring the welcome message using ESXCLI:

[Screenshot: guest_operations_api_nested_esxi]

Categories // Automation, ESXi, PowerCLI, vSphere, vSphere 6.0 Tags // guest operations, nested, nested virtualization, vix, vix api, vmware tools

Running Nested ESXi / VSAN Home Lab on Ravello

04.14.2015 by William Lam // 3 Comments

There are many options when it comes to building and running your own vSphere home lab. Each of these solutions has different pros and cons, and you will need to evaluate things like cost, performance, maintenance, ease of use and complexity, to name a few. Below is a list of the options currently available to you today.

Home Lab Options:


On-Premises

  • Using hardware on the VMware HCL
  • Using Apple Mac Mini, Intel NUC, etc.
  • Using whitebox or off the shelf hardware

Off-Premises (hosted)

  • VMware HOL
  • VMware vCloud Air or other vCloud Air Service Providers
  • Colo-located labs

For example, you could purchase a couple of Apple Mac Minis and build out a decent-sized vSphere environment, but it could potentially be costly, not to mention a bit limited on memory options. Compared to other platforms, it is pretty energy efficient and easy to use and maintain. If you did not want to manage any hardware at all, you could look at a hosted or on-demand lab such as vCloud Air, which can run Nested ESXi unofficially, or any one of the many vCloud Air Service Providers. Heck, you could even use the VMware Hands-on Lab, though the access will be limited as you will be constrained by the pre-built labs and would not be able to directly upload or download files to the lab. However, this could be a quick way to get access to an environment for testing and, best of all, it is 100% free. As you can see, there are many options for a home lab and it really just depends on your goals and what you are trying to accomplish.

Ravello says hello to Nested ESXi


Today, we have a new player entering the off-premises (hosted) options for running vSphere-based home labs. I am pleased to announce that Ravello, a startup that uses Nested Virtualization to target dev/test workloads, has just introduced beta support for running Nested ESXi on their platform. I have written about Ravello in the past and you can find more details here. Ravello uses their own home-grown KVM-based nested hypervisor called HVX which runs on top of a VM provisioned by either Amazon EC2 or Google Compute Engine. As you can imagine, this was not a trivial feature to add support for, especially since Intel VT/AMD-V, which is required to run ESXi, is not directly exposed to the virtual machines in EC2 or GCE. The folks over at Ravello have solved this in a very interesting way by "emulating" the capabilities of Intel VT/AMD-V using Binary Translation with direct execution.

Over the last month, I have had the privilege of getting early access to the Ravello platform with the Nested ESXi capability and have been providing early feedback to their R&D team to ensure the best possible user experience for customers looking to run Nested ESXi on their platform. I have also spent quite a bit of time working out the proper workflow for getting Nested ESXi running and being able to quickly scale up the number of nodes, especially useful when testing new features like VSAN 6.0. I have also been working with their team to develop a script that will allow users to quickly spin up as many Nested ESXi VMs as needed after a one time initial preparation. This will greatly simplify deployments of more than a couple of Nested ESXi VMs. Hopefully I will be able to share more details about the script in the very near future.

Before jumping into the instructions on getting Nested ESXi running on the Ravello platform, I also wanted to quickly highlight what is currently supported from a vSphere perspective, as well as some of the current limitations and caveats regarding Nested ESXi that you should be aware of. Lastly, I have also provided some details around pricing so the proper expectations are set if you are considering a vSphere home lab on Ravello. You can find more information in the next few sections, or you can go straight to the setup instructions.

Supports:


  • vCenter Server 5.x (Windows) & VCSA 5.x
  • vCenter Server 6.0 (Windows)
  • ESXi 5.x
  • ESXi 6.0

Caveats:


Coming from a pure vSphere background, I have enjoyed many of the simplicities that VMware has built into their core platform, such as support for OVF capabilities like Dynamic Disks and Deployment Options. While using the Ravello platform I came across several limitations with respect to Nested ESXi and the VCSA. Below is a quick list of the caveats that I found while testing the platform; I have been told that many of these are being looked at and hopefully will be resolved in the future. Nonetheless, I still wanted to make sure these were called out so that you go in with the right expectations.

  • There is currently no support for virtuallyGhetto's Nested ESXi / VSAN VM OVF Templates (though you can import the OVFs, most of the configurations are lost)
  • There is currently no support for VM Advanced Settings, such as marking a VMDK as an SSD or enabling disk UUIDs, for example (configurations are not preserved through import)
  • There is currently no support for the VCSA 6.0 OVA due to a disk controller limitation and no OVF property support; you will need to use the Windows-based vCenter Server for now (VCSA 5.5 is supported)
  • There is currently no OVF property support
  • There is currently no support for VMXNET3 for Nested ESXi VMs; e1000 must be used due to a known network bug
  • Running Nested SMP-FT is not supported as 10Gbit vNICs are required and VMXNET3 is not currently supported

Pricing:


When publishing your Ravello Application, you have the option of selecting between two different deployment optimizations. The first is optimized for cost: if TCO is what you care most about, the platform will automatically select the cloud provider (EC2 or GCE) that is the cheapest to satisfy the requirements. The second option is optimized for performance; if selected, you can choose to place your application on either EC2 or GCE. In both cases, you will be provided with an estimated cost broken down into compute, storage and networking, as well as a final cost (per hour). Once you agree to the terms, you can then click on the "publish" button, which will deploy your workload onto the selected cloud provider.

Here is a screenshot summary view of a Ravello Application which I built that consists of 65 VMs (1 Windows VM for vCenter Server and 64 Nested ESXi VMs), optimized based on cost. The total price would be $17.894/hr.

[Screenshot: ravello-vghetto-nested-esxi-vsan-6.0-64-Node-cost-optmized]
Note: Prices as of 04/05/2015

I also went through an exercise of running several more configurations to give you an idea of what the cost could be for varying sized environments. Below is a table for 3 Node, 32 Node & 64 Node VSAN setups (each includes one additional VM for the vCenter Server).

# of VMs | Optimization | Hosting Platform | Compute Cost | Storage Cost | Network Cost | Public IP Cost | Total Price
4        | Cost         | N/A              | $1.09/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.1292/hr
4        | Performance  | Amazon           | $1.62/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.6592/hr
4        | Performance  | Google           | $1.38/hr     | $0.0292/hr   | $0.15/GB     | $0.01/hr       | $1.4192/hr
33       | Cost         | N/A              | $8.92/hr     | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $9.0993/hr
33       | Performance  | Amazon           | $13.22/hr    | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $13.3993/hr
33       | Performance  | Google           | $11.24/hr    | $0.1693/hr   | $0.15/GB     | $0.01/hr       | $11.4193/hr
65       | Cost         | N/A              | $17.56/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $17.894/hr
65       | Performance  | Amazon           | $26.02/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $26.354/hr
65       | Performance  | Google           | $22.12/hr    | $0.324/hr    | $0.15/GB     | $0.01/hr       | $22.454/hr
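For clarity on how the totals are derived: the hourly Total Price in each row is simply the sum of the Compute, Storage and Public IP costs, with network transfer billed separately at $0.15/GB. A quick check of the cost-optimized 65 VM row:

# Cost-optimized, 65 VMs (prices as of 04/05/2015)
$compute  = 17.56    # $/hr
$storage  = 0.324    # $/hr
$publicIp = 0.01     # $/hr
$compute + $storage + $publicIp   # 17.894 $/hr, matching the Total Price column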

How to Setup:


Here is the process for setting up Nested ESXi on the Ravello platform. The process consists of installing a single Nested ESXi VM and "preparing" it so that it can then be used later to deploy additional unique Nested ESXi instances from the Ravello Library.

Step 1 - Upload either an ESXi 5.x or 6.0 ISO to the Library using the Ravello VM Uploader tool which you will be prompted to install.

Step 2 - Deploy the empty Ravello ESXi VM Template from the Library, which has already been prepared with the required CPU ID:

<ns1:cpuIds value="0000000768747541444d416369746e65" index="f00d"/>

Adding the above CPU ID enables the emulation of Intel VT-x/AMD-V. If you decide to create your own Ravello VM Template, you will need to perform this operation yourself, which is only possible today via their REST API; you can find more details here.

Step 3 - Add a CD-ROM device to the Nested ESXi VM by highlighting the ESXi VM and looking under "Disks" (yes, this was not intuitive for me either).

Once you have added the CD-ROM, you will want to mount the ESXi ISO.

Step 4 - Power on the Nested ESXi VM and perform a regular installation of ESXi as you normally would.

At this point, you have now successfully installed Nested ESXi on Ravello! The next series of steps is to "prepare" this ESXi image so that it can be duplicated (cloned) to deploy additional instances without causing conflicts; otherwise, you would have to perform the installation N-number of times for additional nodes, which I am sure many of you would not want to do. The steps outlined here follow the process which I have documented in my How to properly clone a Nested ESXi VM? article.

Step 5 - Login to the console of the ESXi VM and run the following ESXCLI command:

esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1

Note: If you wish to connect to the ESXi VM directly for ease of use, versus going through the remote console, you can go to the "Services" tab for the VM and enable external access as seen in the screenshot below.

[Screenshot: ravello-networking]
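If you have enabled external access and the ESXi host is reachable from your workstation, the same advanced setting could also be applied remotely from a PowerCLI session connected directly to the host. A minimal sketch, with placeholder hostname and credentials:

# Connect directly to the Nested ESXi host (placeholder hostname/credentials)
Connect-VIServer -Server 'ravello-esxi-host' -User 'root' -Password 'VMware1!'

# Same effect as the ESXCLI command above: follow the hardware MAC address
Get-AdvancedSetting -Entity (Get-VMHost) -Name 'Net.FollowHardwareMac' |
    Set-AdvancedSetting -Value 1 -Confirm:$false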
Step 6 - Edit /etc/vmware/esx.conf and remove the uuid entry and then run /sbin/auto-backup.sh to ensure the changes have been saved.

At this point, you have prepared a vanilla Nested ESXi VM. You can save this image into the Ravello Library and deploy additional instances from it; by default, the Ravello platform is set for DHCP. You can of course change this to DHCP reservations so you get a particular IP Address, or specify a static IP Address assignment.

If you wish to prepare the Nested ESXi VM for use with VSAN, then you will need to run through these additional steps:

  • Create a claim rule to mark the 4GB VMDK as SSD
  • Enable VSAN traffic type on vmk0

Step 7 - I have also enabled remote logging as well as suppressed any shell warnings; you just need to run the snippet below within the ESXi Shell (it also takes care of the two VSAN preparation steps above):

# Create a claim rule to mark the 4GB VMDK as an SSD and reclaim the device
DEVICE=$(esxcli storage core device list | grep -iE '(   Display Name: |   Size: )' | grep -B1 4096 | grep mpx | awk -F '(' '{print $2}' | sed 's/)//g')
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d $DEVICE -o enable_ssd
esxcli storage core claiming reclaim -d $DEVICE
# Enable the VSAN traffic type on vmk0
esxcli vsan network ipv4 add -i vmk0
# Point syslog at a remote log host and suppress the shell warning
esxcli system syslog config set --loghost=10.0.0.100
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

Step 8 - If you wish to setup 32 Nodes with VSAN 1.0, then you will need to run this additional command:

esxcli system settings advanced set -o /CMMDS/goto11 -i 1

If you wish to setup 64 Nodes with VSAN 6.0, then you will need to run this additional command:

esxcli system settings advanced set -o /VSAN/goto11 -i 1

At this point, you have completed preparing your Nested ESXi VM. You can now save your image to the Ravello Library, and once that has been done, you can easily clone additional Nested ESXi instances by simply dragging and dropping them onto your canvas from the Ravello Library. For vCenter Server, if you are setting up a vSphere 5.x environment, you will need to upload the VCSA and go through the normal configuration using the VAMI UI. For vCenter Server 6.0, you will not be able to use the VCSA 6.0 because of a limitation in the platform today that does not support it; at this time, you will need to deploy a Windows VM and then run the vCenter Server 6.0 installation.

I of course had some fun with the Ravello platform, and below are some screenshots of running both a 32 Node VSAN Cluster (vSphere 5.5) and a 64 Node VSAN Cluster (vSphere 6.0). Overall, I thought it was a pretty good experience. There was definitely some sluggishness while installing the vCenter Server bits and navigating through the vSphere Web Client; it took a little over 40 minutes, which was almost double the amount of time that I have seen in my home lab. I was told that VNC might perform better than RDP, though RDP is what the Ravello folks recommend for connecting to a Windows-based desktop. It is great to see another option for running vSphere home labs; I think the performance is probably acceptable for most people and hopefully it will continue to improve in the future. I definitely recommend giving Ravello a try and who knows, it might be the platform of choice for your vSphere home lab.

Nested ESXi 5.5 running 32 Node VSAN Cluster:

[Screenshot: vghetto-nested-esxi-5.5-32-node-cluster-ravello-1]

[Screenshot: vghetto-nested-esxi-5.5-32-node-cluster-ravello-0]

Nested ESXi 6.0 running 64 Node VSAN Cluster:

[Screenshot: vghetto-nested-esxi-64-node-cluster-ravello-1]

[Screenshot: vghetto-nested-esxi-64-node-cluster-ravello-0]

Categories // ESXi, Home Lab, Nested Virtualization, vSphere Tags // homelab, intel vt, nested, nested virtualization, ravello

