Automated vSphere Lab Deployment for vSphere 6.x

11.21.2016 by William Lam // 95 Comments

For those of you who follow me on Twitter, you may have seen a few tweets from me hinting at a vSphere deployment script that I have been working on. This was something I had initially built for my own lab use, but I figured it could probably benefit the larger VMware community, especially for testing and evaluation purposes. Today, I am pleased to announce the release of my vGhetto vSphere Lab Deployment (VVLD) scripts, which leverage the new PowerCLI 6.5 release; this is partly why I needed to wait until it was available before publishing.

There are literally hundreds, if not more, ways to build and configure a vSphere lab environment. Over the years, I have noticed that some of these methods can be quite complex simply due to their requirements, or incomplete because they only handle a specific portion of the deployment, or they add constraints and complexity because they are composed of several other tools and scripts, which can make them hard to manage. One of my primary goals for the project was to be able to stand up a fully functional vSphere environment: not just a vCenter Server Appliance (VCSA) or a couple of Nested ESXi VMs, but the entire vSphere stack, fully configured and ready for use. I also wanted to develop the scripts in a single scripting language that was not only easy to use, so that others could enhance or extend it further, but that also had the broadest support for the various vSphere APIs. Lastly, as a stretch goal, I would love to be able to run this script across different OS platforms.

With these goals in mind, I decided to build these scripts using the latest PowerCLI 6.5 release. Not only is PowerCLI super easy to use, but I was able to immediately benefit from some of the new functionality added in the latest release, such as the native vSAN cmdlets, a subset of which can also be used against prior releases of vSphere like 6.0 Update 2. Although not all functionality in PowerCLI has been ported over to PowerCLI Core, you can see where VMware is going with it, and my hope is that in the very near future what I have created can be executed across all OS platforms, whether that is Windows, Linux or Mac OS X, and potentially even ARM-based platforms 🙂
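To give a feel for the new native vSAN cmdlets, here is a minimal, illustrative PowerCLI 6.5 sketch (the vCenter hostname, credentials and cluster name are placeholders) that connects to a vCenter Server and reads back a cluster's vSAN configuration:

    # Connect to vCenter Server (placeholder hostname and credentials)
    Connect-VIServer -Server vcenter.primp-industries.com -User administrator@vsphere.local -Password VMware1!

    # Retrieve the vSAN configuration for a cluster using the native vSAN cmdlets
    $cluster = Get-Cluster -Name "VSAN-Cluster"
    Get-VsanClusterConfiguration -Cluster $cluster

    # List the disk groups backing the vSAN datastore
    Get-VsanDiskGroup -Cluster $cluster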

Changelog:

  • 11/22/16
    • Automatically handle Nested ESXi on vSAN
  • 01/20/17
    • Resolved "Another task in progress" thanks to Jason M
  • 02/12/17
    • Support for deploying to VC Target
    • Support for enabling SSH on VCSA
    • Added option to auto-create vApp Container for VMs
    • Added pre-check for required files
  • 02/17/17
    • Added missing dvFilter param to eth1 (missing in Nested ESXi OVA)
  • 02/21/17 (All new features added only to the vSphere 6.5 Std deployment)
    • Support for deploying NSX 6.3 & registering with vCenter Server
    • Support for updating Nested ESXi VM to ESXi 6.5a (required for NSX 6.3)
    • Support for VDS + VXLAN VMkernel configuration (required for NSX 6.3)
    • Support for "Private" Portgroup on eth1 for Nested ESXi VM used for VXLAN traffic (required for NSX 6.3)
    • Support for both Virtual & Distributed Portgroup on $VMNetwork
    • Support for adding ESXi hosts into VC using DNS name (disabled by default)
    • Added CPU/MEM/Storage resource requirements in confirmation screen
  • 04/18/18
    • New version of the script, vsphere-6.7-vghetto-standard-lab-deployment.ps1, to support vSphere 6.7
    • Added support for vCenter Server 6.7; some of the JSON params have changed for consistency purposes and needed to be updated
    • Added support for the new Nested ESXi 6.7 Virtual Appliance (you will need to download that first)
    • vMotion is now enabled by default on vmk0 for all Nested ESXi hosts
    • Added new $enableVerboseLoggingToNewShell option, which spawns a new PowerShell session to provide more console output during the VCSA deploy. Feature request by Christian Mohn
    • Removed dvFilter code, since that's now part of the Nested ESXi VA

Requirements:

  • 1 x Physical ESXi host OR vCenter Server running at least ESXi 6.0 Update 2
  • PowerCLI 6.5 R1 installed on a Windows system
  • Nested ESXi 6.0 or 6.5 Virtual Appliance OVA
  • vCenter Server Appliance (VCSA) 6.0 or 6.5 extracted ISO
  • NSX 6.3 OVA (optional)
    • ESXi 6.5a offline patch bundle

Supported Deployments:

The scripts support deploying both a vSphere 6.0 Update 2 and a vSphere 6.5 environment, and there are two types of deployments for each:

  • Standard - All VMs are deployed directly to the physical ESXi host
  • Self Managed - Only the Nested ESXi VMs are deployed to the physical ESXi host. The VCSA is then bootstrapped onto the first Nested ESXi VM

Below is a quick diagram to help illustrate the two deployment scenarios. The pESXi in gray is what you already have deployed, which must be running at least ESXi 6.0 Update 2. The rest of the boxes are what the scripts will deploy. In the "Standard" deployment, three Nested ESXi VMs will be deployed to the pESXi host and configured with vSAN. The VCSA will also be deployed directly to the pESXi host, and the vCenter Server will be configured to add the three Nested ESXi VMs into its inventory. This is a pretty straightforward and basic deployment; it should not surprise anyone. The "Self Managed" deployment is similar, but the biggest difference is that rather than the VCSA being deployed directly to the pESXi host as in the "Standard" deployment, it will actually be running within a Nested ESXi VM. The way this deployment scenario works is that we still deploy three Nested ESXi VMs onto the pESXi host; however, the first Nested ESXi VM is selected as a "Bootstrap" node, on which we construct a single-node vSAN datastore to deploy the VCSA. Once the vCenter Server is set up, we then add the remaining Nested ESXi VMs into its inventory.

[Diagram: vsphere-6-5-vghetto-lab-deployment-0]

For most users, I expect the "Standard" deployment to be the more commonly used, but for other advanced workflows, such as evaluating the new vCenter Server High Availability feature in vSphere 6.5, you may want to use the "Self Managed" deployment option. Obviously, if you select the latter, provisioning will take longer as you are now "double nested", and depending on your underlying physical resources, it can take quite a bit more time to deploy as well as consume more physical resources, since your Nested ESXi VMs must now be larger to accommodate the VCSA. In both scenarios, there is no reliance on additional shared storage; both will create a three-node vSAN cluster, which you can of course expand by simply editing the script.
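For the curious, the "Bootstrap" step in the "Self Managed" scenario relies on the classic trick of standing up a single-node vSAN datastore on the first Nested ESXi host before any vCenter Server exists. Below is a minimal sketch of that idea using PowerCLI's Get-EsxCli interface; the hostname, credentials and device IDs are placeholders, and the exact V2 argument names can vary slightly between PowerCLI versions, so treat this as an outline rather than the actual script code:

    # Connect directly to the first Nested ESXi host (placeholder hostname and credentials)
    Connect-VIServer -Server vesxi65-1.primp-industries.com -User root -Password VMware1!

    $vmhost = Get-VMHost
    $esxcli = Get-EsxCli -VMHost $vmhost -V2

    # Force-provision the default vDisk policy so objects can be created on a single node
    # (the same change is typically applied to the vmnamespace and vmswap policy classes too)
    $policyArgs = $esxcli.vsan.policy.setdefault.CreateArgs()
    $policyArgs.policyclass = "vdisk"
    $policyArgs.policy = '(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'
    $esxcli.vsan.policy.setdefault.Invoke($policyArgs)

    # Create a new single-node vSAN cluster on this host
    $esxcli.vsan.cluster.new.Invoke()

    # Claim one caching (SSD) device and one capacity device (placeholder device IDs)
    $storageArgs = $esxcli.vsan.storage.add.CreateArgs()
    $storageArgs.ssd = "mpx.vmhba1:C0:T1:L0"
    $storageArgs.disks = @("mpx.vmhba1:C0:T2:L0")
    $esxcli.vsan.storage.add.Invoke($storageArgs)

Once the single-node vSAN datastore exists, the VCSA can be deployed onto it, and the remaining Nested ESXi hosts join the cluster afterwards.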

Deployment Time:

Here is a table breaking down the deployment time for each scenario and vSphere version:

Deployment Type             Duration
vSphere 6.5 Standard        36 min
vSphere 6.0 Standard        26 min
vSphere 6.5 Self Managed    47 min
vSphere 6.0 Self Managed    34 min

Obviously, your mileage will vary based on your hardware configuration and the size of your deployment.

Scripts:

There are five different scripts, covering the scenarios we discussed above:

  • vsphere-6.0-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.0-vghetto-standard-lab-deployment.ps1
  • vsphere-6.5-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.5-vghetto-standard-lab-deployment.ps1
  • vsphere-6.7-vghetto-standard-lab-deployment.ps1

Instructions:

Please refer to the Github project here for detailed instructions.

Verification:

Once you have saved all your changes, you can then run the script. You will be presented with a summary of what will be deployed, and you can verify that everything is correct before proceeding with the deployment.
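As a rough illustration of the pattern (this is not the actual script code), a pre-flight file check plus confirmation gate in PowerShell might look like the following, where the paths and summary values are placeholders:

    # Placeholder paths to the required installation media
    $VCSAInstaller  = "C:\Users\lab\Desktop\VMware-VCSA-all-6.5.0"
    $NestedESXiOVA  = "C:\Users\lab\Desktop\Nested_ESXi6.5_Appliance_Template_v1.ova"

    # Pre-check that the required files exist before doing anything
    foreach ($file in @($VCSAInstaller, $NestedESXiOVA)) {
        if (-not (Test-Path $file)) {
            Write-Host -ForegroundColor Red "Unable to find $file, exiting ..."
            exit
        }
    }

    # Summarize the deployment and ask for confirmation before proceeding
    Write-Host "Nested ESXi VMs: 3  |  vCPU: 2  |  vMEM: 6 GB  |  Storage: vSAN"
    $answer = Read-Host -Prompt "Would you like to proceed with this deployment (Y or N)"
    if ($answer -notmatch "^[Yy]$") { exit }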

Sample Execution:

Here is an example of running a vSphere 6.5 "Standard" deployment:


Here is an example of running a vSphere 6.5 "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-2]

If everything is successful, you can now log in to your new vCenter Server, and you should see either the following for a "Standard" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-5]

or the following for a "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-6]

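If you prefer to verify from the command line rather than the vSphere Web Client, a quick hedged PowerCLI check along these lines (the hostname and credentials are placeholders) will confirm the cluster, hosts and vSAN datastore are all in place:

    # Connect to the newly deployed vCenter Server (placeholder hostname and credentials)
    Connect-VIServer -Server vcenter65-1.primp-industries.com -User administrator@vsphere.local -Password VMware1!

    # Confirm the cluster, the three Nested ESXi hosts and the vSAN datastore
    Get-Cluster | Select Name, HAEnabled, DrsEnabled
    Get-VMHost | Select Name, ConnectionState, Version
    Get-Datastore | Where-Object { $_.Type -eq "vsan" }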
I hope you find these scripts as useful as I do. Feel free to enhance them to perform additional functionality, or extend them to cover other VMware product deployments, such as NSX or the vRealize products. Enjoy!

Categories // Automation, Home Lab, PowerCLI, VCSA, vSphere 6.0, vSphere 6.5 Tags // homelab, Nested ESXi, nested virtualization, PowerCLI, vSphere 6.5

Virtual NVMe and Nested ESXi 6.5?

10.26.2016 by William Lam // 4 Comments

After publishing my Nested ESXi enhancements for vSphere 6.5 article, I received a number of questions on whether the new Virtual NVMe (vNVMe) capability introduced in the upcoming vSphere 6.5 release would also work with a Nested ESXi VM. The answer is yes; similar to PVSCSI and VMXNET3, there is also an NVMe driver for ESXi running in a VM.

Disclaimer: Nested ESXi and Nested Virtualization is not officially supported by VMware, please use at your own risk.

To consume the new vNVMe in a Nested ESXi VM, you will need to use the latest virtual hardware compatibility level, ESXi 6.5 and later (vHW 13). Once that has been done, you can add a new NVMe controller to your Nested ESXi VM and assign it to one of the virtual disks, as shown in the screenshot below.

[Screenshot: nested-esxi-65-nvme-1]

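If you would rather script this step than use the UI, here is a hedged PowerCLI sketch (the VM name is a placeholder, and you would still need to re-point a virtual disk at the new controller) that upgrades the VM to hardware version 13 and adds an NVMe controller through the vSphere API:

    # Placeholder Nested ESXi VM name
    $vm = Get-VM -Name "Nested-ESXi-6.5"

    # Upgrade to virtual hardware version 13 (ESXi 6.5 and later compatibility)
    Set-VM -VM $vm -Version v13 -Confirm:$false

    # Build a reconfigure spec that adds a new NVMe controller
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $deviceChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $deviceChange.Operation = [VMware.Vim.VirtualDeviceConfigSpecOperation]::add
    $controller = New-Object VMware.Vim.VirtualNVMEController
    $controller.Key = -1
    $controller.BusNumber = 0
    $deviceChange.Device = $controller
    $spec.DeviceChange = @($deviceChange)

    # Apply the reconfiguration to the VM
    $vm.ExtensionData.ReconfigVM($spec)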
Next, you install ESXi 6.5 as you normally would; the NVMe controller will automatically be detected and the driver loaded. In the example below, you can see I only have a single disk, which ESXi itself is installed on, and it is backed by the NVMe controller.

[Screenshot: nested-esxi-65-nvme-0]

One of the biggest benefits of using an NVMe interface over traditional SCSI is that it can significantly reduce overhead compared to the SCSI protocol, which in turn consumes fewer CPU cycles and reduces the overall IO latency for your VM workloads. Obviously, when using it inside a Nested ESXi VM, YMMV, but hopefully you will see an improvement there as well. For those who plan to give this a try in their environment, it would be good to hear what type of use cases you might have in mind, and if you have any feedback (good or bad), feel free to leave a comment.

Categories // ESXi, Home Lab, Not Supported, vSphere 6.5 Tags // nested, Nested ESXi, nested virtualization, NVMe, vSphere 6.5

Nested ESXi Enhancements in vSphere 6.5

10.19.2016 by William Lam // 15 Comments

As many of you have probably heard by now, vSphere 6.5 was just announced at VMworld Barcelona this week, and it is packed with a ton of new features and capabilities which you can read more about here. One area that is near and dear to me, and that has not yet been covered, is the set of Nested ESXi enhancements that have been made in vSphere 6.5.

To be clear, VMware has NOT changed its support stance in vSphere 6.5: both Nested ESXi and general Nested Virtualization are still NOT officially supported. Okay, now that that's out of the way, let's see what is new:

  • Paravirtual SCSI (PVSCSI) support
  • GuestOS Customization
  • Pre-vSphere 6.5 enablement on vSphere 6.0 Update 2
  • Virtual NVMe support

Let's take a closer look at each of these enhancements.

Paravirtual SCSI (PVSCSI) support

When vSphere 6.0 Update 2 was released, I had hinted that PVSCSI support might be a possibility in the near future. I am happy to announce that this is now possible when running Nested ESXi with ESXi 6.5 as the guest OS. In vSphere 6.5, a new GuestOS type has been introduced called vmkernel65, which is optimized for running ESXi 6.5 in a VM, as shown in the screenshot below.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-4]

As you can see from the VM Virtual Hardware configuration screen below, both the PVSCSI and VMXNET3 adapters are now the recommended defaults when creating a Nested ESXi VM for running ESXi 6.5.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-1]

Similar to the VMXNET3 driver, the PVSCSI driver is automatically bundled within the version of VMware Tools for Nested ESXi. That is to say, the drivers are included in the default ESXi image itself and are ONLY activated when ESXi detects that it is running inside a VM. From a user standpoint, this means no additional configuration or installation is required. You simply select ESXi 6.5 as the GuestOS and install ESXi as you normally would, and this will automatically be enabled for you.

The only requirement to leverage this new capability is that BOTH the GuestOS type is ESXi 6.5 (vmkernel65) AND the actual OS running is ESXi 6.5. The underlying physical ESXi host can be either ESXi 6.0 or ESXi 6.5. In addition to the new virtual hardware defaults, I have also found that the new ESXi 6.5 GuestOS type now defaults to EFI firmware rather than the legacy BIOS used by the previous ESXi 6.x/5.x/4.x GuestOS types.
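To see the new GuestOS type from PowerCLI, below is a hedged sketch (the host, datastore, network and sizing are all placeholders) that creates a Nested ESXi VM using the vmkernel65Guest identifier; note that the PVSCSI/VMXNET3 defaults described above are what the vSphere Web Client wizard picks, so when scripting you may still want to specify the adapter types explicitly:

    # Create a Nested ESXi 6.5 VM using the new vmkernel65 GuestOS type
    $vm = New-VM -Name "Nested-ESXi-6.5" `
        -VMHost (Get-VMHost -Name "pesxi.primp-industries.com") `
        -Datastore "datastore1" `
        -GuestId vmkernel65Guest `
        -NumCpu 2 -MemoryGB 6 -DiskGB 8 `
        -NetworkName "VM Network"

    # Confirm the GuestOS type that was applied
    Get-VM $vm | Select Name, @{N="GuestId";E={$_.ExtensionData.Config.GuestId}}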

For customers who wish to push their storage IO a bit more with their Nested ESXi guests, this is a great addition, especially given the lower overhead of the PVSCSI adapter.

GuestOS Customization

One of the very last capabilities that has been missing from Nested ESXi is the ability to perform a simple GuestOS customization when cloning or deploying a VM from a template running Nested ESXi. Today, you can deploy my Nested ESXi Virtual Appliance, which basically provides you with the ability to customize your deployment, but wouldn't it be great if this were native to the platform? Well, I am pleased to say this is now possible!

When you clone or deploy a VM from a template that is a Nested ESXi VM, you will now have the option to select the Customize guest OS option. As you can see from the screenshot below, you can now create a new Customization Spec, which is based on the Linux customization spec. The customization only covers networking configuration (IP Address, Netmask, Gateway and Hostname) and only applies it to the first VMkernel interface; all others will be ignored. The thinking here is that once you have your Nested ESXi VM on the network, you can then fully manage and configure it using the vSphere API rather than re-creating the same functionality just for cloning.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-2]

To use this new Nested ESXi GuestOS customization, there are two things you will need to do (a scripted example follows the list below):

  • Perform two configuration changes within the Nested ESXi VM to prepare it for cloning. You can find the configuration changes described in my blog post here
  • Ensure BOTH the GuestOS type is ESXi 6.5 (vmkernel65) and the actual OS is running ESXi 6.5. This means that your underlying physical vSphere infrastructure can be running either vSphere 6.0 Update 2 or vSphere 6.5
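Here is a hedged PowerCLI sketch of the cloning workflow (the spec name, source VM, host and addresses are all placeholders); note that the spec uses the Linux customization type, as described above:

    # Create a Linux-type customization spec (Nested ESXi customization piggybacks on it)
    $spec = New-OSCustomizationSpec -Name "NestedESXi-Spec" -OSType Linux `
        -Domain "primp-industries.com" -DnsServer "192.168.1.1" `
        -NamingScheme Fixed -NamingPrefix "vesxi65-4"

    # Configure a static IP for the first (and only customized) VMkernel interface
    Get-OSCustomizationNicMapping -OSCustomizationSpec $spec | Set-OSCustomizationNicMapping `
        -IpMode UseStaticIP -IpAddress "192.168.1.50" -SubnetMask "255.255.255.0" `
        -DefaultGateway "192.168.1.1"

    # Clone an existing (prepared) Nested ESXi VM and apply the customization spec
    New-VM -Name "vesxi65-4" -VM (Get-VM -Name "vesxi65-3") `
        -VMHost (Get-VMHost -Name "pesxi.primp-industries.com") `
        -OSCustomizationSpec $spec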

You can monitor the progress of the guest customization by going to the VM's Monitor->Tasks & Events in the vSphere Web Client, or via the vSphere API if you are doing this programmatically. Below is a screenshot of a successful Nested ESXi guest customization. If there are any errors, you can take a look at /var/log/vmware-imc/toolsDeployPkg.log within the cloned Nested ESXi VM to determine what went wrong.

[Screenshot: nested-esxi-enhancements-in-vsphere-6-5-3]

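For the programmatic route, a hedged sketch that polls the guest customization events on the cloned VM (the VM name is a placeholder) could look like this:

    # Check for guest customization success/failure events on the cloned VM
    $vm = Get-VM -Name "vesxi65-4"
    Get-VIEvent -Entity $vm -MaxSamples 200 | Where-Object {
        $_ -is [VMware.Vim.CustomizationSucceeded] -or
        $_ -is [VMware.Vim.CustomizationFailed]
    } | Select CreatedTime, FullFormattedMessage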
I know this will be a very welcome capability for customers who extensively use the guest customization feature or if you just want to quickly clone an existing Nested ESXi VM that you have already configured.

Pre-vSphere 6.5 enablement in vSphere 6.0 Update 2

By now, you have probably figured out what this last enhancement is all about 🙂 It is exactly as it sounds: we are enabling customers to try out ESXi 6.5 by running it as a Nested ESXi VM on their existing vSphere 6.0 environment, specifically the Update 2 release (this includes both vCenter Server as well as ESXi). Although it has always been possible to run newer releases of vSphere nested on older ones, we are now pre-enabling ESXi 6.5-specific Nested ESXi capabilities in the latest release of vSphere 6.0 Update 2. This means that when vSphere 6.5 is generally available, you will be able to test drive some of the new Nested ESXi 6.5 capabilities that I mentioned on your existing vSphere infrastructure. This is pretty darn cool if you ask me!?

Virtual NVMe support

I had a few folks ask whether the upcoming Virtual NVMe capability in vSphere 6.5 would be possible with Nested ESXi, and the answer is yes. Please have a look at this post here for more details.

For those of you who use Nested ESXi, hopefully you will enjoy these new updates! As always, if you have any feedback or requests, feel free to leave them on my blog and I will be sure to share them with the Engineering team. Who knows, they might show up in the future, just like some of these updates which were requested by customers in the past 😀

Categories // ESXi, Nested Virtualization, vSphere 6.5 Tags // guest customization, nested, Nested ESXi, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2, vSphere 6.5
