Automated vSphere Lab Deployment for vSphere 6.x

11.21.2016 by William Lam // 95 Comments

For those of you who follow me on Twitter, you may have seen a few tweets hinting at a vSphere deployment script I have been working on. This was something I initially built for my own lab use, but I figured it could benefit the larger VMware community, especially for testing and evaluation purposes. Today, I am pleased to announce the release of my vGhetto vSphere Lab Deployment (VVLD) scripts, which leverage the new PowerCLI 6.5 release; that is partly why I needed to wait until it was available before publishing.

There are literally hundreds, if not more, ways to build and configure a vSphere lab environment. Over the years, I have noticed that some of these methods can be quite complex simply due to their requirements, incomplete because they only handle a specific portion of the deployment, or hard to manage because they are composed of several other tools and scripts. One of my primary goals for this project was to stand up a fully functional vSphere environment: not just a vCenter Server Appliance (VCSA) or a couple of Nested ESXi VMs, but the entire vSphere stack, fully configured and ready for use. I also wanted to develop the scripts using a single scripting language that was easy to use, so that others could enhance or extend them further, and that had the broadest support for the various vSphere APIs. Lastly, as a stretch goal, I would love to be able to run the script across different OS platforms.

With these goals in mind, I decided to build these scripts using the latest PowerCLI 6.5 release. Not only is PowerCLI super easy to use, but I was able to immediately benefit from some of the new functionality added in the latest release, such as the native vSAN cmdlets, a subset of which also work against prior releases of vSphere like 6.0 Update 2. Although not all functionality in PowerCLI has been ported over to PowerCLI Core yet, you can see where VMware is going with it, and my hope is that in the very near future what I have created can be executed across all OS platforms, whether Windows, Linux or Mac OS X, and potentially even ARM-based platforms 🙂
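
To give a quick taste of what the new native vSAN cmdlets look like, here is a minimal sketch of enabling and then inspecting vSAN on an existing cluster. The server name, credentials and cluster name are placeholders, and this is not an excerpt from the deployment scripts themselves:

    # Connect to the vCenter Server (hostname and credentials are placeholders)
    Connect-VIServer -Server vcenter.primp-industries.com -User administrator@vsphere.local -Password VMware1!

    # Enable vSAN on an existing cluster
    Get-Cluster -Name "VSAN-Cluster" | Set-Cluster -VsanEnabled:$true -Confirm:$false

    # Inspect the resulting configuration using one of the new native vSAN cmdlets
    Get-VsanClusterConfiguration -Cluster (Get-Cluster -Name "VSAN-Cluster")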

Changelog:

  • 11/22/16
    • Automatically handle Nested ESXi on vSAN
  • 01/20/17
    • Resolved "Another task in progress" thanks to Jason M
  • 02/12/17
    • Support for deploying to VC Target
    • Support for enabling SSH on VCSA
    • Added option to auto-create vApp Container for VMs
    • Added pre-check for required files
  • 02/17/17
    • Added missing dvFilter param to eth1 (missing in Nested ESXi OVA)
  • 02/21/17 (All new features added only to the vSphere 6.5 Std deployment)
    • Support for deploying NSX 6.3 & registering with vCenter Server
    • Support for updating Nested ESXi VM to ESXi 6.5a (required for NSX 6.3)
    • Support for VDS + VXLAN VMkernel configuration (required for NSX 6.3)
    • Support for "Private" Portgroup on eth1 for Nested ESXi VM used for VXLAN traffic (required for NSX 6.3)
    • Support for both Virtual & Distributed Portgroup on $VMNetwork
    • Support for adding ESXi hosts into VC using DNS name (disabled by default)
    • Added CPU/MEM/Storage resource requirements in confirmation screen
  • 04/18/18
    • New version of the script, vsphere-6.7-vghetto-standard-lab-deployment.ps1, to support vSphere 6.7
    • Added support for vCenter Server 6.7; some of the JSON params have changed for consistency purposes and needed to be updated
    • Added support for the new Nested ESXi 6.7 Virtual Appliance (you will need to download that first)
    • vMotion is now enabled by default on vmk0 for all Nested ESXi hosts
    • Added new $enableVerboseLoggingToNewShell option, which spawns a new PowerShell session to provide more console output during the VCSA deploy. FR by Christian Mohn
    • Removed dvFilter code, since that's now part of the Nested ESXi VA

Requirements:

  • 1 x Physical ESXi host OR vCenter Server running at least ESXi 6.0 Update 2
  • PowerCLI 6.5 R1 installed on a Windows system
  • Nested ESXi 6.0 or 6.5 Virtual Appliance OVA
  • vCenter Server Appliance (VCSA) 6.0 or 6.5 extracted ISO
  • NSX 6.3 OVA (optional)
    • ESXi 6.5a offline patch bundle

Supported Deployments:

The scripts support deploying both a vSphere 6.0 Update 2 and a vSphere 6.5 environment, and there are two types of deployments for each:

  • Standard - All VMs are deployed directly to the physical ESXi host
  • Self Managed - Only the Nested ESXi VMs are deployed to the physical ESXi host. The VCSA is then bootstrapped onto the first Nested ESXi VM

Below is a quick diagram to help illustrate the two deployment scenarios. The pESXi in gray is what you already have deployed, which must be running at least ESXi 6.0 Update 2; the rest of the boxes are what the scripts will deploy. In the "Standard" deployment, three Nested ESXi VMs are deployed to the pESXi host and configured with vSAN. The VCSA is also deployed directly to the pESXi host, and the vCenter Server is configured to add the three Nested ESXi VMs into its inventory. This is a pretty straightforward and basic deployment, and it should not surprise anyone. The "Self Managed" deployment is similar, but with one big difference: rather than the VCSA being deployed directly to the pESXi host, it actually runs within a Nested ESXi VM. In this scenario we still deploy three Nested ESXi VMs onto the pESXi host, but the first Nested ESXi VM is selected as a "Bootstrap" node, on which a single-node vSAN datastore is constructed so the VCSA can be deployed onto it. Once the vCenter Server is set up, the remaining Nested ESXi VMs are added into its inventory.
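
If you are curious how the "Bootstrap" step works conceptually, a single-node vSAN datastore can be created directly on an ESXi host before any vCenter Server exists. Below is a simplified sketch of that technique using Get-EsxCli from PowerCLI; the hostname, credentials and device names are placeholders, and the actual script may differ in its details:

    # Connect directly to the first Nested ESXi host (hostname and credentials are placeholders)
    Connect-VIServer -Server vesxi65-1.primp-industries.com -User root -Password VMware1!
    $esxcli = Get-EsxCli -VMHost (Get-VMHost) -V2

    # Relax the default vSAN policy so objects can be provisioned with only one node
    $policy = '(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'
    $esxcli.vsan.policy.setdefault.Invoke(@{policyclass = "vdisk"; policy = $policy})
    $esxcli.vsan.policy.setdefault.Invoke(@{policyclass = "vmnamespace"; policy = $policy})

    # Create a single-node vSAN cluster, then claim a cache and a capacity device
    $esxcli.vsan.cluster.new.Invoke()
    $esxcli.vsan.storage.add.Invoke(@{ssd = "naa.ssd_device_id"; disks = @("naa.capacity_device_id")})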

[Screenshot: vsphere-6-5-vghetto-lab-deployment-0]
For most users, I expect the "Standard" deployment to be the more commonly used, but for advanced workflows, such as evaluating the new vCenter Server High Availability feature in vSphere 6.5, you may want to use the "Self Managed" option. Obviously, if you select the latter, provisioning will take longer since you are now "double nested"; depending on your underlying physical resources, it can take quite a bit more time to deploy and will also consume more physical resources, as your Nested ESXi VMs must be larger to accommodate the VCSA. In both scenarios there is no reliance on additional shared storage: both create a three-node vSAN cluster, which you can of course expand by simply editing the script.

Deployment Time:

Here is a table breaking down the deployment time for each scenario and vSphere version:

Deployment Type             Duration
vSphere 6.5 Standard        36 min
vSphere 6.0 Standard        26 min
vSphere 6.5 Self Managed    47 min
vSphere 6.0 Self Managed    34 min

Obviously, your mileage will vary based on your hardware configuration and the size of your deployment.

Scripts:

There are five different scripts, covering the scenarios discussed above:

  • vsphere-6.0-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.0-vghetto-standard-lab-deployment.ps1
  • vsphere-6.5-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.5-vghetto-standard-lab-deployment.ps1
  • vsphere-6.7-vghetto-standard-lab-deployment.ps1

Instructions:

Please refer to the GitHub project here for detailed instructions.
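
At a high level, the workflow is to edit the variables at the top of the script you chose (physical ESXi/VC target, credentials, OVA/ISO paths, network settings, etc.) and then simply invoke it from a PowerCLI session with no arguments. A minimal sketch, with placeholder paths:

    # From a PowerCLI 6.5 R1 session on a Windows system (paths are placeholders)
    cd C:\scripts\vghetto-vsphere-lab-deployment
    .\vsphere-6.5-vghetto-standard-lab-deployment.ps1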

Verification:

Once you have saved all your changes, you can run the script. You will be presented with a summary of what will be deployed, and you can verify that everything is correct before proceeding. Below is a screenshot of what this looks like:

Sample Execution:

Here is an example of running a vSphere 6.5 "Standard" deployment:


Here is an example of running a vSphere 6.5 "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-2]
If everything is successful, you can now log in to your new vCenter Server, and you should see the following for a "Standard" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-5]
or the following for a "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-6]
I hope you find these scripts as useful as I do. Feel free to enhance them to perform additional functionality, or extend them to cover other VMware product deployments such as NSX or the vRealize products. Enjoy!

Categories // Automation, Home Lab, PowerCLI, VCSA, vSphere 6.0, vSphere 6.5 Tags // homelab, Nested ESXi, nested virtualization, PowerCLI, vSphere 6.5

How to enable vCenter Server High Availability (VCHA) in vSphere 6.5 w/single ESXi host?

11.16.2016 by William Lam // 5 Comments

One of the big new features introduced in vSphere 6.5, exclusively for the vCenter Server Appliance (VCSA), is the vCenter Server High Availability (VCHA) capability. Feidhlim O'Leary has an excellent blog post covering what VCHA provides, as well as a couple of demo videos on how it works; definitely worth checking out! After upgrading one of my home lab environments to vSphere 6.5, I wanted to try out this feature from an educational standpoint, specifically around using the new VCHA vSphere APIs.

Like most vSphere home labbers, I have limited hardware, and if you try to enable VCHA with only a single ESXi host, you will see the following error:

This operation would violate a virtual machine affinity/anti-affinity rule.

[Screenshot: enable-vcha-on-single-esxi-host-0]
As you might expect, VCHA automatically provisions anti-affinity rules to ensure that the Active, Passive and Witness nodes are not all running on the same physical ESXi host. For a production deployment this is completely valid, but for lab and testing purposes it can be a tough requirement to satisfy. I was hoping there might be an override option, and searching for the word "ha" in the vCenter Server Advanced Settings led me to an interesting property called config.vpxd.vcha.drsAntiAffinity. This discovery was purely by luck; I noticed it was set to true by default, so I decided to change it to false and see what would happen.
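
If you would rather flip this setting from the command line than through the vSphere Web Client, the same change can be made with PowerCLI. A minimal sketch, assuming you are already connected to the VCSA:

    # Locate the VCHA anti-affinity setting on the vCenter Server itself
    $setting = Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.vcha.drsAntiAffinity"

    # Disable the automatic anti-affinity rule (for lab/testing purposes only)
    $setting | Set-AdvancedSetting -Value $false -Confirm:$false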

[Screenshot: enable-vcha-on-single-esxi-host-1]
To my surprise, changing this setting worked, and I was able to successfully enable VCHA in my lab with all three nodes running on a single ESXi host 😀

[Screenshot: enable-vcha-on-single-esxi-host-2]
An alternative solution would be to deploy a three-node Nested ESXi cluster, which would not require this modification, but my physical ESXi host was limited to 16GB of memory, and that approach would have been a lot slower.

Categories // VCSA, vSphere 6.5 Tags // VCHA, vSphere 6.5

VCSA alarm for VCDB space utilization in vSphere 6.5

11.10.2016 by William Lam // 4 Comments

With prior releases of the vCenter Server Appliance (VCSA), there was little to no visibility into the underlying vCenter Server Database (VCDB), which uses an embedded vPostgres database. This was especially true for getting basic storage utilization of the VCDB, including a breakdown of the different data types being stored. More importantly, there was no easy way to even monitor the storage utilization of the VCDB to help prevent the rare case where it fills up for whatever reason.

In vSphere 6.5, there have been huge improvements in providing customers with greater visibility into the VCDB. Not only can customers get granular insight into the specific types of data being consumed within the VCDB: Stats, Events, Alarms & Tasks (SEAT), Transaction Log and VC Inventory, but this information can also be easily accessed both from a UI and from an API (using the VAMI REST API) standpoint. The Virtual Appliance Management Interface, better known as the VAMI, has received a huge facelift in vSphere 6.5. As you can see from the screenshot below, there is now a Database section which shows the current utilization of your VCDB. In addition, you can also see how this utilization trends over time for the various data types.

[Screenshot: vcdb-space-utilization-vcenter-alarms-1]
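
To give you a sense of the API side, here is a minimal PowerShell sketch of pulling database storage health through the VAMI REST API. The hostname and credentials are placeholders, and I am assuming the /rest/appliance/health/database-storage endpoint here:

    # Authenticate to the VAMI REST API and obtain a session token
    $vcsa = "vcsa.primp-industries.com"
    $pair = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("root:VMware1!"))
    $session = Invoke-RestMethod -Method Post -Uri "https://$vcsa/rest/com/vmware/cis/session" -Headers @{Authorization = "Basic $pair"}

    # Query the database storage health using the session token
    Invoke-RestMethod -Method Get -Uri "https://$vcsa/rest/appliance/health/database-storage" -Headers @{"vmware-api-session-id" = $session.value}
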
From a reporting and visibility standpoint this is great, but how do you go about operationalizing this data and ensuring that you do not run into a situation where your VCDB is out of space, or close to it? Another improvement in the VCSA 6.5 is that there is now a default vCenter Server Database Health alarm that monitors the space utilization of your VCDB.

[Screenshot: vcdb-space-utilization-vcenter-alarms-0]
The way this works is that the system checks the VCDB space utilization every 15 minutes, with the following trigger events defined:

  • If the current storage utilization is at 80%, a Warning alarm is triggered
  • If the current storage utilization is at 95%, an Error alarm is triggered, and the action is to shut down the vCenter Server application to protect the database

These default triggers can be changed by simply editing the following vCenter Server advanced settings: vpxd.vdb.space.errorPercent and vpxd.vdb.space.warningPercent (a restart of the VC service is not required).
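
For example, the thresholds could be adjusted with PowerCLI using the same Advanced Settings mechanism; the values below are just illustrative:

    # Lower the warning threshold to 75% and the error threshold to 90% (example values)
    Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "vpxd.vdb.space.warningPercent" | Set-AdvancedSetting -Value 75 -Confirm:$false
    Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "vpxd.vdb.space.errorPercent" | Set-AdvancedSetting -Value 90 -Confirm:$false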

[Screenshot: vcdb-space-utilization-vcenter-alarms]
Customers can also extend these alarms to send an email and/or SNMP trap to their monitoring system, so that not only is this visible in the vSphere Web Client, but the appropriate administrators can also be notified. The above is just one of the many improvements the VCSA 6.5 has received, and I definitely recommend customers spend some time looking at what is now available in the VAMI UI, as well as pulling this information using the new VAMI REST API.
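
As a sketch of what wiring up an email notification could look like with PowerCLI (the alarm name pattern and the recipient address are assumptions on my part):

    # Attach an email action to the database health alarm (alarm name pattern is an assumption)
    Get-AlarmDefinition -Name "*Database Health*" |
        New-AlarmAction -Email -To "vi-admins@primp-industries.com" -Subject "VCDB space utilization alert"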

Categories // VCSA, vSphere 6.5 Tags // SEAT, vcenter server appliance, vCenter Server Database, VCSA, vcva, vpostgres, vpxd.vdb.space.errorPercent, vpxd.vdb.space.warningPercent

