ESXi Learnswitch – Enhancement to the ESXi MAC Learn DvFilter

04.24.2017 by William Lam // 23 Comments

The ESXi MAC Learn dvFilter Fling was released a little over two years ago and it has become a must-have when running the ESXi hypervisor within a VM, also referred to as Nested ESXi. The reason this Fling has become such a hit among our customers and partners is that it greatly improves performance when "Promiscuous Mode" is enabled on a Virtual or Distributed Virtual Portgroup, which is a requirement for using Nested ESXi. Although this Fling works great, the solution has a couple of limitations today. The first, called out in the original Fling release notes, is that once a MAC address has been learned, it never ages out, which is not ideal for long-running Nested ESXi environments that generate a large number of new MAC addresses. The second is the lack of vMotion support: the learned MAC address table is not transferred to the destination ESXi host and must be re-learned.
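
For context, the dvFilter Fling is enabled per VM network adapter through VM advanced settings. Here is a minimal PowerCLI sketch using the settings documented for the Fling; the VM name is hypothetical, and you would repeat the pair of settings for ethernet1, ethernet2, and so on:

    # Attach the MAC Learn dvFilter to the first network adapter of a Nested ESXi VM
    Connect-VIServer -Server vcenter.lab.local
    $vm = Get-VM -Name "Nested-ESXi-VM"   # hypothetical VM name
    New-AdvancedSetting -Entity $vm -Name "ethernet0.filter4.name" -Value "dvfilter-maclearn" -Confirm:$false
    New-AdvancedSetting -Entity $vm -Name "ethernet0.filter4.onFailure" -Value "failOpen" -Confirm:$false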

To help address both of these limitations, the folks over in the Network and Security Business Unit (NSBU) have been working hard to improve upon the existing solution and have developed a new native MAC learning VMkernel module called the Learnswitch. The new Learnswitch not only helps Nested ESXi workloads, it can also potentially benefit other workloads such as nested containers or third-party network inspection software. One immediate difference from the previous MAC Learn dvFilter solution is that rather than operating on the network IO chain, the filtering is now performed within the outer virtual switch layer itself, which provides some additional performance gains. Another benefit, from an internal VMware standpoint, is that the Learnswitch is vmkapi compatible, which means we will have a better backward compatibility story for supporting older releases of ESXi. One downside compared to the previous solution is that because the dvFilter operated below the virtual switch layer, it could support both a Virtual Standard Switch and a Distributed Virtual Switch; the new Learnswitch requires a Distributed Virtual Switch. If you do not currently meet the requirements of the new Learnswitch, you can continue using the dvFilter. It is recommended that you do not mix both on a single system, but you can definitely make use of both solutions across different ESXi hosts depending on the constraints of your environment.

Here are some of the new capabilities provided by the new Learnswitch module:

  • Overlay network based: learning and filtering are performed in the Etherswitch forwarding check
  • MAC address learning is based on VLAN ID or VXLAN ID on uplink and leaf ports
  • Packets are filtered on uplink and leaf ports if the MAC address was learned on a different port
  • MAC address table size of 32k entries per system
  • MAC address aging support, with a configurable aging time that defaults to 5 minutes
  • Unknown unicast packets are flooded by default, with a configurable option to drop them
  • vMotion support: the MAC table learned on a port is transferred to the destination host and a RARP packet is sent
  • Standalone VMkernel module, distributed as a VIB (see the install sketch after this list)
  • net-learnswitch CLI to display the MAC address table, configuration, and stats
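
Since the Learnswitch is delivered as a standalone VIB, installation follows the usual VIB workflow. A hedged PowerCLI sketch, assuming a hypothetical download URL (grab the actual VIB from the Fling site first):

    # Install the Learnswitch VIB on an ESXi host via the esxcli V2 interface
    $vmhost = Get-VMHost -Name "esxi01.lab.local"   # hypothetical host name
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $arguments = $esxcli.software.vib.install.CreateArgs()
    $arguments.viburl = @("http://webserver.lab.local/esx-learnswitch.vib")   # hypothetical URL
    $esxcli.software.vib.install.Invoke($arguments)

Once the module is loaded, the net-learnswitch CLI mentioned above can be used from the ESXi Shell to inspect the MAC address table; the exact subcommands are documented with the Fling.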

[Read more...]

Categories // ESXi, Nested Virtualization, NSX Tags // dvFilter, ESXi, Learnswitch, mac learning, Nested ESXi, nested virtualization, NSX, VXLAN

Automated vSphere Lab Deployment for vSphere 6.x

11.21.2016 by William Lam // 95 Comments

For those of you who follow me on Twitter, you may have seen a few tweets hinting at a vSphere deployment script that I have been working on. This was something I had initially built for my own lab use, but I figured it could benefit the larger VMware community, especially for testing and evaluation purposes. Today, I am pleased to announce the release of my vGhetto vSphere Lab Deployment (VVLD) scripts, which leverage the new PowerCLI 6.5 release; this is partly why I needed to wait until it was available before publishing.

There are literally hundreds, if not more, ways to build and configure a vSphere lab environment. Over the years, I have noticed that some of these methods can be quite complex simply due to their requirements, incomplete because they only handle a specific portion of the deployment, or hard to manage because they are composed of several other tools and scripts that add constraints and complexity. One of my primary goals for this project was to stand up a fully functional vSphere environment: not just deploy a vCenter Server Appliance (VCSA) or a couple of Nested ESXi VMs, but the entire vSphere stack, fully configured and ready for use. I also wanted to develop the scripts in a single scripting language that was easy to use, so that others could enhance or extend them, and that had the broadest support for the various vSphere APIs. Lastly, as a stretch goal, I would love to be able to run this script across different OS platforms.

With these goals in mind, I decided to build these scripts using the latest PowerCLI 6.5 release. Not only is PowerCLI super easy to use, but I was able to immediately benefit from some of the new functionality added in the latest release, such as the native vSAN cmdlets, a subset of which can also be used against prior releases of vSphere like 6.0 Update 2. Although not all functionality in PowerCLI has been ported over to PowerCLI Core yet, you can see where VMware is going with it, and my hope is that in the very near future what I have created can be executed across all OS platforms, whether that is Windows, Linux or Mac OS X, and potentially even ARM-based platforms 🙂
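
As a taste of those native vSAN cmdlets, here is a minimal PowerCLI 6.5 sketch; the server and cluster names are hypothetical:

    # Query vSAN configuration using the native cmdlets introduced in PowerCLI 6.5 R1
    Connect-VIServer -Server vcenter.lab.local
    $cluster = Get-Cluster -Name "VSAN-Cluster"
    Get-VsanDiskGroup -Cluster $cluster      # list the disk groups in the cluster
    Get-VsanSpaceUsage -Cluster $cluster     # report vSAN datastore space usage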

Changelog:

  • 11/22/16
    • Automatically handle Nested ESXi on vSAN
  • 01/20/17
    • Resolved "Another task in progress" thanks to Jason M
  • 02/12/17
    • Support for deploying to VC Target
    • Support for enabling SSH on VCSA
    • Added option to auto-create vApp Container for VMs
    • Added pre-check for required files
  • 02/17/17
    • Added missing dvFilter param to eth1 (missing in Nested ESXi OVA)
  • 02/21/17 (All new features added only to the vSphere 6.5 Std deployment)
    • Support for deploying NSX 6.3 & registering with vCenter Server
    • Support for updating Nested ESXi VM to ESXi 6.5a (required for NSX 6.3)
    • Support for VDS + VXLAN VMkernel configuration (required for NSX 6.3)
    • Support for "Private" Portgroup on eth1 for Nested ESXi VM used for VXLAN traffic (required for NSX 6.3)
    • Support for both Virtual & Distributed Portgroup on $VMNetwork
    • Support for adding ESXi hosts into VC using DNS name (disabled by default)
    • Added CPU/MEM/Storage resource requirements in confirmation screen
  • 04/18/18
    • New version of the script vsphere-6.7-vghetto-standard-lab-deployment.ps1 to support vSphere 6.7
    • Added support for vCenter Server 6.7; some of the JSON params changed for consistency purposes and needed to be updated
    • Added support for the new Nested ESXi 6.7 Virtual Appliance (you will need to download that first)
    • vMotion is now enabled by default on vmk0 for all Nested ESXi hosts
    • Added a new $enableVerboseLoggingToNewShell option which spawns a new PowerShell session to provide more console output during the VCSA deployment. FR by Christian Mohn
    • Removed the dvFilter code, since that's now part of the Nested ESXi VA

Requirements:

  • 1 x physical ESXi host or vCenter Server running at least ESXi 6.0 Update 2
  • PowerCLI 6.5 R1 installed on a Windows system
  • Nested ESXi 6.0 or 6.5 Virtual Appliance OVA
  • vCenter Server Appliance (VCSA) 6.0 or 6.5 extracted ISO
  • NSX 6.3 OVA (optional)
    • ESXi 6.5a offline patch bundle

Supported Deployments:

The scripts support deploying both a vSphere 6.0 Update 2 and a vSphere 6.5 environment, and there are two types of deployments for each:

  • Standard - All VMs are deployed directly to the physical ESXi host
  • Self Managed - Only the Nested ESXi VMs are deployed to the physical ESXi host. The VCSA is then bootstrapped onto the first Nested ESXi VM

Below is a quick diagram to help illustrate the two deployment scenarios. The pESXi in gray is what you already have deployed, which must be running at least ESXi 6.0 Update 2. The rest of the boxes are what the scripts will deploy. In the "Standard" deployment, three Nested ESXi VMs will be deployed to the pESXi host and configured with vSAN. The VCSA will also be deployed directly to the pESXi host, and the vCenter Server will be configured to add the three Nested ESXi VMs into its inventory. This is a pretty straightforward and basic deployment that should not surprise anyone. The "Self Managed" deployment is similar; the biggest difference is that rather than the VCSA being deployed directly to the pESXi host as in the "Standard" deployment, it will actually run within a Nested ESXi VM. This deployment scenario still deploys three Nested ESXi VMs onto the pESXi host, but the first Nested ESXi VM is selected as a "bootstrap" node, on which a single-node vSAN datastore is constructed to deploy the VCSA. Once the vCenter Server is set up, the remaining Nested ESXi VMs are added into its inventory.

[Diagram: vsphere-6-5-vghetto-lab-deployment-0]
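
For the curious, the single-node vSAN bootstrap in the "Self Managed" path boils down to forming a one-node vSAN cluster on the first Nested ESXi VM before any vCenter Server exists. A hedged sketch of that step (hostname is hypothetical; the actual script also claims disks and adjusts the default storage policy):

    # Connect directly to the first Nested ESXi VM (no vCenter exists yet)
    Connect-VIServer -Server nested-esxi-1.lab.local -User root
    $esxcli = Get-EsxCli -VMHost (Get-VMHost) -V2
    $esxcli.vsan.cluster.new.Invoke()   # form a single-node vSAN cluster
    # Next: claim the local cache/capacity disks into vSAN, then deploy
    # the VCSA onto the resulting vsanDatastore.
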
For most users, I expect the "Standard" deployment to be the more commonly used, but for more advanced workflows, such as evaluating the new vCenter Server High Availability feature in vSphere 6.5, you may want to use the "Self Managed" deployment option. Obviously, if you select the latter, provisioning will take longer as you are now "double nested"; depending on your underlying physical resources, this can take quite a bit more time to deploy and consume more physical resources, since your Nested ESXi VMs must now be larger to accommodate the VCSA. In both scenarios there is no reliance on additional shared storage; both create a three-node vSAN cluster, which you can of course expand by simply editing the script.

Deployment Time:

Here is a table breaking down the deployment time for each scenario and vSphere version:

Deployment Type            Duration
vSphere 6.5 Standard       36 min
vSphere 6.0 Standard       26 min
vSphere 6.5 Self Managed   47 min
vSphere 6.0 Self Managed   34 min

Obviously, your mileage will vary based on your hardware configuration and the size of your deployment.

Scripts:

There are five different scripts covering the scenarios discussed above (a sample invocation follows the list):

  • vsphere-6.0-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.0-vghetto-standard-lab-deployment.ps1
  • vsphere-6.5-vghetto-self-manage-lab-deployment.ps1
  • vsphere-6.5-vghetto-standard-lab-deployment.ps1
  • vsphere-6.7-vghetto-standard-lab-deployment.ps1
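
A minimal sketch of kicking one off, assuming you have already edited the user-configurable variables at the top of the script for your environment (the local path below is hypothetical):

    # From a PowerCLI-enabled PowerShell session, run the desired script:
    cd C:\Scripts\vghetto-vsphere-lab-deployment
    .\vsphere-6.5-vghetto-standard-lab-deployment.ps1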

Instructions:

Please refer to the Github project here for detailed instructions.

Verification:

Once you have saved all of your changes, you can run the script. You will be presented with a summary of what will be deployed, and you can verify that everything is correct before proceeding. Below is a screenshot of what this looks like:

Sample Execution:

Here is an example of running a vSphere 6.5 "Standard" deployment:


Here is an example of running a vSphere 6.5 "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-2]
If everything is successful, you can now log in to your new vCenter Server, and you should see either the following for a "Standard" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-5]
or the following for a "Self Managed" deployment:

[Screenshot: vsphere-6-5-vghetto-lab-deployment-6]
I hope you find these scripts as useful as I do. Feel free to enhance them to perform additional functionality, or extend them to cover other VMware product deployments such as NSX or the vRealize products. Enjoy!

Categories // Automation, Home Lab, PowerCLI, VCSA, vSphere 6.0, vSphere 6.5 Tags // homelab, Nested ESXi, nested virtualization, PowerCLI, vSphere 6.5

Virtual NVMe and Nested ESXi 6.5?

10.26.2016 by William Lam // 4 Comments

After publishing my Nested ESXi enhancements for vSphere 6.5 article, I received a number of questions on whether the new Virtual NVMe (vNVMe) capability introduced in the upcoming vSphere 6.5 release would also work with a Nested ESXi VM. The answer is yes; similar to PVSCSI and VMXNET3, we also have an NVMe driver for ESXi running in a VM.

Disclaimer: Nested ESXi and Nested Virtualization are not officially supported by VMware; please use them at your own risk.

To consume the new vNVMe with a Nested ESXi VM, you will need to use the latest "ESXi 6.5 and later" compatibility (vHW 13). Once that has been done, you can add a new NVMe controller to your Nested ESXi VM and assign it to one of the virtual disks, as shown in the screenshot below.

[Screenshot: nested-esxi-65-nvme-1]
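
If you prefer to script the change, PowerCLI 6.5 has no dedicated cmdlet for NVMe controllers, so the same edit can be made through the vSphere API. A hedged sketch, with a hypothetical VM name, assuming the VM has already been upgraded to vHW 13:

    # Add a Virtual NVMe controller to a Nested ESXi VM via the vSphere API
    $vm = Get-VM -Name "Nested-ESXi-6.5"   # hypothetical VM name
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $change = New-Object VMware.Vim.VirtualDeviceConfigSpec
    $change.Operation = "add"
    $change.Device = New-Object VMware.Vim.VirtualNVMEController
    $change.Device.Key = -1          # temporary key for the new device
    $change.Device.BusNumber = 0
    $spec.DeviceChange = @($change)
    $vm.ExtensionData.ReconfigVM($spec)
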
Next, you install ESXi 6.5 as you normally would; the NVMe controller will automatically be detected and the driver loaded. In the example below, you can see I have only a single disk, which ESXi itself is installed on, and it is backed by the NVMe controller.

[Screenshot: nested-esxi-65-nvme-0]
One of the biggest benefits of using an NVMe interface over traditional SCSI is that it significantly reduces protocol overhead, which in turn consumes fewer CPU cycles and reduces the overall IO latency for your VM workloads. Obviously, when using it inside of a Nested ESXi VM, YMMV, but hopefully you will see an improvement there as well. For those who plan to give this a try in their environment, it would be good to hear what type of use cases you have in mind, and if you have any feedback (good or bad), feel free to leave a comment.

Categories // ESXi, Home Lab, Not Supported, vSphere 6.5 Tags // nested, Nested ESXi, nested virtualization, NVMe, vSphere 6.5

