WilliamLam.com

What's New Whitepapers for vSphere 6.0 Platform, VSAN 6.0 & others

02.03.2015 by William Lam // 6 Comments

With the announcement of vSphere 6.0, I am sure many of you are curious to learn more about the new features and capabilities in vSphere, which include a major update to Virtual SAN dubbed VSAN 6.0. From what I saw on Twitter, the blogosphere was on fire yesterday with so much content, which was awesome, but I also noticed quite a bit of incorrect information being posted. I suspect some of it was based on earlier vSphere Betas; for the rest, I am really not sure where the information came from, because it was not even mentioned in the Betas. Remember that not everything in the vSphere Beta will be released in the final product.

In any case, for the definitive source of What's New in vSphere 6.0, I would highly recommend everyone check out the new whitepapers for both the vSphere 6.0 Platform and VSAN 6.0, as they contain all the details of what will be available at GA. Below are the direct links to the whitepapers, as I did not find them on the Technical Whitepapers section of VMware.com, which I suspect is still being updated.

  • What's New vSphere 6.0 Platform WP
  • What's New VSAN 6.0 WP
  • What's New vSphere Operations Management 6.0 (vSOM) WP
  • What's New VMware Horizon View & NVIDIA GRID vGPU WP
  • VMware Integrated OpenStack (VIO) Datasheet

Categories // ESXi, VSAN, vSphere 6.0 Tags // Virtual SAN, VSAN, vSphere 6.0

New VMware Fling to improve Network/CPU performance when using Promiscuous Mode for Nested ESXi

08.28.2014 by William Lam // 44 Comments

I wrote an article a while back, Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?, and the primary motivation behind it was an observation a customer made while using Nested ESXi. The customer was performing networking benchmarks on their physical ESXi hosts, which happened to be hosting a couple of Nested ESXi VMs as well as regular VMs. The customer concluded in his blog that running Nested ESXi VMs on their physical ESXi hosts actually reduced overall network throughput.

UPDATE (04/24/17) - Please have a look at the new ESXi Learnswitch, which is an enhancement to the existing ESXi dvFilter MAC Learn module.

UPDATE (11/30/16) - A new version of the ESXi MAC Learning dvFilter has just been released to support ESXi 6.5; please download v2 for that ESXi release. If you have ESXi 5.x or 6.0, you will need to use the v1 version of the Fling, as v2 is not backwards compatible. You can find all the details on the Fling page here.

This initially did not click until I started to think a bit more about the implications of enabling Promiscuous Mode, which I think is something that not many of us are aware of. At a very high level, Promiscuous Mode allows for proper networking connectivity for our Nested VMs running on top of a Nested ESXi VM (for the full details, please refer to the blog article above). So why is this a problem, and how does it lead to reduced network performance as well as increased CPU load?

The diagram below will hopefully help explain why. Here, I have a single physical ESXi host that is connected to either a VSS (Virtual Standard Switch) or VDS (vSphere Distributed Switch), and I have a portgroup with Promiscuous Mode enabled that contains both Nested ESXi VMs and regular VMs. Let's say we have 1000 network packets destined for our regular VM (highlighted in blue); one would expect that the red boxes (representing the packets) will be forwarded only to our regular VM, right?

[Image: nested-esxi-prom-new-01]
What actually happens is shown in the next diagram: every Nested ESXi VM, as well as every other regular VM within the portgroup that has Promiscuous Mode enabled, will receive a copy of those 1000 network packets on each of its vNICs, even though they were not originally intended for them. This process of making shadow copies of the network packets and forwarding them down to the VMs is a very expensive operation. This is why the customer was seeing reduced network performance as well as increased CPU utilization to process all these additional packets, which would eventually be discarded by the Nested ESXi VMs.

[Image: nested-esxi-prom-new-02]
This really solidified in my head when I logged into my own home lab system, where I run anywhere from 15-20 Nested ESXi VMs at any given time in addition to several dozen regular VMs, just like any home/development/test lab would. I launched esxtop, set the refresh cycle to 2 seconds, and switched to the networking view. At the time I was transferring a couple of ESXi ISOs for my kickstart server and realized that ALL my Nested ESXi VMs got a copy of those packets.

[Image: nested-esxi-mac-learning-dvfilter-0]
As you can see from the screenshot above, every single one of my Nested ESXi VMs was receiving ALL traffic from the virtual switch. This definitely adds up to a lot of resources being wasted on my physical ESXi host that could be used for running other workloads.
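If you want to reproduce this check in your own lab, esxtop from the ESXi Shell is the quickest way; the batch-mode invocation below uses standard esxtop flags, though the output path is just an example:

# Interactive: launch esxtop, press 'n' for the network view,
# then 's' and enter 2 to set a 2-second refresh interval
esxtop

# Batch mode: capture 5 samples at a 2-second interval for offline review
esxtop -b -d 2 -n 5 > /tmp/esxtop-network.csv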

I decided at this point to reach out to engineering to see if there was anything we could do to reduce this impact. I initially thought about using NIOC, but then realized it is primarily designed for managing outbound traffic, whereas the Promiscuous Mode traffic is all inbound, so it would not actually get rid of the traffic. After speaking to a couple of engineers, it turns out this issue had been seen in our R&D Cloud (Nimbus), which provides IaaS capabilities to the R&D organization for quickly spinning up both virtual and physical instances for development and testing.

Christian Dickmann was my go-to guy for Nimbus, and it turns out this particular issue had been seen before. Not only had he seen this behavior, he also had a nice solution to the problem in the form of an ESXi dvFilter that implements MAC Learning! As many of you know, our VSS/VDS does not implement MAC Learning, as we already know which MAC Addresses are assigned to a particular VM.

I got in touch with Christian and was able to validate his solution in my home lab using the latest ESXi 5.5 release. At that point, I knew I had to get this out to the larger VMware community, and I started to work with Christian and our VMware Flings team to see how we could get this released as a Fling.

Today, I am excited to announce the ESXi MAC Learning dvFilter Fling, which is distributed as an installable VIB for your physical ESXi host and provides support for ESXi 5.x & ESXi 6.x.

[Image: esxi-mac-learn-dvfilter-fling-logo]
Note: You will need to enable Promiscuous Mode either on the VSS/VDS or specific portgroup/distributed portgroup for this solution to work.
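For reference, here is a minimal sketch of enabling Promiscuous Mode on a Standard Switch from the ESXi Shell (the vSwitch and portgroup names are just examples; a VDS has to be configured from the vSphere Web Client instead):

# Enable Promiscuous Mode on an entire VSS
esxcli network vswitch standard policy security set --vswitch-name vSwitch0 --allow-promiscuous true

# Or scope it to a specific portgroup
esxcli network vswitch standard portgroup policy security set --portgroup-name NestedESXi --allow-promiscuous true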

You can download the MAC Learning dvFilter VIB here or you can install directly from the URL shown below:

To install the VIB, run the following ESXCLI command if you have the VIB uploaded to your ESXi datastore:

esxcli software vib install -v /vmfs/volumes/<DATASTORE>/vmware-esx-dvfilter-maclearn-0.1-ESX-5.0.vib -f

To install the VIB from the URL directly, run the following ESXCLI command:

esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxi-mac-learning-dvfilter/vmware-esx-dvfilter-maclearn-1.0.vib -f

A system reboot is not necessary and you can confirm the dvFilter was successfully installed by running the following command:

/sbin/summarize-dvfilter

You should be able to see the new MAC Learning dvFilter listed at the very top of the output.

[Image: nested-esxi-mac-learning-dvfilter-2]
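To quickly confirm the filter is present without scanning the full output, you can also filter it (the grep pattern is just an example):

/sbin/summarize-dvfilter | grep -i maclearn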
For the new dvFilter to work, you will need to add two Advanced Virtual Machine Settings to each of your Nested ESXi VMs. These are set on a per-vNIC basis, which means you will need N entries if you have N vNICs on your Nested ESXi VM:

    ethernet#.filter4.name = dvfilter-maclearn
    ethernet#.filter4.onFailure = failOpen

This can be done online, without rebooting the Nested ESXi VMs, if you leverage the vSphere API. Alternatively, you can shut down your Nested ESXi VM and add the entries using either the "legacy" vSphere C# Client or the vSphere Web Client, or, for those who know how, by appending them to the .VMX file and reloading it, as that is where the configuration is persisted on disk.
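For the .VMX approach, here is a minimal sketch from the ESXi Shell; the datastore path, VM name, and Vmid below are examples for illustration only:

# With the Nested ESXi VM powered off, append the entries for vNIC 0
echo 'ethernet0.filter4.name = "dvfilter-maclearn"' >> /vmfs/volumes/datastore1/NestedESXi/NestedESXi.vmx
echo 'ethernet0.filter4.onFailure = "failOpen"' >> /vmfs/volumes/datastore1/NestedESXi/NestedESXi.vmx

# Look up the VM's Vmid and reload its configuration so the change takes effect
vim-cmd vmsvc/getallvms | grep NestedESXi
vim-cmd vmsvc/reload 6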

[Image: nested-esxi-mac-learning-dvfilter-3]
I normally provision my Nested ESXi VMs with 4 vNICs, so I have four corresponding entries. To confirm the settings are loaded, we can re-run the summarize-dvfilter command and we should now see our Virtual Machine listed in the output along with each vNIC instance.

[Image: nested-esxi-mac-learning-dvfilter-4]
Once I started to apply this change across all my Nested ESXi VMs using a script I had written for setting Advanced VM Settings, I immediately saw the decrease in network traffic on ALL my Nested ESXi VMs. For those of you who wish to automate this configuration change, you can take a look at this blog article, which includes both a PowerCLI and a vSphere SDK for Perl script that can help.

I highly recommend that anyone who uses Nested ESXi ensure this VIB is installed on all of their ESXi hosts! As a best practice, you should also isolate your other workloads from your Nested ESXi VMs, which will allow you to limit which portgroups must be enabled with Promiscuous Mode.

Categories // ESXi, Home Lab, Nested Virtualization, vSphere, vSphere 6.0 Tags // dvFilter, ESXi, Fling, mac learning, nested, nested virtualization, promiscuous mode, vib

Quick Tip - Steps to shutdown/startup VSAN Cluster w/vCenter running on VSAN Datastore

07.08.2014 by William Lam // 11 Comments

I know Cormac Hogan already wrote about this topic a while ago, but a question recently came up with a slight twist, and I thought it would be useful to share some additional details. The question that was raised: how do you properly shut down an entire VSAN Cluster when the vCenter Server itself is also running on the VSAN Datastore? One great use case for VSAN, in my opinion, is a vSphere Management Cluster that contains all your basic infrastructure VMs, including a vCenter Server that can be bootstrapped onto a VSAN Datastore. In the event that you need to shut down the entire VSAN Cluster, which may also include your vCenter Server, what is the exact procedure?

To help answer this question, I decided to perform this operation in my own lab, which contains a 3-Node (physical) VSAN Cluster with several VMs running on the VSAN Datastore, including the vCenter Server VM that was managing the VSAN Cluster.

[Image: stutdown-vsan-cluster-with-vcenter-on-vsan-datastore-0]
Below are the steps that I took to properly shut down a VSAN Cluster as well as power everything back on.

UPDATE (4/27) - Added instructions for shutting down a VSAN 6.0 Cluster when vCenter Server is running on top of VSAN.

Shutdown VSAN Cluster (VSAN 6.0)

Step 1 - Shutdown all Virtual Machines running on the VSAN Cluster except for the vCenter Server VM, which will be the last VM you shut down.
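If you prefer the ESXi Shell over the UI for this step, here is a minimal vim-cmd sketch (the Vmid of 10 is just an example):

# List all VMs on the host and note their Vmids
vim-cmd vmsvc/getallvms

# Graceful guest shutdown (requires VMware Tools in the guest)
vim-cmd vmsvc/power.shutdown 10

# Hard power-off, if VMware Tools is not running
vim-cmd vmsvc/power.off 10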

[Image: stutdown-vsan-cluster-with-vcenter-on-vsan-datastore-1]
Step 2 - To help simplify the startup process, I recommend migrating the vCenter Server VM to the first ESXi host so you can easily find the VM when powering back on your VSAN Cluster.

Step 3 - Ensure that there are no vSAN Components being resync'ed before proceeding to the next step. You can find this information by going to the vSAN Cluster and under Monitor->vSAN->Resyncing Components as shown in the screenshot below.

Step 4 - Shutdown the vCenter Server VM, which will now make the vSphere Web Client unavailable.

[Image: stutdown-vsan-cluster-with-vcenter-on-vsan-datastore-4]
Step 5 - Next, you will need to place ALL ESXi hosts into Maintenance Mode. However, you must perform this operation through one of the CLI methods that supports setting the VSAN mode when entering Maintenance Mode. You can either do this by logging directly into the ESXi Shell and running ESXCLI locally or you can invoke this operation on a remote system using ESXCLI.

Here is the ESXCLI command that you will need to run and ensure that "No Action" option is selected when entering Maintenance Mode:

esxcli system maintenanceMode set -e true -m noAction
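If you are running ESXCLI remotely (e.g. via vCLI) instead of in each ESXi Shell, something like the following should work; the hostnames are examples, and you will be prompted for each password:

for host in esxi-01 esxi-02 esxi-03; do
  esxcli --server ${host} --username root system maintenanceMode set -e true -m noAction
done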

Step 6 - Finally, you can now shutdown all ESXi hosts. You can login to each ESXi host using either the vSphere C# Client or the ESXi Shell, or you can perform this operation remotely using the vSphere API, leveraging PowerCLI as an example.
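The shutdown itself can also be done with ESXCLI from each host's ESXi Shell; note that the host must already be in Maintenance Mode, and the reason string is just an example:

esxcli system shutdown poweroff --reason "Planned VSAN cluster shutdown"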

Shutdown VSAN Cluster (VSAN 1.0)

Step 1 - Shutdown all Virtual Machines running on the VSAN Cluster except for the vCenter Server VM.

[Image: stutdown-vsan-cluster-with-vcenter-on-vsan-datastore-1]
Step 2 - To help simplify the startup process, I recommend migrating the vCenter Server VM to the first ESXi host so you can easily find the VM when powering back on your VSAN Cluster.

[Image: stutdown-vsan-cluster-with-vcenter-on-vsan-datastore-2]
Step 3 - Place all ESXi hosts into Maintenance Mode except for the ESXi host that is currently running the vCenter Server. Ensure you de-select "Move powered-off and suspended virtual machines to other hosts in the Cluster" and select the "No Data Migration" option, since we do not want any data to be migrated as we are shutting down the entire VSAN Cluster.

Note: Make sure you do not shut down any of the ESXi hosts during this step, because the vCenter Server's VSAN components are distributed across multiple hosts. If you do, you will be unable to properly shut down the vCenter Server VM because its VSAN components will not be available.

[Image: stutdown-vsan-cluster-with-vcenter-on-vsan-datastore-3]
Step 4 - Shutdown the vCenter Server VM, which will now make the vSphere Web Client unavailable.

[Image: stutdown-vsan-cluster-with-vcenter-on-vsan-datastore-4]
Step 5 - Finally, you can now shutdown all ESXi hosts. You can login to each ESXi host using either the vSphere C# Client or the ESXi Shell, or you can perform this operation remotely using the vSphere API, leveraging PowerCLI as an example.

Startup VSAN Cluster

Step 1 - Power on all the ESXi hosts that are part of the VSAN Cluster.

Step 2 - Once all the ESXi hosts have been powered on, you can then login to the ESXi host that contains your vCenter Server. If you took my advice earlier from the shutdown procedure, then you can login to the first ESXi host and power on your vCenter Server VM.

Note: You can perform steps 2-4 using the vSphere C# Client, but you can also do this using either the API or by simply calling vim-cmd from the ESXi Shell. To use vim-cmd, you first need to search for the vCenter Server VM by running the following command:

vim-cmd vmsvc/getallvms

[Image: startup-vsan-cluster-with-vcenter-on-vsan-datastore-0]
You will need to make a note of the Vmid; in this example, our vCenter Server has a Vmid of 6.

Step 3 - To power on the VM, you can run the following command and specify the Vmid:

vim-cmd vmsvc/power.on [VMID]

[Image: startup-vsan-cluster-with-vcenter-on-vsan-datastore-1]
Step 4 - If you would like to know when the vCenter Server is ready, you can check the status of VMware Tools, as that should give you an indication that the system is up and running. To do so, run the following command and look for the VMware Tools status:

vim-cmd vmsvc/get.guest [VMID]
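The get.guest output is fairly verbose; to watch just the Tools state, you can filter it using our example Vmid of 6:

vim-cmd vmsvc/get.guest 6 | grep toolsRunningStatus

Once this reports toolsRunningStatus = "guestToolsRunning", the vCenter Server VM is booted, though the vCenter services may still need a few more minutes to come up.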

[Image: startup-vsan-cluster-with-vcenter-on-vsan-datastore-2]
Step 5 - At this point, you can now login to the vSphere Web Client and take all of your ESXi hosts out of Maintenance Mode and then power on the rest of your VMs.
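If you would rather script this step as well, each host can be taken out of Maintenance Mode from its ESXi Shell (or remotely, as shown earlier):

esxcli system maintenanceMode set -e false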

[Image: startup-vsan-cluster-with-vcenter-on-vsan-datastore-3]
As you can see, the process to shut down an entire VSAN Cluster, even with the vCenter Server running on the VSAN Datastore, is fairly straightforward. Once you are comfortable with the procedure, you can even automate the entire process using the vSphere API/CLI, so you do not need a GUI to perform these steps. This can even be a good idea if you are monitoring a UPS and have an automated way of sending remote commands to shut down your infrastructure.

Categories // ESXi, VSAN, vSphere 5.5, vSphere 6.0 Tags // ESXi 5.5, vCenter Server, VSAN, vsanDatastore, vSphere 5.5, vSphere 6.0
