Long awaited Fling, Windows vCenter Server to VCSA Converter Appliance is finally here!

03.02.2015 by William Lam //

Back in VMworld 2013, the Office of the CTO held its annual Fling Contest, where customers could submit their ideas for cool new Flings they would like to see. If selected, not only would the individual get a free pass to VMworld, but VMware Engineers would also build and release the Fling. How cool is that!? There were over 200 submissions that year, and I was very fortunate to have been on the panel that helped select the winner. The winning Fling for that year was the Windows vCenter Server (VCS) to VCSA Converter Appliance by Stephen Athanas.

UPDATE 09/15/16 - The officially supported VCSA Migration Tool has GA'ed with the release of vSphere 6.0 Update 2m. Please see this blog post here for more details.

The idea of a VCS to VCSA Converter really resonated with me as well as with many of our customers. In fact, everyone I had spoken with who has used the VCSA just loves the simplicity and the ease of deployment and management it provides compared to its Windows counterpart. However, one of the biggest adoption barriers I have seen from talking to customers is that there is no simple way of migrating from a Windows-based vCenter Server to the VCSA. You literally have to start fresh, and that is pretty much a show stopper for the majority of our customers, and I do not disagree with them.

Customers want a migration path to be able to preserve all their vCenter Server configurations such as Folder structures, Permissions, Alarms, Tags, VM Storage Policies, etc. This is the idea behind the VCS to VCSA Converter Appliance which helps migrate a Windows vCenter Server running on an external Microsoft SQL Server Database to an embedded VCSA running a vPostgres Database. Today, I am very proud to announce the release of the VCS to VCSA Converter Appliance Fling.

The Converter Appliance migrates the vCenter database, Roles, Permissions, Privileges, Certificates, Alarms and the Inventory Service, which contains Tags and VM Storage Policies. At the end of the migration, you will end up with a fully functional VCSA with the original hostname/IP Address fully intact and ready to use. As you can imagine, this was no easy task and we had some of the smartest VMware Engineers working on this project. Todd Valentine from the OCTO managed the overall program, with Ravi Soundararajan as the Chief Architect working closely with Mike Stunes, Jignesh Shah and Raju Angani. Being a huge advocate and supporter of the VCSA, I also had the unique opportunity to be involved in this project and to work closely with some amazing engineers to help design, test and validate the migration appliance.

We hope you give the VCS to VCSA Converter Appliance a try in your lab (please carefully read through the documentation along with the requirements and caveats before getting started). Let us know what you think by either leaving a comment here on my blog or on the Flings webpage. This is our first release and we already have some ideas for features and capabilities we would love to add in future releases, but if there are things you feel are currently missing or enhancements you would like to see, please let us know!

If you wish to provide private feedback about your environment or engage with us further, feel free to send an email to Todd Valentine at: tvalentine [at] vmware [dot] com

Categories // VCSA, vSphere 5.5 Tags // fling, migrate2vcsa, VCSA, vcva, vSphere 5.5

New VMware Fling to improve Network/CPU performance when using Promiscuous Mode for Nested ESXi

08.28.2014 by William Lam // 44 Comments

I wrote an article a while back, Why is Promiscuous Mode & Forged Transmits required for Nested ESXi?, and the primary motivation behind that article was an observation a customer made while using Nested ESXi. The customer was performing some networking benchmarks on their physical ESXi hosts, which happened to be hosting a couple of Nested ESXi VMs as well as regular VMs. The customer concluded in his blog that running Nested ESXi VMs on their physical ESXi hosts actually reduced overall network throughput.

UPDATE (04/24/17) - Please have a look at the new ESXi Learnswitch which is an enhancement to the existing ESXi dvFilter MAC Learn module.

UPDATE (11/30/16) - A new version of the ESXi MAC Learning dvFilter has just been released to support ESXi 6.5; please download v2 for that ESXi release. If you have ESXi 5.x or 6.0, you will need to use the v1 version of the Fling, as v2 is not backwards compatible. You can find all the details on the Fling page here.

This initially did not click until I started to think about it a bit more and about the implications of enabling Promiscuous Mode, which I think is something not many of us are aware of. At a very high level, Promiscuous Mode allows for proper networking connectivity for our Nested VMs running on top of a Nested ESXi VM (for the full details, please refer to the blog article above). So why is this a problem, and how does it lead to reduced network performance as well as increased CPU load?

The diagram below will hopefully help explain why. Here, I have a single physical ESXi host that is connected to either a VSS (Virtual Standard Switch) or VDS (vSphere Distributed Switch), and I have a portgroup which has Promiscuous Mode enabled and contains both Nested ESXi VMs as well as regular VMs. Let's say we have 1000 network packets destined for our regular VM (highlighted in blue); one would expect that the red boxes (representing the packets) would be forwarded only to our regular VM, right?

[Image: nested-esxi-prom-new-01]
What actually happens is shown in the next diagram below: every Nested ESXi VM, as well as every other regular VM within the portgroup that has Promiscuous Mode enabled, will receive a copy of those 1000 network packets on each of their vNICs, even though the packets were not originally intended for them. This process of making shadow copies of the network packets and forwarding them down to the VMs is a very expensive operation. This is why the customer was seeing reduced network performance as well as increased CPU utilization to process all these additional packets, which would eventually be discarded by the Nested ESXi VMs.

[Image: nested-esxi-prom-new-02]
This really solidified in my head when I logged into my own home lab system, on which I run anywhere from 15-20 Nested ESXi VMs at any given time in addition to several dozen regular VMs, just like any home/development/test lab would. I launched esxtop, set the refresh cycle to 2 seconds and switched to the networking view. At the time I was transferring a couple of ESXi ISOs for my kickstart server and realized that ALL my Nested ESXi VMs got a copy of those packets.

[Image: nested-esxi-mac-learning-dvfilter-0]
As you can see from the screenshot above, every single one of my Nested ESXi VMs was receiving ALL traffic from the virtual switch, which definitely adds up to a lot of resources being wasted on my physical ESXi host that could otherwise be used for running other workloads.

I decided at this point to reach out to engineering to see if there was anything we could do to help reduce this impact. I initially thought about using NIOC, but then realized it was primarily designed for managing outbound traffic, whereas the Promiscuous Mode traffic is all inbound, so it would not actually get rid of the traffic. After speaking to a couple of Engineers, it turns out this issue had been seen in our R&D Cloud (Nimbus), which provides IaaS capabilities to the R&D organization for quickly spinning up both virtual and physical instances for development and testing.

Christian Dickmann was my go-to guy for Nimbus, and it turns out this particular issue had been seen before. Not only had he seen this behavior, he also had a nice solution to the problem in the form of an ESXi dvFilter that implements MAC Learning! As many of you know, our VSS/VDS does not implement MAC Learning because we already know which MAC Addresses are assigned to a particular VM.

I got in touch with Christian and was able to validate his solution in my home lab using the latest ESXi 5.5 release. At this point, I knew I had to get this out to the larger VMware Community and started to work with Christian and our VMware Flings team to see how we could get this released as a Fling.

Today, I am excited to announce the ESXi MAC Learning dvFilter Fling, which is distributed as an installable VIB for your physical ESXi host and provides support for ESXi 5.x & ESXi 6.x.

Note: You will need to enable Promiscuous Mode either on the VSS/VDS or specific portgroup/distributed portgroup for this solution to work.
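If you prefer to make that change programmatically rather than through the vSphere Client, below is a rough pyvmomi sketch for a standard vSwitch portgroup. It assumes you already have a connected HostSystem object from pyvmomi, and the function and parameter names are just placeholders for illustration, not part of the Fling:

from pyVmomi import vim

def enable_promiscuous_mode(host, portgroup_name):
    """Enable Promiscuous Mode on a standard vSwitch portgroup (lab use only)."""
    for pg in host.config.network.portgroup:
        if pg.spec.name == portgroup_name:
            spec = pg.spec
            # Make sure a security policy object exists before flipping the flag
            if spec.policy.security is None:
                spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy()
            spec.policy.security.allowPromiscuous = True
            host.configManager.networkSystem.UpdatePortGroup(
                pgName=portgroup_name, portgrp=spec)
            return True
    return False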

You can download the MAC Learning dvFilter VIB here or you can install directly from the URL shown below:

To install the VIB, run the following ESXCLI command if you have the VIB uploaded to your ESXi datastore:

esxcli software vib install -v /vmfs/volumes/<DATASTORE>/vmware-esx-dvfilter-maclearn-0.1-ESX-5.0.vib -f

To install the VIB from the URL directly, run the following ESXCLI command:

esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxi-mac-learning-dvfilter/vmware-esx-dvfilter-maclearn-1.0.vib -f

A system reboot is not necessary and you can confirm the dvFilter was successfully installed by running the following command:

/sbin/summarize-dvfilter

You should be able to see the new MAC Learning dvFilter listed at the very top of the output.

[Image: nested-esxi-mac-learning-dvfilter-2]
For the new dvFilter to work, you will need to add two Advanced Virtual Machine Settings to each of your Nested ESXi VMs. This is on a per-vNIC basis, which means you will need N sets of entries if you have N vNICs on your Nested ESXi VM.

    ethernet#.filter4.name = dvfilter-maclearn
    ethernet#.filter4.onFailure = failOpen

This can be done online without rebooting the Nested ESXi VMs if you leverage the vSphere API. Another way to add the settings is to shut down your Nested ESXi VM and use either the "legacy" vSphere C# Client or the vSphere Web Client, or, for those who know how, to append the entries directly to the .VMX file and reload it, as that is where the configuration is persisted on disk.
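For the vSphere API route, here is a rough pyvmomi sketch that adds the two entries for each vNIC of a running Nested ESXi VM via ReconfigVM_Task. The vCenter hostname, credentials, VM name and vNIC count below are placeholders for illustration only:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (certificate verification disabled for lab use only)
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='VMware1!',
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'Nested-ESXi-01')
    view.DestroyView()

    # Build one pair of entries per vNIC (ethernet0..ethernet3 for 4 vNICs)
    opts = []
    for nic in range(4):
        opts.append(vim.option.OptionValue(
            key='ethernet%d.filter4.name' % nic, value='dvfilter-maclearn'))
        opts.append(vim.option.OptionValue(
            key='ethernet%d.filter4.onFailure' % nic, value='failOpen'))

    # Apply the settings online; no reboot of the Nested ESXi VM is required
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(extraConfig=opts))
finally:
    Disconnect(si)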

[Image: nested-esxi-mac-learning-dvfilter-3]
I normally provision my Nested ESXi VMs with 4 vNICs, so I have four corresponding entries. To confirm the settings are loaded, we can re-run the summarize-dvfilter command and we should now see our Virtual Machine listed in the output along with each vNIC instance.

[Image: nested-esxi-mac-learning-dvfilter-4]
Once I started to apply this change across all my Nested ESXi VMs using a script I had written for setting Advanced VM Settings, I immediately saw a decrease in network traffic on ALL my Nested ESXi VMs. For those of you who wish to automate this configuration change, you can take a look at this blog article, which includes both a PowerCLI and a vSphere SDK for Perl script that can help.

I highly recommend anyone that uses Nested ESXi ensure this VIB is installed on all their ESXi hosts! As a best practice, you should also isolate your other workloads from your Nested ESXi VMs, which will allow you to limit which portgroups must be enabled with Promiscuous Mode.

Categories // ESXi, Home Lab, Nested Virtualization, vSphere, vSphere 6.0 Tags // dvFilter, ESXi, fling, mac learning, nested, nested virtualization, promiscuous mode, vib

pyvmomi (vSphere SDK for Python) 5.5.0-2014.1 released!

08.15.2014 by William Lam // 1 Comment

The 5.5.0-2014.1 release of @pyvmomi is now available https://t.co/deHgZviLN1

— Shawn Hartsock ☁️ (@hartsock) August 15, 2014

I just saw an awesome update from Shawn Hartsock, a fellow VMware colleague. For those of you who do not know him, Shawn works in our Ecosystems and Solutions Engineering (EASE) organization and is the primary maintainer of VMware's pyvmomi (vSphere SDK for Python) open-source project. The pyvmomi project was open sourced last December, which I had written about here; since then it has received over 3,000 downloads and has a very active community. Much of this success has been due to the hard work from Shawn in fostering an active community around pyvmomi.

The announcement today from Shawn is a new release of pyvmomi at version 5.5.0-2014.1:

  • Download for pyvmomi 5.5.0-2014.1
  • Release Notes for pyvmomi 5.5.0-2014.1

As mentioned earlier, pyvmomi is a very active project and Shawn is constantly engaging with users, looking for feedback, suggestions or requests for new samples to build. If you are interested in vSphere Automation and would like to leverage Python, be sure to check out the pyvmomi GitHub repository! Lastly, if you have written some cool scripts/applications or would like to request specific sample scripts, be sure to send a pull request to Shawn, as we would love to see more contributions and collaboration from the community!
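If you have not tried pyvmomi yet, here is a minimal sketch of what a script looks like: it simply connects to vCenter and prints each VM's name and power state. The hostname and credentials are placeholders for your own environment:

import atexit
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (certificate verification disabled for lab use only)
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='VMware1!',
                  sslContext=ssl._create_unverified_context())
atexit.register(Disconnect, si)

# Walk the inventory and print every VM's name and power state
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)
view.DestroyView()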

Categories // Automation Tags // ESXi, fling, python, pyVmomi, vSphere, vSphere SDK
