Earlier this week I needed to test something that required a VMware Distributed Virtual Switch (VDS), and it had to be a physical setup, so Nested ESXi was out of the question. I could have used my remote lab, but given that what I was testing was a bit "experimental", I preferred using my home lab in case I needed direct console access. At home, I run ESXi on a single Apple Mac Mini, and one of the challenges with this and other similar platforms (e.g. Intel NUC) is that they only have a single network interface. As you might have guessed, this is a problem when looking to migrate from a Virtual Standard Switch (VSS) to a VDS, as the migration normally requires at least two NICs.
Unfortunately, I had no other choice and needed to find a solution. After a couple of minutes of searching the web, I stumbled across this ServerFault thread here which provided a partial solution to my problem. In vSphere 5.1, we introduced a new feature that automatically rolls back a network configuration change if it negatively impacts network connectivity to your vCenter Server. This feature can be disabled temporarily by editing the vCenter Server Advanced Setting (config.vpxd.network.rollback), which allows us to bypass the single-NIC issue; however, this does not solve the problem entirely. What ends up happening is that the single pNIC is now associated with the VDS, but the VM portgroups are not migrated. This is problematic because the vCenter Server is also running on the very ESXi host it is managing, and it has now lost network connectivity 🙂
I lost access to my vCenter Server, and even though I could connect directly to the ESXi host, I was not able to change the VM network to the Distributed Virtual Portgroup (DVPG). This is actually expected behavior, and there is an easy workaround; let me explain. When you create a DVPG, there are three different port bindings that can be configured: Static, Dynamic, and Ephemeral. By default, Static binding is used. Both Static and Dynamic DVPGs can only be managed through vCenter Server, which means you cannot change the VM network to a non-Ephemeral DVPG when connected directly to the host; in fact, such a DVPG is not even listed when connecting with the vSphere C# Client. The simple workaround is to create a DVPG using the Ephemeral binding, which will then allow you to change the VM network of your vCenter Server, and that is the last piece to solving this puzzle.
Disclaimer: This is not officially supported by VMware, please use at your own risk.
Here are the exact steps to take if you wish to migrate an ESXi host with a single NIC from a VSS to a VDS while that host is also running vCenter Server:
Step 1 - Change the following vCenter Server Advanced Setting config.vpxd.network.rollback to false:
Note: Remember to re-enable this feature once you have completed the migration.
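For those who prefer the command line, the same setting can be changed with PowerCLI. This is just a sketch: the vCenter address is a placeholder, and you should verify the setting name in your own environment before relying on it.

```powershell
# Connect to vCenter Server (replace with your own address/credentials)
Connect-VIServer -Server vcenter.example.local

# Temporarily disable network rollback (remember to set it back to "true" afterwards!)
Get-AdvancedSetting -Entity $global:DefaultVIServer -Name "config.vpxd.network.rollback" |
    Set-AdvancedSetting -Value "false" -Confirm:$false
```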
Step 2 - Create a new VDS and the associated portgroups for both your VMkernel interfaces and VM networks. For the DVPG that will be used for the vCenter Server's VM network, be sure to change the binding to Ephemeral before proceeding with the VDS migration.
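This step can also be scripted with PowerCLI. The datacenter, switch, and portgroup names below are examples only; the key detail is setting the port binding to Ephemeral on the portgroup that the vCenter Server VM will use.

```powershell
# Create the VDS and a portgroup for the vCenter Server VM (names are examples)
$vds  = New-VDSwitch -Name "VDS-HomeLab" -Location (Get-Datacenter -Name "HomeLab")
$dvpg = New-VDPortgroup -VDSwitch $vds -Name "DVPG-VM-Network"

# Ephemeral binding is what allows the portgroup to be edited directly on the host
Set-VDPortgroup -VDPortgroup $dvpg -PortBinding Ephemeral
```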
Step 3 - Proceed with the normal VDS migration wizard using the vSphere Web/C# Client and ensure that you perform the correct mappings. Once completed, you should be able to connect directly to the ESXi host using either the vSphere C# Client or the ESXi Embedded Host Client to confirm that the VDS migration was successful, as seen in the screenshot below.
Note: If you forgot to perform Step 2 (which I initially did), you will need to log in to the DCUI of your ESXi host and restore the networking configuration.
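The wizard's host/uplink migration can be approximated in PowerCLI as well. This sketch assumes the host's single uplink is vmnic0 and the management interface is vmk0 (placeholder names, as are the switch and portgroup names); migrating the pNIC and the VMkernel interface in a single call keeps the host from being stranded without an uplink mid-migration.

```powershell
# Add the host to the VDS (example names; adjust for your environment)
$vds    = Get-VDSwitch -Name "VDS-HomeLab"
$vmhost = Get-VMHost -Name "esxi.example.local"
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Migrate the single pNIC and the management VMkernel interface together
$pnic   = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name "vmnic0"
$vmk    = Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk0"
$mgmtPg = Get-VDPortgroup -VDSwitch $vds -Name "DVPG-Management"
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $pnic `
    -VMHostVirtualNic $vmk -VirtualNicPortgroup $mgmtPg -Confirm:$false
```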
Step 4 - The final step is to change the VM network for your vCenter Server. In my case, I am using the VCSA, and due to a bug I found in the Embedded Host Client, you will need to use the vSphere C# Client to perform this change if you are running VCSA 6.x. If you are running a Windows vCenter Server or VCSA 5.x, then you can use the Embedded Host Client to modify the VM network to use the new DVPG.
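If vCenter is reachable again at this point, the reconfiguration can also be done in one PowerCLI pipeline (VM and portgroup names are examples):

```powershell
# Move the vCenter Server VM's network adapter onto the ephemeral DVPG
$dvpg = Get-VDPortgroup -Name "DVPG-VM-Network"
Get-VM -Name "VCSA" | Get-NetworkAdapter |
    Set-NetworkAdapter -Portgroup $dvpg -Confirm:$false
```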
Once you have completed the VM reconfiguration, you should now be able to log in to your vCenter Server, which is connected to a DVPG on a VDS backed by a single NIC on your ESXi host 😀
There is probably no good use case for this outside of home labs, but I was happy to find a solution, and hopefully it will come in handy for others in a similar situation who would like to use and learn more about the VMware VDS.
Jesse S says
The dvSwitch wizard in NGC gives you the option to move the vmkernel and VM portgroups at the same time
William Lam says
I'm aware of that 🙂 I've used the wizard on several occasions; however, as mentioned in the article, VM portgroups are NOT migrated. I suspect the reason is that there's a tiny blip in network connectivity when the VMkernel interfaces are migrated, and VC never actually receives the last request to also move the VM portgroups (that's my guess).
Hellbartonio says
Thanks a lot. Not long ago I suffered from this issue. I finally completed my objective by connecting an additional NIC, but your article makes it so simple. Thanks again, and sorry for my poor English.
Don says
Brilliant! thank you very much for this!
Jeroen v.d. Berg says
Every time I come across your site I get a slap in the face: "of course, that's how you do it!" and get reminded to visit more often for great tips. I will try this in my homelab on my 2 old Supermicro servers with only 1 NIC. Thanks!
blink182 says
any chance of a command line way of doing this ?
Tommy McNicholas says
I wonder does anyone make a two-port USB NIC?
Mederic Mausse says
Dude... you would literally have saved me 2 hours of rolling back through the ESXi console after messing around with this on the single-NIC blades I had to create a VDS for. I got as far as the ephemeral part, at which stage the network went bananas. You sorted out the next steps for me, and I will retry this.
Thanks
ChrisO says
Brilliant stuff!!! Saved me days messing around in my lab trying to get this working!!!
Marcel says
Thanks for sharing, William! Now I've got a VDS on my single-NIC NUCs.
Phil Buckley-Mellor says
Once again you saved me from pulling my hair out William, thank you.
Graham says
Just found this for use in my lab. Thanks, makes perfect sense.
paul says
thanks man you saved my life
wildseed says
Me and my Intel NUC home lab thank you. Not being able to have distributed vswitches really limited my testing.
Peter says
Thank you!
Marco van den Helm says
Thank you! I had some problems reconnecting the portgroup to the vCenter Server, because somehow I was unable to edit its settings. But I found the following site with a script to set the port group of the vCenter: http://www.lucd.info/2010/03/04/dvswitch-scripting-part-8-get-and-set-network-adapters/
Med says
Thank you very much !
yang yu says
I have to say, so many times when I try to google a hack off the net, I end up guided here... Thanks, man. As someone working on virtualized Cisco + VMware, you really play it well.
Eirik Toft says
Of course, in my conversion I went gangbusters and saved the vCenter Server for last. No biggie: I just added a temporary port group with Ephemeral binding, migrated there, and then in step three proceeded normally by logging in to the host and modifying the VMkernel interface to move it to the existing port group that had all of the migrated VMs in it.
Top Notch!!
Alvaro Jose Soto says
Thank you! I have a lab on Intel NUCs and need the DVS for VRA Network profiles.
Kleber says
Excellent, exactly what I need for my home lab with NUC, keep it up 🙂
Benoit says
Thank you for this tuto.
Is it mandatory to use only one VDS "for both your VMkernel interfaces and VM Networks"?
Will the procedure work if the VMkernel and VM networks are on separate switches? (My VMkernel will stay on a VSS, but the VM networks, including the VCSA network, have to go on the VDS.)
Kind Regards,
philuxephiluxe says
thanks a lot, this saved me lot of time ...
philuxephiluxe says
Could you confirm whether WOL will work with a single-NIC host using a distributed virtual switch?
Vita says
Huh!... I didn't know there was a two-NIC requirement for VDSes. I have this host I use only for the VCSA, because I like to keep it apart from the main cluster, and it only has one NIC. I thought it was some sort of bug that the thing always seemed to fail and get sort of unpredictable afterwards, but with time I learned to create the skeleton, so to speak, of a VDS and then just move right into place.
I've been doing that for years now! And again, I recently re-read the networking PDF for vSphere 7, and I missed that AGAIN. Granted, I was focusing on LACP and other stuff for the main cluster, but y'know… It doesn't end there: even more recently I was going through all the advanced settings in vCenter just because, and I saw that rollback setting and immediately knew what it was, yet only now did I put 2 and 2 together that I could use it to avoid vMotioning vCenter if I ever have to redeploy. 😂
Thanks--A MILLION THANKS!
gedogarne says
Hi,
just wondering if this still works in ESXi 7.0.3...?
William Lam says
Yes
gedogarne says
Thanks for the reply... yes, shortly after posting this I tried it, and it worked perfectly. So now I'm running 3 Mac Minis in a cluster on a VDS.
Thanks for this solution - which works quite well.
Sean says
The option to add the tags is missing for me in 7.0.3.