WilliamLam.com

ESXi host with network redundancy using NSX-T and only 2 pNICs?

03.27.2018 by William Lam // 8 Comments

In today's data centers, it is not uncommon to find servers with only 2 x 10GbE network interfaces, especially with the rise of Hyper-Converged Infrastructure over the last several years. For customers looking to deploy NSX-T with ESXi, there is an important physical network constraint to be aware of, which is briefly mentioned in the NSX-T documentation here.

For example, your hypervisor host has two physical links that are up: vmnic0 and vmnic1. Suppose vmnic0 is used for management and storage networks, while vmnic1 is unused. This would mean that vmnic1 can be used as an NSX-T uplink, but vmnic0 cannot. To do link teaming, you must have two unused physical links available, such as vmnic1 and vmnic2.

As shown in the diagram below, an ESXi host with only two physical NICs cannot provide complete network redundancy, because each pNIC can only be associated with a single switch (VSS/VDS or the new N-VDS); pNICs cannot be shared across switches.


For customers, this means that you need to allocate a minimum of 4 pNICs to provide redundancy for both overlay traffic and non-overlay VMkernel traffic such as Management, vMotion, vSAN, etc. This is much easier said than done, as not all hardware platforms can be easily expanded, and even if they can, there is still a significant cost in expanding the physical network footprint (switch ports, cabling, etc.).
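
Before assuming a host can accommodate an N-VDS, it is worth checking how many pNICs are actually unclaimed. Below is a minimal PowerCLI sketch of such a check (my own illustration, not from the NSX-T documentation): it maps each physical NIC to the standard vSwitch, if any, that currently owns it. The hostname is a placeholder, an existing Connect-VIServer session is assumed, and it only inspects standard vSwitches, so distributed switch uplinks would need a similar lookup.

    # Minimal sketch: which pNICs on this host are already claimed by a standard vSwitch?
    # (Assumes an existing Connect-VIServer session; hostname is a placeholder.)
    $vmhost = Get-VMHost -Name "esxi-01.lab.local"

    # Map each claimed pNIC to the standard vSwitch that owns it
    $claimed = @{}
    foreach ($vss in Get-VirtualSwitch -VMHost $vmhost -Standard) {
        foreach ($nic in $vss.Nic) { $claimed[$nic] = $vss.Name }
    }

    # Report every physical NIC and whether it is still free for an N-VDS uplink
    Get-VMHostNetworkAdapter -VMHost $vmhost -Physical | ForEach-Object {
        [pscustomobject]@{
            pNIC      = $_.Name
            Mac       = $_.Mac
            ClaimedBy = $(if ($claimed.ContainsKey($_.Name)) { $claimed[$_.Name] } else { "unused" })
        }
    }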

UPDATE (06/12/18) - As of NSX-T 2.2, which was recently released, there is now a UI in NSX-T Manager for managing the migration of VMkernel interfaces to the N-VDS. For automation purposes, you may still find this article useful, but you now have the option of using the UI.

[Read more...]

Categories // Automation, ESXi, NSX Tags // ESXi, N-VDS, NSX-T, REST API

Thunderbolt to 10GbE Network Adapters for ESXi

03.15.2018 by William Lam // 5 Comments

I was recently made aware of this article in which the author, Karim Elatov, had successfully demonstrated the use of a Sonnet Thunderbolt 2 to 10 Gigabit Ethernet Adapter with ESXi running on an Apple Mac Mini. As far as I am aware, this may be the first public confirmation that such a device would work with ESXi, not to mention having it functional on the Mac Mini. I know in past years there have been unconfirmed reports on various forums mentioning a Thunderbolt to 10GbE solution that works with ESXi, but it was unclear whether custom drivers were needed or if it would even work with newer versions of ESXi.


This topic has been popular amongst our customers who virtualize Apple macOS on vSphere. In fact, several years back I had written an article on Thunderbolt Storage for ESXi, which includes a number of solutions that our customers have implemented to provide remote storage for their vSphere infrastructure running on either an Apple Xserve, Mac Pro or Mac Mini. Questions around a functional Thunderbolt to 10GbE solution have definitely come up over the years, but I had never heard a success story from a customer, at least until now.

From Karim's post, it looks like he was able to get this working using ESXi 6.0 but it was unclear if there was anything he needed to do to get the device recognized. I reached out to Karim and he was able to confirm that the Thunderbolt device was recognized by ESXi without any additional driver installation. In fact, if you look at this console output on his blog, you will see that it simply uses the inbox Intel ixgbe driver. I had also asked if Karim tried this with the latest version of ESXi, which is currently at 6.5 Update 1. Karim was kind enough to perform one additional test for me which was to confirm the device would still work with the latest ESXi release, which you can see for yourself in the screenshot below.
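
If you want to run the same verification in your own environment, here is a rough PowerCLI sketch (my own, not from Karim's post) that lists each NIC along with the driver ESXi bound to it, equivalent to running esxcli network nic list in the ESXi shell; the hostname is a placeholder and an existing Connect-VIServer session is assumed.

    # Confirm which driver ESXi has bound to each NIC, equivalent to
    # "esxcli network nic list" in the ESXi shell. The Driver column should
    # show the inbox ixgbe driver for the Thunderbolt 10GbE adapter.
    # (Assumes an existing Connect-VIServer session; hostname is a placeholder.)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "mac-mini-01.lab.local") -V2
    $esxcli.network.nic.list.Invoke() |
        Select-Object Name, Driver, Description |
        Format-Table -AutoSize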

UPDATE (02/04/19) - Chad Moon recently shared his experience getting 10GbE support with an Intel NUC using the OWC Mercury Helios 3, a Thunderbolt 3 to PCIe expansion enclosure.

[Read more...]

Categories // Apple, ESXi, Home Lab Tags // 10GbE, ESXi, mac mini, mac pro, SFP+, Sonnet, thunderbolt, thunderbolt 3

Identifying ESXi boot method & boot device

01.09.2018 by William Lam // 13 Comments

There was an interesting discussion on our internal Socialcast platform last week about figuring out how an ESXi host was booted, whether from a local device like a disk or USB device, via Auto Deploy, or even boot from SAN, along with its respective boot device. Although I had answered the question, I was not confident that we actually had a reliable and programmatic method for identifying all the different ESXi boot methods, which of course piqued my interest.

With a bit of trial and error in the lab, I believe I have found a method by which we can identify the ESXi boot type (Local, Stateless, Stateless Caching, Stateful or Boot from SAN) along with some additional details pertaining to the boot device. To demonstrate this, I have created the following PowerCLI script, ESXiBootDevice.ps1, which contains a function called Get-ESXiBootDevice.

The function can be called without any parameters, in which case it will query all ESXi hosts for a given vCenter Server and/or standalone ESXi host. You can also target a specific ESXi host by simply passing in the -VMHostname option.
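
For example, usage might look like the following, assuming you have already connected with Connect-VIServer and dot-sourced the script; the hostname is a placeholder.

    # Load the function from the script, then query boot details
    # (assumes an existing Connect-VIServer session; hostname is a placeholder)
    . .\ESXiBootDevice.ps1

    # All ESXi hosts known to the connected vCenter Server (or a standalone ESXi host)
    Get-ESXiBootDevice

    # A specific ESXi host
    Get-ESXiBootDevice -VMHostname "esxi-01.lab.local"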

Here is an example of the output for one of my lab environments, which shows several ESXi hosts and their different boot methods, from local disk to Auto Deploy (covering stateless, stateless caching and stateful deployments). Depending on the BootType, the boot device shown in the Device column will either be the MAC address of the NIC used to network boot the ESXi host or the identifier of a disk device. I have also included some additional details such as vendor/model along with the media type (SAS, SSD or USB), which is available as part of ESXCLI.


This script also supports ESXi environments that boot from SAN (FC, FCoE or iSCSI) and you can easily identify that with the word "remote" for the BootType. I would like to give a huge thanks to David Stamen who helped me out with the boot from SAN testing.
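
For reference, the vendor/model and media-type details mentioned above are surfaced through ESXCLI; a rough sketch of pulling the equivalent data yourself via PowerCLI's Get-EsxCli (my own illustration, not part of the script) could look like this, with the hostname as a placeholder:

    # Rough sketch: pull vendor/model/media-type details for a host's storage devices
    # via ESXCLI (equivalent to "esxcli storage core device list" in the ESXi shell).
    # (Assumes an existing Connect-VIServer session; hostname is a placeholder.)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi-01.lab.local") -V2
    $esxcli.storage.core.device.list.Invoke() |
        Select-Object Device, DisplayName, Vendor, Model, IsSSD |
        Format-Table -AutoSize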

Categories // Automation, ESXi, PowerCLI, vSphere Tags // /UserVars/ImageCachedSystem, auto deploy, boot from SAN, ESXi, PowerCLI, stateful, stateless, stateless caching, vSphere API

