
Working USB Ethernet Adapter (NIC) for ESXi

03.01.2016 by William Lam //

As part of upgrading my personal vSphere home lab from an Apple Mac Mini to an Intel NUC (more on this in a future blog), I have been researching alternatives for adding an additional network adapter. Like the Mac Mini, the Intel NUC includes only a single built-in Ethernet adapter. Unlike the Mac Mini, however, it lacks additional IO connectors such as a Thunderbolt port, which can support a Thunderbolt to Ethernet adapter. I think this is probably the only downside to the Intel NUC platform, and I have seen similar feedback from other vSphere home labbers who currently use or would like to use the NUC. Perhaps with the next update of the NUC platform, code named "Skull Canyon", the rumored Thunderbolt 3 / USB-C connector may make things easier, as some of the existing vendors who produce Thunderbolt to Ethernet adapters also use common drivers like the Broadcom tg3, which have historically worked with ESXi.

One option that has been suggested by many folks over the years is using a USB-based Ethernet adapter to provide additional networking to ESXi. Historically, the answer had been no, because there were no known device drivers that would work with ESXi. I had even looked into this a few years ago, and although I ran into some folks who seemed to have made it work, I was never able to find the right USB Ethernet adapter to confirm it myself. It was not until last week that I decided to start fresh again, and after a bit of Googling I came across an old VMTN thread here in which VMTN user AK_____28 mentioned he had success with the StarTech USB 3.0 to Gigabit Ethernet NIC Network Adapter, using a custom compiled driver that was posted over here by another user named Trickstarter.

UPDATE (02/12/19) - A new VMware Native Driver for USB-based NICs has just been released for ESXi 6.5/6.7, please use this driver going forward. If you are still on ESXi 5.5/6.0, you can continue using the existing driver but please note there will be no additional development in the existing vmklinux-based driver.

UPDATE (03/29/16) - Please have a look at this updated article here which includes ESXi 5.5 and 6.0 drivers.

Disclaimer: In case it is not clear and apparent, you should never install any unknown 3rd party software on your ESXi host, as it can potentially lead to instability issues or, worse, open you up to a security hole. The following solution is not officially supported by VMware; please use at your own risk.

I decided to bite the bullet and give this solution a try, purchasing the USB Ethernet adapter from Amazon here.

[Image: usb-ethernet-adapter]
There are two modules that need to be downloaded, extracted and loaded onto ESXi. I have included the links below for your convenience:

  • ax88179vz026.gz
  • usbnetvz026.gz

As the VMTN thread mentioned, you can load the modules using either vmkload_mod or ESXCLI. Here are the two commands that I used, in the following order:

vmkload_mod /vmfs/volumes/mini-local-datastore-1/usbnetvz026
vmkload_mod /vmfs/volumes/mini-local-datastore-1/ax88179vz026

When I initially tried to load either of the modules, I would always get the following error:

vmkwarning.log:2016-02-28T21:54:54.531Z cpu6:374787)WARNING: Elf: 2041: Load of <usbnetvz026> failed : missing required namespace <com.vmware.usb#9.2.1.0>

As you can imagine, I was pretty bummed to see this, since I was afraid something like this would happen. I was not sure if the device I had purchased no longer worked or if it was the drivers. I saw that these modules were initially compiled for ESXi 5.1 (the latest version at the time), and the only difference was that I was running a much newer version of ESXi, specifically 6.0 Update 1. I decided to install the latest version of ESXi 5.1 Update 3 and tried the process again, and to my surprise, the modules loaded without errors. I suspect there is a hard dependency on the namespace version, which was 9.2.1.0 when the modules were built, whereas the latest version is now 9.2.3.0.
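To double-check that both modules actually loaded, you can list the currently loaded modules and filter for them. This is just a quick sanity check; the grep pattern simply matches the two module names above:

vmkload_mod -l | grep -E 'usbnet|ax88179'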

[Image: usb-network-adapter-esxi-1]
After successfully loading the two modules, I ran the following command:

esxcfg-nics -l

to verify that ESXi did in fact claim the USB Ethernet device, and as you can see from the screenshot below, it did indeed!
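Alternatively, the same NIC listing is available through ESXCLI (assuming the driver registered the vusb device like a normal NIC, it should appear here as well):

esxcli network nic list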

[Image: usb-network-adapter-esxi-2]
Next up, I needed to verify basic connectivity, so I added the new uplink to my existing vSwitch. You must use the following ESXCLI command (the esxcfg-vswitch command apparently does not work for non-vmnicX devices):

esxcli network vswitch standard uplink add -u vusb0 -v vSwitch0
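To confirm the uplink was added, you can list the vSwitch configuration and check the Uplinks field (a quick sanity check, assuming the vSwitch0 name used above):

esxcli network vswitch standard list -v vSwitch0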

Once added, I hopped over to the vSphere C# Client to see if the device was now showing up under the Network Adapters tab, which it was.

[Image: usb-network-adapter-esxi-4]
Finally, the last test was to make the vusb0 device (this is how ESXi names the device) the active connection while moving my existing vmnic0 to standby. Network connectivity continued to function, and I was even able to transfer an ISO image over the USB Ethernet adapter without any issues.
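If you prefer to make this change from the ESXi Shell instead of the vSphere Client, the failover order can also be set with ESXCLI. Here is a sketch assuming the vSwitch and uplink names used above:

esxcli network vswitch standard policy failover set -v vSwitch0 -a vusb0 -s vmnic0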

[Image: usb-network-adapter-esxi-5]
So it looks like it is possible to get a USB-based Ethernet adapter to function with ESXi, at least with the specific model listed above (USB ID 0b95:1790). The challenge now is to see if there is a way to build an updated version of the drivers targeted at the latest ESXi 6.0 release. From what I have been able to follow on the forum here, it looks like some non-trivial code changes were also required to get the driver to function. If true, without those changes it can be difficult to re-compile the driver. I have reached out to the original author to see if he might be able to share the changes he made to the driver code. In the meantime, if folks are interested in giving the build process a try, Trickstarter did a great two-part write-up on how to set up your build environment and compile an example driver.

  • ESXI 5.x Drivers Part 1: Making a Build Environment
  • ESXI 5.x Drivers Part 2: Preparing to compile

Although the write-up is targeted at ESXi 5.x, you can download the equivalent packages for ESXi 6.0, which include the ESXi Open Source Disclosure Package as well as the VMware Toolchain, both of which are required to compile the source code. I have provided the direct download links below.

  • VMware-ESXI-600B-ODP-21Sept2015.iso
  • VMware-TOOLCHAIN-ODP-17July2015.iso

You can also find the latest version of the ax88179 ASIX driver for the USB Ethernet adapter here. I have also attempted to compile just the driver but have already run into some issues. I have not had time to dig further, so I am not sure how far I will be able to get. If you have any tips or tricks for compiling against ESXi 6.0, feel free to share them and I will give them a shot when I get some time!

Categories // ESXi, Home Lab, Not Supported Tags // ESXi, ESXi 5.1, homelab, usb, usb network adapter, vSphere 5.1

Migrating ESXi to a Distributed Virtual Switch with a single NIC running vCenter Server

11.18.2015 by William Lam // 29 Comments

Earlier this week I needed to test something which required a VMware Distributed Virtual Switch (VDS), and it had to be a physical setup, so Nested ESXi was out of the question. I could have used my remote lab, but given that what I was testing was a bit "experimental", I preferred using my home lab in the event I needed direct console access. At home, I run ESXi on a single Apple Mac Mini, and one of the challenges with this and other similar platforms (e.g. Intel NUC) is that they only have a single network interface. As you might have guessed, this is a problem when looking to migrate from a Virtual Standard Switch (VSS) to a VDS, as the migration requires at least two NICs.

Unfortunately, I had no other choice and needed to find a solution. After a couple of minutes of searching around the web, I stumbled across this ServerFault thread here, which provided a partial solution to my problem. In vSphere 5.1, we introduced a feature that automatically rolls back a network configuration change if it negatively impacts network connectivity to your vCenter Server. This feature can be disabled temporarily by editing the vCenter Server Advanced Setting config.vpxd.network.rollback, which allows us to bypass the single-NIC restriction; however, this does not solve the problem entirely. What ends up happening is that the single pNIC is now associated with the VDS, but the VM portgroups are not migrated. This is problematic because the vCenter Server is running on the very ESXi host it is managing and has now lost network connectivity 🙂

I lost access to my vCenter Server, and even though I could connect directly to the ESXi host, I was not able to change the VM Network to the Distributed Virtual Portgroup (DVPG). This is actually expected behavior, and there is an easy workaround; let me explain. When you create a DVPG, there are three different bindings that can be configured: Static, Dynamic, and Ephemeral. By default, Static binding is used. Both Static and Dynamic DVPGs can only be managed through vCenter Server, so you cannot change the VM network to a non-Ephemeral DVPG; in fact, such a DVPG is not even listed when connecting with the vSphere C# Client. The simple workaround is to create a DVPG using the Ephemeral binding, which will then allow you to change the VM network of your vCenter Server, and that is the last piece to solving this puzzle.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Here are the exact steps to take if you wish to migrate an ESXi host with a single NIC from a VSS to a VDS while it is running your vCenter Server:

Step 1 - Change the following vCenter Server Advanced Setting config.vpxd.network.rollback to false:

[Image: migrating-from-vss-to-vds-with-single-nic-1]
Note: Remember to re-enable this feature once you have completed the migration.

Step 2 - Create a new VDS and the associated Portgroups for both your VMkernel interfaces and VM Networks. For the DVPG which will be used for the vCenter Server's VM network, be sure to change the binding to Ephemeral before proceeding with the VDS migration.

[Image: migrating-from-vss-to-vds-with-single-nic-0]
Step 3 - Proceed with the normal VDS Migration wizard using the vSphere Web/C# Client and ensure that you perform the correct mappings. Once completed, you should now be able to connect directly to the ESXi host using either the vSphere C# Client or the ESXi Embedded Host Client to confirm that the VDS migration was successful, as seen in the screenshot below.

[Image: migrating-from-vss-to-vds-with-single-nic-2]
Note: If you forgot to perform Step 2 (which I initially did), you will need to login to the DCUI of your ESXi host and restore the networking configurations.

Step 4 - The last and final step is to change the VM network for your vCenter Server. In my case, I am using the VCSA and due to a bug I found in the Embedded Host Client, you will need to use the vSphere C# Client to perform this change if you are running VCSA 6.x. If you are running Windows VC or VCSA 5.x, then you can use the Embedded Host Client to modify the VM network to use the new DVPG.

[Image: migrating-from-vss-to-vds-with-single-nic-3]
Once you have completed the VM reconfiguration, you should be able to login to your vCenter Server, which is now connected to a DVPG on a VDS backed by a single NIC on your ESXi host 😀

There is probably no good use case for this outside of home labs, but I was happy to find a solution, and hopefully it comes in handy for others in a similar situation who would like to use and learn more about VMware VDS.

Categories // ESXi, Not Supported, vSphere Tags // distributed portgroup, distributed virtual switch, dvs, ESXi, notsupported, vds

Running Nested ESXi / VSAN Home Lab on Ravello

04.14.2015 by William Lam // 3 Comments

[Image: nested_esxi_on_ravello]
There are many options when it comes to building and running your own vSphere home lab. Each of these solutions has different pros and cons, and you will need to weigh factors such as cost, performance, maintenance, ease of use, and complexity, to name a few. Below is a list of the options currently available to you today.

Home Lab Options:


On-Premises

  • Using hardware on the VMware HCL
  • Using Apple Mac Mini, Intel NUC, etc.
  • Using whitebox or off the shelf hardware

Off-Premises (hosted)

  • VMware HOL
  • VMware vCloud Air or other vCloud Air Service Providers
  • Colo-located labs

For example, you could purchase a couple of Apple Mac Minis and build out a decent-sized vSphere environment, but it could potentially be costly, not to mention a bit limited on memory options. Compared to other platforms, though, it is pretty energy efficient and easy to use and maintain. If you did not want to manage any hardware at all, you could look at a hosted or on-demand lab such as vCloud Air, which can run Nested ESXi unofficially, or any one of the many vCloud Air Service Providers. Heck, you could even use VMware Hands On Lab, though access will be limited, as you will be constrained to the pre-built labs and would not be able to directly upload or download files to the lab. However, this could be a quick way to get access to an environment for testing, and best of all, it is 100% free. As you can see, there are many options for a home lab, and it really just depends on your goals and what you are trying to accomplish.

Ravello says hello to Nested ESXi


Today, we have a new player entering the off-premises (hosted) options for running vSphere-based home labs. I am pleased to announce that Ravello, a startup that uses nested virtualization to target dev/test workloads, has just introduced beta support for running Nested ESXi on their platform. I have written about Ravello in the past and you can find more details here. Ravello uses their own home-grown KVM-based nested hypervisor called HVX, which runs on top of a VM provisioned from either Amazon EC2 or Google Compute Engine. As you can imagine, this was not a trivial feature to add support for, especially since Intel VT/AMD-V, which is required to run ESXi, is not directly exposed to the virtual machines in EC2 or GCE. The folks over at Ravello have solved this in a very interesting way by "emulating" the capabilities of Intel VT/AMD-V using binary translation with direct execution.

Over the last month, I have had the privilege of getting early access to the Ravello platform with the Nested ESXi capability and have been providing early feedback to their R&D team to ensure the best possible user experience for customers looking to run Nested ESXi on their platform. I have also spent quite a bit of time working out the proper workflow for getting Nested ESXi running and being able to quickly scale up the number of nodes, which is especially useful when testing new features like VSAN 6.0. I have also been working with their team to develop a script that will allow users to quickly spin up as many Nested ESXi VMs as needed after a one-time initial preparation. This will greatly simplify deployments of more than a couple of Nested ESXi VMs. Hopefully I will be able to share more details about the script in the very near future.

Before jumping into the instructions for getting Nested ESXi running on the Ravello platform, I wanted to quickly highlight what is currently supported from a vSphere perspective, as well as some of the current limitations and caveats regarding Nested ESXi that you should be aware of. Lastly, I have also provided some details around pricing so the proper expectations are set if you are considering a vSphere home lab on Ravello. You can find more information in the next few sections, or you can go straight to the setup instructions.

Supports:


  • vCenter Server 5.x (Windows) & VCSA 5.x
  • vCenter Server 6.0 (Windows)
  • ESXi 5.x
  • ESXi 6.0

Caveats:


Coming from a pure vSphere background, I have enjoyed many of the simplicities that VMware has built into their core platform, such as support for OVF capabilities like Dynamic Disks and Deployment Options. While using the Ravello platform, I came across several limitations with respect to Nested ESXi and the VCSA. Below is a quick list of the caveats I found while testing the platform; I have been told that many of these are being looked at and hopefully will be resolved in the future. Nonetheless, I still wanted to make sure these were called out so that you go in with the right expectations.

  • There is currently no support for virtuallyGhetto's Nested ESXi/VSAN VM OVF Templates (though you can import the OVFs, most of the configurations are lost)
  • There is currently no support for VM Advanced Settings such as marking a VMDK as an SSD or enabling UUID for disks for example (configurations are not preserved through import)
  • There is currently no support for VCSA 6.0 OVA due to disk controller limitation + no OVF property support, you will need to use Windows based vCenter Server for now (VCSA 5.5 is supported)
  • There is currently no OVF property support
  • There is currently no support for VMXNET3 for Nested ESXi VM, e1000 must be used due to a known network bug
  • Running Nested SMP-FT is not supported as 10Gbit vNICs are required and VMXNET3 is not currently supported

Pricing:


When publishing your Ravello Application, you have the option of selecting between two different deployment optimizations. The first is optimized for cost: if TCO is what you care most about, the platform will automatically select the cloud provider (EC2 or GCE) that is cheapest while satisfying the requirements. The second option is optimized for performance: if selected, you can choose to place your application on either EC2 or GCE. In both cases, you will be provided with an estimated cost, broken down into compute, storage, and networking, as well as a final cost per hour. Once you agree to the terms, you can click on the "publish" button, which will deploy your workload onto the selected cloud provider.

Here is a screenshot summary view of a Ravello Application I built consisting of 65 VMs (1 Windows VM for vCenter Server and 64 Nested ESXi VMs), which I chose to optimize based on cost. The total price would be $17.894/hr.

[Image: ravello-vghetto-nested-esxi-vsan-6.0-64-Node-cost-optmized]
Note: Prices as of 04/05/2015

I also went through the exercise of pricing several more configurations to give you an idea of what the cost could be for varying sized environments. Below is a table for a 3-node, 32-node, and 64-node VSAN setup (each includes one additional VM for the vCenter Server).

# of VMs  Optimization  Hosting Platform  Compute Cost  Storage Cost  Network Cost  Public IP Cost  Total Price
4         Cost          N/A               $1.09/hr      $0.0292/hr    $0.15/GB      $0.01/hr        $1.1292/hr
4         Performance   Amazon            $1.62/hr      $0.0292/hr    $0.15/GB      $0.01/hr        $1.6592/hr
4         Performance   Google            $1.38/hr      $0.0292/hr    $0.15/GB      $0.01/hr        $1.4192/hr
33        Cost          N/A               $8.92/hr      $0.1693/hr    $0.15/GB      $0.01/hr        $9.0993/hr
33        Performance   Amazon            $13.22/hr     $0.1693/hr    $0.15/GB      $0.01/hr        $13.3993/hr
33        Performance   Google            $11.24/hr     $0.1693/hr    $0.15/GB      $0.01/hr        $11.4193/hr
65        Cost          N/A               $17.56/hr     $0.324/hr     $0.15/GB      $0.01/hr        $17.894/hr
65        Performance   Amazon            $26.02/hr     $0.324/hr     $0.15/GB      $0.01/hr        $26.354/hr
65        Performance   Google            $22.12/hr     $0.324/hr     $0.15/GB      $0.01/hr        $22.454/hr
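Note that the hourly total is simply the sum of the per-hour line items, with network transfer billed separately per GB. For example, the 65-VM cost-optimized row works out to $17.56 + $0.324 + $0.01 = $17.894/hr, matching the screenshot above.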

How to Setup:


Here is the process for setting up Nested ESXi on the Ravello platform. The process consists of installing a single Nested ESXi VM and "preparing" it so that it can then be used later to deploy additional unique Nested ESXi instances from the Ravello Library.

Step 1 - Upload either an ESXi 5.x or 6.0 ISO to the Library using the Ravello VM Uploader tool, which you will be prompted to install.

[Image: Screen Shot 2015-04-08 at 8.43.14 PM]
Step 2 - Deploy the empty Ravello ESXi VM Template from the Library, which has already been prepared with the required CPU ID:

<ns1:cpuIds value="0000000768747541444d416369746e65" index="f00d"/>

Adding the above CPU ID enables the emulation of Intel VT-x/AMD-V. If you decide to create your own Ravello VM Template, you will need to perform this operation yourself, which is currently only possible via their REST API; you can find more details here.

Step 3 - Add a CD-ROM device to the Nested ESXi VM by highlighting the ESXi VM and looking under "Disks" (yes, this was not intuitive for me either).

[Image: Screen Shot 2015-04-08 at 8.48.40 PM]
Once you have added the CD-ROM, you will want to mount the ESXi ISO.

Step 4 - Power on the Nested ESXi VM and perform a regular installation of ESXi as you normally would.

At this point, you have now successfully installed Nested ESXi on Ravello! The next series of steps is to "prepare" this ESXi image so that it can be duplicated (cloned) to deploy additional instances without causing conflicts; otherwise, you would have to perform the installation N times for additional nodes, which I am sure many of you would not want to do. The steps outlined here follow the process I documented in my How to properly clone a Nested ESXi VM? article.

Step 5 - Log in to the console of the ESXi VM and run the following ESXCLI command:

esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
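You can verify that the setting took effect by listing it (purely a sanity check):

esxcli system settings advanced list -o /Net/FollowHardwareMac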

Note: If you wish to connect to the ESXi VM directly for ease of use, rather than going through the remote console, you can go to the "Services" tab for the VM and enable external access, as seen in the screenshot below.

[Image: ravello-networking]
Step 6 - Edit /etc/vmware/esx.conf, remove the uuid entry, and then run /sbin/auto-backup.sh to ensure the changes are saved.
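If you prefer a one-liner over hand-editing the file, something like the following should also work in the ESXi Shell (a sketch; it assumes the uuid line is the only entry in esx.conf matching "uuid"):

sed -i '/uuid/d' /etc/vmware/esx.conf
/sbin/auto-backup.sh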

At this point, you have prepared a vanilla Nested ESXi VM. You can save this image into the Ravello Library and deploy additional instances from it. By default, the Ravello platform is set up for DHCP; you can of course use DHCP reservations so you get a particular IP Address, or specify a static IP Address assignment, as shown below.
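For example, to assign a static IP Address to the management VMkernel interface from within the Nested ESXi VM, something along these lines should work (the address, netmask, and gateway below are placeholders):

esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.0.0.50 -N 255.255.255.0
esxcli network ip route ipv4 add -n default -g 10.0.0.1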

If you wish to prepare the Nested ESXi VM for use with VSAN, then you will need to run through these additional steps:

  • Create a claim rule to mark the 4GB VMDK as SSD
  • Enable VSAN traffic type on vmk0

Step 7 - I have also enabled remote logging as well as suppressed any shell warnings; you just need to run the snippet below within the ESXi Shell (it also takes care of the two VSAN preparation items above):

DEVICE=$(esxcli storage core device list | grep -iE '(   Display Name: |   Size: )' | grep -B1 4096 | grep mpx | awk -F '(' '{print $2}' | sed 's/)//g')
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d $DEVICE -o enable_ssd
esxcli storage core claiming reclaim -d $DEVICE
esxcli vsan network ipv4 add -i vmk0
esxcli system syslog config set --loghost=10.0.0.100
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1

Step 8 - If you wish to set up 32 nodes with VSAN 1.0, then you will need to run this additional command:

esxcli system settings advanced set -o /CMMDS/goto11 -i 1

If you instead wish to set up 64 nodes with VSAN 6.0, then you will need to run this command:

esxcli system settings advanced set -o /VSAN/goto11 -i 1

At this point, you have completed preparing your Nested ESXi VM. You can now save your image to the Ravello Library, and once that has been done, you can easily clone additional Nested ESXi instances by simply dragging and dropping them from the Ravello Library onto your canvas. For vCenter Server, if you are setting up a vSphere 5.x environment, you will need to upload the VCSA and go through the normal configuration using the VAMI UI. For vCenter Server 6.0, you will not be able to use the VCSA 6.0 because of the platform limitation mentioned earlier; for now, you will need to deploy a Windows VM and then run the vCenter Server 6.0 installer.

I of course had some fun with the Ravello platform, and below are some screenshots of running both a 32-node VSAN Cluster (vSphere 5.5) and a 64-node VSAN Cluster (vSphere 6.0). Overall, I thought it was a pretty good experience. There was definitely some sluggishness while installing the vCenter Server bits and navigating through the vSphere Web Client; it took a little over 40 minutes, which was almost double the amount of time I have seen in my home lab. I was told that VNC might perform better than RDP, though RDP is what the Ravello folks recommend for connecting to a Windows-based desktop. It is great to see another option for running vSphere home labs; I think the performance is probably acceptable for most people, and hopefully it will continue to improve in the future. I definitely recommend giving Ravello a try and who knows, it might be the platform of choice for your vSphere home lab.

Nested ESXi 5.5 running 32 Node VSAN Cluster:

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-5.5-32-node-cluster-ravello-0]

Nested ESXi 6.0 running 64 Node VSAN Cluster:

[Image: vghetto-nested-esxi-64-node-cluster-ravello-1]

[Image: vghetto-nested-esxi-64-node-cluster-ravello-0]

Categories // ESXi, Home Lab, Nested Virtualization, vSphere Tags // homelab, intel vt, nested, nested virtualization, ravello
