Search Results for: NUC

Quick Tip - iPerf now available on ESXi

03.15.2016 by William Lam // 25 Comments

The other day I was looking to get a baseline of the built-in Ethernet adapter of my recently upgraded vSphere home lab running on the Intel NUC. I decided to use iPerf for my testing, which is a commonly used command-line tool for measuring network performance. I also found a couple of helpful articles on this topic from well-known VMware community members Erik Bussink and Raphael Schitz. Erik's article here outlines how to run the iPerf client/server using a pair of Virtual Machines running on top of two ESXi hosts. Although the overhead of the VMs should be negligible, I was looking for a way to benchmark the ESXi hosts directly. Raphael's article here looked promising, as he found a way to create a custom iPerf VIB which can run directly on ESXi.

I was about to download the custom VIB when I remembered that the VSAN Health Check plugin in the vSphere Web Client also provides some proactive network performance tests that can be run in your environment. I was curious about which tool was being leveraged for this capability, and after a quick search on the ESXi filesystem, I found that it was actually iPerf. The iPerf binary is located in /usr/lib/vmware/vsan/bin/iperf and, from what I can tell, has been bundled as part of ESXi starting with the vSphere 6.0 release.
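
If you want to confirm this on your own host, a quick search from the ESXi Shell will turn it up (a minimal example; the exact paths in the output may differ across releases):

find / -name "iperf*" 2>/dev/null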

UPDATE (10/19/23) - As of ESXi 8.x, you may run into the following error when attempting to run iperf:

iperf3: running in appDom(30): ipAddr = ::, port = 5201: Access denied by vmkernel access control policy

To work around this, you can run the following command:

esxcli system secpolicy domain set -n appDom -l disabled

and re-enable it once you have completed your iperf test on both the server and client ESXi hosts.
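
Re-enabling is the same command with "enforcing" as the value:

esxcli system secpolicy domain set -n appDom -l enforcing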

UPDATE (09/20/22) - As of ESXi 7.0 Update 3 build 20036589 (possibly earlier) and later, you no longer need to make a copy of the iperf3 utility. You can simply run it from /usr/lib/vmware/vsan/bin/iperf3, and you also do NOT have to lower security by changing the ESXi Advanced Setting execInstalledOnly to FALSE.
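
For example, on these newer builds the server can be started directly from the stock path (using the same placeholder syntax as the examples later in this post):

/usr/lib/vmware/vsan/bin/iperf3 -s -B [IPERF-SERVER-IP]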

UPDATE (10/02/18) - It looks like iPerf3 is now back in both ESXi 6.5 Update 2 and the upcoming ESXi 6.7 Update 1 release. You can find the iPerf binary under /usr/lib/vmware/vsan/bin/iperf3.

One interesting thing I found when trying to run iPerf in "server" mode is that it would always fail with the following error:

bind failed: Operation not permitted

The only way I found to fix this issue was to copy the iPerf binary to another file, such as iperf3.copy, which then allowed me to start iPerf in "server" mode. You can do so by running the following command in the ESXi Shell:

cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy

Running iPerf in "client" mode works as expected; the copy is only needed when running in "server" mode. To perform the test, I used both my Apple Mac Mini and the Intel NUC, each running ESXi with no VMs.

I started the iPerf "server" on the Intel NUC with the following command:

/usr/lib/vmware/vsan/bin/iperf3.copy -s -B [IPERF-SERVER-IP]

Note: If you have multiple network interfaces, you can specify which interface to use with the -B option by passing the IP Address of that interface.
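
For example, to bind the server to a VMkernel interface with the hypothetical address 192.168.1.50:

/usr/lib/vmware/vsan/bin/iperf3.copy -s -B 192.168.1.50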

I ran the iPerf "client" on the Mac Mini with the following command, specifying the address of the iPerf "server":

/usr/lib/vmware/vsan/bin/iperf3 -n 800M -c [IPERF-SERVER]

I also disabled the ESXi firewall before running the test, which you can do by running the following command:

esxcli network firewall set --enabled false
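
Once you are done testing, you can turn the firewall back on with the same command:

esxcli network firewall set --enabled true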

Here is a screenshot of my iPerf test running between my Mac Mini and Intel NUC. Hopefully this will come in handy for anyone needing to run some basic network performance tests between two ESXi hosts without having to set up additional VMs.

[Screenshot: iPerf test running between the Mac Mini and Intel NUC]

Categories // ESXi, vSphere 6.0 Tags // ESXi, iperf, network, performance, vSphere 6.0 Update 1, vSphere 6.0 Update 2

vSphere 6.0 Update 2 hints at Nested ESXi support for Paravirtual SCSI (PVSCSI) in the future

03.14.2016 by William Lam // 6 Comments

Although Nested ESXi (running ESXi in a Virtual Machine) is not officially supported today, VMware Engineering continues to enhance this widely used feature by making it faster, more reliable and easier to consume for our customers. I still remember that it was not too long ago that if you wanted to run Nested ESXi, several non-trivial and manual tweaks to the VM's VMX file were required. This made the process of consuming Nested ESXi potentially very error prone and provided a less than ideal user experience.

Things have definitely improved since the early days, and here are just some of the visible improvements over the last few years:

  • Prior to vSphere 5.1, enabling Virtual Hardware Assisted Virtualization (VHV) required manual edits to the VMX file, and even earlier versions required several VMX entries. VHV can now be easily enabled using either the vSphere Web Client or the vSphere API (see the example after this list).
  • Prior to vSphere 5.1, only the e1000{e} networking driver was supported with Nested ESXi VMs, and although it was functional, it also limited the types of use cases you might have for Nested ESXi. A Native Driver for VMXNET3 was added in vSphere 5.1, which not only increased performance with the optimized VMXNET3 driver but also enabled new use cases such as testing SMP-FT, as it was now possible to present a 10GbE interface to a Nested ESXi VM versus the traditional 1GbE with the e1000{e} driver.
  • Prior to vSphere 6.0, selection of the ESXi GuestOS type was not available in the "Create VM" wizard, which meant you had to resort to re-editing the VM after initial creation or using the vSphere API. You can now select the specific ESXi GuestOS type directly in the vSphere Web/C# Client.
  • Prior to vSphere 6.0, the only way to cleanly shutdown or power cycle a Nested ESXi VM was to perform the operation from within the system, as there was no VMware Tools support. This changed with the development of a VMware Tools daemon specifically for Nested ESXi, which started out as a VMware Fling. With vSphere 6.0, the VMware Tools for Nested ESXi is pre-installed by default and automatically starts up when it detects that it is running as a VM. In addition to the power operations provided by VMware Tools, it also enables the use of the Guest Operations API, which is quite popular from an Automation standpoint.
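
To give a sense of what those manual tweaks looked like, this is the kind of entry that had to be added to a VM's VMX file by hand to enable VHV (shown for illustration; the exact entries varied between early releases):

vhv.enable = "TRUE"

Today, the same setting is just a checkbox in the vSphere Web Client, or the nestedHVEnabled property when configuring the VM through the vSphere API.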

Yesterday while working in my new vSphere 6.0 Update 2 home lab, I needed to create a new Nested ESXi VM and noticed something really interesting. I used the vSphere Web Client like I normally would and when I went to select the GuestOS type, I discovered an interesting new option which you can see from the screenshot below.

[Screenshot: new ESXi GuestOS option in the Create VM wizard]
It is not uncommon for VMware to add experimental support for potentially new Guest Operating Systems in vSphere. Of course, there are no guarantees that these OSes will ever be supported, or even released for that matter.

What I found even more interesting was the default virtual hardware configuration recommended when selecting this new ESXi GuestOS type (vmkernel65). For the network adapter, the VMXNET3 driver is now recommended over the e1000e, and for the storage adapter, the VMware Paravirtual (PVSCSI) adapter is now recommended over the LSI Logic Parallel type. This is really interesting, as it is not possible today to get the optimized, low-overhead PVSCSI adapter working with Nested ESXi, and this seems to indicate that PVSCSI support might actually be possible in the future! 🙂

[Screenshot: recommended virtual hardware for the vmkernel65 GuestOS type]
Of course, I tried to install the latest ESXi 6.0 Update 2 (not yet GA'ed) using this new ESXi GuestOS type and, to no surprise, the ESXi installer was not able to detect any storage devices. I guess for now, we will just have to wait and see ...

Categories // ESXi, Nested Virtualization, Not Supported, vSphere 6.0 Tags // ESXi, nested, nested virtualization, pvscsi, vmxnet3, vSphere 6.0 Update 2

The future of the ESXi Embedded Host Client

03.04.2016 by William Lam // 14 Comments

As many of you know, the ESXi Embedded Host Client project is something that is very near and dear to my heart. I have always felt that we needed a simple web interface that customers can just point their web browser at right after a new ESXi installation and quickly get started. One of the biggest benefits, in addition to simplicity, is that it is also very intuitive from a user experience standpoint, which I believe is very important in a world where things can quickly get complex. In addition, it can also provide an interface for basic troubleshooting and support greenfield deployments where vCenter Server has not been deployed yet.

It has truly been amazing to follow the Embedded Host Client development from the initial idea, to the first prototype built by VMware Engineers Kevin Christopher and Jehad Affoneh, to its current implementation led by Etienne Le Sueur and the ESXi team. I have been really fortunate to have had the opportunity to be so involved in this project. It is hard to imagine that in just a little over 6 months, we have had 5 releases of the Embedded Host Client Fling, all of which were produced with high-quality development and rich feature sets.

You can click on the links below to get more details about each release.

[Image: Embedded Host Client release timeline]

  • 08/11/15 - EHC Fling v1 released 
  • 08/26/15 - EHC Fling v2 released
  • 10/23/15 - EHC Fling v3 released
  • 12/21/15 - EHC Fling v4 released
  • 02/07/16 - EHC Fling v5 released

I think it's an understatement to say that customers are genuinely excited about this project; just look at some of the comments left on the Flings page here. Interestingly, this excitement has also been felt internally at VMware, and I think this goes to show that the team has built something really special that affects anyone who works with VMware's ESXi Hypervisor.

So where do we go from here? Are we done? Far from it ...

Those of you who follow me on Twitter know that I recently refreshed my personal vSphere home lab from an Apple Mac Mini to the latest Intel NUC running the yet-to-be-released VSAN 6.2 (vSphere 6.0 Update 2). I was pleasantly surprised to see that the ESXi Embedded Host Client (EHC) is now included out of the box with ESXi! Although this has been said by a few folks, including myself, it is another thing to actually see it in person 🙂

[Screenshot: Embedded Host Client included out of the box with ESXi]
Although the VMware Flings program is a great way to share and engage with our customers to get early feedback, it may not always be a viable option. As some of you may know, Flings are not officially supported, and this sometimes prevents some of our customers from engaging with us and really putting the Flings through their paces. By making EHC available out of the box, not only are we officially supporting it, but it will also be easier for customers to try out this new interface.

UPDATE (03/04/16) - It looks like I made a mistake and that the ESXi Embedded Host Client will NOT be released as a "Tech Preview" as previously mentioned but rather it will be officially GA'ed with vSphere 6.0 Update 2. EHC is a fully supported feature of ESXi.

Although EHC is very close to parity with the vSphere C# Client, it is still not 100% there. We will continue to improve its capabilities, and if you have any feedback when trying out EHC, do not hesitate to leave feedback or file a Feature Request through GSS. For those looking to live on the "edge" a bit more, we will still continue to release updates to the EHC Fling, but if you want something stable, you can stick with the stock EHC included in ESXi 6.0 Update 2. We will still ship the legacy Windows vSphere C# Client, so you will not be forced to use this interface. However, it is no secret that VMware wants to get rid of the vSphere C# Client and that EHC is the future interface to standalone ESXi hosts.

One feature that I know many of you have been asking about is support for Free ESXi. Well, I am pleased to say that support for Free ESXi has been added in the latest version of EHC included with the upcoming ESXi 6.0 Update 2 release, and below is a screenshot demonstrating that it is fully functional.

[Screenshot: EHC support for Free ESXi]
Lastly, I just want to say that EHC has really morphed beyond just a "simple UI" for managing standalone ESXi hosts and has also enabled other teams at VMware to do some really amazing things and create new experiences with this interface. As I said earlier, this is just the beginning 😀 Happy Friday!

Here are some additional cool capabilities provided by EHC:

  • Neat way of installing or updating any VIB using just the ESXi Embedded Host Client
  • How to bootstrap the VCSA using the ESXi Embedded Host Client?

Categories // ESXi, vSphere 6.0 Tags // embedded host client, ESXi 6.0, vSphere 6.0 Update 2
