The other day I was looking to get a baseline of the built-in Ethernet adapter on my recently upgraded vSphere home lab running on an Intel NUC. I decided to use iPerf for my testing, a commonly used command-line tool for measuring network performance. I also found a couple of helpful articles on this topic from well-known VMware community members Erik Bussink and Raphael Schitz. Erik's article here outlines how to run the iPerf client/server using a pair of Virtual Machines running on top of two ESXi hosts. Although the overhead of the VMs should be negligible, I was looking for a way to benchmark the ESXi hosts directly. Raphael's article here looked promising, as he found a way to create a custom iPerf VIB that can run directly on ESXi.
I was about to download the custom VIB when I remembered that the vSAN Health Check plugin in the vSphere Web Client also provides some proactive network performance tests that can be run in your environment. I was curious about which tool was being leveraged for this capability, and after a quick search on the ESXi filesystem, I found that it was actually iPerf. The iPerf binary is located in /usr/lib/vmware/vsan/bin/iperf and, from what I can tell, has been bundled as part of ESXi starting with the vSphere 6.0 release.
UPDATE (09/20/22) - As of ESXi 7.0 Update 3 build 20036589 (possibly earlier), you no longer need to make a copy of the iperf3 utility. You can simply run it from /usr/lib/vmware/vsan/bin/iperf3, and you also do NOT have to lower security by changing the ESXi Advanced Setting execInstalledOnly to FALSE.
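For reference, here is a rough sketch of what this looks like on the ESXi Shell. On recent builds you can just run the binary; on older builds that enforce execInstalledOnly, the advanced setting had to be relaxed first (the /User/execInstalledOnly path is from my own notes and may differ by release, so treat it as an assumption):

```shell
# On ESXi 7.0 Update 3 build 20036589 and later, iperf3 runs directly
# from its bundled location -- no copy, no security change needed:
/usr/lib/vmware/vsan/bin/iperf3 -v

# On older builds, the execInstalledOnly setting could be temporarily
# relaxed (and restored afterwards) roughly like this:
esxcli system settings advanced set -o /User/execInstalledOnly -i 0   # allow
esxcli system settings advanced set -o /User/execInstalledOnly -i 1   # restore
```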
UPDATE (10/02/18) - It looks like iPerf3 is now back in both ESXi 6.5 Update 2 and the upcoming ESXi 6.7 Update 1 release. You can find the iPerf binary under /usr/lib/vmware/vsan/bin/iperf3.
One interesting thing I found when trying to run iPerf in "server" mode is that it would always fail with the following error:
bind failed: Operation not permitted
The only way I found to fix this issue was to copy the iPerf binary to another file, such as iperf3.copy, which then allowed me to start iPerf in "server" mode. You can do so by running the following command in the ESXi Shell:
cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy
Running iPerf in "client" mode works as expected; the copy is only needed when running in "server" mode. To perform the test, I used my Apple Mac Mini and the Intel NUC, which was running ESXi with no VMs.
I ran the iPerf "Server" on the Intel NUC by running the following command:
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B [IPERF-SERVER-IP]
Note: If you have multiple network interfaces, you can specify which interface to use with the -B option and passing the IP Address of that interface.
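For example, with a hypothetical VMkernel interface at 192.168.1.100, the server side might look like this:

```shell
# Start iperf3 in server mode, bound to a specific VMkernel interface
# (192.168.1.100 is a placeholder -- substitute your own vmk IP address)
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B 192.168.1.100
```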
I ran the iPerf "Client" on the Mac Mini by running the following command and specifying the address of the iPerf "Server":
/usr/lib/vmware/vsan/bin/iperf3 -n 800M -c [IPERF-SERVER]
I also disabled the ESXi firewall before running the test, which you can do by running the following command:
esxcli network firewall set --enabled false
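Putting the pieces together, a full test run (with placeholder addresses) might look like the sketch below. Remember to re-enable the firewall once you are done testing:

```shell
# --- On the ESXi host (iperf3 server) ---
esxcli network firewall set --enabled false        # temporarily disable firewall
cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy
/usr/lib/vmware/vsan/bin/iperf3.copy -s -B 192.168.1.100   # placeholder vmk IP

# --- On the client machine ---
iperf3 -n 800M -c 192.168.1.100

# --- Back on the ESXi host, once testing is done ---
esxcli network firewall set --enabled true         # re-enable firewall
```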
Here is a screenshot of my iPerf test running between my Mac Mini and Intel NUC. Hopefully this will come in handy for anyone needing to run some basic network performance tests between two ESXi hosts without having to set up additional VMs.
Nice, that was useful. It helped me notice I had missed an MTU setting between my FreeNAS iSCSI and my ESXi host.
Dan (@Casper042) says
Wondering if you can SSH in twice and then run both the server and the client, but bind them each to a different VMkernel NIC, to do a simple burn-in test and verify your NICs are good (and potentially the switch ports too)?
Hi - I'm seeing that iperf runs with a TCP listen on 5001, but the firewall rule that I can enable for vSAN health opens 5001 UDP, so I get a timeout from a client to the server. Is there any way to enable a new firewall rule allowing TCP 5001 via the vSphere UI, or is the only way to set that by editing the service.xml file on the host?
William Lam says
You would need to edit the service.xml to also enable TCP. You can either manually tweak it via an init script to ensure it persists, or create a custom VIB which adds in the new firewall rule.
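As a sketch, a custom rule in the ESXi firewall configuration (/etc/vmware/firewall/service.xml, or a separate .xml file dropped into /etc/vmware/firewall/) might look like the following. The service id and name here are placeholders, and note that edits to these files do not survive a reboot unless made persistent:

```xml
<!-- Hypothetical rule: allow inbound TCP 5001 for iperf -->
<service id="0100">
  <id>iperfTCP</id>
  <rule id="0000">
    <direction>inbound</direction>
    <protocol>tcp</protocol>
    <porttype>dst</porttype>
    <port>5001</port>
  </rule>
  <enabled>true</enabled>
  <required>false</required>
</service>
```

After saving, running `esxcli network firewall refresh` should reload the ruleset so the new rule takes effect.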
Hmm ... so I went through that, rule shows up as SRC and DST for port 5001 on TCP, server runs (shows listening on 5001), client that connects to other iperf instances (Windows 10, tested against Avamar / Data Domain works fine) fails to connect. I can run a server on the Windows 10 system and connect from the ESX host, but going the other direction (or from another iperf equipped VM on the flat network) fails. Firewall rules look like:
Firewall refreshed, and when using the esxcli network ip connection list | grep -i listen command the server shows listening on 5001 (TCP):
[[email protected]:/usr/lib/vmware/vsan/bin] esxcli network ip connection list | grep -i listen
tcp 0 0 0.0.0.0:5001 0.0.0.0:0 LISTEN 954798 newreno iperf.usable
Nice post. I was checking for the iperf command in 6.0 GA and I'm not able to find it, but I see iperf is in 6.5. Do I need to install any additional VIB for it?
Unfortunately, iperf is only available with ESXi 6.0 and is not available anymore with ESXi 6.5.
Edward Burlakov says
Thank you for the smart and clear description of the installation process. Very helpful!
Pusheng Woo says
Hi, excuse me ~ how can I change the color of the input command line to yellow in the terminal window like yours? Thanks!
Gaganpreet Singh says
Can we use iperf to measure network performance and the impact after disabling the RSS LB feature?
Ulaganathan Mahadevan says
Thanks for the simple and clear QuickTip.
I'm troubleshooting the network speed/bandwidth in a VMware infrastructure.
The ESXi hosts have 10G connections; however, the iperf results show 7 Gbps between any two hosts.
Also, Windows 2012 VMs on the same ESXi host show 4 Gbps, and VMs on different ESXi hosts show 2 to 3 Gbps. What can we check to troubleshoot further?
Is this iperf3 copy capable of accurately testing/benchmarking 100G Mellanox cards? I've never seen over 25-31 Gbit testing through this channel, which presumably goes out the management vmnic? I'd be curious to know what other people with 100G are getting from their VMware clusters.
Frank Denneman says
Looking at your numbers, I expect iperf3 is a single-threaded process; as a result, it is utilizing one core to push packets. If you want to flood a 100GbE pipe, you need more CPU power than a single core. Maybe iperf can spawn more threads. This should also be the case at the RX side.
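One thing worth trying: iperf3 can at least multiplex several TCP streams with the -P flag (though, as I understand it, classic iperf3 still drives all streams from a single thread, so even this may not saturate a 100GbE link):

```shell
# Run 4 parallel streams for 30 seconds against the server
# (192.168.1.100 is a placeholder address)
iperf3 -c 192.168.1.100 -P 4 -t 30
```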
Totie Bash says
I don't see the -m option on 6.7 Update 3, so I can't check the MTU. Any idea why?
James Kilby says
Hi - I'm unable to get this to work on a VMkernel interface that's part of the vMotion stack. Is this a known issue, and is there any workaround? The error is "error - unable to start listener for connections: Cannot assign requested address
iperf3: exiting". This is on 7.0.3 build 20036589. It works fine on my management/storage interfaces.
I'm facing the same issue 🙁