When you buy an Internet link, you want to see if you get what you’re paying for. If you want to do that at home, you can use many speed test services. Popular examples are the Speedtest by Ookla, by Telstra, or by Verizon. However, those methods are not reliable enough for an enterprise. Furthermore, you can only test Internet speed, and not speed on internal/private circuits. That’s why, at the enterprise level, we use iperf (actually iperf3), a powerful and free tool. In this tutorial, we will see how to use iperf to verify the speed of any link, reliably.
Getting started with iperf
What is iperf?
Iperf is a command-line tool that allows you to test bandwidth any way you like. Unlike online speed tests, you have to provide both the server and the client. In other words, when doing an online speed test, you connect to a server on the Internet owned by the provider of the test (like Ookla); the application then measures the network performance between you and that server. With iperf, instead, you have to set up your own iperf server. Don't worry, you don't need special hardware: you only need to run a command from your prompt. In fact, you can have both the server and the client on the same computer.
The advantage of running your own server is predictability. You know where the server is, and you can repeat the test in the future. That's not the case with online services, which allocate a server for you dynamically, so you won't be able to reproduce the same test later. And since you can place the server wherever you want, you can also put it on your internal network and thus test internal links.
Getting iperf
The stable version of iperf is iperf3. It is free software (BSD license) you can download from iperf.fr. In a rush? Here is a quick link to the download page. You will find an iperf build for any operating system and architecture you need. In this guide, we are using iperf3 on Windows (64 bit), but the tutorial on how to use iperf is the same for any OS.
If you are on Windows like me, you will get a compressed ZIP file. Extract it and, to make things easier, copy its contents into C:\Windows\System32. This way, you will always have iperf3 at hand as a command in the prompt. If you don't do that, you will need to change to the folder containing iperf before you can run the command.
How to use iperf
Once we have iperf, we need to learn how to use it. As we mentioned above, we need to run both a server and a client. The server keeps listening, accepting client connections, so it is the first thing we need to start. Running the server is as simple as typing iperf3 -s in the prompt (-s stands for server). The first time you do that on Windows, it will ask for network permission: check the permission boxes and click Allow access.
Once you enable the access, a simple message will appear on the prompt, telling you that the server is ready to accept connections. By default, iperf3 listens on port 5201.
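That message looks like this (the exact banner may vary with your iperf3 version):

iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------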
Now, we will leave the server be: it will accept all our connections. However, we can tweak the tests and even control the server from the iperf client. That's where the real deal is.
A simple speed test with iperf
Now, how do we use iperf3 to run a simple speed test? You can simply use the iperf3 -c <server IP> command, of course replacing <server IP> with the IP of your server. However, we want a better test, so we should give TCP all the time it needs to expand the window size: it is better to run the test for a few more seconds. We can do that with the -t option, followed by the number of seconds the test should run. Generally, one to two minutes are enough (for this demonstration, we will use just 5 seconds). Since our server is running on the same PC, the target IP will be localhost at 127.0.0.1, but that's just the case of this demonstration.
If we want to know how to use iperf, we need to know how to analyze the output. The standard output is a table with four columns, and the last two rows (after the dash line) represent the totals.
- ID is the identifier of the connection stream
- Interval is the time span the row refers to
- Transfer is the amount of data exchanged between client and server in that interval. In the end, a speed test is about transferring data and measuring how long it takes.
- Bandwidth is the bandwidth measured over that interval
Why don't we have a single total/summary row? Because we want to see the difference in performance between sending and receiving. In fact, on the far right of the summary rows, you will see the sender and receiver bandwidth. In our case, the sender is the client, so its row means the upload speed; receiver means the download speed. The two might not always be the same, for example on links with asymmetric bandwidth.
Other cool options
So far so good. In fact, you already know how to use iperf by simply using the commands above. However, you may want some options to tweak the measurement to your liking. To see all the options, use --help. Here are the most useful ones, with a combined example after the list:
- -P <N> creates N parallel connections, useful to push links to their limit
- -R runs in reverse mode: the server sends and the client receives
- -b sets a bandwidth limit (for example 10K, 5M, 1G); the test won't go much beyond this value, even if the link has more capacity
- -J generates the output in JSON once the operation finishes, useful if you want to use iperf in scripts
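For example, to run a one-minute test with four parallel streams in reverse mode and save the results as JSON (assuming your server sits at the hypothetical address 192.168.1.10), you could combine the options like this:

iperf3 -c 192.168.1.10 -t 60 -P 4 -R -J > result.json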
Try experimenting with all the option combinations you like. You will achieve enterprise-grade, professional bandwidth testing.
Conclusion (and TL;DR)
Iperf3 is a powerful tool to run custom, reliable bandwidth tests. In fact, it is perfect for enterprise-grade requirements. If you want to know how to use iperf, you only need two basic commands. First, run a server on a device with iperf3 -s. Then, on another device, run iperf3 -t 60 -c <server IP>. This will measure the bandwidth between the two over a period of 60 seconds, which is long enough to get a realistic measurement.
What do you think about iperf? Do you see yourself using it, instead of using the tools available online? Let me know your opinions in the comments.
Stanley Soman, Senior Security Cloud Support Engineer, Team Lead
Troubleshoot speed and throughput issues with iPerf
Two of the most common network characteristics we look at when investigating network-related concerns are speed and throughput. You may have experienced the following scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data center on the opposite side of the globe. You begin to upload your data. To your shock, you see “Time Remaining: 10 Hours.”
“What’s wrong with the network?” you wonder. The traceroute and MTR look fine—but where’s the performance and bandwidth you’re paying for?
This issue is all too common and it has nothing to do with the network. In fact, the culprits are none other than TCP and the laws of physics.
In data transmission, TCP sends a certain amount of data and then pauses. To ensure proper delivery of data, it doesn't send more until it receives an acknowledgment from the remote host that all the data was received. This is called the "TCP window." Data travels at the speed of light, and typically most hosts are fairly close together, so this windowing happens so fast that we don't even notice it. But as the distance between two hosts increases, the speed of light remains constant: the farther apart the two hosts are, the longer the sender waits for each acknowledgment, reducing overall throughput. The amount of data a link can carry "in flight" before an acknowledgment arrives is called the "Bandwidth Delay Product," or BDP: the link's bandwidth multiplied by its round-trip time.
We can overcome BDP to some degree by sending more data at a time. We do this by adjusting the "TCP window," telling TCP to send more data per flow than the default parameters allow. Each OS is different and the default values vary, but almost all operating systems allow tweaking the TCP stack and/or using parallel data streams.
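To get a feel for the numbers, note that the maximum throughput of a single TCP stream is roughly the window size divided by the round-trip time. Assuming, say, a 70 ms round trip between the two hosts, a 64 KB window caps the flow at about 64 KB / 0.070 s ≈ 7.5 Mbit/s no matter how big the pipe is, while a 1 MB window raises the ceiling to roughly 120 Mbit/s. Keep this rule of thumb in mind when reading the test results below.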
So what is iPerf, and how does it fit into all of this?
What is iPerf?
iPerf is a simple, open source, command-line, network diagnostic tool that you install on two endpoints which can run on Linux, BSD, or Windows platforms. One side runs in a “server” mode, listening for requests; the other end runs “client” mode, sending data. When activated, it tries to send as much data down your pipe as it can, spitting out transfer statistics as it does. What’s so cool about iPerf is that you can test in real time any number of TCP window settings—even using parallel streams. There’s even a Java-based GUI you can install that runs on top of it called JPerf (JPerf is beyond the scope of this article, but I recommend looking into it). What’s even cooler is that because iPerf resides in memory, there are no files to clean up.
How do I use iPerf?
You can quickly download iPerf here. It uses port 5001 by default, and the bandwidth it displays is from the client to the server. Each test runs for 10 seconds by default, but virtually every setting is adjustable. Once installed, simply bring up the command line on both of the hosts and run these commands.
On the server side:
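iperf -s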
On the client side:
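iperf -c 10.10.10.5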
The output on the client side will look like this:
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 10.0 sec 10.0 MBytes 8.39 Mbits/sec
There are a lot of things we can do to make this output better, with more meaningful data. For example, let's say we want the test to run for 20 seconds instead of 10 (-t 20), display transfer data every 2 seconds instead of the default of 10 (-i 2), and test on port 8000 instead of 5001 (-p 8000). For the purposes of this exercise, let's use those customizations as our baseline. This is what the command string would look like on both ends:
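Server side:

iperf -s -p 8000 -i 2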
Client side:
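iperf -c 10.10.10.5 -p 8000 -t 20 -i 2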
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 6.00 MBytes 25.2 Mbits/sec
[ 3] 2.0- 4.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 4.0- 6.0 sec 7.00 MBytes 29.4 Mbits/sec
[ 3] 6.0- 8.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 8.0-10.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 10.0-12.0 sec 7.00 MBytes 29.4 Mbits/sec
[ 3] 12.0-14.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 14.0-16.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 16.0-18.0 sec 6.88 MBytes 28.8 Mbits/sec
[ 3] 18.0-20.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec
Server side:
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 2.0 sec 6.05 MBytes 25.4 Mbits/sec
[ 4] 2.0- 4.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 4.0- 6.0 sec 6.94 MBytes 29.1 Mbits/sec
[ 4] 6.0- 8.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 8.0-10.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 10.0-12.0 sec 6.95 MBytes 29.1 Mbits/sec
[ 4] 12.0-14.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 14.0-16.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 16.0-18.0 sec 6.95 MBytes 29.1 Mbits/sec
[ 4] 18.0-20.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec
There are many, many other parameters you can set that are beyond the scope of this article, but for our purposes, the main use is to prove out our bandwidth. This is where we'll use the TCP window options and parallel streams. To set a new TCP window, you use the -w switch, and you can set the number of parallel streams with -P.
Increased TCP window commands:
Server side:
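iperf -s -w 1024k -p 8000 -i 2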
Client side:
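iperf -c 10.10.10.5 -p 8000 -t 20 -i 2 -w 1024k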
Here are the iPerf results from two IBM Cloud file servers: one in Washington, D.C., acting as client, the other in Seattle acting as server:
Client side:
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 3] 2.0- 4.0 sec 28.5 MBytes 120 Mbits/sec
[ 3] 4.0- 6.0 sec 28.4 MBytes 119 Mbits/sec
[ 3] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
[ 3] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 10.0-12.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 16.0-18.0 sec 27.9 MBytes 117 Mbits/sec
[ 3] 18.0-20.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 0.0-20.0 sec 283 MBytes 118 Mbits/sec
Server side:
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 4] 2.0- 4.0 sec 28.6 MBytes 120 Mbits/sec
[ 4] 4.0- 6.0 sec 28.3 MBytes 119 Mbits/sec
[ 4] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
[ 4] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 10.0-12.0 sec 29.0 MBytes 121 Mbits/sec
[ 4] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
[ 4] 16.0-18.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
[ 4] 0.0-20.0 sec 283 MBytes 118 Mbits/sec
We see here that by increasing the TCP window from the default value to 1 MB (1024k), we achieved roughly a fourfold increase in throughput over our baseline. Unfortunately, this is the limit of this OS in terms of window size. So what more can we do? Parallel streams! With multiple simultaneous streams, we can fill the pipe close to its maximum usable amount.
Parallel stream command:
Client side:
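iperf -c 10.10.10.5 -p 8000 -t 20 -i 2 -w 1024k -P 7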
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 7] 0.0- 2.0 sec 25.6 MBytes 107 Mbits/sec
[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 5] 0.0- 2.0 sec 25.8 MBytes 108 Mbits/sec
[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[SUM] 0.0- 2.0 sec 178 MBytes 746 Mbits/sec
[ 7] 18.0-20.0 sec 28.2 MBytes 118 Mbits/sec
[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 5] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
[ 9] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 6] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
[SUM] 18.0-20.0 sec 200 MBytes 837 Mbits/sec
[SUM] 0.0-20.0 sec 1.93 GBytes 826 Mbits/sec
Server side:
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval Transfer Bandwidth
[ 5] 0.0- 2.0 sec 25.7 MBytes 108 Mbits/sec
[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 10] 0.0- 2.0 sec 25.9 MBytes 108 Mbits/sec
[ 7] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[SUM] 0.0- 2.0 sec 178 MBytes 747 Mbits/sec
[ 4] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 5] 18.0-20.0 sec 28.3 MBytes 119 Mbits/sec
[ 7] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 10] 18.0-20.0 sec 28.1 MBytes 118 Mbits/sec
[ 9] 18.0-20.0 sec 28.0 MBytes 118 Mbits/sec
[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 6] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
[SUM] 18.0-20.0 sec 200 MBytes 838 Mbits/sec
[SUM] 0.0-20.1 sec 1.93 GBytes 825 Mbits/sec
As you can see from the tests above, we increased throughput from 29 Mbit/s with a single stream and the default TCP window to 826 Mbit/s using a larger window and parallel streams. On a gigabit link, this is about the maximum throughput one could hope to achieve before saturating the link and causing packet loss. We proved out the network and verified that bandwidth capacity was not an issue, and from that conclusion we focused on tweaking TCP to get the most out of the network.
You can also run UDP tests with iPerf for circumstances that require it. To use UDP instead of TCP, simply add the -u flag together with the -b flag, which sets the target UDP bandwidth in bits per second. For example, to test a 1000 Mbps NIC, use the -b flag with a value of 1000M to set the maximum UDP bandwidth to 1000 Mbit/sec (1 Gbit/sec); the default is only 1 Mbit/sec.
Here is an example:
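iperf -c 10.10.10.5 -u -b 1000M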
We will never get 100% out of any link. Typically, 90% utilization is about the real-world maximum anyone will achieve; push any harder and you'll begin to saturate the link and incur packet loss. IBM Cloud doesn't directly support iPerf, so it's up to you to install it and play around with it. It's such a versatile and easy-to-use little piece of software that we consider it invaluable.
Original Article written by Andrew Tyler. Updated by Stanley Soman.