Category Archives: Bandwidth/Latency/TCP/UDP

How to use Iperf

iPerf is an open-source tool that provides a standardized way to measure the performance of a network.

The tool has client and server functionality and can generate TCP- or UDP-based data streams.


Server mode
Receiver of unidirectional traffic.
Produces the actual results of the performance test.

Client mode
Sender of unidirectional traffic.
Throughput-affecting parameters are usually adjusted on the client.

UDP TEST
Server – iperf -s -u -p 5209 -i 2
Client – iperf -c 10.10.10.1 -b 2M -i 2 -t 200 -p 5209

TCP TEST
Server – iperf -s -p 5209 -B 10.10.10.1 -i 2 -w 128k
Client – iperf -c 10.10.10.1 -p 5209 -i 2 -t 200 -w 128k -P 2

SERVER:
iperf -s -i 10 -w 64k -p 5003
CLIENT:
iperf -c 10.10.10.2 -i 10 -t 300 -w 64k -p 5003
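
To collect the results programmatically rather than reading the console output, iPerf 3 (listed among the suggested versions below) can emit JSON with -J. A minimal sketch, assuming iperf3 is installed and an iperf3 server is already listening on the address/port from the example above (both are just placeholders); the exact JSON field layout can vary slightly between iperf3 versions:

#!/usr/bin/env python3
"""Minimal sketch: run an iperf3 TCP test and report throughput."""
import json
import subprocess

SERVER = "10.10.10.2"   # address taken from the client example above
PORT = 5003

# -J asks iperf3 for JSON output, which is easier to parse than plain text.
cmd = ["iperf3", "-c", SERVER, "-p", str(PORT), "-t", "10", "-J"]
proc = subprocess.run(cmd, capture_output=True, text=True, check=True)
result = json.loads(proc.stdout)

# For a TCP test, the "end" section summarises sender and receiver throughput.
sent = result["end"]["sum_sent"]["bits_per_second"]
received = result["end"]["sum_received"]["bits_per_second"]
print(f"sent:     {sent / 1e6:.1f} Mbit/s")
print(f"received: {received / 1e6:.1f} Mbit/s")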

Suggested iPerf versions:
iPerf version 2.0.5
iPerf version 2.0.9
iPerf version 3.1.3

TCP THROUGHPUT CALCULATOR:
https://www.switch.ch/network/tools/tcp_throughput/
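
For a rough idea of what a calculator like the one linked above works from, the key relation is the bandwidth-delay product: the window (or buffer) needed to keep a path full is bandwidth x RTT. A quick sketch with illustrative figures:

# Window size needed to sustain a given bandwidth over a given round-trip time.
def required_window_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product in bytes."""
    return bandwidth_bps / 8 * rtt_seconds

# Example: filling a 1 Gbit/s path with a 10 ms round-trip time
print(required_window_bytes(1e9, 0.010))   # 1,250,000 bytes, ~1.2 MiB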


Link:
https://www.slashroot.in/iperf-how-test-network-speedperformancebandwidth


TCP

So how are the MSS and window size related?

  1. The MSS is not by definition limited to 1460 bytes, but that value is a good fit for Ethernet networks. Standard Ethernet uses frames with up to 1500 octets (bytes, if you wish) of payload. Subtract the headers for IP and TCP and you end up with something like 1460. You can go larger with the MSS, but that could result in fragmentation in the lower layers (IP in this case), hurting performance.
  2. The unscaled TCP window size is at most 65,535 bytes. It has no relation to the MSS. The MSS is about transporting data over the network, using the services of the IP layer. The window size has to do with the sending and receiving hosts themselves: how many resources (memory) they can spend on this TCP session for storing data awaiting retransmission or hand-off to the receiving application.
  3. TCP window scaling is a solution to the problem of growing network speeds. I’ll spare you the math, but the original TCP header field used to communicate the window size (with that maximum of 65,535) just wasn’t big enough any more, so a multiplication factor was added (see the sketch after this list).
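
To make item 3 concrete, here is a small sketch of the scaling arithmetic. The shift values are illustrative; 14 is the maximum shift defined by RFC 7323:

# TCP window scaling: the 16-bit window field still carries at most 65,535,
# but both ends agree on a shift count during the handshake, and the
# effective window is the advertised value shifted left by that count.
def effective_window(advertised: int, scale_shift: int) -> int:
    return advertised << scale_shift

print(effective_window(65535, 0))    # 65,535 bytes (no scaling)
print(effective_window(65535, 7))    # 8,388,480 bytes (~8 MiB)
print(effective_window(65535, 14))   # 1,073,725,440 bytes (~1 GiB, the maximum)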

The TCP window size is generally independent of the maximum segment size, which depends on the maximum transmission unit, which in turn depends on the maximum frame size.

Let’s start low.

The maximum frame size is the largest frame a network (segment) can transport. For Ethernet, this is 1518 bytes by definition.

The frame encapsulates an IP packet, so the largest packet – the maximum transmission unit (MTU) – is the maximum frame size minus the frame overhead. For Ethernet, that’s 1518 – 18 = 1500 bytes.

The IP packet encapsulates a TCP segment, so the maximum segment size (MSS) is the MTU minus the IP overhead minus the TCP overhead (the MSS doesn’t include the TCP header). For Ethernet and TCP over IPv4 without options, this is 1500 – 20 (IPv4 overhead) – 20 (TCP overhead) = 1460 bytes.
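
The same arithmetic in Python, for reference; the numbers come straight from the paragraphs above:

# Worked numbers: Ethernet frame -> MTU -> MSS
MAX_FRAME = 1518        # maximum Ethernet frame (no 802.1Q tag)
ETH_OVERHEAD = 18       # Ethernet header + FCS
IPV4_HEADER = 20        # IPv4 header without options
TCP_HEADER = 20         # TCP header without options

mtu = MAX_FRAME - ETH_OVERHEAD          # 1500
mss = mtu - IPV4_HEADER - TCP_HEADER    # 1460
print(mtu, mss)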

Now, TCP is a transport protocol that presents itself as a stream socket to the application. That means that an application can just transmit any arbitrarily sized amount of data across that socket. For that, TCP splits the data stream into said segments (0 to MSS bytes long {1}), transmits each segment over IP, and puts them back together at the destination.

TCP segments are acknowledged by the destination to guarantee delivery. Imagine the source node would only send a single segment, wait for acknowledgment, and then send the next segment. Regardless of the actual bandwidth, the throughput of this TCP connection would be limited by the round-trip time (RTT, the time it takes for a packet to travel from source to destination and back again).

So, if you had a 1 Gbit/s connection between two nodes with an RTT of 10 ms, you could effectively send 1460 bytes every 10 ms or 146 kB/s. That’s not very satisfying.

TCP therefore uses a send window – multiple segments that can be “in flight” at the same time, sent out and awaiting acknowledgment. It’s also called a sliding window because it advances each time the segment at the beginning of the window is acknowledged, triggering the sending of the next segment that the window has advanced to. This way, the segment size doesn’t matter. With a window of traditionally 64 KiB we can have that amount in flight and, accordingly, transport 64 KiB every 10 ms ≈ 6.5 MB/s. Better, but still not really satisfying for a gigabit connection.
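
Both throughput figures above come straight from window / RTT; a quick check in Python:

# With only one window's worth of data in flight, throughput is capped at window / RTT.
def throughput_bytes_per_s(window_bytes: int, rtt_seconds: float) -> float:
    return window_bytes / rtt_seconds

rtt = 0.010  # 10 ms round-trip time
print(throughput_bytes_per_s(1460, rtt))       # 146,000 B/s  -> ~146 kB/s (one segment per RTT)
print(throughput_bytes_per_s(64 * 1024, rtt))  # 6,553,600 B/s -> ~6.5 MB/s (64 KiB window)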

Modern TCP uses the window scale option, which can increase the send window exponentially up to about 1 GiB (65,535 × 2^14), providing for some future growth.

But why isn’t all data just sent at once and why do we need this send window? If you send everything as fast as you – locally – can and there’s (very likely) a slower link somewhere in the path to the destination, significant amounts of data would need to be queued. No switch or router is able to buffer more than a few MB (if at all), so the excess traffic would need to be dropped. Failing acknowledgment, it would need to be resent, the excess being dropped again. This would be highly inefficient and it would severely congest the network. TCP handles this problem with its congestion control, adjusting the window size according to effective bandwidth and current round-trip time in a complex algorithm.

{1} Empty segments can be used to prevent connection timeouts using the keepalive option. Thanks, Deduplicator.
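
The congestion-control behaviour described above can be sketched very loosely as additive increase / multiplicative decrease. This is a toy model, not any particular TCP variant, and the capacity threshold is an invented stand-in for packet loss:

# Toy AIMD sketch: grow the congestion window by one segment per round trip,
# halve it when the (pretend) path capacity is exceeded. Real congestion
# control (Reno, CUBIC, BBR, ...) is considerably more involved.
def aimd(rounds: int, capacity_segments: int) -> list[int]:
    cwnd = 1                           # congestion window, in segments
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity_segments:   # pretend packets are dropped past this point
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on loss
        else:
            cwnd += 1                  # additive increase per round trip
    return history

print(aimd(rounds=30, capacity_segments=12))   # prints the classic sawtooth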

Basics of Reliability, TX & RX

“The overall reliability or load of an interface at a given point in time can be measured by the txload/rxload, a fractional calculation ( 255/255 = 100% ) over a default average of 5 minutes. This 5-minute interval is the default on most if not all Cisco devices, but it can be changed or tuned if necessary. We always want to see the overall reliability at 255/255, which basically means all is good. The thing to remember with regard to txload/rxload is that they both make up the same 255. For example, you wouldn’t see the txload at, say, 200/255 and the rxload at 60/255, as that would add up to 260.”

The total load on a given interface can be measured as txload/255 + rxload/255, with the sum never exceeding 255, or 100% of the overall interface. For example, let’s say we had an interface that was completely saturated at 100% capacity over a given period of time, and that during this period the rxload was 124/255, or 49% of the interface utilization in the receive direction. Because the interface is at 100% capacity, the txload would have to be ( 255 – 124 = 131 ) 131/255, or 51% of the interface utilization in the transmit direction.

( 255 – 124 = 131 ) 131/255 = 0.513 or roughly 51%.
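
The load counters are easy to turn into percentages; a quick sketch using the figures from the example above:

# Convert Cisco-style load counters (n/255) to percentages.
def load_percent(load: int) -> float:
    return load / 255 * 100

rxload = 124
txload = 255 - rxload   # 131, per the saturated-interface example above
print(f"rxload: {load_percent(rxload):.0f}%")   # ~49%
print(f"txload: {load_percent(txload):.0f}%")   # ~51%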

Note: Keep in mind that the bandwidth command modifies only the perceived bandwidth of the interface: it has no effect on the actual speed at which packets are transmitted or received.

## Modifying the default load interval.
Router#configure terminal
Router(config)#interface gigabitethernet0/0
Router(config-if)#load-interval 30
Router(config-if)#end

http://packetlife.net/blog/2011/jul/8/evaluating-txload-and-rxload/
https://www.thepacket.net/load/

NETWORK PERFORMANCE: LINKS BETWEEN LATENCY, THROUGHPUT AND PACKET LOSS

3 major network performance indicators / the basis for network troubleshooting (a short worked example follows this list):
– Latency is the time required to carry a packet across a network.
Latency may be measured in many different ways: round trip, one way, etc…
Latency may be affected by any element in the chain used to carry the data: workstation, WAN links, routers, local area network, server… and ultimately, for large networks, it may be limited by the speed of light.

– Throughput is defined as the quantity of data being sent/received per unit of time.

– Packet loss reflects the number of packets lost per 100 packets sent by a host.
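
A small sketch tying the three indicators together; all counts and timings below are made-up illustration values:

# Compute the three indicators from a hypothetical test run.
sent_packets = 1000
received_packets = 970
bytes_transferred = 970 * 1460          # assume 1460-byte payloads
elapsed_seconds = 10.0
rtt_samples_ms = [21.0, 19.5, 23.2, 20.1]

latency_ms = sum(rtt_samples_ms) / len(rtt_samples_ms)            # round-trip latency
throughput_mbps = bytes_transferred * 8 / elapsed_seconds / 1e6   # data per unit of time
packet_loss_pct = (sent_packets - received_packets) / sent_packets * 100

print(f"latency:     {latency_ms:.1f} ms (round trip)")
print(f"throughput:  {throughput_mbps:.2f} Mbit/s")
print(f"packet loss: {packet_loss_pct:.1f}%")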

http://blog.performancevision.com/eng/earl/links-between-latency-throughput-and-packet-loss
https://www.cisco.com/c/en/us/about/press/internet-protocol-journal/back-issues/table-contents-16/gigabit-tcp.html