...
Code Block |
---|
root@PC:~# ip netns add isolated_ns
root@PC:~# ip link set ens1f0 netns isolated_ns
root@PC:~# ip netns exec isolated_ns ip addr flush ens1f0
root@PC:~# ip netns exec isolated_ns ip addr add 192.168.10.1/24 dev ens1f0
root@PC:~# ip netns exec isolated_ns ip addr add 192.168.11.1/24 dev ens1f0
root@PC:~# ip netns exec isolated_ns ip link set dev ens1f0 up
root@PC:~# ip netns exec isolated_ns ip route add 192.168.30.0/24 via 192.168.10.2
root@PC:~# ip netns exec isolated_ns ip route add 192.168.31.0/24 via 192.168.11.2
root@PC:~# ip addr flush ens1f1
root@PC:~# ip address add 192.168.30.2/24 dev ens1f1
root@PC:~# ip address add 192.168.31.2/24 dev ens1f1
root@PC:~# ip link set dev ens1f1 up
root@PC:~# ip route add 192.168.10.0/24 via 192.168.30.1
root@PC:~# ip route add 192.168.11.0/24 via 192.168.31.1 |
...
whle_ls1046_1
Code Block |
---|
root@whle-ls1046a:~# ip address flush eth1
root@whle-ls1046a:~# ip address flush eth2
root@whle-ls1046a:~# ip address flush eth3
root@whle-ls1046a:~# ip address flush eth5
root@whle-ls1046a:~# ip address flush eth4
root@whle-ls1046a:~# ip addr add 192.168.10.2/24 dev eth5
root@whle-ls1046a:~# ip addr add 192.168.11.2/24 dev eth5
root@whle-ls1046a:~# ip addr add 192.168.30.1/24 dev eth4
root@whle-ls1046a:~# ip addr add 192.168.31.1/24 dev eth4
root@whle-ls1046a:~# ip link set dev eth4 up
root@whle-ls1046a:~# ip link set dev eth5 up
root@whle-ls1046a:~# echo 1 > /proc/sys/net/ipv4/ip_forward |
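The echo into /proc enables IPv4 forwarding only until the next reboot. An equivalent sketch using sysctl (assumes the procps sysctl tool is present on the board's rootfs; the drop-in file name is hypothetical, any *.conf under /etc/sysctl.d works):

```shell
# one-shot equivalent of the echo above (requires root)
sysctl -w net.ipv4.ip_forward=1

# persist the setting across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl --system
```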
...
Code Block |
---|
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5201 &
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5202 &
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5203 &
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5204 & |
Iperf clients
Launching the iperf3 clients is a bit more involved, as reproducibility requires controlling the source port. Normally the source port is an ephemeral port assigned by the system from a specific range, usually 32768-60999 on Linux; the iperf3 client, however, allows fixing it explicitly with the --cport option, used in the invocations below.
Run four clients simultaneously:
PC
Code Block |
---|
root@PC:~# (
iperf3 -c 192.168.10.1 --port 5201 --cport 55000 --time 0 --omit 5 --title A &
iperf3 -c 192.168.10.1 --port 5202 --cport 55002 --time 0 --omit 5 --title B &
iperf3 -c 192.168.10.1 --port 5203 --cport 55006 --time 0 --omit 5 --title C &
iperf3 -c 192.168.10.1 --port 5204 --cport 55001 --time 0 --omit 5 --title D &
)
A: Connecting to host 192.168.10.1, port 5201
B: Connecting to host 192.168.10.1, port 5202
C: Connecting to host 192.168.10.1, port 5203
D: Connecting to host 192.168.10.1, port 5204
C: [ 5] local 192.168.30.2 port 55006 connected to 192.168.10.1 port 5203
B: [ 5] local 192.168.30.2 port 55002 connected to 192.168.10.1 port 5202
D: [ 5] local 192.168.30.2 port 55001 connected to 192.168.10.1 port 5204
A: [ 5] local 192.168.30.2 port 55000 connected to 192.168.10.1 port 5201
... |
The port numbers are picked in such a way that every iperf3 flow is handled by a different core on whle_ls1046_1 - cores 0, 2, 1, 3, respectively. The iperf3 calls fix the source ports used for the data transfer connections (the --cport parameter). There is a small chance that some of these ports are already in use in the system; in that case it's necessary to locate the offending processes with netstat -tnp and kill them.
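A quick way to perform that check before launching the clients (a sketch; assumes net-tools' netstat is installed, and falls back to a "free" message when nothing matches):

```shell
# list connections already occupying the chosen source ports, if any
netstat -tnp 2>/dev/null | grep -E ':5500[0-6] ' || echo 'chosen ports free'
```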
Stop all the clients after some time.
PC
Code Block |
---|
root@PC:~# kill $(ps a | grep 'iperf3 -[c]' | awk '{ print $1; }') |
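The bracket trick in 'iperf3 -[c]' makes the pattern match the iperf3 client processes but not the grep process itself, whose command line contains the bracketed pattern rather than the literal string. The extraction step can be demonstrated on synthetic ps-style output (no real processes involved; the PIDs are made up):

```shell
# synthetic 'ps a'-style lines: PID TTY STAT TIME COMMAND
printf '1234 pts/0 S 0:01 iperf3 -c 192.168.10.1 --port 5201\n5678 pts/0 S 0:00 grep iperf3 -[c]\n' \
  | grep 'iperf3 -[c]' | awk '{ print $1; }'
# prints: 1234
```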
Code Block |
---|
...
B: [ 5] 53.00-53.34 sec 125 MBytes 3.13 Gbits/sec 0 1014 KBytes
A: [ 5] 53.00-53.33 sec 124 MBytes 3.13 Gbits/sec 0 732 KBytes
...
B: - - - - - - - - - - - - - - - - - - - - - - - - -
B: [ ID] Interval Transfer Bitrate Retr
B: [ 5] 0.00-53.34 sec 14.1 GBytes 2.29 Gbits/sec 12306 sender
B: [ 5] 0.00-53.34 sec 0.00 Bytes 0.00 bits/sec receiver
C: [ 5] 0.00-53.34 sec 14.1 GBytes 2.37 Gbits/sec 2890 sender
A: [ 5] 0.00-53.33 sec 15.2 GBytes 2.33 Gbits/sec 2889 sender
D: [ 5] 0.00-53.33 sec 14.1 GBytes 2.35 Gbits/sec 2636 sender
iperf3: interrupt - the client has terminated
iperf3: interrupt - the client has terminated
iperf3: interrupt - the client has terminated
iperf3: interrupt - the client has terminated |
Sum the values from the lines with "sender" at the end.
Code Block |
---|
B: [ 5] 0.00-53.34 sec 14.1 GBytes 2.29 Gbits/sec 12306 sender
...
C: [ 5] 0.00-53.34 sec 14.1 GBytes 2.37 Gbits/sec 2890 sender
...
A: [ 5] 0.00-53.33 sec 15.2 GBytes 2.33 Gbits/sec 2889 sender
...
D: [ 5] 0.00-53.33 sec 14.1 GBytes 2.35 Gbits/sec 2636 sender |
The total bandwidth achieved is 2.29 + 2.37 + 2.33 + 2.35 = 9.34 Gb/s. This is the upper limit for the TCP protocol on a 10 Gb/s physical link, proving that the WHLE-LS1046A board is able to handle routing at its network interface's limit using standard kernel drivers.
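The sum can be sanity-checked in the shell (a throwaway awk one-liner, not part of the test procedure):

```shell
# add up the four per-flow bitrates from the "sender" lines
awk 'BEGIN { printf "%.2f\n", 2.29 + 2.37 + 2.33 + 2.35 }'
# prints: 9.34
```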
WHLE work analysis
Consider a snapshot from the top command run on whle_ls1046_1 during the performance test:
...
The si column shows the CPU time spent in software interrupts, in this case almost exclusively network interrupts. The near-zero time spent in system and user mode shows that the routing task is carried out in the interrupts alone. The load, spread evenly at ~73% between all cores, stems from picking the right parameters (IP source address, IP destination address, TCP source port, TCP destination port) defining four data flows, which the driver's RSS assigns to four separate CPUs. The idle time id at ~25% shows that the WHLE operates at 75% capacity, providing a decent margin to account for more realistic routing tasks, with bigger routing tables and less than perfectly CPU-even traffic.
L2 Bridge
Connection diagram
...
Network Setup
PC
Code Block |
---|
root@PC:~# ip netns add isolated_ns
root@PC:~# ip link set ens1f0 netns isolated_ns
root@PC:~# ip netns exec isolated_ns ip addr flush ens1f0
root@PC:~# ip netns exec isolated_ns ip addr add 192.168.30.1/24 dev ens1f0
root@PC:~# ip addr flush ens1f1
root@PC:~# ip address add 192.168.30.2/24 dev ens1f1 |
whle_ls1046_1
Code Block |
---|
root@whle-ls1046a:~# ip address flush eth4
root@whle-ls1046a:~# ip address flush eth5
root@whle-ls1046a:~# ip link set dev eth4 down
root@whle-ls1046a:~# ip link set dev eth5 down
root@whle-ls1046a:~# brctl addbr br0
root@whle-ls1046a:~# brctl addif br0 eth4
root@whle-ls1046a:~# brctl addif br0 eth5
root@whle-ls1046a:~# ip link set dev br0 up
root@whle-ls1046a:~# ip link set dev eth4 up
root@whle-ls1046a:~# ip link set dev eth5 up |
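The brctl tool comes from the legacy bridge-utils package. On images without it, the same bridge can be built with iproute2 alone - an equivalent sketch of the three brctl steps above, not an additional setup step (run as root on the board):

```shell
# create the bridge and enslave both 10G interfaces to it
ip link add name br0 type bridge
ip link set dev eth4 master br0
ip link set dev eth5 master br0
ip link set dev br0 up
```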
Tests
Iperf servers
On PC launch four instances of iperf3 servers, listening on ports 5201 to 5204. The ip netns exec command requires root access.
PC
Code Block |
---|
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5201 &
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5202 &
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5203 &
root@PC:~# ip netns exec isolated_ns iperf3 -s -p 5204 & |
Iperf clients
Run four clients simultaneously:
PC
Code Block |
---|
root@PC:~# (
iperf3 -c 192.168.30.1 --port 5201 --cport 55000 --time 0 --title A &
iperf3 -c 192.168.30.1 --port 5202 --cport 55002 --time 0 --title B &
iperf3 -c 192.168.30.1 --port 5203 --cport 55004 --time 0 --title C &
iperf3 -c 192.168.30.1 --port 5204 --cport 55003 --time 0 --title D &
)
A: Connecting to host 192.168.30.1, port 5201
B: Connecting to host 192.168.30.1, port 5202
C: Connecting to host 192.168.30.1, port 5203
D: Connecting to host 192.168.30.1, port 5204
B: [ 5] local 192.168.30.2 port 55002 connected to 192.168.30.1 port 5202
D: [ 5] local 192.168.30.2 port 55003 connected to 192.168.30.1 port 5204
A: [ 5] local 192.168.30.2 port 55000 connected to 192.168.30.1 port 5201
C: [ 5] local 192.168.30.2 port 55004 connected to 192.168.30.1 port 5203
B: [ ID] Interval Transfer Bitrate Retr Cwnd
B: [ 5] 0.00-1.00 sec 243 MBytes 2.04 Gbits/sec 148 386 KBytes
C: [ ID] Interval Transfer Bitrate Retr Cwnd
C: [ 5] 0.00-1.00 sec 382 MBytes 3.21 Gbits/sec 243 331 KBytes
D: [ ID] Interval Transfer Bitrate Retr Cwnd
D: [ 5] 0.00-1.00 sec 251 MBytes 2.11 Gbits/sec 214 250 KBytes
A: [ ID] Interval Transfer Bitrate Retr Cwnd
A: [ 5] 0.00-1.00 sec 249 MBytes 2.09 Gbits/sec 83 370 KBytes
B: [ 5] 1.00-2.00 sec 210 MBytes 1.76 Gbits/sec 404 454 KBytes
A: [ 5] 1.00-2.00 sec 470 MBytes 3.95 Gbits/sec 173 551 KBytes
C: [ 5] 1.00-2.00 sec 224 MBytes 1.88 Gbits/sec 5 539 KBytes
D: [ 5] 1.00-2.00 sec 218 MBytes 1.83 Gbits/sec 23 362 KBytes
B: [ 5] 2.00-3.00 sec 229 MBytes 1.92 Gbits/sec 422 609 KBytes
... |
The addresses and ports are picked in such a way that each iperf3 flow is handled by a different core on whle_ls1046_1 - cores 3, 1, 0, 2, respectively.
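Whether the flows really land on different cores can be spot-checked on the board while the test runs, by sampling the per-CPU NET_RX softirq counters and watching which columns grow (a sketch; reads a standard Linux procfs file):

```shell
# two samples of the per-CPU NET_RX softirq counters, one second apart
grep NET_RX /proc/softirqs
sleep 1
grep NET_RX /proc/softirqs
```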
Stop all the clients after some time.
Code Block |
---|
root@PC:~# kill $(ps a | grep 'iperf3 -[c]' | awk '{ print $1; }') |
Code Block |
---|
...
D: [ 5] 139.00-140.00 sec 280 MBytes 2.35 Gbits/sec 168 611 KBytes
D: [ 5] 140.00-140.95 sec 348 MBytes 3.06 Gbits/sec 108 617 KBytes
B: [ 5] 140.00-140.96 sec 272 MBytes 2.39 Gbits/sec 940 516 KBytes
A: [ 5] 140.00-140.95 sec 246 MBytes 2.17 Gbits/sec 754 598 KBytes
C: [ 5] 140.00-140.95 sec 195 MBytes 1.72 Gbits/sec 290 461 KBytes
D: - - - - - - - - - - - - - - - - - - - - - - - - -
D: [ ID] Interval Transfer Bitrate Retr
D: [ 5] 0.00-140.95 sec 40.3 GBytes 2.45 Gbits/sec 32702 sender
D: [ 5] 0.00-140.95 sec 0.00 Bytes 0.00 bits/sec receiver
B: - - - - - - - - - - - - - - - - - - - - - - - - -
B: [ ID] Interval Transfer Bitrate Retr
B: [ 5] 0.00-140.96 sec 37.4 GBytes 2.28 Gbits/sec 56664 sender
B: [ 5] 0.00-140.96 sec 0.00 Bytes 0.00 bits/sec receiver
A: [ 5] 0.00-140.95 sec 37.0 GBytes 2.25 Gbits/sec 64981 sender
A: [ 5] 0.00-140.95 sec 0.00 Bytes 0.00 bits/sec receiver
C: [ 5] 0.00-140.95 sec 38.9 GBytes 2.37 Gbits/sec 34875 sender
C: [ 5] 0.00-140.95 sec 0.00 Bytes 0.00 bits/sec receiver
iperf3: interrupt - the client has terminated
iperf3: interrupt - the client has terminated
iperf3: interrupt - the client has terminated
iperf3: interrupt - the client has terminated |
Sum the values from the lines with "sender" at the end
Code Block |
---|
D: [ 5] 0.00-140.95 sec 40.3 GBytes 2.45 Gbits/sec 32702 sender
...
B: [ 5] 0.00-140.96 sec 37.4 GBytes 2.28 Gbits/sec 56664 sender
...
A: [ 5] 0.00-140.95 sec 37.0 GBytes 2.25 Gbits/sec 64981 sender
...
C: [ 5] 0.00-140.95 sec 38.9 GBytes 2.37 Gbits/sec 34875 sender |
The total bandwidth achieved is 2.45 + 2.28 + 2.25 + 2.37 = 9.35 Gb/s. This is the upper limit for the TCP protocol on a 10 Gb/s physical link, proving that the WHLE-LS1046A board is able to handle bridging at its network interface's limit using standard kernel drivers.
WHLE work analysis
Consider a snapshot from the top command run on whle_ls1046_1 during the performance test:
...
Just like in the case of the router (https://conclusive.atlassian.net/wiki/spaces/CW/pages/edit-v2/398721025#WHLE-work-analysis), the only meaningful columns are id (idle) and si (software interrupt). Unlike with the router, however, the CPU load in bridge mode has a high variance, so a single top snapshot can be misleading. It’s useful to record the numbers for a minute or so:
whle_ls1046_1
Code Block |
---|
top -d 0.5 -b \
| grep -e ^%Cpu \
| sed -e 's/[,:]/ /g' \
| awk '{print $1 "\t" $8 "\t" $14}' \
| tee cpu-load-id-si-per-5-ds.log |
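The recorded log can then be averaged per core with awk. A self-contained sketch on synthetic data in the same three-column format (label, id, si) that the pipeline above produces; the sample values and file name are made up:

```shell
# fabricate a two-core sample log in the pipeline's output format
printf '%%Cpu0\t30.0\t68.0\n%%Cpu0\t26.0\t72.0\n%%Cpu1\t32.0\t66.0\n%%Cpu1\t28.0\t70.0\n' > /tmp/cpu-load-sample.log

# average id and si per core
awk '{ id[$1] += $2; si[$1] += $3; n[$1]++ }
     END { for (c in id) printf "%s avg_id=%.1f avg_si=%.1f\n", c, id[c]/n[c], si[c]/n[c] }' \
    /tmp/cpu-load-sample.log | sort
# prints:
# %Cpu0 avg_id=28.0 avg_si=70.0
# %Cpu1 avg_id=30.0 avg_si=68.0
```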
Plotting them, along with their averages, yields a graph similar to this one:
...
From this graph it’s clear that every core’s idle time oscillates around a 30% average, leaving a healthy margin to account for more realistic bridging scenarios with less than perfectly CPU-even traffic.