...
Connection Diagram
Setup
Network Setup
...
```
root@whle-ls1046a:~# chmod +x cgroups-setup.sh
root@whle-ls1046a:~# ./cgroups-setup.sh
```
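The contents of `cgroups-setup.sh` are not shown above. Judging from the cgroup paths used later in this section, a minimal sketch of what such a script might contain could look as follows; the `prio-N` naming is taken from those paths, while the 0-15 priority range and the `net_prio.ifpriomap` entries are assumptions, not the script's confirmed contents.

```
#!/bin/bash
# Hypothetical sketch -- the actual cgroups-setup.sh is not shown on this page.
# Mount the net_prio cgroup v1 controller and create one cgroup per skb priority.
mkdir -p /sys/fs/cgroup/net_prio
mount -t cgroup -o net_prio none /sys/fs/cgroup/net_prio 2>/dev/null
for prio in $(seq 0 15); do
    mkdir -p "/sys/fs/cgroup/net_prio/prio-${prio}"
    # Tag eth1 traffic from processes in this cgroup with skb priority ${prio}.
    echo "eth1 ${prio}" > "/sys/fs/cgroup/net_prio/prio-${prio}/net_prio.ifpriomap"
done
```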
Iperf Setup
Two `iperf3` streams will be created, with servers launched on `PC` and clients on `whle_ls1046`, with the default client → server data flow direction.
The direction of the transfer is important in this experiment. The notion of queue prioritization in the DPAA architecture (or any other `mqprio` architecture, for that matter) applies only to egress traffic. Sending data to the remote location `192.168.3.1` from the isolated namespace implies the following order of processing for the majority of `iperf3` traffic:

1. `whle_ls1046`'s CPU,
2. `whle_ls1046`'s `eth1` interface (egress),
3. `PC`'s `enxc84d4423262e` interface (ingress),
4. `PC`'s CPU.

Given that the maximum throughput of 1 Gb/s for the whole connection leaves plenty of headroom on `whle_ls1046`'s CPU (let alone `PC`'s), the `enxc84d4423262e` - `eth1` link becomes the bottleneck, with packets congesting at the `eth1` funnel, where the DPAA prioritization can come into play. If the transfer went the other way, e.g. with clients running on `PC` and servers on `whle_ls1046`, the funnel would form at the testing machine's `enxc84d4423262e` interface.
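As a quick sanity check of this asymmetry, the interface counters on the board can be watched while a transfer is running; during a client → server test, `eth1`'s TX counters should grow at roughly line rate while RX stays comparatively idle:

```
# TX bytes/packets should dominate during an egress test; RX stays low (mostly ACKs).
root@whle-ls1046a:~# ip -s link show eth1
```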
Given the peculiarities of setting up the `iperf3` process's priority on `whle_ls1046`, it's easier to track the data transfer speed on `PC`'s side by launching `iperf3` in blocking mode instead of as a daemon.
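For reference, the daemon variant would detach the server from the console and is best combined with a log file (the path below is just an example); blocking mode, used in the consoles that follow, prints the per-interval bitrates directly instead.

```
# Daemon variant (not used here); results would land in the log file:
user@PC~$ iperf3 --server --port 5201 --daemon --logfile /tmp/iperf-5201.log
```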
PC, console 1
```
user@PC~$ iperf3 --server --port 5201
```
```
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
```
PC, console 2
```
user@PC~$ iperf3 --server --port 5202
```
```
-----------------------------------------------------------
Server listening on 5202
-----------------------------------------------------------
```
whle_ls1046
Launching `iperf3` clients on the WHLE board must follow a specific protocol:

1. Clean any `mqprio` qdiscs on the `eth1` interface.
2. Run the `iperf3` client.
3. Obtain the client's PID.
4. Assign a specific `net_prio` priority to the given PID, using cgroups.
5. Define the `mqprio` qdisc on the `eth1` interface.
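Steps (1) and (5) correspond to the two `tc` invocations used later in `test_iperf`; they are repeated below with the meaning of the `mqprio` parameters spelled out:

```
# Step 1: remove any existing root qdisc from eth1.
tc qdisc del dev eth1 root handle 1:

# Step 5: install mqprio with 4 traffic classes.
#   num_tc 4 -- four traffic classes;
#   map ...  -- indexed by skb priority: priorities 0-3 -> TC0, 4-7 -> TC1,
#               8-11 -> TC2, 12-15 -> TC3;
#   hw 1     -- offload the prioritization to the hardware (here: DPAA).
tc qdisc add dev eth1 root handle 1: mqprio num_tc 4 \
    map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
```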
While the (2) < (3) < (4) ordering is pretty obvious, the rest may not be. Practice has shown that changing the packet priority of a process with an ongoing connection while the `mqprio` qdisc is already set up leads to inconsistent results, with the change sometimes reflected on the wire and sometimes not. In contrast, setting up the `mqprio` qdisc while all the traffic is already up and running results in consistent behavior.
Because of this, it's useful to define some bash procedures implementing the above ordering. First, a `launch_iperf_with_priority` function will be defined, which starts the `iperf3` client and assigns it a specific priority.
whle_ls1046
```
root@whle-ls1046a:~# launch_iperf_with_priority() {
    local port=$1
    local prio=$2
    local iperf_time=$3
    echo "Launching iperf3, port ${port}, priority ${prio}"
    # Start the client in the background and find its PID.
    iperf3 --port "${port}" --client 192.168.3.1 --time "${iperf_time}" > /dev/null &
    local pid=$(pgrep -f "iperf3 --port ${port}")
    echo "${pid}"
    # Classify the process by moving it into the matching net_prio cgroup.
    echo "${pid}" > "/sys/fs/cgroup/net_prio/prio-${prio}/cgroup.procs"
}
```
...
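To double-check that the classification took effect, the cgroup's process list can be inspected right after launching a client; the `prio-4` path below assumes the layout created by `cgroups-setup.sh`:

```
root@whle-ls1046a:~# launch_iperf_with_priority 5201 4 10
# Should print the PID reported by launch_iperf_with_priority:
root@whle-ls1046a:~# cat /sys/fs/cgroup/net_prio/prio-4/cgroup.procs
```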
The opposite operation will be handled by the `kill_iperf` procedure.
whle_ls1046
```
root@whle-ls1046a:~# kill_iperf() {
    local port=$1
    pkill -f "iperf3 --port ${port}"
}
```
Example usage:
whle_ls1046
```
root@whle-ls1046a:~# launch_iperf_with_priority 5201 0 10
Launching iperf3, port 5201, priority 0
[1] 493
root@whle-ls1046a:~# kill_iperf 5201
```
This would create a connection with the server at `192.168.3.1`, port `5201`, for 10 seconds, with the packets sent having skb priority `0`. The client is then killed without waiting for it to finish.
Building on these, a third and final procedure will be defined, which coordinates launching two `iperf3` streams with different priorities for the same time period.
whle_ls1046
```
root@whle-ls1046a:~# test_iperf() {
    local port1=$1
    local prio1=$2
    local port2=$3
    local prio2=$4
    local iperf_time=$5
    kill_iperf "${port1}"
    kill_iperf "${port2}"
    tc qdisc del dev eth1 root handle 1:
    launch_iperf_with_priority "${port1}" "${prio1}" "${iperf_time}"
    launch_iperf_with_priority "${port2}" "${prio2}" "${iperf_time}"
    tc qdisc add dev eth1 root handle 1: mqprio num_tc 4 \
        map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 hw 1
    sleep "${iperf_time}"
}
```
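Once `test_iperf` has installed the qdisc, its state can be inspected while the streams are running; `tc qdisc show` prints the `mqprio` configuration, and, assuming the driver exposes them, `tc -s class show` gives per-class statistics:

```
root@whle-ls1046a:~# tc qdisc show dev eth1
root@whle-ls1046a:~# tc -s class show dev eth1
```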
Example usage will be given below.
Tests
Same priority
Assuming that the `iperf3` servers at ports `5201` and `5202` are running on `PC`, run the following command on WHLE:
whle_ls1046
```
root@whle-ls1046a:~# test_iperf 5201 4 5202 4 6
Launching iperf3, port 5201, priority 4
[1] 735
Launching iperf3, port 5202, priority 4
[2] 738
```
This would create two `iperf3` streams with the same skb priority `4`, mapping to the traffic class `1`. Meanwhile, on the `PC` side:
PC, console 1
```
Accepted connection from 192.168.3.2, port 54202
[  5] local 192.168.3.1 port 5201 connected to 192.168.3.2 port 54208
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  55.0 MBytes   462 Mbits/sec
[  5]   1.00-2.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   2.00-3.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   3.00-4.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   4.00-5.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   5.00-6.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   6.00-6.04   sec  2.46 MBytes   467 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-6.04   sec   338 MBytes   469 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
```
PC, console 2
```
Accepted connection from 192.168.3.2, port 46446
[  5] local 192.168.3.1 port 5202 connected to 192.168.3.2 port 46458
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  53.9 MBytes   452 Mbits/sec
[  5]   1.00-2.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   2.00-3.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   3.00-4.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   4.00-5.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   5.00-6.00   sec  56.1 MBytes   471 Mbits/sec
[  5]   6.00-6.04   sec  3.69 MBytes   793 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-6.04   sec   338 MBytes   470 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5202
-----------------------------------------------------------
```
The experiment shows that the link's throughput is shared evenly between streams in the same traffic class. Similar results would be obtained with the calls:
```
test_iperf 5201 0 5202 0 6
test_iperf 5201 8 5202 8 6
test_iperf 5201 12 5202 12 6
```
(That would cover all 4 traffic classes defined by `tc`; since the map groups skb priorities in fours, priorities other than 0, 4, 8, 12 would resolve to the same set of classes.)