Davide Caratti says:
====================
net/sched: remove spinlock from 'csum' action
Similarly to what has been done earlier with other actions [1][2], this
series improves the performance of the 'csum' tc action by removing a
spinlock from the data path. Patch 1 lets act_csum use per-CPU counters;
patch 2 removes the spin_{,un}lock_bh() calls from the act() method.
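
For reference, a rough sketch of what the fast path looks like after the
series (a hedged illustration, not the literal patch: the struct layout,
helper names and the tcf_csum_act() signature below are approximations of
the pattern described above, i.e. per-CPU stats plus a single RCU-managed
parameter struct, so act() no longer needs the per-action spinlock):

struct tcf_csum_params {
	int action;
	u32 update_flags;
	struct rcu_head rcu;
};

struct tcf_csum {
	struct tc_action common;
	struct tcf_csum_params __rcu *params;
};
#define to_tcf_csum(a) ((struct tcf_csum *)a)

static int tcf_csum_act(struct sk_buff *skb, const struct tc_action *a,
			struct tcf_result *res)
{
	struct tcf_csum *p = to_tcf_csum(a);
	struct tcf_csum_params *params;
	u32 update_flags;
	int action;

	/* act() runs in BH context, so rcu_dereference_bh() is enough */
	params = rcu_dereference_bh(p->params);

	tcf_lastuse_update(&p->common.tcfa_tm);
	/* per-CPU counters (patch 1): stats updates need no lock */
	bstats_cpu_update(this_cpu_ptr(p->common.cpu_bstats), skb);

	action = params->action;
	if (unlikely(action == TC_ACT_SHOT))
		goto drop;

	update_flags = params->update_flags;
	/* ... recompute the checksums selected by update_flags ... */

	return action;

drop:
	qstats_drop_inc(this_cpu_ptr(p->common.cpu_qstats));
	return TC_ACT_SHOT;
}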
test procedure (using pktgen from https://github.com/netoptimizer):
# ip link add name eth1 type dummy
# ip link set dev eth1 up
# tc qdisc add dev eth1 root handle 1: prio
# for a in pass drop; do
> tc filter del dev eth1 parent 1: pref 10 matchall action csum udp
> tc filter add dev eth1 parent 1: pref 10 matchall action csum udp $a
> for n in 2 4; do
> ./pktgen_bench_xmit_mode_queue_xmit.sh -v -s 64 -t $n -n 1000000 -i eth1
> done
> done
test results:
      |    |  before patch   |   after patch
  $a  | $n | avg. pps/thread | avg. pps/thread
 -----+----+-----------------+-----------------
 pass |  2 |   1671463 ± 4%  |   1920789 ± 3%
 pass |  4 |    648797 ± 1%  |    738190 ± 1%
 drop |  2 |   3212692 ± 2%  |   3719811 ± 2%
 drop |  4 |   1078824 ± 1%  |   1328099 ± 1%
references:
[1] https://www.spinics.net/lists/netdev/msg334760.html
[2] https://www.spinics.net/lists/netdev/msg465862.html
v3 changes:
- use rtnl_dereference() in place of rcu_dereference() in tcf_csum_dump()
v2 changes:
- add 'drop' test, it produces more contentions
- use an RCU-protected struct to store 'action' and 'update_flags', to avoid
  reading values coming from different configurations (see the sketch below)
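
As a rough illustration of that last point (again a sketch, not the literal
patch; tcf_csum_init_params() is a hypothetical helper), the control path
builds a complete new parameter struct, publishes it with
rcu_assign_pointer() while holding RTNL, and frees the old one only after an
RCU grace period, so a reader in act() always sees 'action' and
'update_flags' coming from the same configuration:

static int tcf_csum_init_params(struct tcf_csum *p, int action,
				u32 update_flags)
{
	struct tcf_csum_params *params_new, *params_old;

	params_new = kzalloc(sizeof(*params_new), GFP_KERNEL);
	if (!params_new)
		return -ENOMEM;

	params_new->action = action;
	params_new->update_flags = update_flags;

	/* RTNL is held here, hence rtnl_dereference(); the same accessor
	 * is what the v3 dump path uses to read the current params
	 */
	params_old = rtnl_dereference(p->params);
	rcu_assign_pointer(p->params, params_new);
	if (params_old)
		kfree_rcu(params_old, rcu);
	return 0;
}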
====================
Signed-off-by: David S. Miller <davem@davemloft.net>