Merge branch 'Remove-rtnl-lock-dependency-from-flow_action-infra'
Vlad Buslov says:
====================
Remove rtnl lock dependency from flow_action infra
Currently, TC flow_action infrastructure code obtains the rtnl lock
before accessing action state in the tc_setup_flow_action() function
and releases it afterward. This behavior is not supposed to impact TC
filter insertion rate because filling the flow_action representation
is only a small part of creating a new filter, and the expensive
operations (hardware offload callbacks, classifiers, cls API code that
creates chain and classifier instances) already support unlocked
execution. However, a typical vswitch implementation might also need
to dump TC filters concurrently, for example to age out unused flows
or to update flow counters. TC dump is fully serialized and holds the
rtnl lock during its whole execution in kernel space. As such, it can
significantly impact concurrent tasks that intermittently try to
obtain the rtnl lock while filling the intermediate representation for
a new filter offload (see the performance evaluation at the end of
this mail).
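For illustration, a condensed sketch of the current locking pattern in
tc_setup_flow_action() (abridged; error handling and the per-action
fill logic are elided):

    int tc_setup_flow_action(struct flow_action *flow_action,
                             const struct tcf_exts *exts, bool rtnl_held)
    {
            int err = 0;

            if (!exts)
                    return 0;

            /* Serializes against TC dump, which holds rtnl for its
             * whole kernel-space execution.
             */
            if (!rtnl_held)
                    rtnl_lock();

            /* ... fill flow_action entries from the actions in exts ... */

            if (!rtnl_held)
                    rtnl_unlock();
            return err;
    }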
Refactor the flow_action cls API infrastructure and its dependencies
to not rely on the rtnl lock for synchronization. Patch set overview
(a sketch of the resulting code is given after this list):
- Refactor tc_setup_flow_action() to obtain the action tcf_lock when
  accessing action state. Fix its dependencies so that they do not
  obtain tcf_lock themselves and instead assume that the caller
  already holds it (needs to be done in the same patch to prevent
  deadlock), and so that they do not call sleeping functions (needs to
  be done in the same patch to prevent "sleeping while atomic" dmesg
  warnings).
- Refactor action helper functions to require tcf_lock instead of
  rtnl. Internally, all of the actions already use tcf_lock for
  synchronization to accommodate the unlocked classifier API, so this
  change relies on already existing functionality.
- Remove the rtnl lock and the "rtnl_held" argument from the
  tc_setup_flow_action() function.
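After the series, tc_setup_flow_action() reads each action's state
under its per-action tcf_lock instead. A condensed sketch (abridged
from the resulting code; the translation of each action into its
flow_action entry and the error handling are elided):

    int tc_setup_flow_action(struct flow_action *flow_action,
                             const struct tcf_exts *exts)
    {
            const struct tc_action *act;
            int i, j = 0, err = 0;

            if (!exts)
                    return 0;

            tcf_exts_for_each_action(i, act, exts) {
                    struct flow_action_entry *entry;

                    entry = &flow_action->entries[j++];

                    /* The per-action spinlock replaces rtnl here, so
                     * everything done under it must not sleep, and the
                     * helpers called from this section must assume the
                     * lock is already held.
                     */
                    spin_lock_bh(&act->tcfa_lock);
                    /* ... translate act into entry ... */
                    spin_unlock_bh(&act->tcfa_lock);
            }
            return err;
    }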
To test the change, multiple concurrent TC instances are invoked with
the following command:
time ls add* | xargs -n 1 -P 100 sudo tc -b
Ten batch files are used, each containing 100k rules of the following
typical form:
filter add dev ens1f0_0 protocol ip ingress prio 1 handle 1 flower
src_mac e4:11:0:0:0:0 dst_mac e4:12:0:0:0:0 src_ip 192.168.111.1
dst_ip 192.168.111.2 ip_proto udp dst_port 1 src_port 1 action
tunnel_key set id 1 src_ip 2.2.2.2 dst_ip 2.2.2.3 dst_port 4789
no_percpu action mirred egress redirect dev vxlan1 no_percpu
A TC dump of the same device is called in an infinite loop from five
concurrent instances:
while true; do tc -s filter show dev $NIC ingress >/dev/null; done
Results obtained on current net-next commit 9f68e3655aae ("Merge tag
'drm-next-2020-01-30' of git://anongit.freedesktop.org/drm/drm"):
               | net-next | this change
---------------+----------+-------------
 TC add        |   6.3s   |     6.3s
 TC add + dump |  29.3s   |     6.8s
Test results confirm the significant impact of a concurrent TC dump.
The impact is almost fully mitigated by the proposed change (the
remaining difference can be attributed to contention for the chain
and tp locks between the add and dump TC instances).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>