net: sched: shrink struct Qdisc
author	Paolo Abeni <pabeni@redhat.com>
	Fri, 25 May 2018 14:28:44 +0000 (16:28 +0200)
committer	David S. Miller <davem@davemloft.net>
	Tue, 29 May 2018 03:13:39 +0000 (23:13 -0400)
The struct Qdisc has a lot of holes, especially after commit
a53851e2c321 ("net: sched: explicit locking in gso_cpu fallback"),
which, as a side effect, moved the fields just after 'busylock'
onto a new cacheline.

Since neither 'padded' nor 'refcnt' is updated frequently, and
there is a hole before 'gso_skb', we can move those fields there,
saving a cacheline with no performance side effects.

Before this commit:

pahole -C Qdisc net/sched/sch_generic.o
# ...
        /* size: 384, cachelines: 6, members: 25 */
        /* sum members: 236, holes: 3, sum holes: 92 */
        /* padding: 56 */

After this commit:

pahole -C Qdisc net/sched/sch_generic.o
# ...
        /* size: 320, cachelines: 5, members: 25 */
        /* sum members: 236, holes: 2, sum holes: 28 */
        /* padding: 56 */

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
include/net/sch_generic.h

index 98c10a28cd01c39ebeeb3d0090f15537f2aec429..827a3711dc688f4f22336d800573d1093c6ba569 100644 (file)
@@ -85,6 +85,8 @@ struct Qdisc {
        struct net_rate_estimator __rcu *rate_est;
        struct gnet_stats_basic_cpu __percpu *cpu_bstats;
        struct gnet_stats_queue __percpu *cpu_qstats;
+       int                     padded;
+       refcount_t              refcnt;
 
        /*
         * For performance sake on SMP, we put highly modified fields at the end
@@ -97,8 +99,6 @@ struct Qdisc {
        unsigned long           state;
        struct Qdisc            *next_sched;
        struct sk_buff_head     skb_bad_txq;
-       int                     padded;
-       refcount_t              refcnt;
 
        spinlock_t              busylock ____cacheline_aligned_in_smp;
        spinlock_t              seqlock;