locking/spinlock, netfilter: Fix nf_conntrack_lock() barriers
author		Peter Zijlstra <peterz@infradead.org>
		Tue, 24 May 2016 13:00:38 +0000 (15:00 +0200)
committer	Ingo Molnar <mingo@kernel.org>
		Tue, 14 Jun 2016 09:55:16 +0000 (11:55 +0200)
Even with spin_unlock_wait() fixed, nf_conntrack_lock{,_all}() is
broken: it is missing the memory barriers needed to order the whole
global vs. local locking scheme.

Even x86 and other TSO architectures are affected.
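
For reference, here is the locking scheme as it stands after this patch,
condensed into one place (a sketch only: the declarations are repeated to
make it self-contained, and the CONNTRACK_LOCKS value is taken from the
kernel sources of this era, not from this patch):

#include <linux/spinlock.h>

#define CONNTRACK_LOCKS 1024	/* from nf_conntrack of this era */

static spinlock_t nf_conntrack_locks[CONNTRACK_LOCKS];
static bool nf_conntrack_locks_all;
static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock);

/* Per-bucket lock; must spin while a global-lock holder is active. */
void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
{
	spin_lock(lock);
	while (unlikely(nf_conntrack_locks_all)) {
		spin_unlock(lock);
		smp_rmb(); /* order the locks_all load vs. the wait below */
		spin_unlock_wait(&nf_conntrack_locks_all_lock);
		spin_lock(lock);
	}
}

/* Take the global lock, then wait for all per-bucket holders to drain. */
static void nf_conntrack_all_lock(void)
{
	int i;

	spin_lock(&nf_conntrack_locks_all_lock);
	nf_conntrack_locks_all = true;
	smp_mb(); /* order the locks_all store vs. the waits below */
	for (i = 0; i < CONNTRACK_LOCKS; i++)
		spin_unlock_wait(&nf_conntrack_locks[i]);
}

static void nf_conntrack_all_unlock(void)
{
	smp_store_release(&nf_conntrack_locks_all, false);
	spin_unlock(&nf_conntrack_locks_all_lock);
}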

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[ Updated the comments. ]
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
net/netfilter/nf_conntrack_core.c

index db2312eeb2a47c44db0f0ac5a529a10a0a8f8d2f..b8c5501d3872c2a9a0e4406d9932ac17d3a6d7af 100644
@@ -83,6 +83,13 @@ void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
        spin_lock(lock);
        while (unlikely(nf_conntrack_locks_all)) {
                spin_unlock(lock);
+
+               /*
+                * Order the 'nf_conntrack_locks_all' load vs. the
+                * spin_unlock_wait() loads below, to ensure
+                * that 'nf_conntrack_locks_all_lock' is indeed held:
+                */
+               smp_rmb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
                spin_unlock_wait(&nf_conntrack_locks_all_lock);
                spin_lock(lock);
        }
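
An smp_rmb() suffices at this point because both sides of the required
ordering are loads: the 'nf_conntrack_locks_all' read in the loop
condition above, and the lock-word reads performed inside
spin_unlock_wait().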
@@ -128,6 +135,14 @@ static void nf_conntrack_all_lock(void)
        spin_lock(&nf_conntrack_locks_all_lock);
        nf_conntrack_locks_all = true;
 
+       /*
+        * Order the above store of 'nf_conntrack_locks_all' against
+        * the spin_unlock_wait() loads below, such that if
+        * nf_conntrack_lock() observes 'nf_conntrack_locks_all'
+        * we must observe nf_conntrack_locks[] held:
+        */
+       smp_mb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
+
        for (i = 0; i < CONNTRACK_LOCKS; i++) {
                spin_unlock_wait(&nf_conntrack_locks[i]);
        }
@@ -135,7 +150,13 @@ static void nf_conntrack_all_lock(void)
 
 static void nf_conntrack_all_unlock(void)
 {
-       nf_conntrack_locks_all = false;
+       /*
+        * All prior stores must be complete before we clear
+        * 'nf_conntrack_locks_all'. Otherwise nf_conntrack_lock()
+        * might observe the false value but not the entire
+        * critical section:
+        */
+       smp_store_release(&nf_conntrack_locks_all, false);
        spin_unlock(&nf_conntrack_locks_all_lock);
 }
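
The release store pairs with the 'nf_conntrack_locks_all' load in
nf_conntrack_lock(). Sketching the intended pairing for two CPUs (names
abbreviated; 'data' is a stand-in for any state written under the global
lock):

/*
 *  CPU0 (nf_conntrack_all_unlock)           CPU1 (nf_conntrack_lock)
 *  -------------------------------------    ------------------------------
 *  data = ...;                              spin_lock(lock);
 *  smp_store_release(&locks_all, false);    while (locks_all) { ... }
 *  spin_unlock(&locks_all_lock);            /* saw 'false': all of CPU0's
 *                                              prior stores, including
 *                                              'data', must be visible */
 */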