arm64: turn flush_dcache_mmap_lock into a no-op
author    Matthew Wilcox <mawilcox@microsoft.com>
Tue, 10 Apr 2018 23:36:36 +0000 (16:36 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Wed, 11 Apr 2018 17:28:39 +0000 (10:28 -0700)
ARM64 doesn't walk the VMA tree in its flush_dcache_page()
implementation, so it has no need to take the tree_lock.
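
For context, the lock exists for architectures whose flush_dcache_page()
walks every user mapping of a page through the address_space's VMA
interval tree; 32-bit ARM's __flush_dcache_aliases() follows that
pattern.  The sketch below is illustrative only (the helper name and
body are hypothetical, loosely modelled on that pattern, not code from
this patch): the bracketed interval-tree walk is exactly what arm64
never performs, which is why its lock/unlock macros can become no-ops.

#include <linux/fs.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical sketch of the pattern the lock protects on other
 * architectures: walk every shared user mapping of @page and flush
 * the cache alias at each mapped address.  arm64's flush_dcache_page()
 * performs no such walk, so it never needs flush_dcache_mmap_lock().
 */
static void sketch_flush_dcache_aliases(struct address_space *mapping,
					struct page *page)
{
	struct vm_area_struct *vma;
	pgoff_t pgoff = page->index;

	flush_dcache_mmap_lock(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
		unsigned long uaddr;

		if (!(vma->vm_flags & VM_MAYSHARE))
			continue;
		uaddr = vma->vm_start +
			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
		/* stand-in for the arch-specific alias flush */
		flush_cache_page(vma, uaddr, page_to_pfn(page));
	}
	flush_dcache_mmap_unlock(mapping);
}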

Link: http://lkml.kernel.org/r/20180313132639.17387-4-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 7dfcec4700fef0355372cbf7d4c2f11176c914d6..0094c6653b06b44ac1172688fac240bae37fe24d 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -140,10 +140,8 @@ static inline void __flush_icache_all(void)
        dsb(ish);
 }
 
-#define flush_dcache_mmap_lock(mapping) \
-       spin_lock_irq(&(mapping)->tree_lock)
-#define flush_dcache_mmap_unlock(mapping) \
-       spin_unlock_irq(&(mapping)->tree_lock)
+#define flush_dcache_mmap_lock(mapping)                do { } while (0)
+#define flush_dcache_mmap_unlock(mapping)      do { } while (0)
 
 /*
  * We don't appear to need to do anything here.  In fact, if we did, we'd
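
A side note on the replacement definitions: the no-op bodies are written
as do { } while (0) rather than left empty so each macro still expands
to a single statement that must be terminated with a semicolon.  A
hypothetical caller (illustrative only, not from this patch) shows why
that matters:

#include <linux/mm.h>
#include <linux/printk.h>
#include <asm/cacheflush.h>

/* Hypothetical caller, for illustration only. */
static void lock_mapping_example(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	/*
	 * Because the no-op still expands to a statement, this if/else
	 * compiles the same way on arm64 (empty do/while) as on
	 * architectures where the macro takes a real lock, and the
	 * branch does not degenerate into a bare ";" that could trip
	 * -Wempty-body.
	 */
	if (mapping)
		flush_dcache_mmap_lock(mapping);
	else
		pr_debug("anonymous page, no mapping to lock\n");
}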