arm64: memory: fix flipped VA space fallout
author	Mark Rutland <mark.rutland@arm.com>
Wed, 14 Aug 2019 13:28:47 +0000 (14:28 +0100)
committer	Will Deacon <will@kernel.org>
Wed, 14 Aug 2019 16:05:11 +0000 (17:05 +0100)
VA_START used to be the start of the TTBR1 address space, but now it's a
point midway through it. In a couple of places we still use VA_START to
get the start of the TTBR1 address space, so let's fix these up to use
PAGE_OFFSET instead.
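
To illustrate the difference between the two checks, here is a minimal
standalone userspace sketch (not the kernel headers: VA_BITS is
hard-wired to 48 and the constants merely mirror the flipped layout
described above):

	#include <stdio.h>

	#define VA_BITS     48
	#define PAGE_OFFSET (-(1UL << VA_BITS))       /* start of TTBR1 space */
	#define VA_START    (-(1UL << (VA_BITS - 1))) /* now only the midpoint */

	static int is_ttbr1_addr(unsigned long addr)
	{
		/* Compare against the true start of the TTBR1 range. */
		return addr >= PAGE_OFFSET;
	}

	int main(void)
	{
		/* An address in the bottom half of TTBR1 space (linear map). */
		unsigned long addr = PAGE_OFFSET + 0x1000;

		printf("PAGE_OFFSET test:  %d\n", is_ttbr1_addr(addr));     /* 1 */
		printf("old VA_START test: %d\n", (int)(addr >= VA_START)); /* 0 */
		return 0;
	}

A linear map address fails the old ">= VA_START" check even though it is
a TTBR1 address, which is exactly the misclassification fixed below.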

Fixes: 14c127c957c1c607 ("arm64: mm: Flip kernel VA space")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
arch/arm64/mm/dump.c
arch/arm64/mm/fault.c

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 6ec75305828e0c8c161d434a8565d96a42656363..8e10b4ba215a36ced6a146121a2ee55605d0baee 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -400,7 +400,7 @@ void ptdump_check_wx(void)
                .check_wx = true,
        };
 
-       walk_pgd(&st, &init_mm, VA_START);
+       walk_pgd(&st, &init_mm, PAGE_OFFSET);
        note_page(&st, 0, 0, 0);
        if (st.wx_pages || st.uxn_pages)
                pr_warn("Checked W+X mappings: FAILED, %lu W+X pages found, %lu non-UXN pages found\n",
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 75eff57bd9efc6813c8852ab1f867a241f91a6b1..bb4e4f3fffd8809f8cbb96e80dd19958312e45b6 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -109,7 +109,7 @@ static inline bool is_ttbr0_addr(unsigned long addr)
 static inline bool is_ttbr1_addr(unsigned long addr)
 {
        /* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
-       return arch_kasan_reset_tag(addr) >= VA_START;
+       return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
 }
 
 /*