arm64: memory: Ensure address tag is masked in conversion macros
author		Will Deacon <will@kernel.org>	Tue, 13 Aug 2019 15:26:54 +0000 (16:26 +0100)
committer	Will Deacon <will@kernel.org>	Wed, 14 Aug 2019 12:04:46 +0000 (13:04 +0100)
commit		577c2b35283fbadcc9ce4b56304ccea3ec8a5ca1
tree		8cef1e56b90a19ce79989f0b606ae88e1397343e
parent		68dd8ef321626f14ae9ef2039b7a03c707149489
arm64: memory: Ensure address tag is masked in conversion macros

When converting a linear virtual address to a physical address, pfn or
struct page *, we must make sure that the tag bits are masked before the
calculation; otherwise we end up with corrupt pointers when running with
CONFIG_KASAN_SW_TAGS=y:

  | Unable to handle kernel paging request at virtual address 0037fe0007580d08
  | [0037fe0007580d08] address between user and kernel address ranges

Mask out the tag in __virt_to_phys_nodebug() and virt_to_page().
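The fix boils down to clearing the tag byte before any linear-map
arithmetic. As a rough illustration (not the literal patch), here is a
minimal user-space sketch of the sign-extension idiom the arm64 headers
use for this, assuming the KASAN software tag lives in bits 63:56 and
kernel addresses have bit 55 set; tag_reset() is a local name for the
example, not the kernel macro:

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Sketch of tag masking, modelled on the kernel's sign_extend64():
	 * with CONFIG_KASAN_SW_TAGS=y the pointer tag occupies bits 63:56,
	 * and kernel addresses have bit 55 set, so sign-extending from
	 * bit 55 restores a canonical address.
	 */
	static uint64_t tag_reset(uint64_t addr)
	{
		/* Shift the tag byte out, then arithmetic-shift copies of
		 * bit 55 back in. */
		return (uint64_t)((int64_t)(addr << 8) >> 8);
	}

	int main(void)
	{
		/* Hypothetical linear-map pointer carrying tag 0x34. */
		uint64_t tagged = 0x34ff800012345678ULL;

		printf("tagged:   0x%016llx\n", (unsigned long long)tagged);
		printf("untagged: 0x%016llx\n",
		       (unsigned long long)tag_reset(tagged));
		/* Only the untagged value is safe to feed into address
		 * conversions such as __virt_to_phys_nodebug() or
		 * virt_to_page(). */
		return 0;
	}

Feeding the tagged value straight into the conversion arithmetic is what
produced the out-of-range address seen in the fault above.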

Reported-by: Qian Cai <cai@lca.pw>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 9cb1c5ddd2c4 ("arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START")
Signed-off-by: Will Deacon <will@kernel.org>
arch/arm64/include/asm/memory.h