arm64: SW PAN: Update saved ttbr0 value on enter_lazy_tlb
authorWill Deacon <will.deacon@arm.com>
Wed, 6 Dec 2017 10:51:12 +0000 (10:51 +0000)
committerWill Deacon <will.deacon@arm.com>
Wed, 6 Dec 2017 18:28:10 +0000 (18:28 +0000)
commitd96cc49bff5a7735576cc6f6f111f875d101cec8
tree414b9a833517440b52162e784b3c0ddd96ad6915
parent0adbdfde8cfc9415aeed2a4955d2d17b3bd9bf13
arm64: SW PAN: Update saved ttbr0 value on enter_lazy_tlb

enter_lazy_tlb is called when a kernel thread rides on the back of
another mm, due to a context switch or an explicit call to unuse_mm
where a call to switch_mm is elided.

In these cases, it's important to keep the saved ttbr0 value up to date
with the active mm; otherwise we can end up with a stale value that
points to a potentially freed page table.

This patch implements enter_lazy_tlb for arm64, so that the saved ttbr0
is kept up-to-date with the active mm for kernel threads.
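A minimal sketch of the resulting hook in arch/arm64/include/asm/mmu_context.h
(a kernel fragment, not standalone code; update_saved_ttbr0() and init_mm are
existing kernel symbols, and the exact body in the tree may differ):

```c
/*
 * Sketch: when a kernel thread enters lazy TLB mode, repoint the saved
 * ttbr0 at init_mm (i.e. the reserved page tables) instead of leaving a
 * stale pointer to page tables that may since have been freed.
 */
static inline void
enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
	update_saved_ttbr0(tsk, &init_mm);
}
```

Previously enter_lazy_tlb() was a no-op on arm64, so the saved ttbr0 in
thread_info could retain the old mm's pgd across the lazy switch.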

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Vinayak Menon <vinmenon@codeaurora.org>
Cc: <stable@vger.kernel.org>
Fixes: 39bc88e5e38e9b21 ("arm64: Disable TTBR0_EL1 during normal kernel execution")
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arch/arm64/include/asm/mmu_context.h