x86/exceptions: Enable IST guard pages
author Thomas Gleixner <tglx@linutronix.de>
Sun, 14 Apr 2019 15:59:56 +0000 (17:59 +0200)
committer Borislav Petkov <bp@suse.de>
Wed, 17 Apr 2019 13:05:32 +0000 (15:05 +0200)
All usage sites which expected the exception stacks in the CPU entry
area to be mapped linearly have been fixed up. Enable guard pages
between the IST stacks.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Sean Christopherson <sean.j.christopherson@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190414160145.349862042@linutronix.de
arch/x86/include/asm/cpu_entry_area.h

index 310eeb62d4184a83ebe606bf909501a10661aa69..9c96406e6d2b877845c6607b4d93c9113640cad2 100644
@@ -26,13 +26,9 @@ struct exception_stacks {
        ESTACKS_MEMBERS(0)
 };
 
-/*
- * The effective cpu entry area mapping with guard pages. Guard size is
- * zero until the code which makes assumptions about linear mappings is
- * cleaned up.
- */
+/* The effective cpu entry area mapping with guard pages. */
 struct cea_exception_stacks {
-       ESTACKS_MEMBERS(0)
+       ESTACKS_MEMBERS(PAGE_SIZE)
 };
 
 /*
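For illustration, here is a minimal standalone sketch of the pattern the diff relies on. The member names and stack sizes below are simplified placeholders, not the kernel's actual list (the real macro covers more stacks and a top guard): ESTACKS_MEMBERS() takes the guard size as a parameter, so the backing-store struct gets zero-sized guards while the cpu entry area mapping struct gets a page of guard space before each stack. The guard pages are simply left unmapped in the cpu entry area, so an IST stack overflow faults into the guard page instead of silently corrupting the neighboring stack.

/*
 * Illustrative sketch only: stack names and sizes are assumptions, and
 * the real kernel macro covers more stacks. Like the kernel, this
 * relies on the GNU zero-length array extension for the guardsize==0
 * case, so build with gcc.
 */
#include <stdio.h>

#define PAGE_SIZE		4096
#define EXCEPTION_STKSZ		(PAGE_SIZE << 1)	/* assumed 8k stacks */

#define ESTACKS_MEMBERS(guardsize)			\
	char	DF_stack_guard[guardsize];		\
	char	DF_stack[EXCEPTION_STKSZ];		\
	char	NMI_stack_guard[guardsize];		\
	char	NMI_stack[EXCEPTION_STKSZ];

/* The physical backing store: stacks only, no guard space. */
struct exception_stacks {
	ESTACKS_MEMBERS(0)
};

/* The effective cpu entry area mapping: a guard page before each stack. */
struct cea_exception_stacks {
	ESTACKS_MEMBERS(PAGE_SIZE)
};

int main(void)
{
	/*
	 * Only the layout differs between the two structs; the guard
	 * members are never backed by memory in the cpu entry area, so
	 * any write into them faults.
	 */
	printf("backing store: %zu bytes\n", sizeof(struct exception_stacks));
	printf("cea mapping:   %zu bytes\n", sizeof(struct cea_exception_stacks));
	return 0;
}

With the assumed sizes, the backing store stays at 16k while the effective mapping grows to 24k; the extra 8k is the two unmapped guard pages inserted before the stacks.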