x86/asm: Pin sensitive CR0 bits
With the sensitive CR4 bits now pinned, the WP bit of CR0 might become
a target as well.

Following the same reasoning as for the CR4 pinning, pin CR0's WP bit.
Unlike the CPU-feature-dependent CR4 pinning, this can be done with a
constant value.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: kernel-hardening@lists.openwall.com
Link: https://lkml.kernel.org/r/20190618045503.39105-4-keescook@chromium.org
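For context, the CR4 pinning referenced above cannot use a constant because its mask depends on which CPU features (SMEP, SMAP, UMIP) the boot CPU actually enabled. A minimal sketch of that boot-time setup, paraphrased from the parent commit 873d50d as best recalled (setup_cr_pinning(), cr_pinning and cr4_pinned_bits are names from that commit, living in arch/x86/kernel/cpu/common.c):

	/* Sketch, paraphrased from the companion CR4 pinning commit. */
	static void __init setup_cr_pinning(void)
	{
		unsigned long mask;

		/* Only pin the bits that this boot actually turned on. */
		mask = (X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP);
		cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & mask;
		static_key_enable(&cr_pinning.key);
	}

CR0's WP bit, by contrast, exists and is writable on every x86 CPU the kernel supports, so the pinned mask here can simply be the compile-time constant X86_CR0_WP.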
Kees Cook authored and Thomas Gleixner committed Jun 22, 2019
1 parent 873d50d commit 8dbec27
Showing 1 changed file with 14 additions and 1 deletion.
arch/x86/include/asm/special_insns.h: 14 additions & 1 deletion

@@ -31,7 +31,20 @@ static inline unsigned long native_read_cr0(void)
 
 static inline void native_write_cr0(unsigned long val)
 {
-	asm volatile("mov %0,%%cr0": : "r" (val), "m" (__force_order));
+	unsigned long bits_missing = 0;
+
+set_register:
+	asm volatile("mov %0,%%cr0": "+r" (val), "+m" (__force_order));
+
+	if (static_branch_likely(&cr_pinning)) {
+		if (unlikely((val & X86_CR0_WP) != X86_CR0_WP)) {
+			bits_missing = X86_CR0_WP;
+			val |= bits_missing;
+			goto set_register;
+		}
+		/* Warn after we've set the missing bits. */
+		WARN_ONCE(bits_missing, "CR0 WP bit went missing!?\n");
+	}
 }
 
 static inline unsigned long native_read_cr2(void)
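Note the control flow in the hunk above: when pinning is active and WP has been dropped, the goto re-issues the CR0 write with WP forced back on before WARN_ONCE() runs, so WP is only clear for the duration of the failed write itself. A hypothetical caller, purely to illustrate the observable behavior (not part of this commit):

	/* Hypothetical illustration only; not from this commit. */
	unsigned long cr0 = native_read_cr0();

	native_write_cr0(cr0 & ~X86_CR0_WP);	/* try to clear WP */
	/*
	 * With cr_pinning enabled, native_write_cr0() jumps back to
	 * set_register with X86_CR0_WP OR'ed into val, then warns once:
	 * "CR0 WP bit went missing!?"
	 */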
