x86/boot/compressed/64: Fix moving page table out of trampoline memory
cleanup_trampoline() relocates the top-level page table out of
trampoline memory. We use 'top_pgtable' as our new top-level page table.

But if 'top_pgtable' were referenced from C in the usual way, its address
would be calculated relative to RIP. After the kernel gets relocated, that
address ends up in the middle of the decompression buffer and the page
table may get overwritten. This leads to a crash.

We already calculate the addresses of the other page tables relative to the
relocation address, which keeps them safe. We should do the same for
'top_pgtable'.

Calculate the address of 'top_pgtable' in assembly and pass it down to
cleanup_trampoline().

Move the page table to the .pgtable section where the rest of the page
tables are. The section is @nobits, so we save 4k in the kernel image.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: e9d0e63 ("x86/boot/compressed/64: Prepare new top-level page table for trampoline")
Link: http://lkml.kernel.org/r/20180516080131.27913-3-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Kirill A. Shutemov authored and Ingo Molnar committed May 16, 2018
1 parent 5c9b0b1 commit 589bb62
Showing 2 changed files with 14 additions and 11 deletions.
arch/x86/boot/compressed/head_64.S (11 additions, 0 deletions)
@@ -389,10 +389,14 @@ trampoline_return:
 	/*
 	 * cleanup_trampoline() would restore trampoline memory.
 	 *
+	 * RDI is address of the page table to use instead of page table
+	 * in trampoline memory (if required).
+	 *
 	 * RSI holds real mode data and needs to be preserved across
 	 * this function call.
 	 */
 	pushq %rsi
+	leaq	top_pgtable(%rbx), %rdi
 	call cleanup_trampoline
 	popq %rsi

@@ -691,3 +695,10 @@ boot_stack_end:
 	.balign 4096
 pgtable:
 	.fill BOOT_PGT_SIZE, 1, 0
+
+/*
+ * The page table is going to be used instead of page table in the trampoline
+ * memory.
+ */
+top_pgtable:
+	.fill PAGE_SIZE, 1, 0
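
The 4k saving comes from the section type: a @nobits section occupies memory
at runtime but no bytes in the on-disk image, so the .fill above only reserves
space. A rough sketch of such a declaration in GNU as (the exact directive
already present in head_64.S is not shown in this hunk):

	/* SHT_NOBITS section: allocated ("a") in memory, stored nowhere
	 * in the file, so the 4096-byte fill below is free on disk. */
	.section ".pgtable", "a", @nobits
	.balign	4096
top_pgtable:
	.fill	4096, 1, 0
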
arch/x86/boot/compressed/pgtable_64.c (3 additions, 11 deletions)
@@ -22,14 +22,6 @@ struct paging_config {
 /* Buffer to preserve trampoline memory */
 static char trampoline_save[TRAMPOLINE_32BIT_SIZE];
 
-/*
- * The page table is going to be used instead of page table in the trampoline
- * memory.
- *
- * It must not be in BSS as BSS is cleared after cleanup_trampoline().
- */
-static char top_pgtable[PAGE_SIZE] __aligned(PAGE_SIZE) __section(.data);
-
 /*
  * Trampoline address will be printed by extract_kernel() for debugging
  * purposes.
@@ -134,7 +126,7 @@ struct paging_config paging_prepare(void)
 	return paging_config;
 }
 
-void cleanup_trampoline(void)
+void cleanup_trampoline(void *pgtable)
 {
 	void *trampoline_pgtable;
 
@@ -145,8 +137,8 @@ void cleanup_trampoline(void)
 	 * if it's there.
 	 */
 	if ((void *)__native_read_cr3() == trampoline_pgtable) {
-		memcpy(top_pgtable, trampoline_pgtable, PAGE_SIZE);
-		native_write_cr3((unsigned long)top_pgtable);
+		memcpy(pgtable, trampoline_pgtable, PAGE_SIZE);
+		native_write_cr3((unsigned long)pgtable);
 	}
 
 	/* Restore trampoline memory */
