x86, mm: Don't clear page table if range is ram
After a following patch:
	x86, mm: setup page table in top-down
adds code that uses a buffer in BRK to pre-map the buffer used for page
tables, it should be safe to remove early_memmap for page-table accesses.
Instead, we get a panic.

It turns out that we wrongly clear the initial page table for the next
range when ranges are separated by holes, and this only happens when we
map RAM ranges one by one: while mapping one range, clearing the tail
entries of a shared page-table page can wipe entries that already map a
neighbouring RAM range.

We need to check whether the range is RAM before clearing page-table
entries.
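
For reference, e820_any_mapped() reports whether any part of [start, end)
overlaps an e820 entry of the given type (type 0 matches entries of any
type). A minimal sketch of that check, assuming the 3.7-era
arch/x86/kernel/e820.c layout rather than quoting it verbatim:

	/* Sketch: does any part of [start, end) overlap an e820 entry
	 * of @type?  type == 0 matches entries of any type. */
	int e820_any_mapped(u64 start, u64 end, unsigned type)
	{
		int i;

		for (i = 0; i < e820.nr_map; i++) {
			struct e820entry *ei = &e820.map[i];

			if (type && ei->type != type)
				continue;	/* wrong type */
			if (ei->addr >= end || ei->addr + ei->size <= start)
				continue;	/* no overlap with [start, end) */
			return 1;
		}
		return 0;
	}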

We change the loop structure: drop the extra inner loop and use a single
loop that calculates next first and then checks whether [addr, next) is
covered by E820_RAM, as sketched below.
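
In phys_pte_init() the new loop shape looks like this (excerpted and
lightly commented from the patch below; phys_pmd_init() and
phys_pud_init() follow the same pattern):

	for (i = pte_index(addr); i < PTRS_PER_PTE; i++, addr = next, pte++) {
		next = (addr & PAGE_MASK) + PAGE_SIZE;	/* compute next first */
		if (addr >= end) {
			/* Past end: only clear the slot if [addr, next) is
			 * not RAM (or RAM reserved for the kexec'd kernel). */
			if (!after_bootmem &&
			    !e820_any_mapped(addr & PAGE_MASK, next, E820_RAM) &&
			    !e820_any_mapped(addr & PAGE_MASK, next, E820_RESERVED_KERN))
				set_pte(pte, __pte(0));
			continue;	/* keep walking; no early break */
		}
		/* ... map [addr, next) as before ... */
	}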

-v2: Treat E820_RESERVED_KERN as E820_RAM. EFI changes some E820_RAM
     ranges to E820_RESERVED_KERN so the next kernel booted via kexec
     knows those ranges are already in use.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1353123563-3103-20-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Yinghai Lu authored and H. Peter Anvin committed Nov 17, 2012
1 parent aeebe84 commit eceb363
1 changed file: arch/x86/mm/init_64.c (19 additions, 21 deletions)
@@ -363,20 +363,20 @@ static unsigned long __meminit
 phys_pte_init(pte_t *pte_page, unsigned long addr, unsigned long end,
 	      pgprot_t prot)
 {
-	unsigned pages = 0;
+	unsigned long pages = 0, next;
 	unsigned long last_map_addr = end;
 	int i;
 
 	pte_t *pte = pte_page + pte_index(addr);
 
-	for(i = pte_index(addr); i < PTRS_PER_PTE; i++, addr += PAGE_SIZE, pte++) {
-
+	for (i = pte_index(addr); i < PTRS_PER_PTE; i++, addr = next, pte++) {
+		next = (addr & PAGE_MASK) + PAGE_SIZE;
 		if (addr >= end) {
-			if (!after_bootmem) {
-				for(; i < PTRS_PER_PTE; i++, pte++)
-					set_pte(pte, __pte(0));
-			}
-			break;
+			if (!after_bootmem &&
+			    !e820_any_mapped(addr & PAGE_MASK, next, E820_RAM) &&
+			    !e820_any_mapped(addr & PAGE_MASK, next, E820_RESERVED_KERN))
+				set_pte(pte, __pte(0));
+			continue;
 		}
 
 		/*
@@ -419,16 +419,15 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
 		pte_t *pte;
 		pgprot_t new_prot = prot;
 
+		next = (address & PMD_MASK) + PMD_SIZE;
 		if (address >= end) {
-			if (!after_bootmem) {
-				for (; i < PTRS_PER_PMD; i++, pmd++)
-					set_pmd(pmd, __pmd(0));
-			}
-			break;
+			if (!after_bootmem &&
+			    !e820_any_mapped(address & PMD_MASK, next, E820_RAM) &&
+			    !e820_any_mapped(address & PMD_MASK, next, E820_RESERVED_KERN))
+				set_pmd(pmd, __pmd(0));
+			continue;
 		}
 
-		next = (address & PMD_MASK) + PMD_SIZE;
-
 		if (pmd_val(*pmd)) {
 			if (!pmd_large(*pmd)) {
 				spin_lock(&init_mm.page_table_lock);
@@ -497,13 +496,12 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
 		pmd_t *pmd;
 		pgprot_t prot = PAGE_KERNEL;
 
-		if (addr >= end)
-			break;
-
 		next = (addr & PUD_MASK) + PUD_SIZE;
-
-		if (!after_bootmem && !e820_any_mapped(addr, next, 0)) {
-			set_pud(pud, __pud(0));
+		if (addr >= end) {
+			if (!after_bootmem &&
+			    !e820_any_mapped(addr & PUD_MASK, next, E820_RAM) &&
+			    !e820_any_mapped(addr & PUD_MASK, next, E820_RESERVED_KERN))
+				set_pud(pud, __pud(0));
 			continue;
 		}
