xen: efficiently support a holey p2m table
When using sparsemem and memory hotplug, the kernel's pseudo-physical
address space can be discontiguous.  Previously this was dealt with by
having the upper parts of the radix tree stubbed off.  Unfortunately,
this is incompatible with save/restore, which requires a complete p2m
table.

The solution is to have a special distinguished all-invalid p2m leaf
page, which we can point all the hole areas at.  This allows the tools
to see a complete p2m table, but it only costs a page for all memory
holes.

It also simplifies the code since it removes a few special cases.
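To illustrate the idea, here is a minimal userspace sketch of the sentinel-leaf pattern, with toy sizes and invented names (`ENTRIES`, `TOP`, `missing_leaf`, `top`, `lookup`) standing in for the kernel's `P2M_ENTRIES_PER_PAGE`, `TOP_ENTRIES`, `p2m_missing`, `p2m_top`, and `get_phys_to_machine`.  Every hole in the top level points at one shared all-invalid leaf, so the lookup path needs no NULL check:

```c
#include <stddef.h>

/*
 * Illustrative sketch only: toy sizes and names, not the kernel code.
 * One shared leaf whose entries are all invalid; every top-level hole
 * points at it, so lookups resolve through it with no special case.
 */
#define ENTRIES	4		/* entries per leaf page (toy value) */
#define TOP	4		/* top-level slots (toy value) */
#define INVALID	(~0UL)		/* invalid-entry marker */

/* The one shared "missing" leaf: every entry invalid. */
static unsigned long missing_leaf[ENTRIES] = {
	[0 ... ENTRIES - 1] = INVALID,
};

/* Top level starts with every slot pointing at the shared leaf. */
static unsigned long *top[TOP] = {
	[0 ... TOP - 1] = missing_leaf,
};

/* No NULL check needed: holes resolve through missing_leaf. */
static unsigned long lookup(unsigned long pfn)
{
	return top[pfn / ENTRIES][pfn % ENTRIES];
}
```

Populating one slot with a real leaf page leaves every other slot still resolving to invalid entries through the shared leaf, which is the whole point: the table looks complete to a walker, but holes cost one page total.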

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Jeremy Fitzhardinge authored and Thomas Gleixner committed May 27, 2008
1 parent 8006ec3 commit cf0923e
 arch/x86/xen/mmu.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -57,8 +57,17 @@
 #include "mmu.h"
 
 #define P2M_ENTRIES_PER_PAGE	(PAGE_SIZE / sizeof(unsigned long))
+#define TOP_ENTRIES		(MAX_DOMAIN_PAGES / P2M_ENTRIES_PER_PAGE)
 
-static unsigned long *p2m_top[MAX_DOMAIN_PAGES / P2M_ENTRIES_PER_PAGE];
+/* Placeholder for holes in the address space */
+static unsigned long p2m_missing[P2M_ENTRIES_PER_PAGE]
+	__attribute__((section(".data.page_aligned"))) =
+		{ [ 0 ... P2M_ENTRIES_PER_PAGE-1 ] = ~0UL };
+
+/* Array of pointers to pages containing p2m entries */
+static unsigned long *p2m_top[TOP_ENTRIES]
+	__attribute__((section(".data.page_aligned"))) =
+		{ [ 0 ... TOP_ENTRIES - 1] = &p2m_missing[0] };
 
 static inline unsigned p2m_top_index(unsigned long pfn)
 {
@@ -92,9 +101,6 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 		return INVALID_P2M_ENTRY;
 
 	topidx = p2m_top_index(pfn);
-	if (p2m_top[topidx] == NULL)
-		return INVALID_P2M_ENTRY;
-
 	idx = p2m_index(pfn);
 	return p2m_top[topidx][idx];
 }
@@ -110,7 +116,7 @@ static void alloc_p2m(unsigned long **pp)
 	for(i = 0; i < P2M_ENTRIES_PER_PAGE; i++)
 		p[i] = INVALID_P2M_ENTRY;
 
-	if (cmpxchg(pp, NULL, p) != NULL)
+	if (cmpxchg(pp, p2m_missing, p) != p2m_missing)
 		free_page((unsigned long)p);
 }
 
@@ -129,7 +135,7 @@ void set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 	}
 
 	topidx = p2m_top_index(pfn);
-	if (p2m_top[topidx] == NULL) {
+	if (p2m_top[topidx] == p2m_missing) {
 		/* no need to allocate a page to store an invalid entry */
 		if (mfn == INVALID_P2M_ENTRY)
 			return;
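The `cmpxchg(pp, p2m_missing, p)` change keeps leaf allocation lock-free: the sentinel, not NULL, is now the "empty" value, so whoever swaps it out first wins and the loser releases its page.  A userspace sketch of that race, using GCC's `__sync_val_compare_and_swap` in place of the kernel's `cmpxchg()` and `malloc()`/`free()` in place of `get_zeroed_page()`/`free_page()` (names here are illustrative, not the kernel's):

```c
#include <stdlib.h>

/*
 * Illustrative sketch of the alloc_p2m() race handling, with a GCC
 * atomic builtin standing in for the kernel's cmpxchg().  The shared
 * sentinel page plays the role of p2m_missing.
 */
#define N 4
static unsigned long sentinel[N] = { [0 ... N - 1] = ~0UL };

static void alloc_leaf(unsigned long **pp)
{
	unsigned long *p = malloc(N * sizeof(*p));
	if (!p)
		return;
	for (int i = 0; i < N; i++)
		p[i] = ~0UL;		/* start fully invalid */

	/* Install p only if the slot still holds the sentinel. */
	if (__sync_val_compare_and_swap(pp, sentinel, p) != sentinel)
		free(p);		/* lost the race; keep the winner's page */
}
```

If two threads race through `alloc_leaf()` on the same slot, exactly one compare-and-swap sees the sentinel; the other observes the winner's page and frees its own, so the slot is written once and no memory leaks.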
