x86: c_p_a() fix: reorder TLB / cache flushes to follow Intel recommendation

Intel recommends flushing the TLBs first and the caches second
on caching attribute changes. c_p_a() previously did it the
other way round. Reorder that.

The procedure is still not fully compliant with the Intel documentation,
because Intel recommends an all-CPU synchronization step between
the TLB flushes and the cache flushes.

However, on all newer Intel CPUs this is now meaningless anyway,
because they support Self-Snoop and can skip the cache flush
step.

[ mingo@elte.hu: decoupled from clflush and ported it to x86.git ]

Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Andi Kleen authored and Ingo Molnar committed Jan 30, 2008
1 parent 6ba9b7d commit 3c86882
Showing 2 changed files with 8 additions and 7 deletions.
12 changes: 6 additions & 6 deletions arch/x86/mm/pageattr_32.c
@@ -87,6 +87,12 @@ static void flush_kernel_map(void *arg)
 	struct list_head *lh = (struct list_head *)arg;
 	struct page *p;
 
+	/*
+	 * Flush all to work around Errata in early athlons regarding
+	 * large page flushing.
+	 */
+	__flush_tlb_all();
+
 	/* High level code is not ready for clflush yet */
 	if (0 && cpu_has_clflush) {
 		list_for_each_entry(p, lh, lru)
@@ -95,12 +101,6 @@ static void flush_kernel_map(void *arg)
 		if (boot_cpu_data.x86_model >= 4)
 			wbinvd();
 	}
-
-	/*
-	 * Flush all to work around Errata in early athlons regarding
-	 * large page flushing.
-	 */
-	__flush_tlb_all();
 }
3 changes: 2 additions & 1 deletion arch/x86/mm/pageattr_64.c
@@ -82,6 +82,8 @@ static void flush_kernel_map(void *arg)
 	struct list_head *l = (struct list_head *)arg;
 	struct page *pg;
 
+	__flush_tlb_all();
+
 	/* When clflush is available always use it because it is
 	   much cheaper than WBINVD. */
 	/* clflush is still broken. Disable for now. */
@@ -94,7 +96,6 @@ static void flush_kernel_map(void *arg)
 			clflush_cache_range(addr, PAGE_SIZE);
 		}
 	}
-	__flush_tlb_all();
 }
