[PATCH] x86_64: prefetch the mmap_sem in the fault path
In a micro-benchmark that stresses the pagefault path, the down_read_trylock
on the mmap_sem showed up quite high in the profile. It turns out this lock
bounces between CPUs quite a bit and is therefore cache-cold much of the time.
This patch prefetches the lock (for write, since even down_read_trylock must
write to the semaphore's count) as early as possible, before some other
somewhat expensive operations. With this patch, the down_read_trylock
basically falls off the top of the profile.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Arjan van de Ven authored and Linus Torvalds committed Mar 25, 2006
commit a9ba9a3 (parent 4bc32c4)
Showing 1 changed file with 4 additions and 2 deletions.
arch/x86_64/mm/fault.c | 4 additions, 2 deletions

@@ -314,11 +314,13 @@ asmlinkage void __kprobes do_page_fault(struct pt_regs *regs,
 	unsigned long flags;
 	siginfo_t info;
 
+	tsk = current;
+	mm = tsk->mm;
+	prefetchw(&mm->mmap_sem);
+
 	/* get the address */
 	__asm__("movq %%cr2,%0":"=r" (address));
 
-	tsk = current;
-	mm = tsk->mm;
 	info.si_code = SEGV_MAPERR;
