
Merge tag 'pr-20141223-x86-vdso' of git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux into x86/urgent

Pull VDSO fix from Andy Lutomirski:

 "This is hopefully the last vdso fix for 3.19.  It should be very
  safe (it just adds a volatile).

  I don't think it fixes an actual bug (the __getcpu calls in the
  pvclock code may not have been needed in the first place), but
  discussion on that point is ongoing.

  It also fixes a big performance issue in 3.18 and earlier in which
  the lsl instructions in vclock_gettime got hoisted so far up the
  function that they happened even when the function they were in was
  never called.  In 3.19, the performance issue seems to be gone due to
  the whims of my compiler and some interaction with a branch that's
  now gone.

  I'll hopefully have a much bigger overhaul of the pvclock code
  for 3.20, but it needs careful review."

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar committed Jan 1, 2015
2 parents 280dbc5 + 1ddf0b1 commit 2aba73a
Showing 1 changed file with 4 additions and 2 deletions.
6 changes: 4 additions & 2 deletions arch/x86/include/asm/vgtod.h
@@ -80,9 +80,11 @@ static inline unsigned int __getcpu(void)

 	/*
 	 * Load per CPU data from GDT. LSL is faster than RDTSCP and
-	 * works on all CPUs.
+	 * works on all CPUs. This is volatile so that it orders
+	 * correctly wrt barrier() and to keep gcc from cleverly
+	 * hoisting it out of the calling function.
 	 */
-	asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+	asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
 
 	return p;
 }
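
For readers less familiar with extended asm, the sketch below shows the distinction the patch leans on. It is a user-space, x86-64 gcc illustration, not kernel code: the helper names, the main(), and the 0x33 selector (the usual 64-bit user code segment on Linux) are assumptions made for the example. A plain asm() with outputs looks to gcc like a pure computation that it may CSE or hoist above the branch guarding it; asm volatile marks it as having side effects, so it executes only where the source says it does.

```c
/*
 * Minimal user-space sketch (x86-64, gcc) of why __getcpu's asm needs
 * "volatile".  Helper names and the 0x33 selector are illustrative only.
 */
#include <stdio.h>

static inline unsigned int segment_limit(unsigned int sel)
{
	/* Preloaded via "+r" so the value stays defined even if LSL
	 * fails for an invalid selector. */
	unsigned int limit = 0;

	/*
	 * A plain asm() with outputs looks like a pure computation, so
	 * gcc may CSE it or hoist it above the branch that guards the
	 * call (the problem described in the pull message above).
	 * "volatile" marks it as having side effects, so it only runs
	 * where the source says it does.
	 */
	asm volatile ("lsl %1,%0" : "+r" (limit) : "r" (sel));
	return limit;
}

unsigned int limit_if_needed(int need_lsl)
{
	/* With asm volatile, the lsl runs only when need_lsl is true. */
	return need_lsl ? segment_limit(0x33) : 0;
}

int main(void)
{
	printf("limit of selector 0x33: %#x\n", limit_if_needed(1));
	return 0;
}
```

Compiling the sketch with and without the volatile keyword and inspecting the generated assembly (gcc -O2 -S) shows whether the lsl stays guarded by the branch.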
