xen: use stronger barrier after unlocking lock
We need a stronger barrier between releasing the lock and checking
for any waiting spinners.  A compiler barrier is not sufficient
because the CPU's ordering rules do not prevent the read of
xl->spinners from happening before the unlock assignment, as they are
different memory locations.

We need an explicit barrier to enforce write-read ordering between
different memory locations.

Without this barrier, I can't bring up > 4 HVM guests on one SMP machine.
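
To see why, consider the classic store-buffering pattern the commit
describes: each side stores to one location and then loads another.
The userspace C11 sketch below (an illustration only, not kernel
code; all names in it are made up) mirrors that situation.  Without
the commented-out full fences, both threads can observe the "old"
value of the other's location:

/*
 * sb.c -- userspace store-buffering demo (illustrative sketch only,
 * not kernel code; all names are made up).
 * Build: gcc -O2 -pthread sb.c -o sb
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int X, Y;		/* two independent memory locations */
static atomic_int go;		/* start signal for each round */
static int r0, r1;		/* values observed by each thread */

static void *thread0(void *unused)
{
	(void)unused;
	while (!atomic_load(&go))
		;	/* spin until both threads are released */
	atomic_store_explicit(&X, 1, memory_order_relaxed);	/* like xl->lock = 0 */
	/* atomic_thread_fence(memory_order_seq_cst); */	/* the mb() analogue */
	r0 = atomic_load_explicit(&Y, memory_order_relaxed);	/* like reading xl->spinners */
	return NULL;
}

static void *thread1(void *unused)
{
	(void)unused;
	while (!atomic_load(&go))
		;
	atomic_store_explicit(&Y, 1, memory_order_relaxed);
	/* atomic_thread_fence(memory_order_seq_cst); */
	r1 = atomic_load_explicit(&X, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	int witnessed = 0;

	for (int i = 0; i < 100000; i++) {
		pthread_t t0, t1;

		atomic_store(&X, 0);
		atomic_store(&Y, 0);
		atomic_store(&go, 0);
		pthread_create(&t0, NULL, thread0, NULL);
		pthread_create(&t1, NULL, thread1, NULL);
		atomic_store(&go, 1);	/* release both threads */
		pthread_join(t0, NULL);
		pthread_join(t1, NULL);
		if (r0 == 0 && r1 == 0)
			witnessed++;	/* both loads ran before the other store */
	}
	printf("r0 == r1 == 0 observed %d times\n", witnessed);
	return 0;
}

On x86 the reordered outcome (r0 == r1 == 0) typically shows up
within a few thousand rounds; enabling the fences makes it disappear,
which is what the mb() in this patch buys the unlocker.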

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Yang Xiaowei authored and Jeremy Fitzhardinge committed Sep 9, 2009
1 parent 4d576b5 commit 2496afb
Showing 1 changed file with 7 additions and 2 deletions.

arch/x86/xen/spinlock.c
@@ -326,8 +326,13 @@ static void xen_spin_unlock(struct raw_spinlock *lock)
 	smp_wmb();		/* make sure no writes get moved after unlock */
 	xl->lock = 0;		/* release lock */
 
-	/* make sure unlock happens before kick */
-	barrier();
+	/*
+	 * Make sure unlock happens before checking for waiting
+	 * spinners.  We need a strong barrier to enforce the
+	 * write-read ordering to different memory locations, as the
+	 * CPU makes no implied guarantees about their ordering.
+	 */
+	mb();
 
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
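
For context, the sketch below shows, in simplified form, how this
unlock side pairs with a waiter that registers itself in
xl->spinners.  The helpers block_until_kicked() and kick_waiter() are
hypothetical names for illustration; the real slow paths in
spinlock.c do more bookkeeping than this:

/* Simplified sketch of the pairing -- not the actual Xen slow paths. */

static void waiter_side(struct xen_spinlock *xl)
{
	xl->spinners++;			/* advertise: this vCPU is waiting */
	mb();				/* order the increment before re-reading the lock */
	if (xl->lock == 0)
		return;			/* lock was freed meanwhile: don't block */
	block_until_kicked(xl);		/* hypothetical: sleep until kicked */
}

static void unlocker_side(struct xen_spinlock *xl)
{
	xl->lock = 0;			/* release the lock */
	mb();				/* this commit: order the store before the load below */
	if (xl->spinners)
		kick_waiter(xl);	/* hypothetical: wake one blocked vCPU */
}

/*
 * Without the unlocker's mb(), the CPU may satisfy the load of
 * xl->spinners before its store to xl->lock is visible to the waiter.
 * The unlocker then reads spinners == 0 (the waiter has not yet
 * registered) while the waiter reads lock == 1 (the release is not
 * yet visible) and blocks -- and nothing ever kicks it.
 */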
