xen: only enable interrupts while actually blocking for spinlock
Where possible we enable interrupts while waiting for a spinlock to
become free, in order to reduce big latency spikes in interrupt handling.

However, at present if we manage to pick up the spinlock just before
blocking, we'll end up holding the lock with interrupts enabled for a
while.  This will cause a deadlock if we receive an interrupt in that
window, and the interrupt handler tries to take the lock too.

Solve this by shrinking the interrupt-enabled region to just around the
blocking call.

[ Impact: avoid race/deadlock when using Xen PV spinlocks ]

Reported-by: "Yang, Xiaowei" <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Jeremy Fitzhardinge committed Sep 9, 2009
1 parent 577eebe commit 4d576b5
Showing 1 changed file with 11 additions and 8 deletions.
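
For orientation before the diff: after this patch the slow path looks roughly like the sketch below. This is a condensed paraphrase of the hunks that follow (declarations, stats counters and the trylock recheck are elided), not the literal kernel source; the point is that the save/enable and restore of the interrupt flags now bracket only the blocking xen_poll_irq() call inside the retry loop, instead of spanning the whole slow path.

        do {
                unsigned long flags;

                /* clear any pending kick before re-checking the lock */
                xen_clear_irq_pending(irq);

                /* ... re-check the lock; on success we goto out with
                   interrupts still disabled (elided) ... */

                /* enable interrupts only while actually blocked */
                flags = __raw_local_save_flags();
                if (irq_enable)
                        raw_local_irq_enable();

                xen_poll_irq(irq);              /* block until the kicker irq is pending */

                raw_local_irq_restore(flags);   /* interrupts off again before we can own the lock */
        } while (!xen_test_irq_pending(irq));   /* retry on spurious wakeup */

Previously the enable happened before this loop and the matching restore only at the out: label, so winning the lock inside the loop left it held with interrupts enabled, which is exactly the window described above.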
19 changes: 11 additions & 8 deletions arch/x86/xen/spinlock.c
@@ -187,7 +187,6 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
         struct xen_spinlock *prev;
         int irq = __get_cpu_var(lock_kicker_irq);
         int ret;
-        unsigned long flags;
         u64 start;
 
         /* If kicker interrupts not initialized yet, just spin */
@@ -199,16 +198,12 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
         /* announce we're spinning */
         prev = spinning_lock(xl);
 
-        flags = __raw_local_save_flags();
-        if (irq_enable) {
-                ADD_STATS(taken_slow_irqenable, 1);
-                raw_local_irq_enable();
-        }
-
         ADD_STATS(taken_slow, 1);
         ADD_STATS(taken_slow_nested, prev != NULL);
 
         do {
+                unsigned long flags;
+
                 /* clear pending */
                 xen_clear_irq_pending(irq);
 
@@ -228,6 +223,12 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
                         goto out;
                 }
 
+                flags = __raw_local_save_flags();
+                if (irq_enable) {
+                        ADD_STATS(taken_slow_irqenable, 1);
+                        raw_local_irq_enable();
+                }
+
                 /*
                  * Block until irq becomes pending. If we're
                  * interrupted at this point (after the trylock but
@@ -238,13 +239,15 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
                  * pending.
                  */
                 xen_poll_irq(irq);
+
+                raw_local_irq_restore(flags);
+
                 ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
         } while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
 
         kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
-        raw_local_irq_restore(flags);
         unspinning_lock(xl, prev);
         spin_time_accum_blocked(start);
 