rcu: Stop chasing QS if another CPU did it for us
When a CPU is idle and other CPUs handle its extended
quiescent state to complete grace periods on its behalf,
it catches up with the completed grace period numbers
when it wakes up.

But at this point there might be no more grace periods
left to complete, yet the woken CPU still keeps its stale
qs_pending value and continues to chase quiescent states
even though that is not needed anymore.

This results in clusters of spurious softirqs until a new
real grace period is started, because if we keep chasing
quiescent states after every grace period has already
completed, rcu_report_qs_rdp() is puzzled and that stale
state makes it loop indefinitely.

As suggested by Lai Jiangshan, just reset qs_pending if
someone else completed every grace period on our behalf.
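
To make the fix concrete, here is a small standalone sketch of the
bookkeeping involved. It leans on the tree-RCU convention that gpnum
names the most recently started grace period and completed the most
recently completed one, so the two being equal means no grace period
is in flight; the toy_* structures and the numbers below are purely
illustrative and are not the kernel's rcu_node/rcu_data definitions.

#include <stdio.h>

/*
 * Toy stand-ins for the rcu_node/rcu_data fields involved in this fix
 * (illustrative only, not the kernel structures themselves).
 */
struct toy_rcu_node {
	unsigned long gpnum;		/* most recently started grace period */
	unsigned long completed;	/* most recently completed grace period */
};

struct toy_rcu_data {
	unsigned long completed;	/* last completion this CPU noticed */
	int qs_pending;			/* still owes a quiescent-state report? */
};

int main(void)
{
	/* All started grace periods have completed: nothing is in flight. */
	struct toy_rcu_node rnp = { .gpnum = 42, .completed = 42 };

	/* A CPU waking from idle with stale bookkeeping. */
	struct toy_rcu_data rdp = { .completed = 40, .qs_pending = 1 };

	if (rdp.completed != rnp.completed) {
		/* Catch up with grace periods completed on our behalf. */
		rdp.completed = rnp.completed;

		/*
		 * No grace period is in progress (completed == gpnum),
		 * so stop chasing quiescent states.
		 */
		if (rdp.completed == rnp.gpnum)
			rdp.qs_pending = 0;
	}

	printf("qs_pending after catching up: %d\n", rdp.qs_pending); /* 0 */
	return 0;
}

With the added check, a CPU that wakes up after everything has already
completed simply drops its stale qs_pending instead of raising softirqs
to report a quiescent state nobody is waiting for.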

Suggested-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Frederic Weisbecker authored and Paul E. McKenney committed Dec 17, 2010
1 parent e27fc96 commit 20377f3
8 changes: 8 additions & 0 deletions kernel/rcutree.c
@@ -678,6 +678,14 @@ __rcu_process_gp_end(struct rcu_state *rsp, struct rcu_node *rnp, struct rcu_dat

 		/* Remember that we saw this grace-period completion. */
 		rdp->completed = rnp->completed;
+
+		/*
+		 * If another CPU handled our extended quiescent states and
+		 * we have no more grace period to complete yet, then stop
+		 * chasing quiescent states.
+		 */
+		if (rdp->completed == rnp->gpnum)
+			rdp->qs_pending = 0;
 	}
 }

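For readers without kernel/rcutree.c at hand, a rough sketch of how the
affected branch of __rcu_process_gp_end() reads once this patch is
applied. It is heavily abridged: the callback-list advancement and
tracing done by the real function are elided, so treat it as an
approximation of the surrounding context rather than the actual source.

static void
__rcu_process_gp_end(struct rcu_state *rsp, struct rcu_node *rnp,
		     struct rcu_data *rdp)
{
	/* Did a grace period complete since this CPU last looked? */
	if (rdp->completed != rnp->completed) {

		/* ... advance this CPU's callback lists (omitted) ... */

		/* Remember that we saw this grace-period completion. */
		rdp->completed = rnp->completed;

		/*
		 * If another CPU handled our extended quiescent states and
		 * we have no more grace period to complete yet, then stop
		 * chasing quiescent states.
		 */
		if (rdp->completed == rnp->gpnum)
			rdp->qs_pending = 0;
	}
}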
