Commit

---
yaml
---
r: 322614
b: refs/heads/master
c: ee378aa
h: refs/heads/master
v: v3
Lai Jiangshan authored and Tejun Heo committed Sep 10, 2012
1 parent e8d3cfa commit 6835d8a
Showing 2 changed files with 37 additions and 2 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: 552a37e9360a293cd20e7f8ff1fb326a244c5f1e
+refs/heads/master: ee378aa49b594da9bda6a2c768cc5b2ad585f911
37 changes: 36 additions & 1 deletion trunk/kernel/workqueue.c
@@ -1825,10 +1825,45 @@ static bool manage_workers(struct worker *worker)
         struct worker_pool *pool = worker->pool;
         bool ret = false;
 
-        if (!mutex_trylock(&pool->manager_mutex))
+        if (pool->flags & POOL_MANAGING_WORKERS)
                 return ret;
 
         pool->flags |= POOL_MANAGING_WORKERS;
+
+        /*
+         * To simplify both worker management and CPU hotplug, hold off
+         * management while hotplug is in progress. CPU hotplug path can't
+         * grab %POOL_MANAGING_WORKERS to achieve this because that can
+         * lead to idle worker depletion (all become busy thinking someone
+         * else is managing) which in turn can result in deadlock under
+         * extreme circumstances. Use @pool->manager_mutex to synchronize
+         * manager against CPU hotplug.
+         *
+         * manager_mutex would always be free unless CPU hotplug is in
+         * progress. trylock first without dropping @gcwq->lock.
+         */
+        if (unlikely(!mutex_trylock(&pool->manager_mutex))) {
+                spin_unlock_irq(&pool->gcwq->lock);
+                mutex_lock(&pool->manager_mutex);
+                /*
+                 * CPU hotplug could have happened while we were waiting
+                 * for manager_mutex. Hotplug itself can't handle us
+                 * because manager isn't either on idle or busy list, and
+                 * @gcwq's state and ours could have deviated.
+                 *
+                 * As hotplug is now excluded via manager_mutex, we can
+                 * simply try to bind. It will succeed or fail depending
+                 * on @gcwq's current state. Try it and adjust
+                 * %WORKER_UNBOUND accordingly.
+                 */
+                if (worker_maybe_bind_and_lock(worker))
+                        worker->flags &= ~WORKER_UNBOUND;
+                else
+                        worker->flags |= WORKER_UNBOUND;
+
+                ret = true;
+        }
+
         pool->flags &= ~POOL_MANAGE_WORKERS;
 
         /*
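
The hunk above combines two mechanisms: %POOL_MANAGING_WORKERS is the re-entrancy gate that lets concurrent would-be managers back off cheaply, while @pool->manager_mutex only ever sees contention from CPU hotplug and is therefore tried with a non-blocking trylock first, so @gcwq->lock normally never has to be dropped. Below is a minimal user-space sketch of that locking pattern, using pthread mutexes in place of the kernel's spinlock and sleeping mutex. The names struct pool, POOL_MANAGING and manage() are illustrative placeholders rather than the kernel API, and the rebind step is reduced to a comment.

/* Hypothetical stand-alone analogue of the pattern; build with: gcc -pthread sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define POOL_MANAGING 0x1                      /* analogue of POOL_MANAGING_WORKERS */

struct pool {
        pthread_mutex_t lock;                  /* analogue of gcwq->lock (a spinlock in the kernel) */
        pthread_mutex_t manager_mutex;         /* analogue of pool->manager_mutex */
        unsigned int flags;
};

/*
 * Called with pool->lock held.  Returns true if pool->lock had to be
 * dropped and reacquired, so the caller knows any cached state may be
 * stale (mirroring the role of manage_workers()'s return value).
 */
static bool manage(struct pool *pool)
{
        bool ret = false;

        if (pool->flags & POOL_MANAGING)       /* someone else is already managing */
                return ret;
        pool->flags |= POOL_MANAGING;

        /* Fast path: the mutex is free unless "hotplug" holds it, so try it
         * without dropping pool->lock. */
        if (pthread_mutex_trylock(&pool->manager_mutex) != 0) {
                /* Slow path: drop pool->lock, sleep on the mutex, then
                 * reacquire the lock and revalidate.  The kernel rebinds the
                 * worker here via worker_maybe_bind_and_lock(). */
                pthread_mutex_unlock(&pool->lock);
                pthread_mutex_lock(&pool->manager_mutex);
                pthread_mutex_lock(&pool->lock);
                ret = true;
        }

        /* ... management work, excluded against "hotplug" by manager_mutex ... */

        pool->flags &= ~POOL_MANAGING;
        pthread_mutex_unlock(&pool->manager_mutex);
        return ret;
}

int main(void)
{
        static struct pool pool = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .manager_mutex = PTHREAD_MUTEX_INITIALIZER,
        };

        pthread_mutex_lock(&pool.lock);
        printf("dropped pool lock: %s\n", manage(&pool) ? "yes" : "no");
        pthread_mutex_unlock(&pool.lock);
        return 0;
}

Because the fast path only ever trylocks, it never sleeps while holding pool->lock, which is what makes the reversed acquisition order in the slow path (manager_mutex first, then pool->lock) safe against a thread that already holds pool->lock.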
