Merge tag 'wq-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq

Pull workqueue updates from Tejun Heo:
 "This cycle, a lot of workqueue changes including some that are
  significant and invasive.

   - During the v6.6 cycle, unbound workqueues were updated so that they are
     more topology aware and flexible, which among other things improved
     workqueue behavior on modern multi-L3 CPUs. In the process, commit
     636b927 ("workqueue: Make unbound workqueues to use per-cpu
     pool_workqueues") switched unbound workqueues to use per-CPU
     frontend pool_workqueues as a part of increasing front-back mapping
     flexibility.

     An unwelcome side effect of this change was that it made max
     concurrency enforcement per-CPU, blowing up the maximum number of
     allowed concurrent executions. I incorrectly assumed that this
     wouldn't cause practical problems as most unbound workqueue users
     self-regulate max concurrency; however, there definitely are some
     which don't (e.g. on IO paths), and the drastic increase in the
     allowed max concurrency led to noticeable perf regressions in some
     use cases.

     This is now addressed by separating out max concurrency enforcement
     to a separate struct - wq_node_nr_active - which makes @max_active
     consistently mean system-wide max concurrency regardless of the
     number of CPUs or (finally) NUMA nodes. This is a rather invasive
     change and, in places, a bit clunky; however, the clunkiness arises
     from the inherent requirement to handle the disagreement between the
     execution locality domain and max concurrency enforcement domain on
     some modern machines.

     See commit 5797b1c ("workqueue: Implement system-wide
     nr_active enforcement for unbound workqueues") for more details.

   - BH workqueue support is added.

     They are similar to per-CPU workqueues but execute work items in
     the softirq context. This is expected to replace tasklets. However,
     currently, it's missing the ability to disable and enable work
     items, which is needed to convert many tasklet users. To avoid
     crowding this merge window too much, this will be included in the
     next merge window. A separate pull request will be sent for the
     couple of conversion patches that are currently pending.

   - Waiman plugged a long-standing hole in workqueue CPU isolation
     where ordered workqueues didn't follow wq_unbound_cpumask updates.
     Ordered workqueues now follow the same rules as other unbound
     workqueues.

   - More CPU isolation improvements: Juri fixed another deficit in
     workqueue isolation where unbound rescuers didn't respect
     wq_unbound_cpumask. Leonardo fixed delayed_work timers firing on
     isolated CPUs.

   - Other misc changes"
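
For a sense of the restored semantics from a caller's point of view, here is a
minimal sketch (the workqueue name and limit are illustrative, not taken from
the series): with system-wide nr_active enforcement, a non-zero max_active on
an unbound workqueue caps in-flight work items across the whole system.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_io_wq;

static int __init example_io_wq_init(void)
{
        /*
         * max_active == 16 limits this unbound workqueue to 16 concurrently
         * executing work items system-wide, regardless of how many CPUs or
         * NUMA nodes are online - not 16 per CPU as after the v6.6 rework.
         */
        example_io_wq = alloc_workqueue("example_io_wq", WQ_UNBOUND, 16);
        if (!example_io_wq)
                return -ENOMEM;
        return 0;
}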

* tag 'wq-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: (54 commits)
  workqueue: Drain BH work items on hot-unplugged CPUs
  workqueue: Introduce from_work() helper for cleaner callback declarations
  workqueue: Control intensive warning threshold through cmdline
  workqueue: Make @flags handling consistent across set_work_data() and friends
  workqueue: Remove clear_work_data()
  workqueue: Factor out work_grab_pending() from __cancel_work_sync()
  workqueue: Clean up enum work_bits and related constants
  workqueue: Introduce work_cancel_flags
  workqueue: Use variable name irq_flags for saving local irq flags
  workqueue: Reorganize flush and cancel[_sync] functions
  workqueue: Rename __cancel_work_timer() to __cancel_timer_sync()
  workqueue: Use rcu_read_lock_any_held() instead of rcu_read_lock_held()
  workqueue: Cosmetic changes
  workqueue, irq_work: Build fix for !CONFIG_IRQ_WORK
  workqueue: Fix queue_work_on() with BH workqueues
  async: Use a dedicated unbound workqueue with raised min_active
  workqueue: Implement workqueue_set_min_active()
  workqueue: Fix kernel-doc comment of unplug_oldest_pwq()
  workqueue: Bind unbound workqueue rescuer to wq_unbound_cpumask
  kernel/workqueue: Let rescuers follow unbound wq cpumask changes
  ...
Linus Torvalds committed Mar 11, 2024
2 parents 8ede842 + 1acd92d commit ff887eb
Showing 11 changed files with 1,690 additions and 530 deletions.
9 changes: 9 additions & 0 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -7244,6 +7244,15 @@
threshold repeatedly. They are likely good
candidates for using WQ_UNBOUND workqueues instead.

workqueue.cpu_intensive_warning_thresh=<uint>
If CONFIG_WQ_CPU_INTENSIVE_REPORT is set, the kernel
will report the work functions which violate the
intensive_threshold_us repeatedly. In order to prevent
spurious warnings, start printing only after a work
function has violated this threshold number of times.

The default is 4 times. 0 disables the warning.

workqueue.power_efficient
Per-cpu workqueues are generally preferred because
they show better performance thanks to cache
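
As an illustration of the new parameter above (the value 10 is arbitrary), the
warning threshold could be raised on the kernel command line:

    workqueue.cpu_intensive_warning_thresh=10

With this setting, a work function is reported only after it has exceeded the
CPU-intensive threshold ten times.
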
43 changes: 29 additions & 14 deletions Documentation/core-api/workqueue.rst
@@ -77,10 +77,12 @@ wants a function to be executed asynchronously it has to set up a work
item pointing to that function and queue that work item on a
workqueue.

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other. If no work is queued, the
worker threads become idle. These worker threads are managed in so
called worker-pools.
A work item can be executed in either a thread or the BH (softirq) context.

For threaded workqueues, special purpose threads, called [k]workers, execute
the functions off of the queue, one after the other. If no work is queued,
the worker threads become idle. These worker threads are managed in
worker-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
@@ -91,6 +93,12 @@ for high priority ones, for each possible CPU and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

BH workqueues use the same framework. However, as there can only be one
concurrent execution context, there's no need to worry about concurrency.
Each per-CPU BH worker pool contains only one pseudo worker which represents
the BH execution context. A BH workqueue can be considered a convenience
interface to softirq.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
@@ -106,7 +114,7 @@ unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either normal or highpri worker-pool that
is associated to the CPU the issuer is running on.

For any worker pool implementation, managing the concurrency level
For any thread pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
@@ -164,6 +172,17 @@ resources, scheduled and executed.
``flags``
---------

``WQ_BH``
BH workqueues can be considered a convenience interface to softirq. BH
workqueues are always per-CPU and all BH work items are executed in the
queueing CPU's softirq context in the queueing order.

All BH workqueues must have 0 ``max_active`` and ``WQ_HIGHPRI`` is the
only allowed additional flag.

BH work items cannot sleep. All other features such as delayed queueing,
flushing and canceling are supported.

``WQ_UNBOUND``
Work items queued to an unbound wq are served by the special
worker-pools which host workers which are not bound to any
@@ -237,15 +256,11 @@ may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

Some users depend on the strict execution ordering of ST wq. The
combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` used to
achieve this behavior. Work items on such wq were always queued to the
unbound worker-pools and only one work item could be active at any given
time thus achieving the same ordering property as ST wq.

In the current implementation the above configuration only guarantees
ST behavior within a given NUMA node. Instead ``alloc_ordered_workqueue()`` should
be used to achieve system-wide ST behavior.
Some users depend on strict execution ordering where only one work item
is in flight at any given time and the work items are processed in
queueing order. While the combination of ``@max_active`` of 1 and
``WQ_UNBOUND`` used to achieve this behavior, this is no longer the
case. Use ``alloc_ordered_workqueue()`` instead.


Example Execution Scenarios
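
To make the rules above concrete, a minimal sketch (all names here are
hypothetical) that allocates a BH workqueue and an ordered workqueue as
described above:

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_bh_wq;
static struct workqueue_struct *example_ordered_wq;

static int __init example_wq_init(void)
{
        /* BH workqueue: max_active must be 0; only WQ_HIGHPRI may be added. */
        example_bh_wq = alloc_workqueue("example_bh", WQ_BH, 0);
        if (!example_bh_wq)
                return -ENOMEM;

        /* Ordered workqueue: one work item in flight, in queueing order. */
        example_ordered_wq = alloc_ordered_workqueue("example_ordered", 0);
        if (!example_ordered_wq) {
                destroy_workqueue(example_bh_wq);
                return -ENOMEM;
        }
        return 0;
}

Work items queued on the BH workqueue run in the queueing CPU's softirq
context and therefore must not sleep.
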
1 change: 1 addition & 0 deletions include/linux/async.h
@@ -120,4 +120,5 @@ extern void async_synchronize_cookie(async_cookie_t cookie);
extern void async_synchronize_cookie_domain(async_cookie_t cookie,
struct async_domain *domain);
extern bool current_is_async(void);
extern void async_init(void);
#endif
141 changes: 99 additions & 42 deletions include/linux/workqueue.h
@@ -22,20 +22,54 @@
*/
#define work_data_bits(work) ((unsigned long *)(&(work)->data))

enum {
enum work_bits {
WORK_STRUCT_PENDING_BIT = 0, /* work item is pending execution */
WORK_STRUCT_INACTIVE_BIT= 1, /* work item is inactive */
WORK_STRUCT_PWQ_BIT = 2, /* data points to pwq */
WORK_STRUCT_LINKED_BIT = 3, /* next work is linked to this one */
WORK_STRUCT_INACTIVE_BIT, /* work item is inactive */
WORK_STRUCT_PWQ_BIT, /* data points to pwq */
WORK_STRUCT_LINKED_BIT, /* next work is linked to this one */
#ifdef CONFIG_DEBUG_OBJECTS_WORK
WORK_STRUCT_STATIC_BIT = 4, /* static initializer (debugobjects) */
WORK_STRUCT_COLOR_SHIFT = 5, /* color for workqueue flushing */
#else
WORK_STRUCT_COLOR_SHIFT = 4, /* color for workqueue flushing */
WORK_STRUCT_STATIC_BIT, /* static initializer (debugobjects) */
#endif
WORK_STRUCT_FLAG_BITS,

/* color for workqueue flushing */
WORK_STRUCT_COLOR_SHIFT = WORK_STRUCT_FLAG_BITS,
WORK_STRUCT_COLOR_BITS = 4,

/*
* When WORK_STRUCT_PWQ is set, reserve 8 bits off of pwq pointer w/
* debugobjects turned off. This makes pwqs aligned to 256 bytes (512
* bytes w/ DEBUG_OBJECTS_WORK) and allows 16 workqueue flush colors.
*
* MSB
* [ pwq pointer ] [ flush color ] [ STRUCT flags ]
* 4 bits 4 or 5 bits
*/
WORK_STRUCT_PWQ_SHIFT = WORK_STRUCT_COLOR_SHIFT + WORK_STRUCT_COLOR_BITS,

/*
* data contains off-queue information when !WORK_STRUCT_PWQ.
*
* MSB
* [ pool ID ] [ OFFQ flags ] [ STRUCT flags ]
* 1 bit 4 or 5 bits
*/
WORK_OFFQ_FLAG_SHIFT = WORK_STRUCT_FLAG_BITS,
WORK_OFFQ_CANCELING_BIT = WORK_OFFQ_FLAG_SHIFT,
WORK_OFFQ_FLAG_END,
WORK_OFFQ_FLAG_BITS = WORK_OFFQ_FLAG_END - WORK_OFFQ_FLAG_SHIFT,

/*
* When a work item is off queue, the high bits encode off-queue flags
* and the last pool it was on. Cap pool ID to 31 bits and use the
* highest number to indicate that no pool is associated.
*/
WORK_OFFQ_POOL_SHIFT = WORK_OFFQ_FLAG_SHIFT + WORK_OFFQ_FLAG_BITS,
WORK_OFFQ_LEFT = BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT,
WORK_OFFQ_POOL_BITS = WORK_OFFQ_LEFT <= 31 ? WORK_OFFQ_LEFT : 31,
};

enum work_flags {
WORK_STRUCT_PENDING = 1 << WORK_STRUCT_PENDING_BIT,
WORK_STRUCT_INACTIVE = 1 << WORK_STRUCT_INACTIVE_BIT,
WORK_STRUCT_PWQ = 1 << WORK_STRUCT_PWQ_BIT,
@@ -45,35 +79,14 @@ enum {
#else
WORK_STRUCT_STATIC = 0,
#endif
};

enum wq_misc_consts {
WORK_NR_COLORS = (1 << WORK_STRUCT_COLOR_BITS),

/* not bound to any CPU, prefer the local CPU */
WORK_CPU_UNBOUND = NR_CPUS,

/*
* Reserve 8 bits off of pwq pointer w/ debugobjects turned off.
* This makes pwqs aligned to 256 bytes and allows 16 workqueue
* flush colors.
*/
WORK_STRUCT_FLAG_BITS = WORK_STRUCT_COLOR_SHIFT +
WORK_STRUCT_COLOR_BITS,

/* data contains off-queue information when !WORK_STRUCT_PWQ */
WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT,

__WORK_OFFQ_CANCELING = WORK_OFFQ_FLAG_BASE,

/*
* When a work item is off queue, its high bits point to the last
* pool it was on. Cap at 31 bits and use the highest number to
* indicate that no pool is associated.
*/
WORK_OFFQ_FLAG_BITS = 1,
WORK_OFFQ_POOL_SHIFT = WORK_OFFQ_FLAG_BASE + WORK_OFFQ_FLAG_BITS,
WORK_OFFQ_LEFT = BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT,
WORK_OFFQ_POOL_BITS = WORK_OFFQ_LEFT <= 31 ? WORK_OFFQ_LEFT : 31,

/* bit mask for work_busy() return values */
WORK_BUSY_PENDING = 1 << 0,
WORK_BUSY_RUNNING = 1 << 1,
@@ -83,12 +96,10 @@ enum {
};

/* Convenience constants - of type 'unsigned long', not 'enum'! */
#define WORK_OFFQ_CANCELING (1ul << __WORK_OFFQ_CANCELING)
#define WORK_OFFQ_CANCELING (1ul << WORK_OFFQ_CANCELING_BIT)
#define WORK_OFFQ_POOL_NONE ((1ul << WORK_OFFQ_POOL_BITS) - 1)
#define WORK_STRUCT_NO_POOL (WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT)

#define WORK_STRUCT_FLAG_MASK ((1ul << WORK_STRUCT_FLAG_BITS) - 1)
#define WORK_STRUCT_WQ_DATA_MASK (~WORK_STRUCT_FLAG_MASK)
#define WORK_STRUCT_PWQ_MASK (~((1ul << WORK_STRUCT_PWQ_SHIFT) - 1))

#define WORK_DATA_INIT() ATOMIC_LONG_INIT((unsigned long)WORK_STRUCT_NO_POOL)
#define WORK_DATA_STATIC_INIT() \
@@ -347,7 +358,8 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
* Workqueue flags and constants. For details, please refer to
* Documentation/core-api/workqueue.rst.
*/
enum {
enum wq_flags {
WQ_BH = 1 << 0, /* execute in bottom half (softirq) context */
WQ_UNBOUND = 1 << 1, /* not bound to any cpu */
WQ_FREEZABLE = 1 << 2, /* freeze during suspend */
WQ_MEM_RECLAIM = 1 << 3, /* may be used for memory reclaim */
@@ -386,11 +398,22 @@ enum {
__WQ_DRAINING = 1 << 16, /* internal: workqueue is draining */
__WQ_ORDERED = 1 << 17, /* internal: workqueue is ordered */
__WQ_LEGACY = 1 << 18, /* internal: create*_workqueue() */
__WQ_ORDERED_EXPLICIT = 1 << 19, /* internal: alloc_ordered_workqueue() */

/* BH wq only allows the following flags */
__WQ_BH_ALLOWS = WQ_BH | WQ_HIGHPRI,
};

enum wq_consts {
WQ_MAX_ACTIVE = 512, /* I like 512, better ideas? */
WQ_UNBOUND_MAX_ACTIVE = WQ_MAX_ACTIVE,
WQ_DFL_ACTIVE = WQ_MAX_ACTIVE / 2,

/*
* Per-node default cap on min_active. Unless explicitly set, min_active
* is set to min(max_active, WQ_DFL_MIN_ACTIVE). For more details, see
* workqueue_struct->min_active definition.
*/
WQ_DFL_MIN_ACTIVE = 8,
};

/*
@@ -420,6 +443,9 @@ enum {
* they are same as their non-power-efficient counterparts - e.g.
* system_power_efficient_wq is identical to system_wq if
* 'wq_power_efficient' is disabled. See WQ_POWER_EFFICIENT for more info.
*
* system_bh[_highpri]_wq are convenience interfaces to softirq. BH work items
* are executed in the queueing CPU's BH context in the queueing order.
*/
extern struct workqueue_struct *system_wq;
extern struct workqueue_struct *system_highpri_wq;
@@ -428,16 +454,43 @@ extern struct workqueue_struct *system_unbound_wq;
extern struct workqueue_struct *system_freezable_wq;
extern struct workqueue_struct *system_power_efficient_wq;
extern struct workqueue_struct *system_freezable_power_efficient_wq;
extern struct workqueue_struct *system_bh_wq;
extern struct workqueue_struct *system_bh_highpri_wq;

void workqueue_softirq_action(bool highpri);
void workqueue_softirq_dead(unsigned int cpu);

/**
* alloc_workqueue - allocate a workqueue
* @fmt: printf format for the name of the workqueue
* @flags: WQ_* flags
* @max_active: max in-flight work items per CPU, 0 for default
* @max_active: max in-flight work items, 0 for default
* remaining args: args for @fmt
*
* Allocate a workqueue with the specified parameters. For detailed
* information on WQ_* flags, please refer to
* For a per-cpu workqueue, @max_active limits the number of in-flight work
* items for each CPU. e.g. @max_active of 1 indicates that each CPU can be
* executing at most one work item for the workqueue.
*
* For unbound workqueues, @max_active limits the number of in-flight work items
* for the whole system. e.g. @max_active of 16 indicates that there can be
* at most 16 work items executing for the workqueue in the whole system.
*
* As sharing the same active counter for an unbound workqueue across multiple
* NUMA nodes can be expensive, @max_active is distributed to each NUMA node
* according to the proportion of the number of online CPUs and enforced
* independently.
*
* Depending on online CPU distribution, a node may end up with per-node
* max_active which is significantly lower than @max_active, which can lead to
* deadlocks if the per-node concurrency limit is lower than the maximum number
* of interdependent work items for the workqueue.
*
* To guarantee forward progress regardless of online CPU distribution, the
* concurrency limit on every node is guaranteed to be equal to or greater than
* min_active which is set to min(@max_active, %WQ_DFL_MIN_ACTIVE). This means
* that the sum of per-node max_active's may be larger than @max_active.
*
* For detailed information on %WQ_* flags, please refer to
* Documentation/core-api/workqueue.rst.
*
* RETURNS:
Expand All @@ -460,8 +513,7 @@ alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...);
* Pointer to the allocated workqueue on success, %NULL on failure.
*/
#define alloc_ordered_workqueue(fmt, flags, args...) \
alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | \
__WQ_ORDERED_EXPLICIT | (flags), 1, ##args)
alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags), 1, ##args)

#define create_workqueue(name) \
alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))
@@ -471,6 +523,9 @@ alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...);
#define create_singlethread_workqueue(name) \
alloc_ordered_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, name)

#define from_work(var, callback_work, work_fieldname) \
container_of(callback_work, typeof(*var), work_fieldname)

extern void destroy_workqueue(struct workqueue_struct *wq);

struct workqueue_attrs *alloc_workqueue_attrs(void);
@@ -508,6 +563,8 @@ extern bool flush_rcu_work(struct rcu_work *rwork);

extern void workqueue_set_max_active(struct workqueue_struct *wq,
int max_active);
extern void workqueue_set_min_active(struct workqueue_struct *wq,
int min_active);
extern struct work_struct *current_work(void);
extern bool current_is_workqueue_rescuer(void);
extern bool workqueue_congested(int cpu, struct workqueue_struct *wq);
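
A short, hypothetical sketch of how two of the additions above might be used
together: from_work() to recover the container of a work_struct in its
callback, and workqueue_set_min_active() to raise the per-node concurrency
floor for workqueues that run chains of interdependent work items (mirroring
what the async changes below do). All names are illustrative.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/printk.h>
#include <linux/workqueue.h>

struct example_dev {
        struct work_struct refresh_work;
};

static struct example_dev edev;
static struct workqueue_struct *example_wq;

static void example_refresh_fn(struct work_struct *work)
{
        /* from_work() is shorthand for container_of() on the embedded work. */
        struct example_dev *dev = from_work(dev, work, refresh_work);

        pr_info("refresh requested for %p\n", dev);
}

static int __init example_setup(void)
{
        example_wq = alloc_workqueue("example_wq", WQ_UNBOUND, 0);
        if (!example_wq)
                return -ENOMEM;

        /*
         * Raise the per-node floor above the default WQ_DFL_MIN_ACTIVE (8) so
         * long chains of interdependent work items cannot stall, as the async
         * code below does for its dedicated workqueue.
         */
        workqueue_set_min_active(example_wq, WQ_DFL_ACTIVE);

        INIT_WORK(&edev.refresh_work, example_refresh_fn);
        queue_work(example_wq, &edev.refresh_work);
        return 0;
}
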
2 changes: 1 addition & 1 deletion init/Kconfig
@@ -115,7 +115,7 @@ config CONSTRUCTORS
bool

config IRQ_WORK
bool
def_bool y if SMP

config BUILDTIME_TABLE_SORT
bool
1 change: 1 addition & 0 deletions init/main.c
@@ -1547,6 +1547,7 @@ static noinline void __init kernel_init_freeable(void)
sched_init_smp();

workqueue_init_topology();
async_init();
padata_init();
page_alloc_init_late();

17 changes: 16 additions & 1 deletion kernel/async.c
@@ -64,6 +64,7 @@ static async_cookie_t next_cookie = 1;
static LIST_HEAD(async_global_pending); /* pending from all registered doms */
static ASYNC_DOMAIN(async_dfl_domain);
static DEFINE_SPINLOCK(async_lock);
static struct workqueue_struct *async_wq;

struct async_entry {
struct list_head domain_list;
@@ -174,7 +175,7 @@ static async_cookie_t __async_schedule_node_domain(async_func_t func,
spin_unlock_irqrestore(&async_lock, flags);

/* schedule for execution */
queue_work_node(node, system_unbound_wq, &entry->work);
queue_work_node(node, async_wq, &entry->work);

return newcookie;
}
@@ -345,3 +346,17 @@ bool current_is_async(void)
return worker && worker->current_func == async_run_entry_fn;
}
EXPORT_SYMBOL_GPL(current_is_async);

void __init async_init(void)
{
/*
* Async can schedule a number of interdependent work items. However,
* unbound workqueues can handle only up to min_active interdependent
* work items. The default min_active of 8 isn't sufficient for async
* and can lead to stalls. Let's use a dedicated workqueue with raised
* min_active.
*/
async_wq = alloc_workqueue("async", WQ_UNBOUND, 0);
BUG_ON(!async_wq);
workqueue_set_min_active(async_wq, WQ_DFL_ACTIVE);
}
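
With this change, async function calls run on the dedicated "async" workqueue
instead of system_unbound_wq; callers are unaffected. A hypothetical example
of the unchanged caller side (names are illustrative):

#include <linux/async.h>

static void example_probe_fn(void *data, async_cookie_t cookie)
{
        /* After this change, runs on the dedicated "async" workqueue. */
}

static void example_start(void *data)
{
        async_schedule(example_probe_fn, data);
        async_synchronize_full();
}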