Commit

---
r: 263186
b: refs/heads/master
c: 8f6544e
h: refs/heads/master
v: v3
Linus Torvalds committed Aug 22, 2011
1 parent 9765113 commit f31077e
Showing 99 changed files with 1,992 additions and 500 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: fe4c51b22080691792d3e28d86acb4d4ccb7e8e8
refs/heads/master: 8f6544edb2c7a7464fbbce1d86a4de414dc0cf95
71 changes: 71 additions & 0 deletions trunk/Documentation/block/cfq-iosched.txt
@@ -43,3 +43,74 @@ If one sets slice_idle=0 and if storage supports NCQ, CFQ internally switches
to IOPS mode and starts providing fairness in terms of number of requests
dispatched. Note that this mode switching takes effect only for group
scheduling. For non-cgroup users nothing should change.

CFQ IO scheduler Idling Theory
===============================
Idling on a queue means waiting for the next request to arrive on the
same queue after a request completes. While idling, CFQ will not
dispatch requests from other cfq queues even if requests are pending
there.

The rationale behind idling is that it can cut down on the number of
seeks on rotational media. For example, if a process is doing dependent
sequential reads (the next read is issued only after the previous one
completes), then not dispatching requests from other queues helps: the
disk head is not moved, and sequential IO keeps being dispatched from
one queue.

CFQ maintains the following service trees, and queues are placed on them.

sync-idle sync-noidle async

All cfq queues doing synchronous sequential IO go on the sync-idle
tree. On this tree we idle on each queue individually.

All synchronous non-sequential queues go on the sync-noidle tree, as
does any request marked with REQ_NOIDLE. On this tree we do not idle on
individual queues; instead we idle on the whole group of queues, i.e.
the tree. So if there are 4 queues waiting to dispatch IO, we idle only
once, after the last queue has dispatched its IO and there is no more
IO on this service tree.

All async writes go on the async service tree. There is no idling on
async queues.

CFQ has some optimizations for SSDs: if it detects non-rotational
media that supports a higher queue depth (multiple requests in flight
at a time), it cuts out idling on individual queues; all queues move to
the sync-noidle tree and only tree idling remains. This tree idling
provides isolation from the buffered write queues on the async tree.
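
As a rough illustration of the classification above, here is a small,
hypothetical sketch in plain C. It is not the actual cfq-iosched.c code;
the enum, helper name, and parameters are invented purely to restate the
rules in executable form.

#include <stdio.h>

enum cfq_tree { TREE_SYNC_IDLE, TREE_SYNC_NOIDLE, TREE_ASYNC };

/* Invented helper mirroring the rules described above. */
static enum cfq_tree classify_queue(int is_sync, int is_sequential,
                                    int req_noidle, int nonrot_deep_queue)
{
    if (!is_sync)
        return TREE_ASYNC;          /* buffered writes: never idled */
    if (!is_sequential || req_noidle || nonrot_deep_queue)
        return TREE_SYNC_NOIDLE;    /* idled once, as a whole tree */
    return TREE_SYNC_IDLE;          /* idled per queue */
}

int main(void)
{
    printf("sequential reader         -> tree %d\n", classify_queue(1, 1, 0, 0));
    printf("fsync thread (REQ_NOIDLE) -> tree %d\n", classify_queue(1, 0, 1, 0));
    printf("background writeback      -> tree %d\n", classify_queue(0, 0, 0, 0));
    return 0;
}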

FAQ
===
Q1. Why idle at all on queues marked with REQ_NOIDLE?

A1. We only do tree idling (idling once for the whole sync-noidle tree)
for queues marked with REQ_NOIDLE. This provides isolation from the
sync-idle queues. Otherwise, in the presence of many sequential readers,
other synchronous IO might not get a fair share of the disk.

For example, suppose there are 10 sequential readers doing IO and each
gets a 100ms slice. If a REQ_NOIDLE request comes in, it will be
scheduled roughly 1 second later. If we do not idle after that
REQ_NOIDLE request completes, and another REQ_NOIDLE request arrives a
couple of milliseconds later, it will again be scheduled roughly 1
second later. Repeat this and notice how such a workload can lose its
share of the disk and suffer due to the multiple sequential readers.

fsync can generate dependent IO: a bunch of data is written in the
context of the fsync, and later some journaling data is written. The
journaling data comes in only after the fsync has finished its IO (at
least for ext4 that seemed to be the case). Now if one decides not to
idle on the fsync thread because of REQ_NOIDLE, the next journaling
write will not get scheduled for another second. A process doing small
fsyncs will suffer badly in the presence of multiple sequential readers.

Hence, doing tree idling for threads that use the REQ_NOIDLE flag on
their requests provides isolation from multiple sequential readers
while still avoiding idling on individual threads.

Q2. When should REQ_NOIDLE be specified?
A2. Whenever one is doing a synchronous write and does not expect more
writes to be dispatched from the same context soon, one should be able
to specify REQ_NOIDLE on those writes; that should work well for most
cases.
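
As a loose sketch of A2 (not code from this commit; the helper name is
invented and the flags are simply the ones discussed above), a kernel-side
caller issuing a one-off synchronous write in this era could tag it like
this, since submit_bio() still takes the rw flags as its first argument:

#include <linux/bio.h>
#include <linux/fs.h>

/* Hypothetical helper: one isolated synchronous write, hinting to CFQ
 * that no further IO is expected from this context soon. */
static void submit_lone_sync_write(struct bio *bio)
{
    submit_bio(WRITE | REQ_SYNC | REQ_NOIDLE, bio);
}
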
9 changes: 6 additions & 3 deletions trunk/Documentation/kernel-parameters.txt
@@ -1350,9 +1350,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
it is equivalent to "nosmp", which also disables
the IO APIC.

max_loop= [LOOP] Maximum number of loopback devices that can
be mounted
Format: <1-256>
max_loop= [LOOP] The number of loop block devices that get
(loop.max_loop) unconditionally pre-created at init time. The default
number is configured by BLK_DEV_LOOP_MIN_COUNT. Instead
of statically allocating a predefined number, loop
devices can be requested on-demand with the
/dev/loop-control interface.

mcatest= [IA-64]

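The /dev/loop-control interface referred to in the hunk above can be
exercised from userspace with a plain ioctl(). The sketch below assumes a
3.1-era linux/loop.h that defines LOOP_CTL_GET_FREE; error handling is
kept minimal and the program is illustrative rather than authoritative.

#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int ctl = open("/dev/loop-control", O_RDWR);
    if (ctl < 0) {
        perror("open /dev/loop-control");
        return 1;
    }

    /* Ask the loop driver for (or create) a free device; the return
     * value is the index, e.g. 0 for /dev/loop0. */
    int idx = ioctl(ctl, LOOP_CTL_GET_FREE);
    if (idx < 0)
        perror("LOOP_CTL_GET_FREE");
    else
        printf("free loop device: /dev/loop%d\n", idx);

    close(ctl);
    return 0;
}

Passing max_loop= on the kernel command line (or as a loop module
parameter) still pre-creates a fixed set of devices, while
LOOP_CTL_GET_FREE allocates further devices on demand.
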
4 changes: 2 additions & 2 deletions trunk/arch/sparc/kernel/pcic.c
@@ -352,8 +352,8 @@ int __init pcic_probe(void)
strcpy(pbm->prom_name, namebuf);

{
extern volatile int t_nmi[1];
extern int pcic_nmi_trap_patch[1];
extern volatile int t_nmi[4];
extern int pcic_nmi_trap_patch[4];

t_nmi[0] = pcic_nmi_trap_patch[0];
t_nmi[1] = pcic_nmi_trap_patch[1];
4 changes: 2 additions & 2 deletions trunk/arch/x86/include/asm/xen/page.h
@@ -39,7 +39,7 @@ typedef struct xpaddr {
((unsigned long)((u64)CONFIG_XEN_MAX_DOMAIN_MEMORY * 1024 * 1024 * 1024 / PAGE_SIZE))

extern unsigned long *machine_to_phys_mapping;
extern unsigned int machine_to_phys_order;
extern unsigned long machine_to_phys_nr;

extern unsigned long get_phys_to_machine(unsigned long pfn);
extern bool set_phys_to_machine(unsigned long pfn, unsigned long mfn);
@@ -87,7 +87,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
if (xen_feature(XENFEAT_auto_translated_physmap))
return mfn;

if (unlikely((mfn >> machine_to_phys_order) != 0)) {
if (unlikely(mfn >= machine_to_phys_nr)) {
pfn = ~0;
goto try_override;
}
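
A small userspace sketch (not kernel code; the numbers are invented) of
why the bound check above changed: fls() rounds the entry count up to a
power of two, so the old order-based test accepts mfns that have no
mapping entry, while the new comparison against machine_to_phys_nr does
not.

#include <stdio.h>

/* Minimal fls(): 1-based index of the highest set bit, so fls(4) == 3. */
static int fls_bit(unsigned long x)
{
    int r = 0;

    while (x) {
        x >>= 1;
        r++;
    }
    return r;
}

int main(void)
{
    unsigned long nr = 5;            /* pretend the table has 5 entries */
    int order = fls_bit(nr - 1);     /* what the old code computed: 3 */

    for (unsigned long mfn = 0; mfn < 10; mfn++)
        printf("mfn %lu: old order-based check in-range=%d, new count check=%d\n",
               mfn, (int)((mfn >> order) == 0), (int)(mfn < nr));
    return 0;
}
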
9 changes: 9 additions & 0 deletions trunk/arch/x86/pci/acpi.c
@@ -360,6 +360,15 @@ struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root *root)
}
}

/* After the PCI-E bus has been walked and all devices discovered,
* configure any settings of the fabric that might be necessary.
*/
if (bus) {
struct pci_bus *child;
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child, child->self->pcie_mpss);
}

if (!bus)
kfree(sd);

2 changes: 1 addition & 1 deletion trunk/arch/x86/xen/Makefile
@@ -15,7 +15,7 @@ obj-y := enlighten.o setup.o multicalls.o mmu.o irq.o \
grant-table.o suspend.o platform-pci-unplug.o \
p2m.o

obj-$(CONFIG_FTRACE) += trace.o
obj-$(CONFIG_EVENT_TRACING) += trace.o

obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= spinlock.o
4 changes: 2 additions & 2 deletions trunk/arch/x86/xen/enlighten.c
@@ -77,8 +77,8 @@ EXPORT_SYMBOL_GPL(xen_domain_type);

unsigned long *machine_to_phys_mapping = (void *)MACH2PHYS_VIRT_START;
EXPORT_SYMBOL(machine_to_phys_mapping);
unsigned int machine_to_phys_order;
EXPORT_SYMBOL(machine_to_phys_order);
unsigned long machine_to_phys_nr;
EXPORT_SYMBOL(machine_to_phys_nr);

struct start_info *xen_start_info;
EXPORT_SYMBOL_GPL(xen_start_info);
12 changes: 8 additions & 4 deletions trunk/arch/x86/xen/mmu.c
@@ -1713,15 +1713,19 @@ static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
void __init xen_setup_machphys_mapping(void)
{
struct xen_machphys_mapping mapping;
unsigned long machine_to_phys_nr_ents;

if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, &mapping) == 0) {
machine_to_phys_mapping = (unsigned long *)mapping.v_start;
machine_to_phys_nr_ents = mapping.max_mfn + 1;
machine_to_phys_nr = mapping.max_mfn + 1;
} else {
machine_to_phys_nr_ents = MACH2PHYS_NR_ENTRIES;
machine_to_phys_nr = MACH2PHYS_NR_ENTRIES;
}
machine_to_phys_order = fls(machine_to_phys_nr_ents - 1);
#ifdef CONFIG_X86_32
if ((machine_to_phys_mapping + machine_to_phys_nr)
< machine_to_phys_mapping)
machine_to_phys_nr = (unsigned long *)NULL
- machine_to_phys_mapping;
#endif
}

#ifdef CONFIG_X86_64
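
The CONFIG_X86_32 branch above clamps the entry count so the table cannot
run past the top of the 32-bit address space; "(unsigned long *)NULL -
machine_to_phys_mapping" is simply pointer arithmetic for "how many
unsigned longs fit before the address wraps". A standalone sketch of the
same arithmetic, with invented addresses:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint32_t entry_size = 4;       /* sizeof(unsigned long) on 32-bit */
    uint32_t m2p_start = 0xF5800000u;    /* hypothetical table base */
    uint32_t nr = 0x10000000u;           /* hypervisor-claimed entry count */

    /* Entries that fit between the table base and the 4 GiB wrap point. */
    uint32_t fit = ((uint32_t)0 - m2p_start) / entry_size;

    if (nr > fit)
        nr = fit;                        /* same effect as the patch's clamp */

    printf("clamped entry count: %u (max that fits: %u)\n",
           (unsigned)nr, (unsigned)fit);
    return 0;
}
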
4 changes: 2 additions & 2 deletions trunk/arch/x86/xen/smp.c
@@ -521,8 +521,6 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
native_smp_prepare_cpus(max_cpus);
WARN_ON(xen_smp_intr_init(0));

if (!xen_have_vector_callback)
return;
xen_init_lock_cpu(0);
xen_init_spinlocks();
}
@@ -546,6 +544,8 @@ static void xen_hvm_cpu_die(unsigned int cpu)

void __init xen_hvm_smp_init(void)
{
if (!xen_have_vector_callback)
return;
smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
smp_ops.cpu_up = xen_hvm_cpu_up;
10 changes: 10 additions & 0 deletions trunk/block/Kconfig
@@ -65,6 +65,16 @@ config BLK_DEV_BSG

If unsure, say Y.

config BLK_DEV_BSGLIB
bool "Block layer SG support v4 helper lib"
default n
select BLK_DEV_BSG
help
Subsystems will normally enable this if needed. Users will not
normally need to manually enable this.

If unsure, say N.

config BLK_DEV_INTEGRITY
bool "Block layer data integrity support"
---help---
1 change: 1 addition & 0 deletions trunk/block/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_BLOCK) := elevator.o blk-core.o blk-tag.o blk-sysfs.o \
blk-iopoll.o blk-lib.o ioctl.o genhd.o scsi_ioctl.o

obj-$(CONFIG_BLK_DEV_BSG) += bsg.o
obj-$(CONFIG_BLK_DEV_BSGLIB) += bsg-lib.o
obj-$(CONFIG_BLK_CGROUP) += blk-cgroup.o
obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o
obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
8 changes: 6 additions & 2 deletions trunk/block/blk-core.c
@@ -1702,6 +1702,7 @@ EXPORT_SYMBOL_GPL(blk_rq_check_limits);
int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
{
unsigned long flags;
int where = ELEVATOR_INSERT_BACK;

if (blk_rq_check_limits(q, rq))
return -EIO;
@@ -1718,7 +1719,10 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
*/
BUG_ON(blk_queued_rq(rq));

add_acct_request(q, rq, ELEVATOR_INSERT_BACK);
if (rq->cmd_flags & (REQ_FLUSH|REQ_FUA))
where = ELEVATOR_INSERT_FLUSH;

add_acct_request(q, rq, where);
spin_unlock_irqrestore(q->queue_lock, flags);

return 0;
@@ -2275,7 +2279,7 @@ static bool blk_end_bidi_request(struct request *rq, int error,
* %false - we are done with this request
* %true - still buffers pending for this request
**/
static bool __blk_end_bidi_request(struct request *rq, int error,
bool __blk_end_bidi_request(struct request *rq, int error,
unsigned int nr_bytes, unsigned int bidi_bytes)
{
if (blk_update_bidi_request(rq, error, nr_bytes, bidi_bytes))
25 changes: 19 additions & 6 deletions trunk/block/blk-flush.c
@@ -95,11 +95,12 @@ static unsigned int blk_flush_policy(unsigned int fflags, struct request *rq)
{
unsigned int policy = 0;

if (blk_rq_sectors(rq))
policy |= REQ_FSEQ_DATA;

if (fflags & REQ_FLUSH) {
if (rq->cmd_flags & REQ_FLUSH)
policy |= REQ_FSEQ_PREFLUSH;
if (blk_rq_sectors(rq))
policy |= REQ_FSEQ_DATA;
if (!(fflags & REQ_FUA) && (rq->cmd_flags & REQ_FUA))
policy |= REQ_FSEQ_POSTFLUSH;
}
@@ -122,7 +123,7 @@ static void blk_flush_restore_request(struct request *rq)

/* make @rq a normal request */
rq->cmd_flags &= ~REQ_FLUSH_SEQ;
rq->end_io = NULL;
rq->end_io = rq->flush.saved_end_io;
}

/**
@@ -300,9 +301,6 @@ void blk_insert_flush(struct request *rq)
unsigned int fflags = q->flush_flags; /* may change, cache */
unsigned int policy = blk_flush_policy(fflags, rq);

BUG_ON(rq->end_io);
BUG_ON(!rq->bio || rq->bio != rq->biotail);

/*
* @policy now records what operations need to be done. Adjust
* REQ_FLUSH and FUA for the driver.
@@ -311,6 +309,19 @@
if (!(fflags & REQ_FUA))
rq->cmd_flags &= ~REQ_FUA;

/*
* An empty flush handed down from a stacking driver may
* translate into nothing if the underlying device does not
* advertise a write-back cache. In this case, simply
* complete the request.
*/
if (!policy) {
__blk_end_bidi_request(rq, 0, 0, 0);
return;
}

BUG_ON(!rq->bio || rq->bio != rq->biotail);

/*
* If there's data but flush is not necessary, the request can be
* processed directly without going through flush machinery. Queue
@@ -319,6 +330,7 @@
if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
list_add_tail(&rq->queuelist, &q->queue_head);
blk_run_queue_async(q);
return;
}

@@ -329,6 +341,7 @@
memset(&rq->flush, 0, sizeof(rq->flush));
INIT_LIST_HEAD(&rq->flush.list);
rq->cmd_flags |= REQ_FLUSH_SEQ;
rq->flush.saved_end_io = rq->end_io; /* Usually NULL */
rq->end_io = flush_data_end_io;

blk_flush_complete_seq(rq, REQ_FSEQ_ACTIONS & ~policy, 0);
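
The blk-flush.c changes above move the REQ_FSEQ_DATA decision out of the
(fflags & REQ_FLUSH) branch and let a policy of 0 complete the request
immediately. The userspace sketch below mirrors only that decision logic;
the flag values are invented and the function is a restatement, not a
quote, of blk_flush_policy().

#include <stdio.h>

#define REQ_FLUSH          (1u << 0)
#define REQ_FUA            (1u << 1)
#define REQ_FSEQ_PREFLUSH  (1u << 2)
#define REQ_FSEQ_DATA      (1u << 3)
#define REQ_FSEQ_POSTFLUSH (1u << 4)

static unsigned int flush_policy(unsigned int fflags, unsigned int cmd_flags,
                                 unsigned int sectors)
{
    unsigned int policy = 0;

    if (sectors)
        policy |= REQ_FSEQ_DATA;

    if (fflags & REQ_FLUSH) {
        if (cmd_flags & REQ_FLUSH)
            policy |= REQ_FSEQ_PREFLUSH;
        if (!(fflags & REQ_FUA) && (cmd_flags & REQ_FUA))
            policy |= REQ_FSEQ_POSTFLUSH;
    }
    return policy;
}

int main(void)
{
    /* Empty flush sent to a queue with no write-back cache: policy is 0,
     * so blk_insert_flush() can now complete the request right away. */
    printf("no-cache empty flush: policy=%u\n", flush_policy(0, REQ_FLUSH, 0));

    /* The same request on a cache-backed queue becomes a real preflush. */
    printf("cached empty flush:   policy=%u\n",
           flush_policy(REQ_FLUSH, REQ_FLUSH, 0));
    return 0;
}
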
8 changes: 8 additions & 0 deletions trunk/block/blk-softirq.c
@@ -124,6 +124,14 @@ void __blk_complete_request(struct request *req)
} else
ccpu = cpu;

/*
* If the current CPU and the requested CPU are in the same group, run
* the softirq on the current CPU. One might worry that this is just
* like QUEUE_FLAG_SAME_FORCE, but it is not: blk_complete_request()
* runs in the interrupt handler, and since the I/O controller does not
* support multiple interrupts, the current CPU is effectively unique.
* This avoids sending an IPI from the current CPU to the first CPU of
* the group.
*/
if (ccpu == cpu || ccpu == group_cpu) {
struct list_head *list;
do_local:
4 changes: 2 additions & 2 deletions trunk/block/blk-throttle.c
@@ -746,7 +746,7 @@ static bool tg_may_dispatch(struct throtl_data *td, struct throtl_grp *tg,
static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
{
bool rw = bio_data_dir(bio);
bool sync = bio->bi_rw & REQ_SYNC;
bool sync = rw_is_sync(bio->bi_rw);

/* Charge the bio to the group */
tg->bytes_disp[rw] += bio->bi_size;
@@ -1150,7 +1150,7 @@ int blk_throtl_bio(struct request_queue *q, struct bio **biop)

if (tg_no_rule_group(tg, rw)) {
blkiocg_update_dispatch_stats(&tg->blkg, bio->bi_size,
rw, bio->bi_rw & REQ_SYNC);
rw, rw_is_sync(bio->bi_rw));
rcu_read_unlock();
return 0;
}
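
The switch above from testing REQ_SYNC directly to rw_is_sync() matters
for reads that do not carry REQ_SYNC: rw_is_sync() treats every read as
synchronous. A tiny sketch (flag values invented, predicate mirroring the
kernel helper):

#include <stdio.h>

#define REQ_WRITE (1u << 0)
#define REQ_SYNC  (1u << 1)

static int rw_is_sync(unsigned int rw_flags)
{
    return !(rw_flags & REQ_WRITE) || (rw_flags & REQ_SYNC);
}

int main(void)
{
    unsigned int plain_read = 0;          /* read without REQ_SYNC */
    unsigned int async_write = REQ_WRITE; /* buffered write */

    printf("plain read:  REQ_SYNC set=%d, rw_is_sync=%d\n",
           !!(plain_read & REQ_SYNC), rw_is_sync(plain_read));
    printf("async write: REQ_SYNC set=%d, rw_is_sync=%d\n",
           !!(async_write & REQ_SYNC), rw_is_sync(async_write));
    return 0;
}
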
2 changes: 2 additions & 0 deletions trunk/block/blk.h
@@ -17,6 +17,8 @@ int blk_rq_append_bio(struct request_queue *q, struct request *rq,
struct bio *bio);
void blk_dequeue_request(struct request *rq);
void __blk_queue_free_tags(struct request_queue *q);
bool __blk_end_bidi_request(struct request *rq, int error,
unsigned int nr_bytes, unsigned int bidi_bytes);

void blk_rq_timed_out_timer(unsigned long data);
void blk_delete_timer(struct request *);