diff --git a/[refs] b/[refs] index 694d2437d894..e5c87c8128b4 100644 --- a/[refs] +++ b/[refs] @@ -1,2 +1,2 @@ --- -refs/heads/master: ee67e6cbe1121da1ae4eceb7b2bcb535c5cbf65e +refs/heads/master: af67c3a9e68ee0a9e30ee8582577408adba0e299 diff --git a/trunk/Documentation/feature-removal-schedule.txt b/trunk/Documentation/feature-removal-schedule.txt index 04e6c819b28a..89a47b5aff07 100644 --- a/trunk/Documentation/feature-removal-schedule.txt +++ b/trunk/Documentation/feature-removal-schedule.txt @@ -451,33 +451,3 @@ Why: OSS sound_core grabs all legacy minors (0-255) of SOUND_MAJOR will also allow making ALSA OSS emulation independent of sound_core. The dependency will be broken then too. Who: Tejun Heo - ----------------------------- - -What: Support for VMware's guest paravirtuliazation technique [VMI] will be - dropped. -When: 2.6.37 or earlier. -Why: With the recent innovations in CPU hardware acceleration technologies - from Intel and AMD, VMware ran a few experiments to compare these - techniques to guest paravirtualization technique on VMware's platform. - These hardware assisted virtualization techniques have outperformed the - performance benefits provided by VMI in most of the workloads. VMware - expects that these hardware features will be ubiquitous in a couple of - years, as a result, VMware has started a phased retirement of this - feature from the hypervisor. We will be removing this feature from the - Kernel too. Right now we are targeting 2.6.37 but can retire earlier if - technical reasons (read opportunity to remove major chunk of pvops) - arise. - - Please note that VMI has always been an optimization and non-VMI kernels - still work fine on VMware's platform. - Latest versions of VMware's product which support VMI are, - Workstation 7.0 and VSphere 4.0 on ESX side, future maintainence - releases for these products will continue supporting VMI. - - For more details about VMI retirement take a look at this, - http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html - -Who: Alok N Kataria - ----------------------------- diff --git a/trunk/Documentation/filesystems/ext3.txt b/trunk/Documentation/filesystems/ext3.txt index 05d5cf1d743f..570f9bd9be2b 100644 --- a/trunk/Documentation/filesystems/ext3.txt +++ b/trunk/Documentation/filesystems/ext3.txt @@ -123,18 +123,10 @@ resuid=n The user ID which may use the reserved blocks. sb=n Use alternate superblock at this location. -quota These options are ignored by the filesystem. They -noquota are used only by quota tools to recognize volumes -grpquota where quota should be turned on. See documentation -usrquota in the quota-tools package for more details - (http://sourceforge.net/projects/linuxquota). - -jqfmt= These options tell filesystem details about quota -usrjquota= so that quota information can be properly updated -grpjquota= during journal replay. They replace the above - quota options. See documentation in the quota-tools - package for more details - (http://sourceforge.net/projects/linuxquota). 
+quota +noquota +grpquota +usrquota bh (*) ext3 associates buffer heads to data pages to nobh (a) cache disk block mapping information diff --git a/trunk/Documentation/sound/alsa/HD-Audio-Models.txt b/trunk/Documentation/sound/alsa/HD-Audio-Models.txt index 4c7f9aee5c4e..75fddb40f416 100644 --- a/trunk/Documentation/sound/alsa/HD-Audio-Models.txt +++ b/trunk/Documentation/sound/alsa/HD-Audio-Models.txt @@ -359,7 +359,6 @@ STAC9227/9228/9229/927x 5stack-no-fp D965 5stack without front panel dell-3stack Dell Dimension E520 dell-bios Fixes with Dell BIOS setup - volknob Fixes with volume-knob widget 0x24 auto BIOS setup (default) STAC92HD71B* diff --git a/trunk/MAINTAINERS b/trunk/MAINTAINERS index 6b057778477d..ff968842ce56 100644 --- a/trunk/MAINTAINERS +++ b/trunk/MAINTAINERS @@ -4076,13 +4076,6 @@ M: Peter Zijlstra M: Paul Mackerras M: Ingo Molnar S: Supported -F: kernel/perf_event.c -F: include/linux/perf_event.h -F: arch/*/*/kernel/perf_event.c -F: arch/*/include/asm/perf_event.h -F: arch/*/lib/perf_event.c -F: arch/*/kernel/perf_callchain.c -F: tools/perf/ PERSONALITY HANDLING M: Christoph Hellwig diff --git a/trunk/Makefile b/trunk/Makefile index 326791575b0a..927d7a32ed9e 100644 --- a/trunk/Makefile +++ b/trunk/Makefile @@ -179,9 +179,46 @@ SUBARCH := $(shell uname -m | sed -e s/i.86/i386/ -e s/sun4u/sparc64/ \ # Alternatively CROSS_COMPILE can be set in the environment. # Default value for CROSS_COMPILE is not to prefix executables # Note: Some architectures assign CROSS_COMPILE in their arch/*/Makefile +# +# To force ARCH and CROSS_COMPILE settings include kernel.* files +# in the kernel tree - do not patch this file. export KBUILD_BUILDHOST := $(SUBARCH) -ARCH ?= $(SUBARCH) -CROSS_COMPILE ?= + +# Kbuild save the ARCH and CROSS_COMPILE setting in kernel.* files. +# Restore these settings and check that user did not specify +# conflicting values. + +saved_arch := $(shell cat include/generated/kernel.arch 2> /dev/null) +saved_cross := $(shell cat include/generated/kernel.cross 2> /dev/null) + +ifneq ($(CROSS_COMPILE),) + ifneq ($(saved_cross),) + ifneq ($(CROSS_COMPILE),$(saved_cross)) + $(error CROSS_COMPILE changed from \ + "$(saved_cross)" to \ + to "$(CROSS_COMPILE)". \ + Use "make mrproper" to fix it up) + endif + endif +else + CROSS_COMPILE := $(saved_cross) +endif + +ifneq ($(ARCH),) + ifneq ($(saved_arch),) + ifneq ($(saved_arch),$(ARCH)) + $(error ARCH changed from \ + "$(saved_arch)" to "$(ARCH)". 
\ + Use "make mrproper" to fix it up) + endif + endif +else + ifneq ($(saved_arch),) + ARCH := $(saved_arch) + else + ARCH := $(SUBARCH) + endif +endif # Architecture as present in compile.h UTS_MACHINE := $(ARCH) @@ -446,6 +483,11 @@ ifeq ($(config-targets),1) include $(srctree)/arch/$(SRCARCH)/Makefile export KBUILD_DEFCONFIG KBUILD_KCONFIG +# save ARCH & CROSS_COMPILE settings +$(shell mkdir -p include/generated && \ + echo $(ARCH) > include/generated/kernel.arch && \ + echo $(CROSS_COMPILE) > include/generated/kernel.cross) + config: scripts_basic outputmakefile FORCE $(Q)mkdir -p include/linux include/config $(Q)$(MAKE) $(build)=scripts/kconfig $@ diff --git a/trunk/arch/sh/kernel/traps_32.c b/trunk/arch/sh/kernel/traps_32.c index e0b5e4b5accd..7a2ee3a6b8e7 100644 --- a/trunk/arch/sh/kernel/traps_32.c +++ b/trunk/arch/sh/kernel/traps_32.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #include #include @@ -159,12 +160,12 @@ void die(const char * str, struct pt_regs * regs, long err) oops_enter(); - console_verbose(); spin_lock_irq(&die_lock); + console_verbose(); bust_spinlocks(1); printk("%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter); - + sysfs_printk_last_file(); print_modules(); show_regs(regs); @@ -180,6 +181,7 @@ void die(const char * str, struct pt_regs * regs, long err) bust_spinlocks(0); add_taint(TAINT_DIE); spin_unlock_irq(&die_lock); + oops_exit(); if (kexec_should_crash(current)) crash_kexec(regs); @@ -190,7 +192,6 @@ void die(const char * str, struct pt_regs * regs, long err) if (panic_on_oops) panic("Fatal exception"); - oops_exit(); do_exit(SIGSEGV); } diff --git a/trunk/arch/x86/Kconfig b/trunk/arch/x86/Kconfig index 07e01149e3bf..c876bace8fdc 100644 --- a/trunk/arch/x86/Kconfig +++ b/trunk/arch/x86/Kconfig @@ -491,7 +491,7 @@ if PARAVIRT_GUEST source "arch/x86/xen/Kconfig" config VMI - bool "VMI Guest support (DEPRECATED)" + bool "VMI Guest support" select PARAVIRT depends on X86_32 ---help--- @@ -500,15 +500,6 @@ config VMI at the moment), by linking the kernel to a GPL-ed ROM module provided by the hypervisor. - As of September 2009, VMware has started a phased retirement - of this feature from VMware's products. Please see - feature-removal-schedule.txt for details. If you are - planning to enable this option, please note that you cannot - live migrate a VMI enabled VM to a future VMware product, - which doesn't support VMI. So if you expect your kernel to - seamlessly migrate to newer VMware products, keep this - disabled. 
- config KVM_CLOCK bool "KVM paravirtualized clock" select PARAVIRT diff --git a/trunk/arch/x86/include/asm/paravirt.h b/trunk/arch/x86/include/asm/paravirt.h index efb38994859c..8aebcc41041d 100644 --- a/trunk/arch/x86/include/asm/paravirt.h +++ b/trunk/arch/x86/include/asm/paravirt.h @@ -840,22 +840,42 @@ static __always_inline void __raw_spin_unlock(struct raw_spinlock *lock) static inline unsigned long __raw_local_save_flags(void) { - return PVOP_CALLEE0(unsigned long, pv_irq_ops.save_fl); + unsigned long f; + + asm volatile(paravirt_alt(PARAVIRT_CALL) + : "=a"(f) + : paravirt_type(pv_irq_ops.save_fl), + paravirt_clobber(CLBR_EAX) + : "memory", "cc"); + return f; } static inline void raw_local_irq_restore(unsigned long f) { - PVOP_VCALLEE1(pv_irq_ops.restore_fl, f); + asm volatile(paravirt_alt(PARAVIRT_CALL) + : "=a"(f) + : PV_FLAGS_ARG(f), + paravirt_type(pv_irq_ops.restore_fl), + paravirt_clobber(CLBR_EAX) + : "memory", "cc"); } static inline void raw_local_irq_disable(void) { - PVOP_VCALLEE0(pv_irq_ops.irq_disable); + asm volatile(paravirt_alt(PARAVIRT_CALL) + : + : paravirt_type(pv_irq_ops.irq_disable), + paravirt_clobber(CLBR_EAX) + : "memory", "eax", "cc"); } static inline void raw_local_irq_enable(void) { - PVOP_VCALLEE0(pv_irq_ops.irq_enable); + asm volatile(paravirt_alt(PARAVIRT_CALL) + : + : paravirt_type(pv_irq_ops.irq_enable), + paravirt_clobber(CLBR_EAX) + : "memory", "eax", "cc"); } static inline unsigned long __raw_local_irq_save(void) diff --git a/trunk/arch/x86/include/asm/paravirt_types.h b/trunk/arch/x86/include/asm/paravirt_types.h index 9357473c8da0..dd0f5b32489d 100644 --- a/trunk/arch/x86/include/asm/paravirt_types.h +++ b/trunk/arch/x86/include/asm/paravirt_types.h @@ -494,11 +494,10 @@ int paravirt_disable_iospace(void); #define EXTRA_CLOBBERS #define VEXTRA_CLOBBERS #else /* CONFIG_X86_64 */ -/* [re]ax isn't an arg, but the return val */ #define PVOP_VCALL_ARGS \ unsigned long __edi = __edi, __esi = __esi, \ - __edx = __edx, __ecx = __ecx, __eax = __eax -#define PVOP_CALL_ARGS PVOP_VCALL_ARGS + __edx = __edx, __ecx = __ecx +#define PVOP_CALL_ARGS PVOP_VCALL_ARGS, __eax #define PVOP_CALL_ARG1(x) "D" ((unsigned long)(x)) #define PVOP_CALL_ARG2(x) "S" ((unsigned long)(x)) @@ -510,7 +509,6 @@ int paravirt_disable_iospace(void); "=c" (__ecx) #define PVOP_CALL_CLOBBERS PVOP_VCALL_CLOBBERS, "=a" (__eax) -/* void functions are still allowed [re]ax for scratch */ #define PVOP_VCALLEE_CLOBBERS "=a" (__eax) #define PVOP_CALLEE_CLOBBERS PVOP_VCALLEE_CLOBBERS @@ -585,8 +583,8 @@ int paravirt_disable_iospace(void); VEXTRA_CLOBBERS, \ pre, post, ##__VA_ARGS__) -#define __PVOP_VCALLEESAVE(op, pre, post, ...) \ - ____PVOP_VCALL(op.func, CLBR_RET_REG, \ +#define __PVOP_VCALLEESAVE(rettype, op, pre, post, ...) 
\ + ____PVOP_CALL(rettype, op.func, CLBR_RET_REG, \ PVOP_VCALLEE_CLOBBERS, , \ pre, post, ##__VA_ARGS__) diff --git a/trunk/arch/x86/kernel/irq.c b/trunk/arch/x86/kernel/irq.c index 74656d1d4e30..391206199515 100644 --- a/trunk/arch/x86/kernel/irq.c +++ b/trunk/arch/x86/kernel/irq.c @@ -244,6 +244,7 @@ unsigned int __irq_entry do_IRQ(struct pt_regs *regs) __func__, smp_processor_id(), vector, irq); } + run_local_timers(); irq_exit(); set_irq_regs(old_regs); @@ -268,6 +269,7 @@ void smp_generic_interrupt(struct pt_regs *regs) if (generic_interrupt_extension) generic_interrupt_extension(); + run_local_timers(); irq_exit(); set_irq_regs(old_regs); diff --git a/trunk/arch/x86/kernel/pci-dma.c b/trunk/arch/x86/kernel/pci-dma.c index b2a71dca5642..d20009b4e6ef 100644 --- a/trunk/arch/x86/kernel/pci-dma.c +++ b/trunk/arch/x86/kernel/pci-dma.c @@ -311,7 +311,7 @@ void pci_iommu_shutdown(void) amd_iommu_shutdown(); } /* Must execute after PCI subsystem */ -rootfs_initcall(pci_iommu_init); +fs_initcall(pci_iommu_init); #ifdef CONFIG_PCI /* Many VIA bridges seem to corrupt data for DAC. Disable it here */ diff --git a/trunk/arch/x86/kernel/smp.c b/trunk/arch/x86/kernel/smp.c index ec1de97600e7..d915d956e66d 100644 --- a/trunk/arch/x86/kernel/smp.c +++ b/trunk/arch/x86/kernel/smp.c @@ -198,6 +198,7 @@ void smp_reschedule_interrupt(struct pt_regs *regs) { ack_APIC_irq(); inc_irq_stat(irq_resched_count); + run_local_timers(); /* * KVM uses this interrupt to force a cpu out of guest mode */ diff --git a/trunk/arch/x86/kernel/time.c b/trunk/arch/x86/kernel/time.c index be2573448ed9..dcb00d278512 100644 --- a/trunk/arch/x86/kernel/time.c +++ b/trunk/arch/x86/kernel/time.c @@ -38,8 +38,7 @@ unsigned long profile_pc(struct pt_regs *regs) #ifdef CONFIG_FRAME_POINTER return *(unsigned long *)(regs->bp + sizeof(long)); #else - unsigned long *sp = - (unsigned long *)kernel_stack_pointer(regs); + unsigned long *sp = (unsigned long *)regs->sp; /* * Return address is either directly at stack pointer * or above a saved flags. Eflags has bits 22-31 zero, diff --git a/trunk/arch/x86/kernel/trampoline.c b/trunk/arch/x86/kernel/trampoline.c index cd022121cab6..699f7eeb896a 100644 --- a/trunk/arch/x86/kernel/trampoline.c +++ b/trunk/arch/x86/kernel/trampoline.c @@ -3,16 +3,8 @@ #include #include -#if defined(CONFIG_X86_64) && defined(CONFIG_ACPI_SLEEP) -#define __trampinit -#define __trampinitdata -#else -#define __trampinit __cpuinit -#define __trampinitdata __cpuinitdata -#endif - /* ready for x86_64 and x86 */ -unsigned char *__trampinitdata trampoline_base = __va(TRAMPOLINE_BASE); +unsigned char *__cpuinitdata trampoline_base = __va(TRAMPOLINE_BASE); void __init reserve_trampoline_memory(void) { @@ -34,7 +26,7 @@ void __init reserve_trampoline_memory(void) * bootstrap into the page concerned. The caller * has made sure it's suitably aligned. */ -unsigned long __trampinit setup_trampoline(void) +unsigned long __cpuinit setup_trampoline(void) { memcpy(trampoline_base, trampoline_data, TRAMPOLINE_SIZE); return virt_to_phys(trampoline_base); diff --git a/trunk/arch/x86/kernel/trampoline_64.S b/trunk/arch/x86/kernel/trampoline_64.S index 3af2dff58b21..596d54c660a5 100644 --- a/trunk/arch/x86/kernel/trampoline_64.S +++ b/trunk/arch/x86/kernel/trampoline_64.S @@ -32,12 +32,8 @@ #include #include -#ifdef CONFIG_ACPI_SLEEP -.section .rodata, "a", @progbits -#else /* We can free up the trampoline after bootup if cpu hotplug is not supported. 
*/ __CPUINITRODATA -#endif .code16 ENTRY(trampoline_data) diff --git a/trunk/arch/x86/kernel/vmi_32.c b/trunk/arch/x86/kernel/vmi_32.c index d430e4c30193..31e6f6cfe53e 100644 --- a/trunk/arch/x86/kernel/vmi_32.c +++ b/trunk/arch/x86/kernel/vmi_32.c @@ -648,7 +648,7 @@ static inline int __init activate_vmi(void) pv_info.paravirt_enabled = 1; pv_info.kernel_rpl = kernel_cs & SEGMENT_RPL_MASK; - pv_info.name = "vmi [deprecated]"; + pv_info.name = "vmi"; pv_init_ops.patch = vmi_patch; diff --git a/trunk/block/blk-core.c b/trunk/block/blk-core.c index ac0fa10f8fa5..81f34311659a 100644 --- a/trunk/block/blk-core.c +++ b/trunk/block/blk-core.c @@ -70,7 +70,7 @@ static void drive_stat_acct(struct request *rq, int new_io) part_stat_inc(cpu, part, merges[rw]); else { part_round_stats(cpu, part); - part_inc_in_flight(part, rw); + part_inc_in_flight(part); } part_stat_unlock(); @@ -1030,9 +1030,9 @@ static void part_round_stats_single(int cpu, struct hd_struct *part, if (now == part->stamp) return; - if (part_in_flight(part)) { + if (part->in_flight) { __part_stat_add(cpu, part, time_in_queue, - part_in_flight(part) * (now - part->stamp)); + part->in_flight * (now - part->stamp)); __part_stat_add(cpu, part, io_ticks, (now - part->stamp)); } part->stamp = now; @@ -1739,7 +1739,7 @@ static void blk_account_io_done(struct request *req) part_stat_inc(cpu, part, ios[rw]); part_stat_add(cpu, part, ticks[rw], duration); part_round_stats(cpu, part); - part_dec_in_flight(part, rw); + part_dec_in_flight(part); part_stat_unlock(); } @@ -2492,6 +2492,14 @@ int kblockd_schedule_work(struct request_queue *q, struct work_struct *work) } EXPORT_SYMBOL(kblockd_schedule_work); +int kblockd_schedule_delayed_work(struct request_queue *q, + struct delayed_work *work, + unsigned long delay) +{ + return queue_delayed_work(kblockd_workqueue, work, delay); +} +EXPORT_SYMBOL(kblockd_schedule_delayed_work); + int __init blk_dev_init(void) { BUILD_BUG_ON(__REQ_NR_BITS > 8 * diff --git a/trunk/block/blk-merge.c b/trunk/block/blk-merge.c index 99cb5cf1f447..b0de8574fdc8 100644 --- a/trunk/block/blk-merge.c +++ b/trunk/block/blk-merge.c @@ -351,7 +351,7 @@ static void blk_account_io_merge(struct request *req) part = disk_map_sector_rcu(req->rq_disk, blk_rq_pos(req)); part_round_stats(cpu, part); - part_dec_in_flight(part, rq_data_dir(req)); + part_dec_in_flight(part); part_stat_unlock(); } diff --git a/trunk/block/blk-settings.c b/trunk/block/blk-settings.c index 66d4aa8799b7..e0695bca7027 100644 --- a/trunk/block/blk-settings.c +++ b/trunk/block/blk-settings.c @@ -242,7 +242,7 @@ EXPORT_SYMBOL(blk_queue_max_hw_sectors); /** * blk_queue_max_discard_sectors - set max sectors for a single discard * @q: the request queue for the device - * @max_discard_sectors: maximum number of sectors to discard + * @max_discard: maximum number of sectors to discard **/ void blk_queue_max_discard_sectors(struct request_queue *q, unsigned int max_discard_sectors) diff --git a/trunk/block/blk-tag.c b/trunk/block/blk-tag.c index 6b0f52c20964..2e5cfeb59333 100644 --- a/trunk/block/blk-tag.c +++ b/trunk/block/blk-tag.c @@ -359,7 +359,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq) max_depth -= 2; if (!max_depth) max_depth = 1; - if (q->in_flight[BLK_RW_ASYNC] > max_depth) + if (q->in_flight[0] > max_depth) return 1; } diff --git a/trunk/block/cfq-iosched.c b/trunk/block/cfq-iosched.c index 069a61017c02..9c4b679908f4 100644 --- a/trunk/block/cfq-iosched.c +++ b/trunk/block/cfq-iosched.c @@ -150,7 +150,7 @@ struct cfq_data { * 
idle window management */ struct timer_list idle_slice_timer; - struct work_struct unplug_work; + struct delayed_work unplug_work; struct cfq_queue *active_queue; struct cfq_io_context *active_cic; @@ -230,7 +230,7 @@ CFQ_CFQQ_FNS(coop); blk_add_trace_msg((cfqd)->queue, "cfq " fmt, ##args) static void cfq_dispatch_insert(struct request_queue *, struct request *); -static struct cfq_queue *cfq_get_queue(struct cfq_data *, bool, +static struct cfq_queue *cfq_get_queue(struct cfq_data *, int, struct io_context *, gfp_t); static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *, struct io_context *); @@ -241,35 +241,40 @@ static inline int rq_in_driver(struct cfq_data *cfqd) } static inline struct cfq_queue *cic_to_cfqq(struct cfq_io_context *cic, - bool is_sync) + int is_sync) { - return cic->cfqq[is_sync]; + return cic->cfqq[!!is_sync]; } static inline void cic_set_cfqq(struct cfq_io_context *cic, - struct cfq_queue *cfqq, bool is_sync) + struct cfq_queue *cfqq, int is_sync) { - cic->cfqq[is_sync] = cfqq; + cic->cfqq[!!is_sync] = cfqq; } /* * We regard a request as SYNC, if it's either a read or has the SYNC bit * set (in which case it could also be direct WRITE). */ -static inline bool cfq_bio_sync(struct bio *bio) +static inline int cfq_bio_sync(struct bio *bio) { - return bio_data_dir(bio) == READ || bio_rw_flagged(bio, BIO_RW_SYNCIO); + if (bio_data_dir(bio) == READ || bio_rw_flagged(bio, BIO_RW_SYNCIO)) + return 1; + + return 0; } /* * scheduler run of queue, if there are requests pending and no one in the * driver that will restart queueing */ -static inline void cfq_schedule_dispatch(struct cfq_data *cfqd) +static inline void cfq_schedule_dispatch(struct cfq_data *cfqd, + unsigned long delay) { if (cfqd->busy_queues) { cfq_log(cfqd, "schedule dispatch"); - kblockd_schedule_work(cfqd->queue, &cfqd->unplug_work); + kblockd_schedule_delayed_work(cfqd->queue, &cfqd->unplug_work, + delay); } } @@ -285,7 +290,7 @@ static int cfq_queue_empty(struct request_queue *q) * if a queue is marked sync and has sync io queued. A sync queue with async * io only, should not get full sync slice length. */ -static inline int cfq_prio_slice(struct cfq_data *cfqd, bool sync, +static inline int cfq_prio_slice(struct cfq_data *cfqd, int sync, unsigned short prio) { const int base_slice = cfqd->cfq_slice[sync]; @@ -313,7 +318,7 @@ cfq_set_prio_slice(struct cfq_data *cfqd, struct cfq_queue *cfqq) * isn't valid until the first request from the dispatch is activated * and the slice time set. */ -static inline bool cfq_slice_used(struct cfq_queue *cfqq) +static inline int cfq_slice_used(struct cfq_queue *cfqq) { if (cfq_cfqq_slice_new(cfqq)) return 0; @@ -488,7 +493,7 @@ static unsigned long cfq_slice_offset(struct cfq_data *cfqd, * we will service the queues. */ static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq, - bool add_front) + int add_front) { struct rb_node **p, *parent; struct cfq_queue *__cfqq; @@ -504,20 +509,11 @@ static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq, } else rb_key += jiffies; } else if (!add_front) { - /* - * Get our rb key offset. Subtract any residual slice - * value carried from last service. A negative resid - * count indicates slice overrun, and this should position - * the next service time further away in the tree. 
- */ rb_key = cfq_slice_offset(cfqd, cfqq) + jiffies; - rb_key -= cfqq->slice_resid; + rb_key += cfqq->slice_resid; cfqq->slice_resid = 0; - } else { - rb_key = -HZ; - __cfqq = cfq_rb_first(&cfqd->service_tree); - rb_key += __cfqq ? __cfqq->rb_key : jiffies; - } + } else + rb_key = 0; if (!RB_EMPTY_NODE(&cfqq->rb_node)) { /* @@ -551,7 +547,7 @@ static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq, n = &(*p)->rb_left; else if (cfq_class_idle(cfqq) > cfq_class_idle(__cfqq)) n = &(*p)->rb_right; - else if (time_before(rb_key, __cfqq->rb_key)) + else if (rb_key < __cfqq->rb_key) n = &(*p)->rb_left; else n = &(*p)->rb_right; @@ -831,10 +827,8 @@ cfq_merged_requests(struct request_queue *q, struct request *rq, * reposition in fifo if next is older than rq */ if (!list_empty(&rq->queuelist) && !list_empty(&next->queuelist) && - time_before(rq_fifo_time(next), rq_fifo_time(rq))) { + time_before(next->start_time, rq->start_time)) list_move(&rq->queuelist, &next->queuelist); - rq_set_fifo_time(rq, rq_fifo_time(next)); - } cfq_remove_request(next); } @@ -850,7 +844,7 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq, * Disallow merge of a sync bio into an async request. */ if (cfq_bio_sync(bio) && !rq_is_sync(rq)) - return false; + return 0; /* * Lookup the cfqq that this bio will be queued with. Allow @@ -858,10 +852,13 @@ static int cfq_allow_merge(struct request_queue *q, struct request *rq, */ cic = cfq_cic_lookup(cfqd, current->io_context); if (!cic) - return false; + return 0; cfqq = cic_to_cfqq(cic, cfq_bio_sync(bio)); - return cfqq == RQ_CFQQ(rq); + if (cfqq == RQ_CFQQ(rq)) + return 1; + + return 0; } static void __cfq_set_active_queue(struct cfq_data *cfqd, @@ -889,7 +886,7 @@ static void __cfq_set_active_queue(struct cfq_data *cfqd, */ static void __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq, - bool timed_out) + int timed_out) { cfq_log_cfqq(cfqd, cfqq, "slice expired t=%d", timed_out); @@ -917,7 +914,7 @@ __cfq_slice_expired(struct cfq_data *cfqd, struct cfq_queue *cfqq, } } -static inline void cfq_slice_expired(struct cfq_data *cfqd, bool timed_out) +static inline void cfq_slice_expired(struct cfq_data *cfqd, int timed_out) { struct cfq_queue *cfqq = cfqd->active_queue; @@ -1029,7 +1026,7 @@ static struct cfq_queue *cfqq_close(struct cfq_data *cfqd, */ static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd, struct cfq_queue *cur_cfqq, - bool probe) + int probe) { struct cfq_queue *cfqq; @@ -1093,15 +1090,6 @@ static void cfq_arm_slice_timer(struct cfq_data *cfqd) if (!cic || !atomic_read(&cic->ioc->nr_tasks)) return; - /* - * If our average think time is larger than the remaining time - * slice, then don't idle. This avoids overrunning the allotted - * time slice. 
- */ - if (sample_valid(cic->ttime_samples) && - (cfqq->slice_end - jiffies < cic->ttime_mean)) - return; - cfq_mark_cfqq_wait_request(cfqq); /* @@ -1141,7 +1129,9 @@ static void cfq_dispatch_insert(struct request_queue *q, struct request *rq) */ static struct request *cfq_check_fifo(struct cfq_queue *cfqq) { - struct request *rq = NULL; + struct cfq_data *cfqd = cfqq->cfqd; + struct request *rq; + int fifo; if (cfq_cfqq_fifo_expire(cfqq)) return NULL; @@ -1151,11 +1141,13 @@ static struct request *cfq_check_fifo(struct cfq_queue *cfqq) if (list_empty(&cfqq->fifo)) return NULL; + fifo = cfq_cfqq_sync(cfqq); rq = rq_entry_fifo(cfqq->fifo.next); - if (time_before(jiffies, rq_fifo_time(rq))) + + if (time_before(jiffies, rq->start_time + cfqd->cfq_fifo_expire[fifo])) rq = NULL; - cfq_log_cfqq(cfqq->cfqd, cfqq, "fifo=%p", rq); + cfq_log_cfqq(cfqd, cfqq, "fifo=%p", rq); return rq; } @@ -1256,21 +1248,67 @@ static int cfq_forced_dispatch(struct cfq_data *cfqd) return dispatched; } -static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq) +/* + * Dispatch a request from cfqq, moving them to the request queue + * dispatch list. + */ +static void cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq) { + struct request *rq; + + BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list)); + + /* + * follow expired path, else get first next available + */ + rq = cfq_check_fifo(cfqq); + if (!rq) + rq = cfqq->next_rq; + + /* + * insert request into driver dispatch list + */ + cfq_dispatch_insert(cfqd->queue, rq); + + if (!cfqd->active_cic) { + struct cfq_io_context *cic = RQ_CIC(rq); + + atomic_long_inc(&cic->ioc->refcount); + cfqd->active_cic = cic; + } +} + +/* + * Find the cfqq that we need to service and move a request from that to the + * dispatch list + */ +static int cfq_dispatch_requests(struct request_queue *q, int force) +{ + struct cfq_data *cfqd = q->elevator->elevator_data; + struct cfq_queue *cfqq; unsigned int max_dispatch; + if (!cfqd->busy_queues) + return 0; + + if (unlikely(force)) + return cfq_forced_dispatch(cfqd); + + cfqq = cfq_select_queue(cfqd); + if (!cfqq) + return 0; + /* * Drain async requests before we start sync IO */ if (cfq_cfqq_idle_window(cfqq) && cfqd->rq_in_driver[BLK_RW_ASYNC]) - return false; + return 0; /* * If this is an async queue and we have sync IO in flight, let it wait */ if (cfqd->sync_flight && !cfq_cfqq_sync(cfqq)) - return false; + return 0; max_dispatch = cfqd->cfq_quantum; if (cfq_class_idle(cfqq)) @@ -1284,13 +1322,13 @@ static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq) * idle queue must always only have a single IO in flight */ if (cfq_class_idle(cfqq)) - return false; + return 0; /* * We have other queues, don't allow more IO from this one */ if (cfqd->busy_queues > 1) - return false; + return 0; /* * Sole queue user, allow bigger slice @@ -1314,72 +1352,13 @@ static bool cfq_may_dispatch(struct cfq_data *cfqd, struct cfq_queue *cfqq) max_dispatch = depth; } - /* - * If we're below the current max, allow a dispatch - */ - return cfqq->dispatched < max_dispatch; -} - -/* - * Dispatch a request from cfqq, moving them to the request queue - * dispatch list. 
- */ -static bool cfq_dispatch_request(struct cfq_data *cfqd, struct cfq_queue *cfqq) -{ - struct request *rq; - - BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list)); - - if (!cfq_may_dispatch(cfqd, cfqq)) - return false; - - /* - * follow expired path, else get first next available - */ - rq = cfq_check_fifo(cfqq); - if (!rq) - rq = cfqq->next_rq; - - /* - * insert request into driver dispatch list - */ - cfq_dispatch_insert(cfqd->queue, rq); - - if (!cfqd->active_cic) { - struct cfq_io_context *cic = RQ_CIC(rq); - - atomic_long_inc(&cic->ioc->refcount); - cfqd->active_cic = cic; - } - - return true; -} - -/* - * Find the cfqq that we need to service and move a request from that to the - * dispatch list - */ -static int cfq_dispatch_requests(struct request_queue *q, int force) -{ - struct cfq_data *cfqd = q->elevator->elevator_data; - struct cfq_queue *cfqq; - - if (!cfqd->busy_queues) - return 0; - - if (unlikely(force)) - return cfq_forced_dispatch(cfqd); - - cfqq = cfq_select_queue(cfqd); - if (!cfqq) + if (cfqq->dispatched >= max_dispatch) return 0; /* - * Dispatch a request from this cfqq, if it is allowed + * Dispatch a request from this cfqq */ - if (!cfq_dispatch_request(cfqd, cfqq)) - return 0; - + cfq_dispatch_request(cfqd, cfqq); cfqq->slice_dispatch++; cfq_clear_cfqq_must_dispatch(cfqq); @@ -1420,7 +1399,7 @@ static void cfq_put_queue(struct cfq_queue *cfqq) if (unlikely(cfqd->active_queue == cfqq)) { __cfq_slice_expired(cfqd, cfqq, 0); - cfq_schedule_dispatch(cfqd); + cfq_schedule_dispatch(cfqd, 0); } kmem_cache_free(cfq_pool, cfqq); @@ -1515,7 +1494,7 @@ static void cfq_exit_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq) { if (unlikely(cfqq == cfqd->active_queue)) { __cfq_slice_expired(cfqd, cfqq, 0); - cfq_schedule_dispatch(cfqd); + cfq_schedule_dispatch(cfqd, 0); } cfq_put_queue(cfqq); @@ -1679,7 +1658,7 @@ static void cfq_ioc_set_ioprio(struct io_context *ioc) } static void cfq_init_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq, - pid_t pid, bool is_sync) + pid_t pid, int is_sync) { RB_CLEAR_NODE(&cfqq->rb_node); RB_CLEAR_NODE(&cfqq->p_node); @@ -1699,7 +1678,7 @@ static void cfq_init_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq, } static struct cfq_queue * -cfq_find_alloc_queue(struct cfq_data *cfqd, bool is_sync, +cfq_find_alloc_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc, gfp_t gfp_mask) { struct cfq_queue *cfqq, *new_cfqq = NULL; @@ -1763,7 +1742,7 @@ cfq_async_queue_prio(struct cfq_data *cfqd, int ioprio_class, int ioprio) } static struct cfq_queue * -cfq_get_queue(struct cfq_data *cfqd, bool is_sync, struct io_context *ioc, +cfq_get_queue(struct cfq_data *cfqd, int is_sync, struct io_context *ioc, gfp_t gfp_mask) { const int ioprio = task_ioprio(ioc); @@ -1998,10 +1977,7 @@ cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq, (!cfqd->cfq_latency && cfqd->hw_tag && CIC_SEEKY(cic))) enable_idle = 0; else if (sample_valid(cic->ttime_samples)) { - unsigned int slice_idle = cfqd->cfq_slice_idle; - if (sample_valid(cic->seek_samples) && CIC_SEEKY(cic)) - slice_idle = msecs_to_jiffies(CFQ_MIN_TT); - if (cic->ttime_mean > slice_idle) + if (cic->ttime_mean > cfqd->cfq_slice_idle) enable_idle = 0; else enable_idle = 1; @@ -2020,7 +1996,7 @@ cfq_update_idle_window(struct cfq_data *cfqd, struct cfq_queue *cfqq, * Check if new_cfqq should preempt the currently active queue. Return 0 for * no or if we aren't sure, a 1 will cause a preempt. 
*/ -static bool +static int cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq, struct request *rq) { @@ -2028,48 +2004,48 @@ cfq_should_preempt(struct cfq_data *cfqd, struct cfq_queue *new_cfqq, cfqq = cfqd->active_queue; if (!cfqq) - return false; + return 0; if (cfq_slice_used(cfqq)) - return true; + return 1; if (cfq_class_idle(new_cfqq)) - return false; + return 0; if (cfq_class_idle(cfqq)) - return true; + return 1; /* * if the new request is sync, but the currently running queue is * not, let the sync request have priority. */ if (rq_is_sync(rq) && !cfq_cfqq_sync(cfqq)) - return true; + return 1; /* * So both queues are sync. Let the new request get disk time if * it's a metadata request and the current queue is doing regular IO. */ if (rq_is_meta(rq) && !cfqq->meta_pending) - return false; + return 1; /* * Allow an RT request to pre-empt an ongoing non-RT cfqq timeslice. */ if (cfq_class_rt(new_cfqq) && !cfq_class_rt(cfqq)) - return true; + return 1; if (!cfqd->active_cic || !cfq_cfqq_wait_request(cfqq)) - return false; + return 0; /* * if this request is as-good as one we would expect from the * current cfqq, let it preempt */ if (cfq_rq_close(cfqd, rq)) - return true; + return 1; - return false; + return 0; } /* @@ -2154,7 +2130,6 @@ static void cfq_insert_request(struct request_queue *q, struct request *rq) cfq_add_rq_rb(rq); - rq_set_fifo_time(rq, jiffies + cfqd->cfq_fifo_expire[rq_is_sync(rq)]); list_add_tail(&rq->queuelist, &cfqq->fifo); cfq_rq_enqueued(cfqd, cfqq, rq); @@ -2236,7 +2211,7 @@ static void cfq_completed_request(struct request_queue *q, struct request *rq) } if (!rq_in_driver(cfqd)) - cfq_schedule_dispatch(cfqd); + cfq_schedule_dispatch(cfqd, 0); } /* @@ -2334,7 +2309,7 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask) struct cfq_data *cfqd = q->elevator->elevator_data; struct cfq_io_context *cic; const int rw = rq_data_dir(rq); - const bool is_sync = rq_is_sync(rq); + const int is_sync = rq_is_sync(rq); struct cfq_queue *cfqq; unsigned long flags; @@ -2366,7 +2341,7 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask) if (cic) put_io_context(cic->ioc); - cfq_schedule_dispatch(cfqd); + cfq_schedule_dispatch(cfqd, 0); spin_unlock_irqrestore(q->queue_lock, flags); cfq_log(cfqd, "set_request fail"); return 1; @@ -2375,7 +2350,7 @@ cfq_set_request(struct request_queue *q, struct request *rq, gfp_t gfp_mask) static void cfq_kick_queue(struct work_struct *work) { struct cfq_data *cfqd = - container_of(work, struct cfq_data, unplug_work); + container_of(work, struct cfq_data, unplug_work.work); struct request_queue *q = cfqd->queue; spin_lock_irq(q->queue_lock); @@ -2429,7 +2404,7 @@ static void cfq_idle_slice_timer(unsigned long data) expire: cfq_slice_expired(cfqd, timed_out); out_kick: - cfq_schedule_dispatch(cfqd); + cfq_schedule_dispatch(cfqd, 0); out_cont: spin_unlock_irqrestore(cfqd->queue->queue_lock, flags); } @@ -2437,7 +2412,7 @@ static void cfq_idle_slice_timer(unsigned long data) static void cfq_shutdown_timer_wq(struct cfq_data *cfqd) { del_timer_sync(&cfqd->idle_slice_timer); - cancel_work_sync(&cfqd->unplug_work); + cancel_delayed_work_sync(&cfqd->unplug_work); } static void cfq_put_async_queues(struct cfq_data *cfqd) @@ -2519,7 +2494,7 @@ static void *cfq_init_queue(struct request_queue *q) cfqd->idle_slice_timer.function = cfq_idle_slice_timer; cfqd->idle_slice_timer.data = (unsigned long) cfqd; - INIT_WORK(&cfqd->unplug_work, cfq_kick_queue); + 
INIT_DELAYED_WORK(&cfqd->unplug_work, cfq_kick_queue); cfqd->cfq_quantum = cfq_quantum; cfqd->cfq_fifo_expire[0] = cfq_fifo_expire[0]; diff --git a/trunk/block/elevator.c b/trunk/block/elevator.c index a847046c6e53..1975b619c86d 100644 --- a/trunk/block/elevator.c +++ b/trunk/block/elevator.c @@ -1059,7 +1059,9 @@ ssize_t elv_iosched_store(struct request_queue *q, const char *name, return count; strlcpy(elevator_name, name, sizeof(elevator_name)); - e = elevator_get(strstrip(elevator_name)); + strstrip(elevator_name); + + e = elevator_get(elevator_name); if (!e) { printk(KERN_ERR "elevator: type %s not found\n", elevator_name); return -EINVAL; diff --git a/trunk/block/genhd.c b/trunk/block/genhd.c index 517e4332cb37..5a0861da324d 100644 --- a/trunk/block/genhd.c +++ b/trunk/block/genhd.c @@ -869,7 +869,6 @@ static DEVICE_ATTR(size, S_IRUGO, part_size_show, NULL); static DEVICE_ATTR(alignment_offset, S_IRUGO, disk_alignment_offset_show, NULL); static DEVICE_ATTR(capability, S_IRUGO, disk_capability_show, NULL); static DEVICE_ATTR(stat, S_IRUGO, part_stat_show, NULL); -static DEVICE_ATTR(inflight, S_IRUGO, part_inflight_show, NULL); #ifdef CONFIG_FAIL_MAKE_REQUEST static struct device_attribute dev_attr_fail = __ATTR(make-it-fail, S_IRUGO|S_IWUSR, part_fail_show, part_fail_store); @@ -889,7 +888,6 @@ static struct attribute *disk_attrs[] = { &dev_attr_alignment_offset.attr, &dev_attr_capability.attr, &dev_attr_stat.attr, - &dev_attr_inflight.attr, #ifdef CONFIG_FAIL_MAKE_REQUEST &dev_attr_fail.attr, #endif @@ -1055,7 +1053,7 @@ static int diskstats_show(struct seq_file *seqf, void *v) part_stat_read(hd, merges[1]), (unsigned long long)part_stat_read(hd, sectors[1]), jiffies_to_msecs(part_stat_read(hd, ticks[1])), - part_in_flight(hd), + hd->in_flight, jiffies_to_msecs(part_stat_read(hd, io_ticks)), jiffies_to_msecs(part_stat_read(hd, time_in_queue)) ); diff --git a/trunk/drivers/block/cciss.c b/trunk/drivers/block/cciss.c index 6399e5090df4..fb5be2d95d52 100644 --- a/trunk/drivers/block/cciss.c +++ b/trunk/drivers/block/cciss.c @@ -68,12 +68,6 @@ MODULE_SUPPORTED_DEVICE("HP SA5i SA5i+ SA532 SA5300 SA5312 SA641 SA642 SA6400" MODULE_VERSION("3.6.20"); MODULE_LICENSE("GPL"); -static int cciss_allow_hpsa; -module_param(cciss_allow_hpsa, int, S_IRUGO|S_IWUSR); -MODULE_PARM_DESC(cciss_allow_hpsa, - "Prevent cciss driver from accessing hardware known to be " - " supported by the hpsa driver"); - #include "cciss_cmd.h" #include "cciss.h" #include @@ -107,6 +101,8 @@ static const struct pci_device_id cciss_pci_device_id[] = { {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x3249}, {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324A}, {PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSE, 0x103C, 0x324B}, + {PCI_VENDOR_ID_HP, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, + PCI_CLASS_STORAGE_RAID << 8, 0xffff << 8, 0}, {0,} }; @@ -127,6 +123,8 @@ static struct board_type products[] = { {0x409D0E11, "Smart Array 6400 EM", &SA5_access}, {0x40910E11, "Smart Array 6i", &SA5_access}, {0x3225103C, "Smart Array P600", &SA5_access}, + {0x3223103C, "Smart Array P800", &SA5_access}, + {0x3234103C, "Smart Array P400", &SA5_access}, {0x3235103C, "Smart Array P400i", &SA5_access}, {0x3211103C, "Smart Array E200i", &SA5_access}, {0x3212103C, "Smart Array E200", &SA5_access}, @@ -134,10 +132,6 @@ static struct board_type products[] = { {0x3214103C, "Smart Array E200i", &SA5_access}, {0x3215103C, "Smart Array E200i", &SA5_access}, {0x3237103C, "Smart Array E500", &SA5_access}, -/* controllers below this line are also supported by 
the hpsa driver. */ -#define HPSA_BOUNDARY 0x3223103C - {0x3223103C, "Smart Array P800", &SA5_access}, - {0x3234103C, "Smart Array P400", &SA5_access}, {0x323D103C, "Smart Array P700m", &SA5_access}, {0x3241103C, "Smart Array P212", &SA5_access}, {0x3243103C, "Smart Array P410", &SA5_access}, @@ -146,6 +140,7 @@ static struct board_type products[] = { {0x3249103C, "Smart Array P812", &SA5_access}, {0x324A103C, "Smart Array P712m", &SA5_access}, {0x324B103C, "Smart Array P711m", &SA5_access}, + {0xFFFF103C, "Unknown Smart Array", &SA5_access}, }; /* How long to wait (in milliseconds) for board to go into simple mode */ @@ -3759,27 +3754,7 @@ static int __devinit cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev) __u64 cfg_offset; __u32 cfg_base_addr; __u64 cfg_base_addr_index; - int i, prod_index, err; - - subsystem_vendor_id = pdev->subsystem_vendor; - subsystem_device_id = pdev->subsystem_device; - board_id = (((__u32) (subsystem_device_id << 16) & 0xffff0000) | - subsystem_vendor_id); - - for (i = 0; i < ARRAY_SIZE(products); i++) { - /* Stand aside for hpsa driver on request */ - if (cciss_allow_hpsa && products[i].board_id == HPSA_BOUNDARY) - return -ENODEV; - if (board_id == products[i].board_id) - break; - } - prod_index = i; - if (prod_index == ARRAY_SIZE(products)) { - dev_warn(&pdev->dev, - "unrecognized board ID: 0x%08lx, ignoring.\n", - (unsigned long) board_id); - return -ENODEV; - } + int i, err; /* check to see if controller has been disabled */ /* BEFORE trying to enable it */ @@ -3803,6 +3778,11 @@ static int __devinit cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev) return err; } + subsystem_vendor_id = pdev->subsystem_vendor; + subsystem_device_id = pdev->subsystem_device; + board_id = (((__u32) (subsystem_device_id << 16) & 0xffff0000) | + subsystem_vendor_id); + #ifdef CCISS_DEBUG printk("command = %x\n", command); printk("irq = %x\n", pdev->irq); @@ -3888,9 +3868,14 @@ static int __devinit cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev) * leave a little room for ioctl calls. */ c->max_commands = readl(&(c->cfgtable->CmdsOutMax)); - c->product_name = products[prod_index].product_name; - c->access = *(products[prod_index].access); - c->nr_cmds = c->max_commands - 4; + for (i = 0; i < ARRAY_SIZE(products); i++) { + if (board_id == products[i].board_id) { + c->product_name = products[i].product_name; + c->access = *(products[i].access); + c->nr_cmds = c->max_commands - 4; + break; + } + } if ((readb(&c->cfgtable->Signature[0]) != 'C') || (readb(&c->cfgtable->Signature[1]) != 'I') || (readb(&c->cfgtable->Signature[2]) != 'S') || @@ -3899,6 +3884,27 @@ static int __devinit cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev) err = -ENODEV; goto err_out_free_res; } + /* We didn't find the controller in our list. We know the + * signature is valid. If it's an HP device let's try to + * bind to the device and fire it up. Otherwise we bail. 
+ */ + if (i == ARRAY_SIZE(products)) { + if (subsystem_vendor_id == PCI_VENDOR_ID_HP) { + c->product_name = products[i-1].product_name; + c->access = *(products[i-1].access); + c->nr_cmds = c->max_commands - 4; + printk(KERN_WARNING "cciss: This is an unknown " + "Smart Array controller.\n" + "cciss: Please update to the latest driver " + "available from www.hp.com.\n"); + } else { + printk(KERN_WARNING "cciss: Sorry, I don't know how" + " to access the Smart Array controller %08lx\n" + , (unsigned long)board_id); + err = -ENODEV; + goto err_out_free_res; + } + } #ifdef CONFIG_X86 { /* Need to enable prefetch in the SCSI core for 6400 in x86 */ @@ -4248,7 +4254,7 @@ static int __devinit cciss_init_one(struct pci_dev *pdev, mutex_init(&hba[i]->busy_shutting_down); if (cciss_pci_init(hba[i], pdev) != 0) - goto clean_no_release_regions; + goto clean0; sprintf(hba[i]->devname, "cciss%d", i); hba[i]->ctlr = i; @@ -4385,14 +4391,13 @@ static int __devinit cciss_init_one(struct pci_dev *pdev, clean1: cciss_destroy_hba_sysfs_entry(hba[i]); clean0: - pci_release_regions(pdev); -clean_no_release_regions: hba[i]->busy_initializing = 0; /* * Deliberately omit pci_disable_device(): it does something nasty to * Smart Array controllers that pci_enable_device does not undo */ + pci_release_regions(pdev); pci_set_drvdata(pdev, NULL); free_hba(i); return -1; diff --git a/trunk/drivers/char/genrtc.c b/trunk/drivers/char/genrtc.c index 31e7c91c2d9d..aac0985a572b 100644 --- a/trunk/drivers/char/genrtc.c +++ b/trunk/drivers/char/genrtc.c @@ -43,7 +43,6 @@ #define RTC_VERSION "1.07" #include -#include #include #include #include diff --git a/trunk/drivers/char/rtc.c b/trunk/drivers/char/rtc.c index bc4ab3e54550..e0d0f8b2696b 100644 --- a/trunk/drivers/char/rtc.c +++ b/trunk/drivers/char/rtc.c @@ -74,7 +74,6 @@ #include #include #include -#include #include #include #include diff --git a/trunk/drivers/char/sonypi.c b/trunk/drivers/char/sonypi.c index 8c262aaf7c26..fd3dced97776 100644 --- a/trunk/drivers/char/sonypi.c +++ b/trunk/drivers/char/sonypi.c @@ -36,7 +36,6 @@ */ #include -#include #include #include #include diff --git a/trunk/drivers/hid/hid-core.c b/trunk/drivers/hid/hid-core.c index 7d05c4bb201e..be34d32906bd 100644 --- a/trunk/drivers/hid/hid-core.c +++ b/trunk/drivers/hid/hid-core.c @@ -1066,7 +1066,7 @@ EXPORT_SYMBOL_GPL(hid_report_raw_event); * @type: HID report type (HID_*_REPORT) * @data: report contents * @size: size of data parameter - * @interrupt: distinguish between interrupt and control transfers + * @interrupt: called from atomic? * * This is data entry for lower layers. 
*/ diff --git a/trunk/drivers/hid/hid-twinhan.c b/trunk/drivers/hid/hid-twinhan.c index c40afc57fc8f..b05f602c051e 100644 --- a/trunk/drivers/hid/hid-twinhan.c +++ b/trunk/drivers/hid/hid-twinhan.c @@ -132,12 +132,12 @@ static struct hid_driver twinhan_driver = { .input_mapping = twinhan_input_mapping, }; -static int __init twinhan_init(void) +static int twinhan_init(void) { return hid_register_driver(&twinhan_driver); } -static void __exit twinhan_exit(void) +static void twinhan_exit(void) { hid_unregister_driver(&twinhan_driver); } diff --git a/trunk/drivers/hid/hidraw.c b/trunk/drivers/hid/hidraw.c index cdd136942bca..ba05275e5104 100644 --- a/trunk/drivers/hid/hidraw.c +++ b/trunk/drivers/hid/hidraw.c @@ -48,9 +48,10 @@ static ssize_t hidraw_read(struct file *file, char __user *buffer, size_t count, char *report; DECLARE_WAITQUEUE(wait, current); - mutex_lock(&list->read_mutex); - while (ret == 0) { + + mutex_lock(&list->read_mutex); + if (list->head == list->tail) { add_wait_queue(&list->hidraw->wait, &wait); set_current_state(TASK_INTERRUPTIBLE); diff --git a/trunk/drivers/md/dm.c b/trunk/drivers/md/dm.c index 376f1ab48a24..23e76fe0d359 100644 --- a/trunk/drivers/md/dm.c +++ b/trunk/drivers/md/dm.c @@ -130,7 +130,7 @@ struct mapped_device { /* * A list of ios that arrived while we were suspended. */ - atomic_t pending[2]; + atomic_t pending; wait_queue_head_t wait; struct work_struct work; struct bio_list deferred; @@ -453,14 +453,13 @@ static void start_io_acct(struct dm_io *io) { struct mapped_device *md = io->md; int cpu; - int rw = bio_data_dir(io->bio); io->start_time = jiffies; cpu = part_stat_lock(); part_round_stats(cpu, &dm_disk(md)->part0); part_stat_unlock(); - dm_disk(md)->part0.in_flight[rw] = atomic_inc_return(&md->pending[rw]); + dm_disk(md)->part0.in_flight = atomic_inc_return(&md->pending); } static void end_io_acct(struct dm_io *io) @@ -480,9 +479,8 @@ static void end_io_acct(struct dm_io *io) * After this is decremented the bio must not be touched if it is * a barrier. 
*/ - dm_disk(md)->part0.in_flight[rw] = pending = - atomic_dec_return(&md->pending[rw]); - pending += atomic_read(&md->pending[rw^0x1]); + dm_disk(md)->part0.in_flight = pending = + atomic_dec_return(&md->pending); /* nudge anyone waiting on suspend queue */ if (!pending) @@ -1787,8 +1785,7 @@ static struct mapped_device *alloc_dev(int minor) if (!md->disk) goto bad_disk; - atomic_set(&md->pending[0], 0); - atomic_set(&md->pending[1], 0); + atomic_set(&md->pending, 0); init_waitqueue_head(&md->wait); INIT_WORK(&md->work, dm_wq_work); init_waitqueue_head(&md->eventq); @@ -2091,8 +2088,7 @@ static int dm_wait_for_completion(struct mapped_device *md, int interruptible) break; } spin_unlock_irqrestore(q->queue_lock, flags); - } else if (!atomic_read(&md->pending[0]) && - !atomic_read(&md->pending[1])) + } else if (!atomic_read(&md->pending)) break; if (interruptible == TASK_INTERRUPTIBLE && diff --git a/trunk/drivers/mfd/twl4030-core.c b/trunk/drivers/mfd/twl4030-core.c index e832e975da60..e424cf6d8e9e 100644 --- a/trunk/drivers/mfd/twl4030-core.c +++ b/trunk/drivers/mfd/twl4030-core.c @@ -480,6 +480,7 @@ static int add_children(struct twl4030_platform_data *pdata, unsigned long features) { struct device *child; + struct device *usb_transceiver = NULL; if (twl_has_bci() && pdata->bci && !(features & TPS_SUBSET)) { child = add_child(3, "twl4030_bci", @@ -531,61 +532,16 @@ add_children(struct twl4030_platform_data *pdata, unsigned long features) } if (twl_has_usb() && pdata->usb) { - - static struct regulator_consumer_supply usb1v5 = { - .supply = "usb1v5", - }; - static struct regulator_consumer_supply usb1v8 = { - .supply = "usb1v8", - }; - static struct regulator_consumer_supply usb3v1 = { - .supply = "usb3v1", - }; - - /* First add the regulators so that they can be used by transceiver */ - if (twl_has_regulator()) { - /* this is a template that gets copied */ - struct regulator_init_data usb_fixed = { - .constraints.valid_modes_mask = - REGULATOR_MODE_NORMAL - | REGULATOR_MODE_STANDBY, - .constraints.valid_ops_mask = - REGULATOR_CHANGE_MODE - | REGULATOR_CHANGE_STATUS, - }; - - child = add_regulator_linked(TWL4030_REG_VUSB1V5, - &usb_fixed, &usb1v5, 1); - if (IS_ERR(child)) - return PTR_ERR(child); - - child = add_regulator_linked(TWL4030_REG_VUSB1V8, - &usb_fixed, &usb1v8, 1); - if (IS_ERR(child)) - return PTR_ERR(child); - - child = add_regulator_linked(TWL4030_REG_VUSB3V1, - &usb_fixed, &usb3v1, 1); - if (IS_ERR(child)) - return PTR_ERR(child); - - } - child = add_child(0, "twl4030_usb", pdata->usb, sizeof(*pdata->usb), true, /* irq0 = USB_PRES, irq1 = USB */ pdata->irq_base + 8 + 2, pdata->irq_base + 4); - if (IS_ERR(child)) return PTR_ERR(child); /* we need to connect regulators to this transceiver */ - if (twl_has_regulator() && child) { - usb1v5.dev = child; - usb1v8.dev = child; - usb3v1.dev = child; - } + usb_transceiver = child; } if (twl_has_watchdog()) { @@ -624,6 +580,47 @@ add_children(struct twl4030_platform_data *pdata, unsigned long features) return PTR_ERR(child); } + if (twl_has_regulator() && usb_transceiver) { + static struct regulator_consumer_supply usb1v5 = { + .supply = "usb1v5", + }; + static struct regulator_consumer_supply usb1v8 = { + .supply = "usb1v8", + }; + static struct regulator_consumer_supply usb3v1 = { + .supply = "usb3v1", + }; + + /* this is a template that gets copied */ + struct regulator_init_data usb_fixed = { + .constraints.valid_modes_mask = + REGULATOR_MODE_NORMAL + | REGULATOR_MODE_STANDBY, + .constraints.valid_ops_mask = + 
REGULATOR_CHANGE_MODE + | REGULATOR_CHANGE_STATUS, + }; + + usb1v5.dev = usb_transceiver; + usb1v8.dev = usb_transceiver; + usb3v1.dev = usb_transceiver; + + child = add_regulator_linked(TWL4030_REG_VUSB1V5, &usb_fixed, + &usb1v5, 1); + if (IS_ERR(child)) + return PTR_ERR(child); + + child = add_regulator_linked(TWL4030_REG_VUSB1V8, &usb_fixed, + &usb1v8, 1); + if (IS_ERR(child)) + return PTR_ERR(child); + + child = add_regulator_linked(TWL4030_REG_VUSB3V1, &usb_fixed, + &usb3v1, 1); + if (IS_ERR(child)) + return PTR_ERR(child); + } + /* maybe add LDOs that are omitted on cost-reduced parts */ if (twl_has_regulator() && !(features & TPS_SUBSET)) { child = add_regulator(TWL4030_REG_VPLL2, pdata->vpll2); diff --git a/trunk/drivers/net/wan/c101.c b/trunk/drivers/net/wan/c101.c index 0bd898c94759..9693b0fd323d 100644 --- a/trunk/drivers/net/wan/c101.c +++ b/trunk/drivers/net/wan/c101.c @@ -16,7 +16,6 @@ #include #include -#include #include #include #include diff --git a/trunk/drivers/net/wan/n2.c b/trunk/drivers/net/wan/n2.c index 58c66819f39b..83da596e2052 100644 --- a/trunk/drivers/net/wan/n2.c +++ b/trunk/drivers/net/wan/n2.c @@ -18,7 +18,6 @@ #include #include -#include #include #include #include diff --git a/trunk/drivers/net/wan/pci200syn.c b/trunk/drivers/net/wan/pci200syn.c index f1340faaf022..a52f29c72c33 100644 --- a/trunk/drivers/net/wan/pci200syn.c +++ b/trunk/drivers/net/wan/pci200syn.c @@ -16,7 +16,6 @@ #include #include -#include #include #include #include diff --git a/trunk/drivers/oprofile/event_buffer.c b/trunk/drivers/oprofile/event_buffer.c index 5df60a6b6776..2b7ae366ceb1 100644 --- a/trunk/drivers/oprofile/event_buffer.c +++ b/trunk/drivers/oprofile/event_buffer.c @@ -35,23 +35,12 @@ static size_t buffer_pos; /* atomic_t because wait_event checks it outside of buffer_mutex */ static atomic_t buffer_ready = ATOMIC_INIT(0); -/* - * Add an entry to the event buffer. When we get near to the end we - * wake up the process sleeping on the read() of the file. To protect - * the event_buffer this function may only be called when buffer_mutex - * is set. +/* Add an entry to the event buffer. When we + * get near to the end we wake up the process + * sleeping on the read() of the file. */ void add_event_entry(unsigned long value) { - /* - * This shouldn't happen since all workqueues or handlers are - * canceled or flushed before the event buffer is freed. - */ - if (!event_buffer) { - WARN_ON_ONCE(1); - return; - } - if (buffer_pos == buffer_size) { atomic_inc(&oprofile_stats.event_lost_overflow); return; @@ -80,6 +69,7 @@ void wake_up_buffer_waiter(void) int alloc_event_buffer(void) { + int err = -ENOMEM; unsigned long flags; spin_lock_irqsave(&oprofilefs_lock, flags); @@ -90,22 +80,21 @@ int alloc_event_buffer(void) if (buffer_watershed >= buffer_size) return -EINVAL; - buffer_pos = 0; event_buffer = vmalloc(sizeof(unsigned long) * buffer_size); if (!event_buffer) - return -ENOMEM; + goto out; - return 0; + err = 0; +out: + return err; } void free_event_buffer(void) { - mutex_lock(&buffer_mutex); vfree(event_buffer); - buffer_pos = 0; + event_buffer = NULL; - mutex_unlock(&buffer_mutex); } @@ -178,12 +167,6 @@ static ssize_t event_buffer_read(struct file *file, char __user *buf, mutex_lock(&buffer_mutex); - /* May happen if the buffer is freed during pending reads. 
*/ - if (!event_buffer) { - retval = -EINTR; - goto out; - } - atomic_set(&buffer_ready, 0); retval = -EFAULT; diff --git a/trunk/drivers/pci/dmar.c b/trunk/drivers/pci/dmar.c index 22b02c6df854..14bbaa17e2ca 100644 --- a/trunk/drivers/pci/dmar.c +++ b/trunk/drivers/pci/dmar.c @@ -354,7 +354,6 @@ dmar_table_print_dmar_entry(struct acpi_dmar_header *header) struct acpi_dmar_hardware_unit *drhd; struct acpi_dmar_reserved_memory *rmrr; struct acpi_dmar_atsr *atsr; - struct acpi_dmar_rhsa *rhsa; switch (header->type) { case ACPI_DMAR_TYPE_HARDWARE_UNIT: @@ -376,12 +375,6 @@ dmar_table_print_dmar_entry(struct acpi_dmar_header *header) atsr = container_of(header, struct acpi_dmar_atsr, header); printk(KERN_INFO PREFIX "ATSR flags: %#x\n", atsr->flags); break; - case ACPI_DMAR_HARDWARE_AFFINITY: - rhsa = container_of(header, struct acpi_dmar_rhsa, header); - printk(KERN_INFO PREFIX "RHSA base: %#016Lx proximity domain: %#x\n", - (unsigned long long)rhsa->base_address, - rhsa->proximity_domain); - break; } } @@ -466,13 +459,9 @@ parse_dmar_table(void) ret = dmar_parse_one_atsr(entry_header); #endif break; - case ACPI_DMAR_HARDWARE_AFFINITY: - /* We don't do anything with RHSA (yet?) */ - break; default: printk(KERN_WARNING PREFIX - "Unknown DMAR structure type %d\n", - entry_header->type); + "Unknown DMAR structure type\n"); ret = 0; /* for forward compatibility */ break; } diff --git a/trunk/drivers/pci/hotplug/cpqphp.h b/trunk/drivers/pci/hotplug/cpqphp.h index 9c6a9fd26812..53836001d511 100644 --- a/trunk/drivers/pci/hotplug/cpqphp.h +++ b/trunk/drivers/pci/hotplug/cpqphp.h @@ -32,7 +32,6 @@ #include /* for read? and write? functions */ #include /* for delays */ #include -#include /* for signal_pending() */ #define MY_NAME "cpqphp" diff --git a/trunk/drivers/pci/intel-iommu.c b/trunk/drivers/pci/intel-iommu.c index b1e97e682500..855dd7ca47f3 100644 --- a/trunk/drivers/pci/intel-iommu.c +++ b/trunk/drivers/pci/intel-iommu.c @@ -48,7 +48,6 @@ #define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY) #define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA) -#define IS_AZALIA(pdev) ((pdev)->vendor == 0x8086 && (pdev)->device == 0x3a3e) #define IOAPIC_RANGE_START (0xfee00000) #define IOAPIC_RANGE_END (0xfeefffff) @@ -95,7 +94,6 @@ static inline unsigned long virt_to_dma_pfn(void *p) /* global iommu list, set NULL for ignored DMAR units */ static struct intel_iommu **g_iommus; -static void __init check_tylersburg_isoch(void); static int rwbf_quirk; /* @@ -1936,9 +1934,6 @@ static struct dmar_domain *get_domain_for_dev(struct pci_dev *pdev, int gaw) } static int iommu_identity_mapping; -#define IDENTMAP_ALL 1 -#define IDENTMAP_GFX 2 -#define IDENTMAP_AZALIA 4 static int iommu_domain_identity_map(struct dmar_domain *domain, unsigned long long start, @@ -2156,14 +2151,8 @@ static int domain_add_dev_info(struct dmar_domain *domain, static int iommu_should_identity_map(struct pci_dev *pdev, int startup) { - if ((iommu_identity_mapping & IDENTMAP_AZALIA) && IS_AZALIA(pdev)) - return 1; - - if ((iommu_identity_mapping & IDENTMAP_GFX) && IS_GFX_DEVICE(pdev)) - return 1; - - if (!(iommu_identity_mapping & IDENTMAP_ALL)) - return 0; + if (iommu_identity_mapping == 2) + return IS_GFX_DEVICE(pdev); /* * We want to start off with all devices in the 1:1 domain, and @@ -2343,14 +2332,11 @@ int __init init_dmars(void) } if (iommu_pass_through) - iommu_identity_mapping |= IDENTMAP_ALL; - + iommu_identity_mapping = 1; #ifdef CONFIG_DMAR_BROKEN_GFX_WA - iommu_identity_mapping |= 
IDENTMAP_GFX; + else + iommu_identity_mapping = 2; #endif - - check_tylersburg_isoch(); - /* * If pass through is not set or not enabled, setup context entries for * identity mappings for rmrr, gfx, and isa and may fall back to static @@ -3684,61 +3670,3 @@ static void __devinit quirk_iommu_rwbf(struct pci_dev *dev) } DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf); - -/* On Tylersburg chipsets, some BIOSes have been known to enable the - ISOCH DMAR unit for the Azalia sound device, but not give it any - TLB entries, which causes it to deadlock. Check for that. We do - this in a function called from init_dmars(), instead of in a PCI - quirk, because we don't want to print the obnoxious "BIOS broken" - message if VT-d is actually disabled. -*/ -static void __init check_tylersburg_isoch(void) -{ - struct pci_dev *pdev; - uint32_t vtisochctrl; - - /* If there's no Azalia in the system anyway, forget it. */ - pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x3a3e, NULL); - if (!pdev) - return; - pci_dev_put(pdev); - - /* System Management Registers. Might be hidden, in which case - we can't do the sanity check. But that's OK, because the - known-broken BIOSes _don't_ actually hide it, so far. */ - pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x342e, NULL); - if (!pdev) - return; - - if (pci_read_config_dword(pdev, 0x188, &vtisochctrl)) { - pci_dev_put(pdev); - return; - } - - pci_dev_put(pdev); - - /* If Azalia DMA is routed to the non-isoch DMAR unit, fine. */ - if (vtisochctrl & 1) - return; - - /* Drop all bits other than the number of TLB entries */ - vtisochctrl &= 0x1c; - - /* If we have the recommended number of TLB entries (16), fine. */ - if (vtisochctrl == 0x10) - return; - - /* Zero TLB entries? You get to ride the short bus to school. 
*/ - if (!vtisochctrl) { - WARN(1, "Your BIOS is broken; DMA routed to ISOCH DMAR unit but no TLB space.\n" - "BIOS vendor: %s; Ver: %s; Product Version: %s\n", - dmi_get_system_info(DMI_BIOS_VENDOR), - dmi_get_system_info(DMI_BIOS_VERSION), - dmi_get_system_info(DMI_PRODUCT_VERSION)); - iommu_identity_mapping |= IDENTMAP_AZALIA; - return; - } - - printk(KERN_WARNING "DMAR: Recommended TLB entries for ISOCH unit is 16; your BIOS set %d\n", - vtisochctrl); -} diff --git a/trunk/drivers/pci/pci.c b/trunk/drivers/pci/pci.c index 4e4c295a049f..3835871f4832 100644 --- a/trunk/drivers/pci/pci.c +++ b/trunk/drivers/pci/pci.c @@ -2723,6 +2723,17 @@ int __attribute__ ((weak)) pci_ext_cfg_avail(struct pci_dev *dev) return 1; } +static int __devinit pci_init(void) +{ + struct pci_dev *dev = NULL; + + while ((dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) { + pci_fixup_device(pci_fixup_final, dev); + } + + return 0; +} + static int __init pci_setup(char *str) { while (str) { @@ -2760,6 +2771,8 @@ static int __init pci_setup(char *str) } early_param("pci", pci_setup); +device_initcall(pci_init); + EXPORT_SYMBOL(pci_reenable_device); EXPORT_SYMBOL(pci_enable_device_io); EXPORT_SYMBOL(pci_enable_device_mem); diff --git a/trunk/drivers/pci/quirks.c b/trunk/drivers/pci/quirks.c index a790b1771f9f..efa6534a6593 100644 --- a/trunk/drivers/pci/quirks.c +++ b/trunk/drivers/pci/quirks.c @@ -2591,19 +2591,6 @@ void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev) } pci_do_fixups(dev, start, end); } - -static int __init pci_apply_final_quirks(void) -{ - struct pci_dev *dev = NULL; - - while ((dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) { - pci_fixup_device(pci_fixup_final, dev); - } - - return 0; -} - -fs_initcall_sync(pci_apply_final_quirks); #else void pci_fixup_device(enum pci_fixup_pass pass, struct pci_dev *dev) {} #endif diff --git a/trunk/fs/ext3/super.c b/trunk/fs/ext3/super.c index 7a520a862f49..72743d360509 100644 --- a/trunk/fs/ext3/super.c +++ b/trunk/fs/ext3/super.c @@ -2321,18 +2321,7 @@ static int ext3_commit_super(struct super_block *sb, if (!sbh) return error; - /* - * If the file system is mounted read-only, don't update the - * superblock write time. This avoids updating the superblock - * write time when we are mounting the root file system - * read/only but we need to replay the journal; at that point, - * for people who are east of GMT and who make their clock - * tick in localtime for Windows bug-for-bug compatibility, - * the clock is set in the future, and this will cause e2fsck - * to complain and force a full file system check. 
- */ - if (!(sb->s_flags & MS_RDONLY)) - es->s_wtime = cpu_to_le32(get_seconds()); + es->s_wtime = cpu_to_le32(get_seconds()); es->s_free_blocks_count = cpu_to_le32(ext3_count_free_blocks(sb)); es->s_free_inodes_count = cpu_to_le32(ext3_count_free_inodes(sb)); BUFFER_TRACE(sbh, "marking dirty"); diff --git a/trunk/fs/partitions/check.c b/trunk/fs/partitions/check.c index 7b685e10cbad..f38fee0311a7 100644 --- a/trunk/fs/partitions/check.c +++ b/trunk/fs/partitions/check.c @@ -248,19 +248,11 @@ ssize_t part_stat_show(struct device *dev, part_stat_read(p, merges[WRITE]), (unsigned long long)part_stat_read(p, sectors[WRITE]), jiffies_to_msecs(part_stat_read(p, ticks[WRITE])), - part_in_flight(p), + p->in_flight, jiffies_to_msecs(part_stat_read(p, io_ticks)), jiffies_to_msecs(part_stat_read(p, time_in_queue))); } -ssize_t part_inflight_show(struct device *dev, - struct device_attribute *attr, char *buf) -{ - struct hd_struct *p = dev_to_part(dev); - - return sprintf(buf, "%8u %8u\n", p->in_flight[0], p->in_flight[1]); -} - #ifdef CONFIG_FAIL_MAKE_REQUEST ssize_t part_fail_show(struct device *dev, struct device_attribute *attr, char *buf) @@ -289,7 +281,6 @@ static DEVICE_ATTR(start, S_IRUGO, part_start_show, NULL); static DEVICE_ATTR(size, S_IRUGO, part_size_show, NULL); static DEVICE_ATTR(alignment_offset, S_IRUGO, part_alignment_offset_show, NULL); static DEVICE_ATTR(stat, S_IRUGO, part_stat_show, NULL); -static DEVICE_ATTR(inflight, S_IRUGO, part_inflight_show, NULL); #ifdef CONFIG_FAIL_MAKE_REQUEST static struct device_attribute dev_attr_fail = __ATTR(make-it-fail, S_IRUGO|S_IWUSR, part_fail_show, part_fail_store); @@ -301,7 +292,6 @@ static struct attribute *part_attrs[] = { &dev_attr_size.attr, &dev_attr_alignment_offset.attr, &dev_attr_stat.attr, - &dev_attr_inflight.attr, #ifdef CONFIG_FAIL_MAKE_REQUEST &dev_attr_fail.attr, #endif diff --git a/trunk/include/linux/blkdev.h b/trunk/include/linux/blkdev.h index 221cecd86bd3..25119041e034 100644 --- a/trunk/include/linux/blkdev.h +++ b/trunk/include/linux/blkdev.h @@ -1172,7 +1172,11 @@ static inline void put_dev_sector(Sector p) } struct work_struct; +struct delayed_work; int kblockd_schedule_work(struct request_queue *q, struct work_struct *work); +int kblockd_schedule_delayed_work(struct request_queue *q, + struct delayed_work *work, + unsigned long delay); #define MODULE_ALIAS_BLOCKDEV(major,minor) \ MODULE_ALIAS("block-major-" __stringify(major) "-" __stringify(minor)) diff --git a/trunk/include/linux/genhd.h b/trunk/include/linux/genhd.h index 297df45ffd0a..7beaa21b3880 100644 --- a/trunk/include/linux/genhd.h +++ b/trunk/include/linux/genhd.h @@ -98,7 +98,7 @@ struct hd_struct { int make_it_fail; #endif unsigned long stamp; - int in_flight[2]; + int in_flight; #ifdef CONFIG_SMP struct disk_stats *dkstats; #else @@ -322,23 +322,18 @@ static inline void free_part_stats(struct hd_struct *part) #define part_stat_sub(cpu, gendiskp, field, subnd) \ part_stat_add(cpu, gendiskp, field, -subnd) -static inline void part_inc_in_flight(struct hd_struct *part, int rw) +static inline void part_inc_in_flight(struct hd_struct *part) { - part->in_flight[rw]++; + part->in_flight++; if (part->partno) - part_to_disk(part)->part0.in_flight[rw]++; + part_to_disk(part)->part0.in_flight++; } -static inline void part_dec_in_flight(struct hd_struct *part, int rw) +static inline void part_dec_in_flight(struct hd_struct *part) { - part->in_flight[rw]--; + part->in_flight--; if (part->partno) - part_to_disk(part)->part0.in_flight[rw]--; -} - -static inline int 
part_in_flight(struct hd_struct *part) -{ - return part->in_flight[0] + part->in_flight[1]; + part_to_disk(part)->part0.in_flight--; } /* block/blk-core.c */ @@ -551,8 +546,6 @@ extern ssize_t part_size_show(struct device *dev, struct device_attribute *attr, char *buf); extern ssize_t part_stat_show(struct device *dev, struct device_attribute *attr, char *buf); -extern ssize_t part_inflight_show(struct device *dev, - struct device_attribute *attr, char *buf); #ifdef CONFIG_FAIL_MAKE_REQUEST extern ssize_t part_fail_show(struct device *dev, struct device_attribute *attr, char *buf); diff --git a/trunk/include/linux/kernel.h b/trunk/include/linux/kernel.h index f4e3184fa054..d3cd23f30039 100644 --- a/trunk/include/linux/kernel.h +++ b/trunk/include/linux/kernel.h @@ -659,12 +659,6 @@ extern int do_sysinfo(struct sysinfo *info); #endif /* __KERNEL__ */ -#ifndef __EXPORTED_HEADERS__ -#ifndef __KERNEL__ -#warning Attempt to use kernel headers from user space, see http://kernelnewbies.org/KernelHeaders -#endif /* __KERNEL__ */ -#endif /* __EXPORTED_HEADERS__ */ - #define SI_LOAD_SHIFT 16 struct sysinfo { long uptime; /* Seconds since boot */ diff --git a/trunk/kernel/lockdep.c b/trunk/kernel/lockdep.c index 9af56723c096..3815ac1d58b2 100644 --- a/trunk/kernel/lockdep.c +++ b/trunk/kernel/lockdep.c @@ -142,11 +142,6 @@ static inline struct lock_class *hlock_class(struct held_lock *hlock) #ifdef CONFIG_LOCK_STAT static DEFINE_PER_CPU(struct lock_class_stats[MAX_LOCKDEP_KEYS], lock_stats); -static inline u64 lockstat_clock(void) -{ - return cpu_clock(smp_processor_id()); -} - static int lock_point(unsigned long points[], unsigned long ip) { int i; @@ -163,7 +158,7 @@ static int lock_point(unsigned long points[], unsigned long ip) return i; } -static void lock_time_inc(struct lock_time *lt, u64 time) +static void lock_time_inc(struct lock_time *lt, s64 time) { if (time > lt->max) lt->max = time; @@ -239,12 +234,12 @@ static void put_lock_stats(struct lock_class_stats *stats) static void lock_release_holdtime(struct held_lock *hlock) { struct lock_class_stats *stats; - u64 holdtime; + s64 holdtime; if (!lock_stat) return; - holdtime = lockstat_clock() - hlock->holdtime_stamp; + holdtime = sched_clock() - hlock->holdtime_stamp; stats = get_lock_stats(hlock_class(hlock)); if (hlock->read) @@ -2797,7 +2792,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, hlock->references = references; #ifdef CONFIG_LOCK_STAT hlock->waittime_stamp = 0; - hlock->holdtime_stamp = lockstat_clock(); + hlock->holdtime_stamp = sched_clock(); #endif if (check == 2 && !mark_irqflags(curr, hlock)) @@ -3327,7 +3322,7 @@ __lock_contended(struct lockdep_map *lock, unsigned long ip) if (hlock->instance != lock) return; - hlock->waittime_stamp = lockstat_clock(); + hlock->waittime_stamp = sched_clock(); contention_point = lock_point(hlock_class(hlock)->contention_point, ip); contending_point = lock_point(hlock_class(hlock)->contending_point, @@ -3350,7 +3345,8 @@ __lock_acquired(struct lockdep_map *lock, unsigned long ip) struct held_lock *hlock, *prev_hlock; struct lock_class_stats *stats; unsigned int depth; - u64 now, waittime = 0; + u64 now; + s64 waittime = 0; int i, cpu; depth = curr->lockdep_depth; @@ -3378,7 +3374,7 @@ __lock_acquired(struct lockdep_map *lock, unsigned long ip) cpu = smp_processor_id(); if (hlock->waittime_stamp) { - now = lockstat_clock(); + now = sched_clock(); waittime = now - hlock->waittime_stamp; hlock->holdtime_stamp = now; } diff --git a/trunk/kernel/sched.c 
b/trunk/kernel/sched.c index e88689522e66..76c0e9691fc0 100644 --- a/trunk/kernel/sched.c +++ b/trunk/kernel/sched.c @@ -676,7 +676,6 @@ inline void update_rq_clock(struct rq *rq) /** * runqueue_is_locked - * @cpu: the processor in question. * * Returns true if the current cpu runqueue is locked. * This interface allows printk to be called with the runqueue lock @@ -2312,7 +2311,7 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, { int cpu, orig_cpu, this_cpu, success = 0; unsigned long flags; - struct rq *rq, *orig_rq; + struct rq *rq; if (!sched_feat(SYNC_WAKEUPS)) wake_flags &= ~WF_SYNC; @@ -2320,7 +2319,7 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, this_cpu = get_cpu(); smp_wmb(); - rq = orig_rq = task_rq_lock(p, &flags); + rq = task_rq_lock(p, &flags); update_rq_clock(rq); if (!(p->state & state)) goto out; @@ -2351,10 +2350,6 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, set_task_cpu(p, cpu); rq = task_rq_lock(p, &flags); - - if (rq != orig_rq) - update_rq_clock(rq); - WARN_ON(p->state != TASK_WAKING); cpu = task_cpu(p); @@ -3661,7 +3656,6 @@ static void update_group_power(struct sched_domain *sd, int cpu) /** * update_sg_lb_stats - Update sched_group's statistics for load balancing. - * @sd: The sched_domain whose statistics are to be updated. * @group: sched_group whose statistics are to be updated. * @this_cpu: Cpu for which load balance is currently performed. * @idle: Idle status of this_cpu @@ -6724,6 +6718,9 @@ EXPORT_SYMBOL(yield); /* * This task is about to go to sleep on IO. Increment rq->nr_iowait so * that process accounting knows that this is a task in IO wait state. + * + * But don't do that if it is a deliberate, throttling IO wait (this task + * has set its backing_dev_info: the queue against which it should throttle) */ void __sched io_schedule(void) { diff --git a/trunk/kernel/trace/trace.c b/trunk/kernel/trace/trace.c index c820b0310a12..45068269ebb1 100644 --- a/trunk/kernel/trace/trace.c +++ b/trunk/kernel/trace/trace.c @@ -1393,7 +1393,7 @@ int trace_array_vprintk(struct trace_array *tr, int trace_vprintk(unsigned long ip, const char *fmt, va_list args) { - return trace_array_vprintk(&global_trace, ip, fmt, args); + return trace_array_printk(&global_trace, ip, fmt, args); } EXPORT_SYMBOL_GPL(trace_vprintk); diff --git a/trunk/kernel/trace/trace_events_filter.c b/trunk/kernel/trace/trace_events_filter.c index 98a6cc5c64ed..23245785927f 100644 --- a/trunk/kernel/trace/trace_events_filter.c +++ b/trunk/kernel/trace/trace_events_filter.c @@ -933,9 +933,8 @@ static void postfix_clear(struct filter_parse_state *ps) while (!list_empty(&ps->postfix)) { elt = list_first_entry(&ps->postfix, struct postfix_elt, list); - list_del(&elt->list); kfree(elt->operand); - kfree(elt); + list_del(&elt->list); } } diff --git a/trunk/mm/backing-dev.c b/trunk/mm/backing-dev.c index 5a37e2055717..3d3accb1f800 100644 --- a/trunk/mm/backing-dev.c +++ b/trunk/mm/backing-dev.c @@ -92,7 +92,7 @@ static int bdi_debug_stats_show(struct seq_file *m, void *v) "BdiDirtyThresh: %8lu kB\n" "DirtyThresh: %8lu kB\n" "BackgroundThresh: %8lu kB\n" - "WritebackThreads: %8lu\n" + "WriteBack threads:%8lu\n" "b_dirty: %8lu\n" "b_io: %8lu\n" "b_more_io: %8lu\n" diff --git a/trunk/mm/page-writeback.c b/trunk/mm/page-writeback.c index 2c5d79236ead..a3b14090b1fb 100644 --- a/trunk/mm/page-writeback.c +++ b/trunk/mm/page-writeback.c @@ -566,8 +566,7 @@ static void balance_dirty_pages(struct address_space *mapping, if (pages_written >= 
write_chunk) break; /* We've done our duty */ - __set_current_state(TASK_INTERRUPTIBLE); - io_schedule_timeout(pause); + schedule_timeout_interruptible(pause); /* * Increase the delay for each loop, up to our previous diff --git a/trunk/mm/percpu.c b/trunk/mm/percpu.c index 6af78c1ee704..4a048abad043 100644 --- a/trunk/mm/percpu.c +++ b/trunk/mm/percpu.c @@ -1870,14 +1870,13 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, ssize_t dyn_size, max_distance = 0; for (group = 0; group < ai->nr_groups; group++) { ai->groups[group].base_offset = areas[group] - base; - max_distance = max_t(size_t, max_distance, - ai->groups[group].base_offset); + max_distance = max(max_distance, ai->groups[group].base_offset); } max_distance += ai->unit_size; /* warn if maximum distance is further than 75% of vmalloc space */ if (max_distance > (VMALLOC_END - VMALLOC_START) * 3 / 4) { - pr_warning("PERCPU: max_distance=0x%zx too large for vmalloc " + pr_warning("PERCPU: max_distance=0x%lx too large for vmalloc " "space 0x%lx\n", max_distance, VMALLOC_END - VMALLOC_START); #ifdef CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK diff --git a/trunk/scripts/Kbuild.include b/trunk/scripts/Kbuild.include index c67e73ecd5be..4f9c1908593b 100644 --- a/trunk/scripts/Kbuild.include +++ b/trunk/scripts/Kbuild.include @@ -100,7 +100,7 @@ as-option = $(call try-run,\ # Usage: cflags-y += $(call as-instr,instr,option1,option2) as-instr = $(call try-run,\ - /bin/echo -e "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" -,$(2),$(3)) + echo -e "$(1)" | $(CC) $(KBUILD_AFLAGS) -c -xassembler -o "$$TMP" -,$(2),$(3)) # cc-option # Usage: cflags-y += $(call cc-option,-march=winchip-c6,-march=i586) diff --git a/trunk/scripts/Makefile.lib b/trunk/scripts/Makefile.lib index ffdafb26f539..7a7778746ea6 100644 --- a/trunk/scripts/Makefile.lib +++ b/trunk/scripts/Makefile.lib @@ -208,7 +208,7 @@ cmd_gzip = (cat $(filter-out FORCE,$^) | gzip -f -9 > $@) || \ # Bzip2 and LZMA do not include size in file... so we have to fake that; # append the size as a 32-bit littleendian number as gzip does. -size_append = /bin/echo -ne $(shell \ +size_append = echo -ne $(shell \ dec_size=0; \ for F in $1; do \ fsize=$$(stat -c "%s" $$F); \ diff --git a/trunk/scripts/checkkconfigsymbols.sh b/trunk/scripts/checkkconfigsymbols.sh index 46be3c5a62b7..39677c82747a 100755 --- a/trunk/scripts/checkkconfigsymbols.sh +++ b/trunk/scripts/checkkconfigsymbols.sh @@ -9,7 +9,7 @@ paths="$@" # Doing this once at the beginning saves a lot of time, on a cache-hot tree. Kconfigs="`find . -name 'Kconfig' -o -name 'Kconfig*[^~]'`" -/bin/echo -e "File list \tundefined symbol used" +echo -e "File list \tundefined symbol used" find $paths -name '*.[chS]' -o -name 'Makefile' -o -name 'Makefile*[^~]'| while read i do # Output the bare Kconfig variable and the filename; the _MODULE part at @@ -54,6 +54,6 @@ while read symb files; do # beyond the purpose of this script. symb_bare=`echo $symb | sed -e 's/_MODULE//'` if ! 
grep -q "\<$symb_bare\>" $Kconfigs; then - /bin/echo -e "$files: \t$symb" + echo -e "$files: \t$symb" fi done|sort diff --git a/trunk/scripts/headers_install.pl b/trunk/scripts/headers_install.pl index b89ca2c58fdb..c6ae4052ab43 100644 --- a/trunk/scripts/headers_install.pl +++ b/trunk/scripts/headers_install.pl @@ -20,7 +20,7 @@ my ($readdir, $installdir, $arch, @files) = @ARGV; -my $unifdef = "scripts/unifdef -U__KERNEL__ -D__EXPORTED_HEADERS__"; +my $unifdef = "scripts/unifdef -U__KERNEL__"; foreach my $file (@files) { local *INFILE; diff --git a/trunk/scripts/mkcompile_h b/trunk/scripts/mkcompile_h index bce3d0fe6fbd..6a12dd9f1181 100755 --- a/trunk/scripts/mkcompile_h +++ b/trunk/scripts/mkcompile_h @@ -1,5 +1,3 @@ -#!/bin/sh - TARGET=$1 ARCH=$2 SMP=$3 @@ -52,7 +50,7 @@ UTS_VERSION="$UTS_VERSION $CONFIG_FLAGS $TIMESTAMP" # Truncate to maximum length UTS_LEN=64 -UTS_TRUNCATE="cut -b -$UTS_LEN" +UTS_TRUNCATE="sed -e s/\(.\{1,$UTS_LEN\}\).*/\1/" # Generate a temporary compile.h @@ -68,13 +66,9 @@ UTS_TRUNCATE="cut -b -$UTS_LEN" echo \#define LINUX_COMPILE_HOST \"`hostname | $UTS_TRUNCATE`\" if [ -x /bin/dnsdomainname ]; then - domain=`dnsdomainname 2> /dev/null` + echo \#define LINUX_COMPILE_DOMAIN \"`dnsdomainname | $UTS_TRUNCATE`\" elif [ -x /bin/domainname ]; then - domain=`domainname 2> /dev/null` - fi - - if [ -n "$domain" ]; then - echo \#define LINUX_COMPILE_DOMAIN \"`echo $domain | $UTS_TRUNCATE`\" + echo \#define LINUX_COMPILE_DOMAIN \"`domainname | $UTS_TRUNCATE`\" else echo \#define LINUX_COMPILE_DOMAIN fi diff --git a/trunk/scripts/package/Makefile b/trunk/scripts/package/Makefile index f67cc885c807..fa4a0a17b7e0 100644 --- a/trunk/scripts/package/Makefile +++ b/trunk/scripts/package/Makefile @@ -18,9 +18,6 @@ # e) generate the rpm files, based on kernel.spec # - Use /. to avoid tar packing just the symlink -# Note that the rpm-pkg target cannot be used with KBUILD_OUTPUT, -# but the binrpm-pkg target can; for some reason O= gets ignored. - # Do we have rpmbuild, otherwise fall back to the older rpm RPM := $(shell if [ -x "/usr/bin/rpmbuild" ]; then echo rpmbuild; \ else echo rpm; fi) @@ -36,12 +33,6 @@ $(objtree)/kernel.spec: $(MKSPEC) $(srctree)/Makefile $(CONFIG_SHELL) $(MKSPEC) > $@ rpm-pkg rpm: $(objtree)/kernel.spec FORCE - @if test -n "$(KBUILD_OUTPUT)"; then \ - echo "Building source + binary RPM is not possible outside the"; \ - echo "kernel source tree. 
Don't set KBUILD_OUTPUT, or use the"; \ - echo "binrpm-pkg target instead."; \ - false; \ - fi $(MAKE) clean $(PREV) ln -sf $(srctree) $(KERNELPATH) $(CONFIG_SHELL) $(srctree)/scripts/setlocalversion > $(objtree)/.scmversion @@ -70,7 +61,7 @@ binrpm-pkg: $(objtree)/binkernel.spec FORCE set -e; \ mv -f $(objtree)/.tmp_version $(objtree)/.version - $(RPM) $(RPMOPTS) --define "_builddir $(objtree)" --target \ + $(RPM) $(RPMOPTS) --define "_builddir $(srctree)" --target \ $(UTS_MACHINE) -bb $< clean-files += $(objtree)/binkernel.spec diff --git a/trunk/scripts/package/mkspec b/trunk/scripts/package/mkspec index 47bdd2f99b78..3d93f8c81252 100755 --- a/trunk/scripts/package/mkspec +++ b/trunk/scripts/package/mkspec @@ -70,7 +70,7 @@ echo 'mkdir -p $RPM_BUILD_ROOT/boot $RPM_BUILD_ROOT/lib/modules' echo 'mkdir -p $RPM_BUILD_ROOT/lib/firmware' echo "%endif" -echo 'INSTALL_MOD_PATH=$RPM_BUILD_ROOT make %{_smp_mflags} KBUILD_SRC= modules_install' +echo 'INSTALL_MOD_PATH=$RPM_BUILD_ROOT make %{_smp_mflags} modules_install' echo "%ifarch ia64" echo 'cp $KBUILD_IMAGE $RPM_BUILD_ROOT'"/boot/efi/vmlinuz-$KERNELRELEASE" echo 'ln -s '"efi/vmlinuz-$KERNELRELEASE" '$RPM_BUILD_ROOT'"/boot/" diff --git a/trunk/sound/arm/aaci.c b/trunk/sound/arm/aaci.c index 1f0f8213e2d5..dc78272fc39f 100644 --- a/trunk/sound/arm/aaci.c +++ b/trunk/sound/arm/aaci.c @@ -937,7 +937,6 @@ static int __devinit aaci_probe_ac97(struct aaci *aaci) struct snd_ac97 *ac97; int ret; - writel(0, aaci->base + AC97_POWERDOWN); /* * Assert AACIRESET for 2us */ diff --git a/trunk/sound/pci/bt87x.c b/trunk/sound/pci/bt87x.c index 4e2b925a94cc..24585c6c6d01 100644 --- a/trunk/sound/pci/bt87x.c +++ b/trunk/sound/pci/bt87x.c @@ -808,8 +808,6 @@ static struct pci_device_id snd_bt87x_ids[] = { BT_DEVICE(PCI_DEVICE_ID_BROOKTREE_878, 0x1002, 0x0001, GENERIC), /* Leadtek Winfast tv 2000xp delux */ BT_DEVICE(PCI_DEVICE_ID_BROOKTREE_878, 0x107d, 0x6606, GENERIC), - /* Pinnacle PCTV */ - BT_DEVICE(PCI_DEVICE_ID_BROOKTREE_878, 0x11bd, 0x0012, GENERIC), /* Voodoo TV 200 */ BT_DEVICE(PCI_DEVICE_ID_BROOKTREE_878, 0x121a, 0x3000, GENERIC), /* Askey Computer Corp. 
MagicTView'99 */ diff --git a/trunk/sound/pci/hda/patch_nvhdmi.c b/trunk/sound/pci/hda/patch_nvhdmi.c index 9fb60276f5c9..c8435c9a97f9 100644 --- a/trunk/sound/pci/hda/patch_nvhdmi.c +++ b/trunk/sound/pci/hda/patch_nvhdmi.c @@ -29,9 +29,6 @@ #include "hda_codec.h" #include "hda_local.h" -/* define below to restrict the supported rates and formats */ -/* #define LIMITED_RATE_FMT_SUPPORT */ - struct nvhdmi_spec { struct hda_multi_out multiout; @@ -63,22 +60,6 @@ static struct hda_verb nvhdmi_basic_init[] = { {} /* terminator */ }; -#ifdef LIMITED_RATE_FMT_SUPPORT -/* support only the safe format and rate */ -#define SUPPORTED_RATES SNDRV_PCM_RATE_48000 -#define SUPPORTED_MAXBPS 16 -#define SUPPORTED_FORMATS SNDRV_PCM_FMTBIT_S16_LE -#else -/* support all rates and formats */ -#define SUPPORTED_RATES \ - (SNDRV_PCM_RATE_22050 | SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000 |\ - SNDRV_PCM_RATE_88200 | SNDRV_PCM_RATE_96000 | SNDRV_PCM_RATE_176400 |\ - SNDRV_PCM_RATE_192000) -#define SUPPORTED_MAXBPS 24 -#define SUPPORTED_FORMATS \ - (SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S32_LE) -#endif - /* * Controls */ @@ -277,9 +258,9 @@ static struct hda_pcm_stream nvhdmi_pcm_digital_playback_8ch = { .channels_min = 2, .channels_max = 8, .nid = Nv_Master_Convert_nid, - .rates = SUPPORTED_RATES, - .maxbps = SUPPORTED_MAXBPS, - .formats = SUPPORTED_FORMATS, + .rates = SNDRV_PCM_RATE_48000, + .maxbps = 16, + .formats = SNDRV_PCM_FMTBIT_S16_LE, .ops = { .open = nvhdmi_dig_playback_pcm_open, .close = nvhdmi_dig_playback_pcm_close_8ch, @@ -292,9 +273,9 @@ static struct hda_pcm_stream nvhdmi_pcm_digital_playback_2ch = { .channels_min = 2, .channels_max = 2, .nid = Nv_Master_Convert_nid, - .rates = SUPPORTED_RATES, - .maxbps = SUPPORTED_MAXBPS, - .formats = SUPPORTED_FORMATS, + .rates = SNDRV_PCM_RATE_48000, + .maxbps = 16, + .formats = SNDRV_PCM_FMTBIT_S16_LE, .ops = { .open = nvhdmi_dig_playback_pcm_open, .close = nvhdmi_dig_playback_pcm_close_2ch, diff --git a/trunk/sound/pci/hda/patch_realtek.c b/trunk/sound/pci/hda/patch_realtek.c index c08ca660daba..470fd74a0a1a 100644 --- a/trunk/sound/pci/hda/patch_realtek.c +++ b/trunk/sound/pci/hda/patch_realtek.c @@ -275,7 +275,7 @@ struct alc_spec { struct snd_kcontrol_new *cap_mixer; /* capture mixer */ unsigned int beep_amp; /* beep amp value, set via set_beep_amp() */ - const struct hda_verb *init_verbs[10]; /* initialization verbs + const struct hda_verb *init_verbs[5]; /* initialization verbs * don't forget NULL * termination! 
*/ diff --git a/trunk/sound/pci/hda/patch_sigmatel.c b/trunk/sound/pci/hda/patch_sigmatel.c index 66c0876bf734..a9b26828a651 100644 --- a/trunk/sound/pci/hda/patch_sigmatel.c +++ b/trunk/sound/pci/hda/patch_sigmatel.c @@ -158,7 +158,6 @@ enum { STAC_D965_5ST_NO_FP, STAC_DELL_3ST, STAC_DELL_BIOS, - STAC_927X_VOLKNOB, STAC_927X_MODELS }; @@ -908,16 +907,6 @@ static struct hda_verb d965_core_init[] = { {} }; -static struct hda_verb dell_3st_core_init[] = { - /* don't set delta bit */ - {0x24, AC_VERB_SET_VOLUME_KNOB_CONTROL, 0x7f}, - /* unmute node 0x1b */ - {0x1b, AC_VERB_SET_AMP_GAIN_MUTE, 0xb000}, - /* select node 0x03 as DAC */ - {0x0b, AC_VERB_SET_CONNECT_SEL, 0x01}, - {} -}; - static struct hda_verb stac927x_core_init[] = { /* set master volume and direct control */ { 0x24, AC_VERB_SET_VOLUME_KNOB_CONTROL, 0xff}, @@ -926,14 +915,6 @@ static struct hda_verb stac927x_core_init[] = { {} }; -static struct hda_verb stac927x_volknob_core_init[] = { - /* don't set delta bit */ - {0x24, AC_VERB_SET_VOLUME_KNOB_CONTROL, 0x7f}, - /* enable analog pc beep path */ - {0x01, AC_VERB_SET_DIGI_CONVERT_2, 1 << 5}, - {} -}; - static struct hda_verb stac9205_core_init[] = { /* set master volume and direct control */ { 0x24, AC_VERB_SET_VOLUME_KNOB_CONTROL, 0xff}, @@ -2018,7 +1999,6 @@ static unsigned int *stac927x_brd_tbl[STAC_927X_MODELS] = { [STAC_D965_5ST_NO_FP] = d965_5st_no_fp_pin_configs, [STAC_DELL_3ST] = dell_3st_pin_configs, [STAC_DELL_BIOS] = NULL, - [STAC_927X_VOLKNOB] = NULL, }; static const char *stac927x_models[STAC_927X_MODELS] = { @@ -2030,7 +2010,6 @@ static const char *stac927x_models[STAC_927X_MODELS] = { [STAC_D965_5ST_NO_FP] = "5stack-no-fp", [STAC_DELL_3ST] = "dell-3stack", [STAC_DELL_BIOS] = "dell-bios", - [STAC_927X_VOLKNOB] = "volknob", }; static struct snd_pci_quirk stac927x_cfg_tbl[] = { @@ -2066,8 +2045,6 @@ static struct snd_pci_quirk stac927x_cfg_tbl[] = { "Intel D965", STAC_D965_5ST), SND_PCI_QUIRK_MASK(PCI_VENDOR_ID_INTEL, 0xff00, 0x2500, "Intel D965", STAC_D965_5ST), - /* volume-knob fixes */ - SND_PCI_QUIRK_VENDOR(0x10cf, "FSC", STAC_927X_VOLKNOB), {} /* terminator */ }; @@ -5635,14 +5612,10 @@ static int patch_stac927x(struct hda_codec *codec) spec->dmic_nids = stac927x_dmic_nids; spec->num_dmics = STAC927X_NUM_DMICS; - spec->init = dell_3st_core_init; + spec->init = d965_core_init; spec->dmux_nids = stac927x_dmux_nids; spec->num_dmuxes = ARRAY_SIZE(stac927x_dmux_nids); break; - case STAC_927X_VOLKNOB: - spec->num_dmics = 0; - spec->init = stac927x_volknob_core_init; - break; default: spec->num_dmics = 0; spec->init = stac927x_core_init; diff --git a/trunk/sound/pci/ice1712/amp.c b/trunk/sound/pci/ice1712/amp.c index 6da21a2bcade..37564300b50d 100644 --- a/trunk/sound/pci/ice1712/amp.c +++ b/trunk/sound/pci/ice1712/amp.c @@ -52,13 +52,11 @@ static int __devinit snd_vt1724_amp_init(struct snd_ice1712 *ice) /* only use basic functionality for now */ - /* VT1616 6ch codec connected to PSDOUT0 using packed mode */ - ice->num_total_dacs = 6; + ice->num_total_dacs = 2; /* only PSDOUT0 is connected */ ice->num_total_adcs = 2; - /* Chaintech AV-710 has another WM8728 codec connected to PSDOUT4 - (shared with the SPDIF output). Mixer control for this codec - is not yet supported. 
*/ + /* Chaintech AV-710 has another codecs, which need initialization */ + /* initialize WM8728 codec */ if (ice->eeprom.subvendor == VT1724_SUBDEVICE_AV710) { for (i = 0; i < ARRAY_SIZE(wm_inits); i += 2) wm_put(ice, wm_inits[i], wm_inits[i+1]); diff --git a/trunk/sound/pci/ice1712/ice1724.c b/trunk/sound/pci/ice1712/ice1724.c index 10fc92c05574..76b717dae4b6 100644 --- a/trunk/sound/pci/ice1712/ice1724.c +++ b/trunk/sound/pci/ice1712/ice1724.c @@ -648,7 +648,7 @@ static int snd_vt1724_set_pro_rate(struct snd_ice1712 *ice, unsigned int rate, (inb(ICEMT1724(ice, DMA_PAUSE)) & DMA_PAUSES)) { /* running? we cannot change the rate now... */ spin_unlock_irqrestore(&ice->reg_lock, flags); - return ((rate == ice->cur_rate) && !force) ? 0 : -EBUSY; + return -EBUSY; } if (!force && is_pro_rate_locked(ice)) { spin_unlock_irqrestore(&ice->reg_lock, flags); diff --git a/trunk/tools/perf/Makefile b/trunk/tools/perf/Makefile index 742a32eee8fc..5881943f0c34 100644 --- a/trunk/tools/perf/Makefile +++ b/trunk/tools/perf/Makefile @@ -157,18 +157,11 @@ uname_R := $(shell sh -c 'uname -r 2>/dev/null || echo not') uname_P := $(shell sh -c 'uname -p 2>/dev/null || echo not') uname_V := $(shell sh -c 'uname -v 2>/dev/null || echo not') -# -# Add -m32 for cross-builds: -# -ifdef NO_64BIT - MBITS := -m32 -else - # - # If we're on a 64-bit kernel, use -m64: - # - ifneq ($(patsubst %64,%,$(uname_M)),$(uname_M)) - MBITS := -m64 - endif +# If we're on a 64-bit kernel, use -m64 +ifndef NO_64BIT + ifneq ($(patsubst %64,%,$(uname_M)),$(uname_M)) + M64 := -m64 + endif endif # CFLAGS and LDFLAGS are for the users to override from the command line. @@ -201,7 +194,7 @@ EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wold-style-definition EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wstrict-prototypes EXTRA_WARNINGS := $(EXTRA_WARNINGS) -Wdeclaration-after-statement -CFLAGS = $(MBITS) -ggdb3 -Wall -Wextra -std=gnu99 -Werror -O6 -fstack-protector-all -D_FORTIFY_SOURCE=2 $(EXTRA_WARNINGS) +CFLAGS = $(M64) -ggdb3 -Wall -Wextra -std=gnu99 -Werror -O6 -fstack-protector-all -D_FORTIFY_SOURCE=2 $(EXTRA_WARNINGS) LDFLAGS = -lpthread -lrt -lelf -lm ALL_CFLAGS = $(CFLAGS) ALL_LDFLAGS = $(LDFLAGS) @@ -423,7 +416,7 @@ ifeq ($(uname_S),Darwin) endif ifneq ($(shell sh -c "(echo '\#include '; echo 'int main(void) { Elf * elf = elf_begin(0, ELF_C_READ_MMAP, 0); return (long)elf; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -o /dev/null $(ALL_LDFLAGS) > /dev/null 2>&1 && echo y"), y) - msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel and glibc-dev[el]); + msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel); endif ifdef NO_DEMANGLE diff --git a/trunk/tools/perf/builtin-sched.c b/trunk/tools/perf/builtin-sched.c index ce2d5be4f30e..ea9c15c0cdfe 100644 --- a/trunk/tools/perf/builtin-sched.c +++ b/trunk/tools/perf/builtin-sched.c @@ -1287,7 +1287,7 @@ static struct sort_dimension *available_sorts[] = { static LIST_HEAD(sort_list); -static int sort_dimension__add(const char *tok, struct list_head *list) +static int sort_dimension__add(char *tok, struct list_head *list) { int i; @@ -1917,7 +1917,7 @@ static void setup_sorting(void) free(str); - sort_dimension__add("pid", &cmp_pid); + sort_dimension__add((char *)"pid", &cmp_pid); } static const char *record_args[] = { diff --git a/trunk/tools/perf/util/parse-events.c b/trunk/tools/perf/util/parse-events.c index 8cfb48cbbea0..87c424de79ee 100644 --- a/trunk/tools/perf/util/parse-events.c +++ 
b/trunk/tools/perf/util/parse-events.c @@ -691,10 +691,7 @@ static void store_event_type(const char *orgname) FILE *file; int id; - sprintf(filename, "%s/", debugfs_path); - strncat(filename, orgname, strlen(orgname)); - strcat(filename, "/id"); - + sprintf(filename, "/sys/kernel/debug/tracing/events/%s/id", orgname); c = strchr(filename, ':'); if (c) *c = '/'; diff --git a/trunk/tools/perf/util/trace-event-parse.c b/trunk/tools/perf/util/trace-event-parse.c index 55c9659a56e2..55b41b9e3834 100644 --- a/trunk/tools/perf/util/trace-event-parse.c +++ b/trunk/tools/perf/util/trace-event-parse.c @@ -618,7 +618,7 @@ static int test_type(enum event_type type, enum event_type expect) } static int test_type_token(enum event_type type, char *token, - enum event_type expect, const char *expect_tok) + enum event_type expect, char *expect_tok) { if (type != expect) { die("Error: expected type %d but read %d", @@ -650,7 +650,7 @@ static int read_expect_type(enum event_type expect, char **tok) return __read_expect_type(expect, tok, 1); } -static int __read_expected(enum event_type expect, const char *str, int newline_ok) +static int __read_expected(enum event_type expect, char *str, int newline_ok) { enum event_type type; char *token; @@ -668,12 +668,12 @@ static int __read_expected(enum event_type expect, const char *str, int newline_ return 0; } -static int read_expected(enum event_type expect, const char *str) +static int read_expected(enum event_type expect, char *str) { return __read_expected(expect, str, 1); } -static int read_expected_item(enum event_type expect, const char *str) +static int read_expected_item(enum event_type expect, char *str) { return __read_expected(expect, str, 0); }
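
Editorial note (not part of the patch above): the include/linux/genhd.h hunk reverts the split per-direction in-flight I/O counters (in_flight[2], indexed by read/write) back to a single counter, and drops the part_in_flight() helper that summed the two halves. The following is a minimal, standalone C sketch of that split-counter pattern, for illustration only; the struct and helper names here are hypothetical and merely mirror the kernel helpers visible in the diff, they are not kernel API.

/* sketch: per-direction in-flight counters with a combined view */
#include <stdio.h>

enum { IO_READ = 0, IO_WRITE = 1 };

struct io_stats {
	int in_flight[2];	/* [IO_READ], [IO_WRITE] */
};

static void io_inc_in_flight(struct io_stats *s, int rw)
{
	s->in_flight[rw]++;	/* one counter per direction, as in part_inc_in_flight(part, rw) */
}

static void io_dec_in_flight(struct io_stats *s, int rw)
{
	s->in_flight[rw]--;
}

/* combined total, analogous to the removed part_in_flight() helper */
static int io_in_flight(const struct io_stats *s)
{
	return s->in_flight[IO_READ] + s->in_flight[IO_WRITE];
}

int main(void)
{
	struct io_stats s = { { 0, 0 } };

	io_inc_in_flight(&s, IO_READ);
	io_inc_in_flight(&s, IO_WRITE);
	io_inc_in_flight(&s, IO_WRITE);
	io_dec_in_flight(&s, IO_WRITE);

	printf("reads in flight:  %d\n", s.in_flight[IO_READ]);
	printf("writes in flight: %d\n", s.in_flight[IO_WRITE]);
	printf("total in flight:  %d\n", io_in_flight(&s));
	return 0;
}

The design point the split counters capture is that read and write queues drain at different rates, so exposing them separately (as the removed part_inflight_show() sysfs attribute did) gives more useful statistics than a single merged count.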