Commit 03f66a9

---
yaml
---
r: 71655
b: refs/heads/master
c: fc333b2
h: refs/heads/master
i:
  71653: f2c6cb2
  71651: 953948a
  71647: df8799f
v: v3
Sam Ravnborg authored and Sam Ravnborg committed Oct 18, 2007
1 parent 3670503 commit 03f66a9
Showing 1,136 changed files with 30,572 additions and 31,459 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: c4ec20717313daafba59225f812db89595952b83
refs/heads/master: fc333b2df388d6e8791b3ee59c0679e4a131555a
2 changes: 1 addition & 1 deletion trunk/Documentation/DocBook/kernel-api.tmpl
@@ -46,7 +46,7 @@

<sect1><title>Atomic and pointer manipulation</title>
!Iinclude/asm-x86/atomic_32.h
!Iinclude/asm-x86/unaligned.h
!Iinclude/asm-x86/unaligned_32.h
</sect1>

<sect1><title>Delaying, scheduling, and timer routines</title>
25 changes: 10 additions & 15 deletions trunk/Documentation/IPMI.txt
@@ -441,20 +441,17 @@ ACPI, and if none of those then a KCS device at the spec-specified
0xca2. If you want to turn this off, set the "trydefaults" option to
false.

If your IPMI interface does not support interrupts and is a KCS or
SMIC interface, the IPMI driver will start a kernel thread for the
interface to help speed things up. This is a low-priority kernel
thread that constantly polls the IPMI driver while an IPMI operation
is in progress. The force_kipmid module parameter will allow the user to
force this thread on or off. If you force it off and don't have
interrupts, the driver will run VERY slowly. Don't blame me,
If you have high-res timers compiled into the kernel, the driver will
use them to provide much better performance. Note that if you do not
have high-res timers enabled in the kernel and you don't have
interrupts enabled, the driver will run VERY slowly. Don't blame me,
these interfaces suck.

The driver supports a hot add and remove of interfaces. This way,
interfaces can be added or removed after the kernel is up and running.
This is done using /sys/modules/ipmi_si/parameters/hotmod, which is a
write-only parameter. You write a string to this interface. The string
has the format:
This is done using /sys/modules/ipmi_si/hotmod, which is a write-only
parameter. You write a string to this interface. The string has the
format:
<op1>[:op2[:op3...]]
The "op"s are:
add|remove,kcs|bt|smic,mem|i/o,<address>[,<opt1>[,<opt2>[,...]]]
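
As a hedged illustration (not part of this patch), the sketch below writes a
single "add" operation for a KCS interface at the spec-default I/O address
0xca2 to the hotmod parameter file. The sysfs path is an assumption based on
the text above; on most systems module parameters appear under
/sys/module/<name>/parameters/.

    /* Hedged sketch: add a KCS interface at I/O port 0xca2 via hotmod.
     * The path below is an assumption; adjust it for your kernel. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/sys/module/ipmi_si/parameters/hotmod";
        const char *op = "add,kcs,i/o,0xca2";
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
            perror("open hotmod");
            return 1;
        }
        if (write(fd, op, strlen(op)) < 0) {
            perror("write hotmod");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }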
@@ -584,11 +581,9 @@ The watchdog will panic and start a 120 second reset timeout if it
gets a pre-action. During a panic or a reboot, the watchdog will
start a 120 second timer if it is running to make sure the reboot occurs.

Note that if you use the NMI preaction for the watchdog, you MUST NOT
use the nmi watchdog. There is no reasonable way to tell if an NMI
comes from the IPMI controller, so it must assume that if it gets an
otherwise unhandled NMI, it must be from IPMI and it will panic
immediately.
Note that if you use the NMI preaction for the watchdog, you MUST
NOT use nmi watchdog mode 1. If you use the NMI watchdog, you
must use mode 2.

Once you open the watchdog timer, you must write a 'V' character to the
device to close it, or the timer will not stop. This is a new semantic
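
As a hedged illustration (not part of this patch) of the magic-close behaviour
described above, where the watchdog is petted by ordinary writes and disarmed
only by writing 'V' before closing: the device node name /dev/watchdog is an
assumption, though it is the conventional name for the watchdog device.

    /* Hedged sketch: pet the watchdog once, then disarm it with the
     * magic-close 'V' write before closing the device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/watchdog", O_WRONLY);

        if (fd < 0) {
            perror("open /dev/watchdog");
            return 1;
        }
        (void)write(fd, "w", 1);   /* any write resets the timer */
        (void)write(fd, "V", 1);   /* 'V' allows the timer to stop on close */
        close(fd);
        return 0;
    }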
27 changes: 0 additions & 27 deletions trunk/Documentation/accounting/cgroupstats.txt

This file was deleted.

14 changes: 0 additions & 14 deletions trunk/Documentation/atomic_ops.txt
@@ -418,20 +418,6 @@ brothers:
*/
smp_mb__after_clear_bit();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks). These operate in the same way as their non-_lock/unlock
postfixed variants, except that they are to provide acquire/release semantics,
respectively. This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
void clear_bit_unlock(unsigned long nr, unsigned long *addr);
void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic, however it still implements
unlock barrier semantics. This can be useful if the lock itself is protecting
the other bits in the word.
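
As a hedged, kernel-style illustration of the text above (not part of this
patch, with made-up names): using the lock bitops as a tiny bit spinlock that
protects the other bits in the same word, so non-atomic helpers are safe while
the lock bit is held.

    /* Hedged sketch: bit 0 of 'my_flags' acts as a lock protecting the
     * remaining bits of the word. */
    #include <linux/bitops.h>

    #define MY_LOCK_BIT 0

    static unsigned long my_flags;  /* bit 0 = lock, higher bits = data */

    static void my_set_data_bit(int nr)
    {
        /* acquire: spin until the lock bit is obtained */
        while (test_and_set_bit_lock(MY_LOCK_BIT, &my_flags))
            ;   /* busy-wait */

        __set_bit(nr, &my_flags);   /* non-atomic is fine under the lock */

        /* release: the lock bit guards the rest of the word, so the
         * non-atomic __clear_bit_unlock() would also be usable here */
        clear_bit_unlock(MY_LOCK_BIT, &my_flags);
    }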

Finally, there are non-atomic versions of the bitmask operations
provided. They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
27 changes: 25 additions & 2 deletions trunk/Documentation/cachetlb.txt
@@ -87,7 +87,30 @@ changes occur:

This is used primarily during fault processing.

5) void update_mmu_cache(struct vm_area_struct *vma,
5) void flush_tlb_pgtables(struct mm_struct *mm,
unsigned long start, unsigned long end)

The software page tables for address space 'mm' for virtual
addresses in the range 'start' to 'end-1' are being torn down.

Some platforms cache the lowest level of the software page tables
in a linear virtually mapped array, to make TLB miss processing
more efficient. On such platforms, since the TLB is caching the
software page table structure, it needs to be flushed when parts
of the software page table tree are unlinked/freed.

Sparc64 is one example of a platform which does this.

Usually, when munmap()'ing an area of user virtual address
space, the kernel leaves the page table parts around and just
marks the individual pte's as invalid. However, if very large
portions of the address space are unmapped, the kernel frees up
those portions of the software page tables to prevent potential
excessive kernel memory usage caused by erratic mmap/munmap
sequences. It is at these times that flush_tlb_pgtables will
be invoked.

6) void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t pte)

At the end of every page fault, this routine is invoked to
@@ -100,7 +123,7 @@ changes occur:
translations for software managed TLB configurations.
The sparc64 port currently does this.

6) void tlb_migrate_finish(struct mm_struct *mm)
7) void tlb_migrate_finish(struct mm_struct *mm)

This interface is called at the end of an explicit
process migration. This interface provides a hook
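
As a hedged illustration of the flush_tlb_pgtables() hook restored in item 5
of the cachetlb.txt hunk above (not part of this patch): an architecture that
does not cache the lowest-level software page tables in a virtually mapped
array can make the hook a no-op in its tlbflush header, while a port such as
sparc64 provides a real flush routine with the same signature.

    /* Hedged sketch: nothing to flush when page-table pages are not
     * cached in a linear virtual array. */
    #define flush_tlb_pgtables(mm, start, end)  do { } while (0)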