Commit ea2ba09

---
r: 71599
b: refs/heads/master
c: f9e2629
h: refs/heads/master
i:
  71597: c4c63c7
  71595: ecf45dd
  71591: d5bf583
  71583: 498bd25
v: v3
Christian Borntraeger authored and Ingo Molnar committed Oct 19, 2007
1 parent c14cccb commit ea2ba09
Showing 628 changed files with 6,567 additions and 13,072 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: 0a4908e19fd016d60403fc76cf38b2d08d21e2d2
+refs/heads/master: f9e26291be31cb494c1845e356daba84b39ab059
2 changes: 1 addition & 1 deletion trunk/Documentation/DocBook/kernel-api.tmpl
@@ -46,7 +46,7 @@

 <sect1><title>Atomic and pointer manipulation</title>
 !Iinclude/asm-x86/atomic_32.h
-!Iinclude/asm-x86/unaligned.h
+!Iinclude/asm-x86/unaligned_32.h
 </sect1>
 
 <sect1><title>Delaying, scheduling, and timer routines</title>
27 changes: 0 additions & 27 deletions trunk/Documentation/accounting/cgroupstats.txt

This file was deleted.

27 changes: 25 additions & 2 deletions trunk/Documentation/cachetlb.txt
@@ -87,7 +87,30 @@

         This is used primarily during fault processing.
 
-5) void update_mmu_cache(struct vm_area_struct *vma,
+5) void flush_tlb_pgtables(struct mm_struct *mm,
+                           unsigned long start, unsigned long end)
+
+        The software page tables for address space 'mm' for virtual
+        addresses in the range 'start' to 'end-1' are being torn down.
+
+        Some platforms cache the lowest level of the software page tables
+        in a linear virtually mapped array, to make TLB miss processing
+        more efficient. On such platforms, since the TLB is caching the
+        software page table structure, it needs to be flushed when parts
+        of the software page table tree are unlinked/freed.
+
+        Sparc64 is one example of a platform which does this.
+
+        Usually, when munmap()'ing an area of user virtual address
+        space, the kernel leaves the page table parts around and just
+        marks the individual pte's as invalid. However, if very large
+        portions of the address space are unmapped, the kernel frees up
+        those portions of the software page tables to prevent potential
+        excessive kernel memory usage caused by erratic mmap/munmap
+        sequences. It is at these times that flush_tlb_pgtables will
+        be invoked.
+
+6) void update_mmu_cache(struct vm_area_struct *vma,
         unsigned long address, pte_t pte)
 
         At the end of every page fault, this routine is invoked to
@@ -100,7 +123,7 @@
         translations for software managed TLB configurations.
         The sparc64 port currently does this.
 
-6) void tlb_migrate_finish(struct mm_struct *mm)
+7) void tlb_migrate_finish(struct mm_struct *mm)
 
         This interface is called at the end of an explicit
         process migration. This interface provides a hook
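
The flush_tlb_pgtables() text added above describes an interface that most
architectures can stub out: only ports that keep the lowest level of the
software page tables in a TLB-cached linear virtual array (sparc64 being the
named example) need a real body. Below is a minimal sketch of the common
no-op case, assuming only the struct mm_struct type from kernel headers of
this era; it is not taken from any actual port:

    /*
     * Sketch: the no-op form of flush_tlb_pgtables(). On an
     * architecture whose hardware walks the page tables in physical
     * memory, the TLB never caches the software page tables
     * themselves, so tearing down the page tables covering
     * [start, end-1] needs no extra TLB work. A sparc64-style port
     * would instead flush the TLB entries mapping its linear pte
     * array over this range.
     */
    static inline void flush_tlb_pgtables(struct mm_struct *mm,
                                          unsigned long start,
                                          unsigned long end)
    {
            /* nothing to do */
    }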
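
update_mmu_cache(), renumbered from 5) to 6) above, is the post-fault hook
that software-managed-TLB ports use to pre-load the translation the fault
handler just installed. A hypothetical sketch follows; tlb_preload() is an
invented stand-in for whatever TLB-write primitive a real port provides:

    /*
     * Sketch for an imaginary software-TLB port: the generic fault
     * path has just installed 'pte' for 'address', so seed the TLB
     * now and spare userspace an immediate TLB miss on return.
     */
    static inline void update_mmu_cache(struct vm_area_struct *vma,
                                        unsigned long address, pte_t pte)
    {
            if (pte_present(pte))   /* only pre-load valid translations */
                    tlb_preload(vma->vm_mm, address, pte);  /* hypothetical */
    }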
