Commit

---
r: 71591
b: refs/heads/master
c: 723ee05
h: refs/heads/master
i:
  71589: 7b4c3bb
  71587: 383fc8d
  71583: 498bd25
v: v3
Ralf Baechle committed Oct 19, 2007
1 parent b0988e7 commit d5bf583
Showing 607 changed files with 5,822 additions and 13,211 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
-refs/heads/master: 2843483d2eb02ad104edbe8b2429fb6a39d25063
+refs/heads/master: 723ee050aa2dd4aa483bdb30413dcd7d48829783
2 changes: 1 addition & 1 deletion trunk/Documentation/DocBook/kernel-api.tmpl
@@ -46,7 +46,7 @@

<sect1><title>Atomic and pointer manipulation</title>
!Iinclude/asm-x86/atomic_32.h
-!Iinclude/asm-x86/unaligned.h
+!Iinclude/asm-x86/unaligned_32.h
</sect1>

<sect1><title>Delaying, scheduling, and timer routines</title>
27 changes: 0 additions & 27 deletions trunk/Documentation/accounting/cgroupstats.txt

This file was deleted.

27 changes: 25 additions & 2 deletions trunk/Documentation/cachetlb.txt
@@ -87,7 +87,30 @@ changes occur:

This is used primarily during fault processing.

-5) void update_mmu_cache(struct vm_area_struct *vma,
+5) void flush_tlb_pgtables(struct mm_struct *mm,
+			   unsigned long start, unsigned long end)
+
+	The software page tables for address space 'mm' for virtual
+	addresses in the range 'start' to 'end-1' are being torn down.
+
+	Some platforms cache the lowest level of the software page tables
+	in a linear virtually mapped array, to make TLB miss processing
+	more efficient.  On such platforms, since the TLB is caching the
+	software page table structure, it needs to be flushed when parts
+	of the software page table tree are unlinked/freed.
+
+	Sparc64 is one example of a platform which does this.
+
+	Usually, when munmap()'ing an area of user virtual address
+	space, the kernel leaves the page table parts around and just
+	marks the individual pte's as invalid.  However, if very large
+	portions of the address space are unmapped, the kernel frees up
+	those portions of the software page tables to prevent potential
+	excessive kernel memory usage caused by erratic mmap/mmunmap
+	sequences.  It is at these times that flush_tlb_pgtables will
+	be invoked.
+
+6) void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t pte)

At the end of every page fault, this routine is invoked to
@@ -100,7 +123,7 @@ changes occur:
translations for software managed TLB configurations.
The sparc64 port currently does this.

-6) void tlb_migrate_finish(struct mm_struct *mm)
+7) void tlb_migrate_finish(struct mm_struct *mm)

This interface is called at the end of an explicit
process migration. This interface provides a hook
