Commit 3edfe31

---
r: 35595
b: refs/heads/master
c: bfa5bf6
h: refs/heads/master
i:
  35593: aa5436e
  35591: 28e83b3
v: v3
Rolf Eike Beer authored and Linus Torvalds committed Sep 26, 2006
1 parent ee77374 commit 3edfe31
Showing 2 changed files with 28 additions and 8 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: 7ff6f08295d90ab20d25200ef485ebb45b1b8d71
+refs/heads/master: bfa5bf6d6446f0028187a727f792fbc7934228ad
34 changes: 27 additions & 7 deletions trunk/mm/memory.c
@@ -1227,7 +1227,12 @@ static int insert_page(struct mm_struct *mm, unsigned long addr, struct page *pa
 	return retval;
 }
 
-/*
+/**
+ * vm_insert_page - insert single page into user vma
+ * @vma: user vma to map to
+ * @addr: target user address of this page
+ * @page: source kernel page
+ *
  * This allows drivers to insert individual pages they've allocated
  * into a user vma.
  *
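For context, vm_insert_page() is what a driver's mmap handler calls to back part of a user vma with a page it has already allocated. The snippet below is a hypothetical sketch, not part of this commit; demo_mmap and demo_page are invented names, and demo_page is assumed to have been set up earlier with alloc_page().

/* Hypothetical driver code illustrating the documented call. */
static struct page *demo_page;	/* assumed: allocated earlier via alloc_page(GFP_KERNEL) */

static int demo_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* Only a single page is backed in this sketch. */
	if (vma->vm_end - vma->vm_start != PAGE_SIZE)
		return -EINVAL;
	/* Insert the kernel page at the start of the user vma. */
	return vm_insert_page(vma, vma->vm_start, demo_page);
}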
@@ -1319,7 +1324,16 @@ static inline int remap_pud_range(struct mm_struct *mm, pgd_t *pgd,
 	return 0;
 }
 
-/* Note: this is only safe if the mm semaphore is held when called. */
+/**
+ * remap_pfn_range - remap kernel memory to userspace
+ * @vma: user vma to map to
+ * @addr: target user address to start at
+ * @pfn: physical address of kernel memory
+ * @size: size of map area
+ * @prot: page protection flags for this mapping
+ *
+ * Note: this is only safe if the mm semaphore is held when called.
+ */
 int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
 		    unsigned long pfn, unsigned long size, pgprot_t prot)
 {
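As a usage note on the kernel-doc above: file_operations->mmap is invoked with the mm semaphore held, so a driver mmap handler is a natural caller. The sketch below is hypothetical (demo_mmap, buf_phys and buf_size are invented names; buf_phys is assumed to be the physical address of a contiguous buffer owned by the driver).

/* Hypothetical driver code, not from this commit. */
static int demo_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long len = vma->vm_end - vma->vm_start;

	if (len > buf_size)
		return -EINVAL;
	/* remap_pfn_range() takes a page frame number, so convert the
	 * physical address with PAGE_SHIFT. */
	return remap_pfn_range(vma, vma->vm_start, buf_phys >> PAGE_SHIFT,
			       len, vma->vm_page_prot);
}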
@@ -1801,9 +1815,10 @@ void unmap_mapping_range(struct address_space *mapping,
 }
 EXPORT_SYMBOL(unmap_mapping_range);
 
-/*
- * Handle all mappings that got truncated by a "truncate()"
- * system call.
+/**
+ * vmtruncate - unmap mappings "freed" by truncate() syscall
+ * @inode: inode of the file used
+ * @offset: file offset to start truncating
  *
  * NOTE! We have to be ready to update the memory sharing
  * between the file and the memory map for a potential last
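To illustrate the documented entry point: a filesystem honouring a size change from ->setattr can let vmtruncate() drop the mappings and cached pages beyond the new end of file. The fragment below is a hypothetical, heavily trimmed sketch (demo_setattr is an invented name and real filesystems do more work here).

/* Hypothetical filesystem code, not from this commit. */
static int demo_setattr(struct dentry *dentry, struct iattr *attr)
{
	struct inode *inode = dentry->d_inode;
	int err = 0;

	if (attr->ia_valid & ATTR_SIZE)
		err = vmtruncate(inode, attr->ia_size);	/* unmap past ia_size */
	return err;
}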
@@ -1872,11 +1887,16 @@ int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end)
 }
 EXPORT_UNUSED_SYMBOL(vmtruncate_range);  /* June 2006 */
 
-/*
+/**
+ * swapin_readahead - swap in pages in hope we need them soon
+ * @entry: swap entry of this memory
+ * @addr: address to start
+ * @vma: user vma this addresses belong to
+ *
  * Primitive swap readahead code. We simply read an aligned block of
  * (1 << page_cluster) entries in the swap area. This method is chosen
  * because it doesn't cost us any seek time. We also make sure to queue
- * the 'original' request together with the readahead ones...
+ * the 'original' request together with the readahead ones...
  *
  * This has been extended to use the NUMA policies from the mm triggering
  * the readahead.
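To show where swapin_readahead() sits, the fragment below loosely mirrors the call pattern used on the swap-in fault path (a simplified sketch, not a copy of do_swap_page(); demo_swapin is an invented name).

/* Hypothetical sketch of the swap-in call pattern. */
static struct page *demo_swapin(swp_entry_t entry, unsigned long address,
				struct vm_area_struct *vma)
{
	struct page *page = lookup_swap_cache(entry);

	if (!page) {
		/* Pull in a cluster of neighbouring swap slots first... */
		swapin_readahead(entry, address, vma);
		/* ...then read (or find) the page that was actually faulted on. */
		page = read_swap_cache_async(entry, vma, address);
	}
	return page;
}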
