hugetlb: fix quota management for private mappings
The hugetlbfs quota management system was never taught to handle MAP_PRIVATE
mappings when that support was added.  Currently, quota is debited at page
instantiation and credited at file truncation.  This approach works correctly
for shared pages but is incomplete for private pages.  In addition to
hugetlb_no_page(), private pages can be instantiated by hugetlb_cow(); but
this function does not respect quotas.

Private huge pages are treated very much like normal, anonymous pages.  They
are not "backed" by the hugetlbfs file and are not stored in the mapping's
radix tree.  This means that private pages are invisible to
truncate_hugepages() so that function will not credit the quota.
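
The imbalance is easy to see in miniature.  Below is a small userspace
model of the pre-patch accounting (the names are illustrative stand-ins,
not the kernel API): quota is debited whenever a page is instantiated,
but credited only for pages that truncation can find in the radix tree,
so a private page's charge is never returned.

#include <stdbool.h>
#include <stdio.h>

static long quota_used;			/* huge pages charged to the quota */

static void instantiate_page(bool shared, bool *in_radix_tree)
{
	quota_used++;			/* debit at instantiation */
	*in_radix_tree = shared;	/* only shared pages are inserted */
}

static void truncate_file(bool *in_radix_tree, int nr_pages)
{
	for (int i = 0; i < nr_pages; i++) {
		if (!in_radix_tree[i])
			continue;	/* private pages are invisible here */
		quota_used--;		/* credit at truncation */
		in_radix_tree[i] = false;
	}
}

int main(void)
{
	bool in_radix_tree[2];

	instantiate_page(true, &in_radix_tree[0]);	/* shared fault */
	instantiate_page(false, &in_radix_tree[1]);	/* private fault */
	truncate_file(in_radix_tree, 2);

	/* Prints 1: the private page's debit was never credited back. */
	printf("quota still charged after truncate: %ld page(s)\n", quota_used);
	return 0;
}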

This patch (based on a prototype provided by Ken Chen) moves quota crediting
for all pages into free_huge_page().  page->private is used to store a pointer
to the mapping to which this page belongs.  This is used to credit quota on
the appropriate hugetlbfs instance.
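
The scheme can be sketched in a few lines of userspace C (a model only;
struct mapping and the helpers below are stand-ins for the hugetlbfs
objects, not the kernel API): the allocator records which mapping a page
was charged against in a per-page private word, and the free path credits
that mapping, whether the page was mapped shared or private.

#include <stdio.h>

struct mapping { long quota_used; };
struct page    { unsigned long private; };	/* mirrors page->private */

static struct page *alloc_page_charged(struct mapping *m)
{
	static struct page p;			/* one page suffices here */

	m->quota_used++;			/* debit at instantiation */
	p.private = (unsigned long)m;		/* remember whom we charged */
	return &p;
}

static void free_page_charged(struct page *p)
{
	struct mapping *m = (struct mapping *)p->private;

	if (m)
		m->quota_used--;		/* credit at final free */
	p->private = 0;
}

int main(void)
{
	struct mapping m = { 0 };
	struct page *page = alloc_page_charged(&m);

	/* Shared or private, the page carries its own crediting context. */
	free_page_charged(page);
	printf("quota charged after free: %ld\n", m.quota_used);	/* prints 0 */
	return 0;
}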

Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Ken Chen <kenchen@google.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: David Gibson <hermes@gibson.dropbear.id.au>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Adam Litke authored and Linus Torvalds committed Nov 15, 2007
1 parent 348ea20 commit c79fb75
Showing 2 changed files with 10 additions and 4 deletions.
fs/hugetlbfs/inode.c: 1 change, 0 additions & 1 deletion

@@ -364,7 +364,6 @@ static void truncate_hugepages(struct inode *inode, loff_t lstart)
 			++next;
 			truncate_huge_page(page);
 			unlock_page(page);
-			hugetlb_put_quota(mapping);
 			freed++;
 		}
 		huge_pagevec_release(&pvec);
mm/hugetlb.c: 13 changes, 10 additions & 3 deletions

@@ -116,7 +116,9 @@ static void update_and_free_page(struct page *page)
 static void free_huge_page(struct page *page)
 {
 	int nid = page_to_nid(page);
+	struct address_space *mapping;
 
+	mapping = (struct address_space *) page_private(page);
 	BUG_ON(page_count(page));
 	INIT_LIST_HEAD(&page->lru);
 
@@ -129,6 +131,9 @@ static void free_huge_page(struct page *page)
 		enqueue_huge_page(page);
 	}
 	spin_unlock(&hugetlb_lock);
+	if (mapping)
+		hugetlb_put_quota(mapping);
+	set_page_private(page, 0);
 }
 
 /*
@@ -388,8 +393,10 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
 		page = alloc_huge_page_shared(vma, addr);
 	else
 		page = alloc_huge_page_private(vma, addr);
-	if (page)
+	if (page) {
 		set_page_refcounted(page);
+		set_page_private(page, (unsigned long) vma->vm_file->f_mapping);
+	}
 	return page;
 }
 
@@ -730,6 +737,8 @@ static int hugetlb_cow(struct mm_struct *mm, struct vm_area_struct *vma,
 		set_huge_ptep_writable(vma, address, ptep);
 		return 0;
 	}
+	if (hugetlb_get_quota(vma->vm_file->f_mapping))
+		return VM_FAULT_SIGBUS;
 
 	page_cache_get(old_page);
 	new_page = alloc_huge_page(vma, address);
@@ -796,7 +805,6 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);
 			if (err) {
 				put_page(page);
-				hugetlb_put_quota(mapping);
 				if (err == -EEXIST)
 					goto retry;
 				goto out;
@@ -830,7 +838,6 @@ static int hugetlb_no_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 backout:
 	spin_unlock(&mm->page_table_lock);
-	hugetlb_put_quota(mapping);
 	unlock_page(page);
 	put_page(page);
 	goto out;
