rmap: wrap page_check_address() using __cond_lock()
page_check_address() conditionally takes *@ptlp when it returns non-NULL.  Renaming it to __page_check_address() and wrapping it in an inline helper that uses __cond_lock() removes the following warnings from sparse:

 mm/rmap.c:472:9: warning: context imbalance in 'page_mapped_in_vma' - unexpected unlock
 mm/rmap.c:524:9: warning: context imbalance in 'page_referenced_one' - unexpected unlock
 mm/rmap.c:706:9: warning: context imbalance in 'page_mkclean_one' - unexpected unlock
 mm/rmap.c:1066:9: warning: context imbalance in 'try_to_unmap_one' - unexpected unlock

Signed-off-by: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Namhyung Kim authored and Linus Torvalds committed Oct 26, 2010
1 parent ea4525b commit e9a81a8
Showing 2 changed files with 13 additions and 2 deletions.
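
For context, the __cond_lock() annotation used by this commit comes from include/linux/compiler.h; a rough sketch of its definition (paraphrased from the kernel headers, not part of this diff) is:

#ifdef __CHECKER__
/* Tell sparse the lock in 'x' is acquired exactly when expression 'c' is non-zero. */
# define __cond_lock(x, c)	((c) ? ({ __acquire(x); 1; }) : 0)
#else
/* In real builds the annotation compiles away to the bare expression. */
# define __cond_lock(x, c)	(c)
#endif

Because the new wrapper passes the whole call through __cond_lock(*ptlp, ...), sparse sees an __acquire(*ptlp) precisely when __page_check_address() returns non-NULL, which balances the pte_unmap_unlock() done later by the callers.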
13 changes: 12 additions & 1 deletion include/linux/rmap.h
@@ -205,9 +205,20 @@ int try_to_unmap_one(struct page *, struct vm_area_struct *,
 /*
  * Called from mm/filemap_xip.c to unmap empty zero page
  */
-pte_t *page_check_address(struct page *, struct mm_struct *,
+pte_t *__page_check_address(struct page *, struct mm_struct *,
 			  unsigned long, spinlock_t **, int);
 
+static inline pte_t *page_check_address(struct page *page, struct mm_struct *mm,
+					unsigned long address,
+					spinlock_t **ptlp, int sync)
+{
+	pte_t *ptep;
+
+	__cond_lock(*ptlp, ptep = __page_check_address(page, mm, address,
+						       ptlp, sync));
+	return ptep;
+}
+
 /*
  * Used by swapoff to help locate where page is expected in vma.
  */
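The wrapper matters because callers unconditionally unlock after a successful lookup. A simplified, hypothetical condensation of the calling pattern in try_to_unmap_one() and friends (not part of this diff) looks like:

	pte = page_check_address(page, mm, address, &ptl, 0);
	if (!pte)
		return SWAP_AGAIN;		/* not mapped here, nothing locked */

	/* ... inspect or update the pte under the page-table lock ... */

	pte_unmap_unlock(pte, ptl);		/* the unlock sparse flagged as unbalanced */

Without the __cond_lock() wrapper, sparse cannot tell that the pte lock is held only on the non-NULL path, hence the "unexpected unlock" context-imbalance warnings listed above.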
2 changes: 1 addition & 1 deletion mm/rmap.c
@@ -409,7 +409,7 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
  *
  * On success returns with pte mapped and locked.
  */
-pte_t *page_check_address(struct page *page, struct mm_struct *mm,
+pte_t *__page_check_address(struct page *page, struct mm_struct *mm,
 			  unsigned long address, spinlock_t **ptlp, int sync)
 {
 	pgd_t *pgd;
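The body of this function is untouched by the commit, so only the renamed signature appears in the hunk. In rough outline, assuming the pre-existing mm/rmap.c logic, it only publishes the lock on success:

	ptl = pte_lockptr(mm, pmd);
	spin_lock(ptl);
	if (pte_present(*pte) && page_to_pfn(page) == pte_pfn(*pte)) {
		*ptlp = ptl;			/* lock stays held; caller must unlock */
		return pte;
	}
	pte_unmap_unlock(pte, ptl);		/* no match: drop the lock before failing */
	return NULL;

This is exactly the conditional-acquire behaviour that the __cond_lock() wrapper now describes to sparse.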
