fs: fix superblock iteration race
list_for_each_entry_safe is not suitable to protect against concurrent
modification of the list. Commit 6754af6 introduced a race in sb walking.

list_for_each_entry can use the trick of pinning the current entry in
the list before we drop and retake the lock, because it subsequently
follows cur->next. However, list_for_each_entry_safe saves n = cur->next
ahead of time, before entering the loop body, so when the lock is
dropped, the entry n points to may be deleted.
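
(Editorial illustration, not part of the commit message.) A minimal sketch of the racy shape described above, written against the list.h primitives; the item type, lock and walk_items_racy() function are hypothetical stand-ins for super_block, sb_lock and the sb walkers. The final comment marks where the saved lookahead pointer goes stale.

#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical stand-ins for super_block and sb_lock. */
struct item {
	struct list_head link;
};

static LIST_HEAD(items);
static DEFINE_SPINLOCK(items_lock);

static void walk_items_racy(void (*work)(struct item *))
{
	struct item *cur, *n;

	spin_lock(&items_lock);
	/* The lookahead n is computed before each loop body runs. */
	list_for_each_entry_safe(cur, n, &items, link) {
		/* cur is assumed pinned (e.g. refcounted) by the real code. */
		spin_unlock(&items_lock);

		work(cur);	/* other CPUs may add or remove entries here */

		spin_lock(&items_lock);
		/*
		 * RACE: n was computed before the lock was dropped, so the
		 * entry it points to may already have been removed and
		 * freed; the next loop advance then walks freed memory.
		 */
	}
	spin_unlock(&items_lock);
}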

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: John Stultz <johnstul@us.ibm.com>
Cc: Frank Mayhar <fmayhar@google.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
npiggin@suse.de authored and Linus Torvalds committed Jun 29, 2010
1 parent 5904b3b commit 57439f8
Showing 3 changed files with 23 additions and 0 deletions.
2 changes: 2 additions & 0 deletions fs/dcache.c
@@ -590,6 +590,8 @@ static void prune_dcache(int count)
 			up_read(&sb->s_umount);
 		}
 		spin_lock(&sb_lock);
+		/* lock was dropped, must reset next */
+		list_safe_reset_next(sb, n, s_list);
 		count -= pruned;
 		__put_super(sb);
 		/* more work left to do? */
6 changes: 6 additions & 0 deletions fs/super.c
@@ -374,6 +374,8 @@ void sync_supers(void)
 			up_read(&sb->s_umount);
 
 			spin_lock(&sb_lock);
+			/* lock was dropped, must reset next */
+			list_safe_reset_next(sb, n, s_list);
 			__put_super(sb);
 		}
 	}
@@ -405,6 +407,8 @@ void iterate_supers(void (*f)(struct super_block *, void *), void *arg)
 		up_read(&sb->s_umount);
 
 		spin_lock(&sb_lock);
+		/* lock was dropped, must reset next */
+		list_safe_reset_next(sb, n, s_list);
 		__put_super(sb);
 	}
 	spin_unlock(&sb_lock);
@@ -585,6 +589,8 @@ static void do_emergency_remount(struct work_struct *work)
 		}
 		up_write(&sb->s_umount);
 		spin_lock(&sb_lock);
+		/* lock was dropped, must reset next */
+		list_safe_reset_next(sb, n, s_list);
 		__put_super(sb);
 	}
 	spin_unlock(&sb_lock);
15 changes: 15 additions & 0 deletions include/linux/list.h
@@ -544,6 +544,21 @@ static inline void list_splice_tail_init(struct list_head *list,
 	     &pos->member != (head);					\
 	     pos = n, n = list_entry(n->member.prev, typeof(*n), member))
 
+/**
+ * list_safe_reset_next - reset a stale list_for_each_entry_safe loop
+ * @pos:	the loop cursor used in the list_for_each_entry_safe loop
+ * @n:		temporary storage used in list_for_each_entry_safe
+ * @member:	the name of the list_struct within the struct.
+ *
+ * list_safe_reset_next is not safe to use in general if the list may be
+ * modified concurrently (eg. the lock is dropped in the loop body). An
+ * exception to this is if the cursor element (pos) is pinned in the list,
+ * and list_safe_reset_next is called after re-taking the lock and before
+ * completing the current iteration of the loop body.
+ */
+#define list_safe_reset_next(pos, n, member)				\
+	n = list_entry(pos->member.next, typeof(*pos), member)
+
 /*
  * Double linked lists with a single pointer list head.
  * Mostly useful for hash tables where the two pointer list head is
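
(Editorial usage sketch, not part of the commit.) Per the kerneldoc above, the helper is only safe when the cursor stays pinned across the lock drop and the reset happens after the lock is re-taken, before the current iteration completes. Reusing the hypothetical struct item / items_lock declarations from the sketch under the commit message:

static void walk_items_fixed(void (*work)(struct item *))
{
	struct item *cur, *n;

	spin_lock(&items_lock);
	list_for_each_entry_safe(cur, n, &items, link) {
		/* cur must be pinned (e.g. refcounted) before unlocking. */
		spin_unlock(&items_lock);

		work(cur);

		spin_lock(&items_lock);
		/* Lock was dropped: recompute n from the still-valid cur. */
		list_safe_reset_next(cur, n, link);
	}
	spin_unlock(&items_lock);
}

The helper only re-reads cur->next, so it cannot repair anything if cur itself could have been freed; that is why the sb walkers pin the superblock before dropping sb_lock, matching the __put_super() calls visible in the hunks above.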
