Btrfs: avoid returning -ENOMEM in convert_extent_bit() too early
We try to allocate an extent state before acquiring the tree's spinlock,
in case we end up needing to split an existing extent state into two.
If that allocation fails, we return -ENOMEM.
However, our single caller (the transaction/log commit code) passes in an
extent state that was cached from a call to find_first_extent_bit(), and
that cached state has a very high chance of matching the input range
exactly (always true for a transaction commit and very often, though not
always, true for a log commit). In that case we end up not needing the
preallocated extent state at all, since no split is required. Therefore
don't return -ENOMEM when we fail to allocate the temporary extent state
on the first iteration, since we might not need it at all; if we do end
up needing one, we'll allocate it later anyway.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Filipe Manana authored and Chris Mason committed Nov 21, 2014
1 parent e38e2ed commit c8fd3de
11 changes: 10 additions & 1 deletion fs/btrfs/extent_io.c
@@ -1066,13 +1066,21 @@ int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
 	int err = 0;
 	u64 last_start;
 	u64 last_end;
+	bool first_iteration = true;
 
 	btrfs_debug_check_extent_io_range(tree, start, end);
 
 again:
 	if (!prealloc && (mask & __GFP_WAIT)) {
+		/*
+		 * Best effort, don't worry if extent state allocation fails
+		 * here for the first iteration. We might have a cached state
+		 * that matches exactly the target range, in which case no
+		 * extent state allocations are needed. We'll only know this
+		 * after locking the tree.
+		 */
 		prealloc = alloc_extent_state(mask);
-		if (!prealloc)
+		if (!prealloc && !first_iteration)
 			return -ENOMEM;
 	}
 
@@ -1242,6 +1250,7 @@ int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
 		spin_unlock(&tree->lock);
 		if (mask & __GFP_WAIT)
 			cond_resched();
+		first_iteration = false;
 		goto again;
 	}
 
