perf: Do poll_wait() before checking condition in perf_poll()
One should first enqueue on the waitqueue and then check for the
condition. If the condition becomes true after mutex_unlock() but before
poll_wait(), then we lose it and would have to wait for another wakeup.

This has been like this since v2.6.31-rc1 commit c7138f3 ("perf_counter:
fix perf_poll()"). Before that it was slightly worse. I guess we get enough
wakeups, so if we miss one here it doesn't really matter. It is still a
bad example.
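
To make the ordering concrete, here is a minimal sketch of a generic ->poll()
handler (not the perf code itself; example_dev, its waitq and data_ready
fields, and example_poll() are hypothetical names used only for illustration).
The handler registers on the wait queue with poll_wait() first and only then
samples the condition, so a wakeup that fires in between is still reflected
in the returned mask:

    /* Illustrative sketch of the "enqueue first, then check" pattern. */
    struct example_dev {                    /* hypothetical device state */
            wait_queue_head_t waitq;
            bool data_ready;
    };

    static unsigned int example_poll(struct file *file, poll_table *wait)
    {
            struct example_dev *dev = file->private_data;
            unsigned int events = 0;

            poll_wait(file, &dev->waitq, wait);     /* 1) enqueue on the waitqueue  */

            if (dev->data_ready)                    /* 2) only then check condition */
                    events |= POLLIN | POLLRDNORM;

            return events;
    }

With the opposite ordering, a wakeup that arrives after the condition check
but before poll_wait() is not seen until the next wakeup, which is exactly
the window this patch closes in perf_poll().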

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1407159068-1478-1-git-send-email-bigeasy@linutronix.de
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Sebastian Andrzej Siewior authored and Ingo Molnar committed Aug 13, 2014
1 parent 36bbb2f commit e708d7a
Showing 1 changed file with 1 addition and 3 deletions: kernel/events/core.c
@@ -3629,6 +3629,7 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	struct ring_buffer *rb;
 	unsigned int events = POLLHUP;
 
+	poll_wait(file, &event->waitq, wait);
 	/*
 	 * Pin the event->rb by taking event->mmap_mutex; otherwise
 	 * perf_event_set_output() can swizzle our rb and make us miss wakeups.
@@ -3638,9 +3639,6 @@ static unsigned int perf_poll(struct file *file, poll_table *wait)
 	if (rb)
 		events = atomic_xchg(&rb->poll, 0);
 	mutex_unlock(&event->mmap_mutex);
-
-	poll_wait(file, &event->waitq, wait);
-
 	return events;
 }
