perf: Fix vmalloc ring buffer pages handling
If we allocate a perf ring buffer with the size of a single (user)
page, we get memory corruption when releasing it in the
rb_free_work function (for the CONFIG_PERF_USE_VMALLOC option).

For a single-page-sized ring buffer, the page_order is -1 (because
nr_pages is 0). This needs to be recognized in the rb_free_work
function so that it releases the proper number of pages.

Add a data_page_nr() function that returns the number of allocated
data pages, and adapt the rest of the code to use it.

Reported-by: Jan Stancek <jstancek@redhat.com>
Original-patch-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20130319143509.GA1128@krava.brq.redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Jiri Olsa authored and Ingo Molnar committed May 1, 2013
1 parent 1b0dac2 commit 5919b30
Showing 1 changed file with 10 additions and 4 deletions: kernel/events/ring_buffer.c
@@ -326,11 +326,16 @@ void rb_free(struct ring_buffer *rb)
 }
 
 #else
+static int data_page_nr(struct ring_buffer *rb)
+{
+        return rb->nr_pages << page_order(rb);
+}
+
 struct page *
 perf_mmap_to_page(struct ring_buffer *rb, unsigned long pgoff)
 {
-        if (pgoff > (1UL << page_order(rb)))
+        /* The '>' counts in the user page. */
+        if (pgoff > data_page_nr(rb))
                 return NULL;
 
         return vmalloc_to_page((void *)rb->user_page + pgoff * PAGE_SIZE);
@@ -350,10 +355,11 @@ static void rb_free_work(struct work_struct *work)
         int i, nr;
 
         rb = container_of(work, struct ring_buffer, work);
-        nr = 1 << page_order(rb);
+        nr = data_page_nr(rb);
 
         base = rb->user_page;
-        for (i = 0; i < nr + 1; i++)
+        /* The '<=' counts in the user page. */
+        for (i = 0; i <= nr; i++)
                 perf_mmap_unmark_page(base + (i * PAGE_SIZE));
 
         vfree(base);
@@ -387,7 +393,7 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
         rb->user_page = all_buf;
         rb->data_pages[0] = all_buf + PAGE_SIZE;
         rb->page_order = ilog2(nr_pages);
-        rb->nr_pages = 1;
+        rb->nr_pages = !!nr_pages;
 
         ring_buffer_init(rb, watermark, flags);
 
Expand Down