Merge tag 'trace-ringbuffer-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull trace ring-buffer updates from Steven Rostedt:

 - Clean up the __rb_map_vma() logic

   The logic of __rb_map_vma() has an error check with WARN_ON() that
   makes sure the index does not go past the end of the array of
   buffers. The test in the loop pretty much guarantees that this will
   never happen, but since the relation between the variables used is a
   little complex, the WARN_ON() check was added. It was noticed that
   the array was dereferenced before this check, so if the logic ever
   does break and the index goes past the array, there would be an
   out-of-bounds access. Move the access to after the WARN_ON().

 - Consolidate how the ring buffer is determined to be empty

   Currently there are two ways used to determine whether the ring
   buffer is empty. One relies on the status of the commit and reader
   pages and what was read, and the other on what was written versus
   what was read. The number-of-entries-written method also works for
   reading events that are out of the kernel's control (which is what
   pKVM will use). Move to this method to make it easier to implement a
   pKVM ring buffer that the kernel can read.
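
   As a plain-C illustration of that accounting (ordinary counters are
   used here in place of the kernel's per-CPU ring buffer fields), the
   buffer is empty exactly when every entry that was ever written has
   either been read or overwritten:

	#include <stdbool.h>
	#include <stdio.h>

	/* Simplified stand-in for the per-CPU ring buffer counters. */
	struct cpu_counters {
		unsigned long entries;	/* events ever written */
		unsigned long overrun;	/* events lost to overwrite */
		unsigned long read;	/* events already consumed */
	};

	/* Same shape as rb_num_of_entries(): written minus (overwritten + read). */
	static unsigned long num_of_entries(const struct cpu_counters *c)
	{
		return c->entries - (c->overrun + c->read);
	}

	static bool per_cpu_empty(const struct cpu_counters *c)
	{
		return !num_of_entries(c);
	}

	int main(void)
	{
		struct cpu_counters c = { .entries = 10, .overrun = 3, .read = 7 };

		printf("unread %lu, empty %d\n", num_of_entries(&c), per_cpu_empty(&c));

		c.entries += 2;		/* two more events get written */
		printf("unread %lu, empty %d\n", num_of_entries(&c), per_cpu_empty(&c));
		return 0;
	}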

* tag 'trace-ringbuffer-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  ring-buffer: Make reading page consistent with the code logic
  ring-buffer: Check for empty ring-buffer with rb_num_of_entries()
Linus Torvalds committed Jan 21, 2025
2 parents 9f3ee94 + 6e31b75 commit 0074ade
Showing 1 changed file with 17 additions and 46 deletions.
kernel/trace/ring_buffer.c: 17 additions & 46 deletions
@@ -4682,40 +4682,22 @@ int ring_buffer_write(struct trace_buffer *buffer,
 }
 EXPORT_SYMBOL_GPL(ring_buffer_write);
 
-static bool rb_per_cpu_empty(struct ring_buffer_per_cpu *cpu_buffer)
+/*
+ * The total entries in the ring buffer is the running counter
+ * of entries entered into the ring buffer, minus the sum of
+ * the entries read from the ring buffer and the number of
+ * entries that were overwritten.
+ */
+static inline unsigned long
+rb_num_of_entries(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	struct buffer_page *reader = cpu_buffer->reader_page;
-	struct buffer_page *head = rb_set_head_page(cpu_buffer);
-	struct buffer_page *commit = cpu_buffer->commit_page;
-
-	/* In case of error, head will be NULL */
-	if (unlikely(!head))
-		return true;
-
-	/* Reader should exhaust content in reader page */
-	if (reader->read != rb_page_size(reader))
-		return false;
-
-	/*
-	 * If writers are committing on the reader page, knowing all
-	 * committed content has been read, the ring buffer is empty.
-	 */
-	if (commit == reader)
-		return true;
-
-	/*
-	 * If writers are committing on a page other than reader page
-	 * and head page, there should always be content to read.
-	 */
-	if (commit != head)
-		return false;
+	return local_read(&cpu_buffer->entries) -
+		(local_read(&cpu_buffer->overrun) + cpu_buffer->read);
+}
 
-	/*
-	 * Writers are committing on the head page, we just need
-	 * to care about there're committed data, and the reader will
-	 * swap reader page with head page when it is to read data.
-	 */
-	return rb_page_commit(commit) == 0;
+static bool rb_per_cpu_empty(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	return !rb_num_of_entries(cpu_buffer);
 }
 
 /**
@@ -4861,19 +4843,6 @@ void ring_buffer_record_enable_cpu(struct trace_buffer *buffer, int cpu)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_record_enable_cpu);
 
-/*
- * The total entries in the ring buffer is the running counter
- * of entries entered into the ring buffer, minus the sum of
- * the entries read from the ring buffer and the number of
- * entries that were overwritten.
- */
-static inline unsigned long
-rb_num_of_entries(struct ring_buffer_per_cpu *cpu_buffer)
-{
-	return local_read(&cpu_buffer->entries) -
-		(local_read(&cpu_buffer->overrun) + cpu_buffer->read);
-}
-
 /**
  * ring_buffer_oldest_event_ts - get the oldest event timestamp from the buffer
  * @buffer: The ring buffer
@@ -7059,14 +7028,16 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 
 	while (p < nr_pages) {
-		struct page *page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
+		struct page *page;
 		int off = 0;
 
 		if (WARN_ON_ONCE(s >= nr_subbufs)) {
 			err = -EINVAL;
 			goto out;
 		}
 
+		page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
+
 		for (; off < (1 << (subbuf_order)); off++, page++) {
 			if (p >= nr_pages)
 				break;
