perfcounters: tweak group scheduling
Impact: schedule in groups atomically

If there are multiple groups in a task, make sure they are scheduled
in and out atomically.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar committed Dec 23, 2008
1 parent 8fb9331 commit 7995888
1 changed file: kernel/perf_counter.c (13 additions, 3 deletions)
diff --git a/kernel/perf_counter.c b/kernel/perf_counter.c
@@ -367,21 +367,26 @@ counter_sched_in(struct perf_counter *counter,
 	ctx->nr_active++;
 }
 
-static void
+static int
 group_sched_in(struct perf_counter *group_counter,
 	       struct perf_cpu_context *cpuctx,
 	       struct perf_counter_context *ctx,
 	       int cpu)
 {
 	struct perf_counter *counter;
+	int was_group = 0;
 
 	counter_sched_in(group_counter, cpuctx, ctx, cpu);
 
 	/*
 	 * Schedule in siblings as one group (if any):
 	 */
-	list_for_each_entry(counter, &group_counter->sibling_list, list_entry)
+	list_for_each_entry(counter, &group_counter->sibling_list, list_entry) {
 		counter_sched_in(counter, cpuctx, ctx, cpu);
+		was_group = 1;
+	}
+
+	return was_group;
 }
 
 /*
@@ -416,7 +421,12 @@ void perf_counter_task_sched_in(struct task_struct *task, int cpu)
 		if (counter->cpu != -1 && counter->cpu != cpu)
 			continue;
 
-		group_sched_in(counter, cpuctx, ctx, cpu);
+		/*
+		 * If we scheduled in a group atomically and
+		 * exclusively, break out:
+		 */
+		if (group_sched_in(counter, cpuctx, ctx, cpu))
+			break;
 	}
 	spin_unlock(&ctx->lock);
