sched/fair: Have task_move_group_fair() unconditionally add the entity load to the runqueue

Currently we conditionally add the entity load to the rq when moving
the task between cgroups.

This doesn't make sense: we always 'migrate' the task between
cgroups, so we should always migrate its load along with it.

[ The history here is that we used to only migrate the blocked load
  which was only meaningful when !queued. ]

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
[ Rewrote the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1440069720-27038-3-git-send-email-byungchul.park@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Byungchul Park authored and Ingo Molnar committed Sep 13, 2015
1 parent a05e8c5 commit 50a2a3b
Showing 1 changed file with 4 additions and 5 deletions.
9 changes: 4 additions & 5 deletions kernel/sched/fair.c
@@ -8041,13 +8041,12 @@ static void task_move_group_fair(struct task_struct *p, int queued)
 		se->vruntime -= cfs_rq_of(se)->min_vruntime;
 	set_task_rq(p, task_cpu(p));
 	se->depth = se->parent ? se->parent->depth + 1 : 0;
-	if (!queued) {
-		cfs_rq = cfs_rq_of(se);
+	cfs_rq = cfs_rq_of(se);
+	if (!queued)
 		se->vruntime += cfs_rq->min_vruntime;
 
-		/* Virtually synchronize task with its new cfs_rq */
-		attach_entity_load_avg(cfs_rq, se);
-	}
+	/* Virtually synchronize task with its new cfs_rq */
+	attach_entity_load_avg(cfs_rq, se);
 }
 
 void free_fair_sched_group(struct task_group *tg)
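For reference, a minimal annotated sketch of how the tail of task_move_group_fair() reads after this change. The function prologue (the se and cfs_rq locals, and the code above the hunk) is assumed from context rather than shown in the diff; cfs_rq_of(), set_task_rq() and attach_entity_load_avg() are the helpers visible in the hunk above.

static void task_move_group_fair(struct task_struct *p, int queued)
{
	struct sched_entity *se = &p->se;	/* assumed prologue, not part of the hunk */
	struct cfs_rq *cfs_rq;

	/* ... code above the hunk elided; a !queued task arrives here with
	 * its vruntime already made relative to the old cfs_rq ... */

	set_task_rq(p, task_cpu(p));	/* re-home the entity into the new group's rq */
	se->depth = se->parent ? se->parent->depth + 1 : 0;

	/* Resolve the *new* cfs_rq once, for queued and !queued alike. */
	cfs_rq = cfs_rq_of(se);
	if (!queued)	/* only a !queued task needs its vruntime rebased */
		se->vruntime += cfs_rq->min_vruntime;

	/*
	 * Virtually synchronize task with its new cfs_rq. After this patch
	 * the load is attached unconditionally, mirroring the fact that the
	 * task itself always migrates between the cgroups.
	 */
	attach_entity_load_avg(cfs_rq, se);
}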
