sched/numa: Retry migration of tasks to CPU on a preferred node
When a preferred node is selected for a task there is an attempt to migrate
the task to a CPU there. This may fail, in which case the task will only
migrate if the active load balancer takes action. This may never happen if
the conditions are not right. This patch checks at NUMA hinting fault
time whether another attempt should be made to migrate the task. It will
only make an attempt once every five seconds.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-34-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Mel Gorman authored and Ingo Molnar committed Oct 9, 2013
1 parent 58d081b commit 6b9a746
Showing 2 changed files with 24 additions and 7 deletions.
1 change: 1 addition & 0 deletions include/linux/sched.h
@@ -1341,6 +1341,7 @@ struct task_struct {
int numa_migrate_seq;
unsigned int numa_scan_period;
unsigned int numa_scan_period_max;
unsigned long numa_migrate_retry;
u64 node_stamp; /* migration stamp */
struct callback_head numa_work;

30 changes: 23 additions & 7 deletions kernel/sched/fair.c
@@ -1011,6 +1011,23 @@ static int task_numa_migrate(struct task_struct *p)
return migrate_task_to(p, env.best_cpu);
}

/* Attempt to migrate a task to a CPU on the preferred node. */
static void numa_migrate_preferred(struct task_struct *p)
{
/* Success if task is already running on preferred CPU */
p->numa_migrate_retry = 0;
if (cpu_to_node(task_cpu(p)) == p->numa_preferred_nid)
return;

/* This task has no NUMA fault statistics yet */
if (unlikely(p->numa_preferred_nid == -1))
return;

/* Otherwise, try migrate to a CPU on the preferred node */
if (task_numa_migrate(p) != 0)
p->numa_migrate_retry = jiffies + HZ*5;
}

static void task_numa_placement(struct task_struct *p)
{
int seq, nid, max_nid = -1;
@@ -1045,17 +1062,12 @@ static void task_numa_placement(struct task_struct *p)
}
}

/*
* Record the preferred node as the node with the most faults,
* requeue the task to be running on the idlest CPU on the
* preferred node and reset the scanning rate to recheck
* the working set placement.
*/
/* Preferred node as the node with the most faults */
if (max_faults && max_nid != p->numa_preferred_nid) {
/* Update the preferred nid and migrate task if possible */
p->numa_preferred_nid = max_nid;
p->numa_migrate_seq = 1;
task_numa_migrate(p);
numa_migrate_preferred(p);
}
}

@@ -1111,6 +1123,10 @@ void task_numa_fault(int last_nidpid, int node, int pages, bool migrated)

task_numa_placement(p);

/* Retry task to preferred node migration if it previously failed */
if (p->numa_migrate_retry && time_after(jiffies, p->numa_migrate_retry))
numa_migrate_preferred(p);

p->numa_faults_buffer[task_faults_idx(node, priv)] += pages;
}

