memcg: do not try to drain per-cpu caches without pages
drain_all_stock_async tries to reduce the work scheduled on the work
queue by excluding the current CPU: it assumes that the calling context
has already tried to charge from that CPU's cache and failed, so the
cache must be empty already.

While the assumption is correct, we can optimize further by checking
the current number of pages in the cache.  This also avoids scheduling
work on other CPUs whose stock is empty.

For the current CPU we can simply call drain_local_stock rather than
deferring it to the work queue.
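
For reference, here is a condensed sketch of the drain loop as it reads with this patch applied, assembled from the hunks below.  It is illustrative only and not standalone-compilable: memcg_stock_pcp, drain_local_stock, FLUSHING_CACHED_CHARGE, root_mem and curcpu are defined elsewhere in mm/memcontrol.c, and the surrounding locking and cpu-hotplug handling is omitted.

	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *mem;

		/* Skip CPUs whose stock caches nothing or holds no pages. */
		mem = stock->cached;
		if (!mem || !stock->nr_pages)
			continue;
		/* Only drain stocks that belong to root_mem's hierarchy. */
		if (mem != root_mem) {
			if (!root_mem->use_hierarchy)
				continue;
			if (!css_is_ancestor(&mem->css, &root_mem->css))
				continue;
		}
		/* Drain the current CPU directly, defer the rest to kworkers. */
		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				drain_local_stock(&stock->work);
			else
				schedule_work_on(cpu, &stock->work);
		}
	}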

[kamezawa.hiroyu@jp.fujitsu.com: use drain_local_stock for current CPU optimization]
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Michal Hocko authored and Linus Torvalds committed Jul 26, 2011
1 parent 82f9d48 commit d1a05b6
Showing 1 changed file with 7 additions and 6 deletions.
mm/memcontrol.c: 7 additions & 6 deletions
@@ -2180,11 +2180,8 @@ static void drain_all_stock_async(struct mem_cgroup *root_mem)
 		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
 		struct mem_cgroup *mem;
 
-		if (cpu == curcpu)
-			continue;
-
 		mem = stock->cached;
-		if (!mem)
+		if (!mem || !stock->nr_pages)
 			continue;
 		if (mem != root_mem) {
 			if (!root_mem->use_hierarchy)
@@ -2193,8 +2190,12 @@ static void drain_all_stock_async(struct mem_cgroup *root_mem)
 			if (!css_is_ancestor(&mem->css, &root_mem->css))
 				continue;
 		}
-		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
-			schedule_work_on(cpu, &stock->work);
+		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
+			if (cpu == curcpu)
+				drain_local_stock(&stock->work);
+			else
+				schedule_work_on(cpu, &stock->work);
+		}
 	}
 	put_online_cpus();
 	mutex_unlock(&percpu_charge_mutex);
