per-zone and reclaim enhancements for memory controller: calculate active/inactive imbalance per cgroup

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Herbert Poetzl <herbert@13thfloor.at>
Cc: Kirill Korotaev <dev@sw.ru>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Paul Menage <menage@google.com>
Cc: Pavel Emelianov <xemul@openvz.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
KAMEZAWA Hiroyuki authored and Linus Torvalds committed Feb 7, 2008
1 parent 58ae83d commit 5932f36
Showing 2 changed files with 22 additions and 0 deletions.
include/linux/memcontrol.h (8 additions, 0 deletions)
@@ -68,6 +68,8 @@ extern void mem_cgroup_page_migration(struct page *page, struct page *newpage);
  * For memory reclaim.
  */
 extern int mem_cgroup_calc_mapped_ratio(struct mem_cgroup *mem);
+extern long mem_cgroup_reclaim_imbalance(struct mem_cgroup *mem);
+
 
 
 #else /* CONFIG_CGROUP_MEM_CONT */
@@ -145,6 +147,12 @@ static inline int mem_cgroup_calc_mapped_ratio(struct mem_cgroup *mem)
 {
 	return 0;
 }
+
+static inline int mem_cgroup_reclaim_imbalance(struct mem_cgroup *mem)
+{
+	return 0;
+}
+
 #endif /* CONFIG_CGROUP_MEM_CONT */
 
 #endif /* _LINUX_MEMCONTROL_H */
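When CONFIG_CGROUP_MEM_CONT is off, the new static inline stub returns 0, so generic reclaim code can call the helper unconditionally and the compiler can discard any cgroup-specific branch. A minimal stand-alone sketch of that pattern, in userspace C; the function name and return value here are hypothetical stand-ins, not code from this patch:

#include <stdio.h>

/* #define CONFIG_CGROUP_MEM_CONT 1 */	/* toggle to compare both builds */

#ifdef CONFIG_CGROUP_MEM_CONT
static long reclaim_imbalance(void) { return 3; }	/* stand-in value */
#else
/* Controller compiled out: the stub is a constant, so callers need no #ifdef. */
static inline long reclaim_imbalance(void) { return 0; }
#endif

int main(void)
{
	/* With the stub, this branch is dead code the compiler can remove. */
	if (reclaim_imbalance() >= 3)
		puts("rotate active list harder");
	else
		puts("default balance");
	return 0;
}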
mm/memcontrol.c (14 additions, 0 deletions)
@@ -436,6 +436,20 @@ int mem_cgroup_calc_mapped_ratio(struct mem_cgroup *mem)
 	rss = (long)mem_cgroup_read_stat(&mem->stat, MEM_CGROUP_STAT_RSS);
 	return (int)((rss * 100L) / total);
 }
+/*
+ * This function is called from vmscan.c, in the page-reclaim loop, where
+ * the balance between the active and inactive lists is calculated. For
+ * memory-controller page reclaim we should use the mem_cgroup's imbalance
+ * rather than the zone's global LRU imbalance.
+ */
+long mem_cgroup_reclaim_imbalance(struct mem_cgroup *mem)
+{
+	unsigned long active, inactive;
+	/* active and inactive are page counts, so 'long' is wide enough. */
+	active = mem_cgroup_get_all_zonestat(mem, MEM_CGROUP_ZSTAT_ACTIVE);
+	inactive = mem_cgroup_get_all_zonestat(mem, MEM_CGROUP_ZSTAT_INACTIVE);
+	return (long) (active / (inactive + 1));
+}
 
 unsigned long mem_cgroup_isolate_pages(unsigned long nr_to_scan,
 					struct list_head *dst,
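mem_cgroup_reclaim_imbalance() returns the integer ratio active / (inactive + 1): 0 when the inactive list is at least as long as the active one, and roughly N when the active list is N times longer. A minimal stand-alone sketch of the arithmetic, assuming a hypothetical userspace mirror named reclaim_imbalance() that is not part of the patch:

#include <stdio.h>

/*
 * Userspace restatement of the kernel helper's arithmetic, for
 * illustration only; reclaim_imbalance() is a hypothetical mirror
 * of mem_cgroup_reclaim_imbalance().
 */
static long reclaim_imbalance(unsigned long active, unsigned long inactive)
{
	/* The "+ 1" guards against dividing by an empty inactive list. */
	return (long)(active / (inactive + 1));
}

int main(void)
{
	printf("%ld\n", reclaim_imbalance(4096, 0));	/* 4096: everything active */
	printf("%ld\n", reclaim_imbalance(1000, 1500));	/* 0: inactive >= active */
	printf("%ld\n", reclaim_imbalance(3000, 1000));	/* 2: 3000/1001 truncated */
	return 0;
}

The "+ 1" in the divisor is why an empty inactive list produces a large finite ratio instead of a divide-by-zero, and integer truncation means the result only rises in whole steps as the active list outgrows the inactive one.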
