From 24d38a441d097ec65e3df11b75a751ff5169b8b9 Mon Sep 17 00:00:00 2001
From: Glauber Costa
Date: Tue, 18 Dec 2012 14:23:08 -0800
Subject: [PATCH]

--- yaml ---
r: 347030
b: refs/heads/master
c: 92e793495597af4135d94314113bf13eafb0e663
h: refs/heads/master
v: v3
---
 [refs]                                 | 2 +-
 trunk/Documentation/cgroups/memory.txt | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/[refs] b/[refs]
index 05f8dd8df5d6..6c2afe710370 100644
--- a/[refs]
+++ b/[refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: 107dab5c92d5f9c3afe962036e47c207363255c7
+refs/heads/master: 92e793495597af4135d94314113bf13eafb0e663
diff --git a/trunk/Documentation/cgroups/memory.txt b/trunk/Documentation/cgroups/memory.txt
index 5b5b63143778..8b8c28b9864c 100644
--- a/trunk/Documentation/cgroups/memory.txt
+++ b/trunk/Documentation/cgroups/memory.txt
@@ -301,6 +301,13 @@ to trigger slab reclaim when those limits are reached.
 kernel memory, we prevent new processes from being created when the kernel
 memory usage is too high.
 
+* slab pages: pages allocated by the SLAB or SLUB allocator are tracked. A
+copy of each kmem_cache is created every time the cache is touched for the
+first time from inside the memcg. The creation is done lazily, so some objects
+can still be skipped while the cache is being created. All objects in a slab
+page should belong to the same memcg. This only fails to hold when a task is
+migrated to a different memcg while the page is being allocated by the cache.
+
 * sockets memory pressure: some sockets protocols have memory pressure
 thresholds. The Memory Controller allows them to be controlled individually
 per cgroup, instead of globally.
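
Editorial note, not part of the patch: the paragraph added to memory.txt describes
lazily created per-memcg copies of each kmem_cache. The userspace C sketch below
only illustrates that idea under simplified assumptions; the types and functions
(memcg, kmem_cache, memcg_cache_for, cache_alloc) are hypothetical names, not the
kernel's actual API, and the asynchronous part of the creation is not modeled.

/*
 * Minimal userspace sketch of lazy per-memcg kmem_cache copies.
 * Hypothetical simplification; the real kernel implementation differs.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_MEMCGS 8

struct memcg {
	int id;                 /* index into each root cache's copies[] array */
	size_t kmem_usage;      /* bytes charged to this memcg */
};

struct kmem_cache {
	const char *name;
	size_t object_size;
	struct memcg *owner;                    /* NULL for the root cache */
	struct kmem_cache *copies[MAX_MEMCGS];  /* lazily created per-memcg copies */
};

/* Return the per-memcg copy of a root cache, creating it on first touch. */
static struct kmem_cache *memcg_cache_for(struct kmem_cache *root,
					  struct memcg *cg)
{
	if (!cg)
		return root;    /* allocation from the root cgroup: no copy needed */

	if (!root->copies[cg->id]) {
		/* First touch from inside this memcg: create the copy now. */
		struct kmem_cache *c = calloc(1, sizeof(*c));

		c->name = root->name;
		c->object_size = root->object_size;
		c->owner = cg;
		root->copies[cg->id] = c;
	}
	return root->copies[cg->id];
}

/* Allocate one object and charge it to the cache's owning memcg, if any. */
static void *cache_alloc(struct kmem_cache *root, struct memcg *cg)
{
	struct kmem_cache *c = memcg_cache_for(root, cg);

	if (c->owner)
		c->owner->kmem_usage += c->object_size;
	return malloc(c->object_size);
}

int main(void)
{
	struct kmem_cache dentry_cache = { .name = "dentry", .object_size = 192 };
	struct memcg cg = { .id = 1 };

	void *a = cache_alloc(&dentry_cache, &cg);  /* creates the copy lazily */
	void *b = cache_alloc(&dentry_cache, &cg);  /* reuses the same copy */

	printf("kmem charged to memcg %d: %zu bytes\n", cg.id, cg.kmem_usage);
	free(a);
	free(b);
	return 0;       /* cache copies are deliberately leaked in this sketch */
}

Because every object is allocated through the copy owned by the allocating
memcg, all objects in a given slab page normally belong to the same memcg,
which is the invariant the documentation paragraph states.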