---
yaml
---
r: 26858
b: refs/heads/master
c: 36be57f
h: refs/heads/master
v: v3
Paul Jackson authored and Linus Torvalds committed May 21, 2006
1 parent 41bbd98 commit f3b3459
Showing 2 changed files with 16 additions and 10 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: bdd804f478a0cc74bf7db8e9f9d5fd379d1b31ca
+refs/heads/master: 36be57ffe39e03aab9fbe857f70c7a6a15bd9e08
24 changes: 15 additions & 9 deletions trunk/kernel/cpuset.c
@@ -2231,19 +2231,25 @@ static const struct cpuset *nearest_exclusive_ancestor(const struct cpuset *cs)
  * So only GFP_KERNEL allocations, if all nodes in the cpuset are
  * short of memory, might require taking the callback_mutex mutex.
  *
- * The first loop over the zonelist in mm/page_alloc.c:__alloc_pages()
- * calls here with __GFP_HARDWALL always set in gfp_mask, enforcing
- * hardwall cpusets - no allocation on a node outside the cpuset is
- * allowed (unless in interrupt, of course).
- *
- * The second loop doesn't even call here for GFP_ATOMIC requests
- * (if the __alloc_pages() local variable 'wait' is set). That check
- * and the checks below have the combined effect in the second loop of
- * the __alloc_pages() routine that:
+ * The first call here from mm/page_alloc:get_page_from_freelist()
+ * has __GFP_HARDWALL set in gfp_mask, enforcing hardwall cpusets, so
+ * no allocation on a node outside the cpuset is allowed (unless in
+ * interrupt, of course).
+ *
+ * The second pass through get_page_from_freelist() doesn't even call
+ * here for GFP_ATOMIC calls. For those calls, the __alloc_pages()
+ * variable 'wait' is not set, and the bit ALLOC_CPUSET is not set
+ * in alloc_flags. That logic and the checks below have the combined
+ * effect that:
  *	in_interrupt - any node ok (current task context irrelevant)
  *	GFP_ATOMIC   - any node ok
  *	GFP_KERNEL   - any node in enclosing mem_exclusive cpuset ok
  *	GFP_USER     - only nodes in current task's mems allowed ok.
+ *
+ * Rule:
+ *    Don't call cpuset_zone_allowed() if you can't sleep, unless you
+ *    pass in the __GFP_HARDWALL flag set in gfp_flag, which disables
+ *    the code that might scan up ancestor cpusets and sleep.
  **/
 
 int __cpuset_zone_allowed(struct zone *z, gfp_t gfp_mask)
