---
r: 187729
b: refs/heads/master
c: 2a61aa4
h: refs/heads/master
i:
  187727: 96a884a
v: v3
Adam Buchbinder authored and Jiri Kosina committed Feb 4, 2010
1 parent 423c5c0 commit 96fe517
Showing 5 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
-refs/heads/master: c41b20e721ea4f6f20f66a66e7f0c3c97a2ca9c2
+refs/heads/master: 2a61aa401638529cd4231f6106980d307fba98fa
2 changes: 1 addition & 1 deletion trunk/fs/buffer.c
@@ -2893,7 +2893,7 @@ int block_write_full_page_endio(struct page *page, get_block_t *get_block,

/*
* The page straddles i_size. It must be zeroed out on each and every
-* writepage invokation because it may be mmapped. "A file is mapped
+* writepage invocation because it may be mmapped. "A file is mapped
* in multiples of the page size. For a file that is not a multiple of
* the page size, the remaining memory is zeroed when mapped, and
* writes to that region are not written out to the file."
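
The mmap(2) behaviour quoted in that comment can be observed from userspace. A minimal sketch, assuming a hypothetical demo file name and illustrative offsets (none of this is from the commit): map a file whose size is not a multiple of the page size, read a zero from the tail of the final page, and note that a write into that tail is not carried back to the file.

/*
 * Hypothetical userspace demo (not part of the commit) of the mmap(2)
 * semantics quoted in the comment above: the tail of the final page of
 * a file that is not a multiple of the page size reads back as zero,
 * and writes to that tail are not written out to the file.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);
        int fd = open("demo.dat", O_RDWR | O_CREAT | O_TRUNC, 0600);

        if (fd < 0)
                return 1;
        /* 100 bytes: well short of one page, so the mapping straddles EOF. */
        if (ftruncate(fd, 100) < 0)
                return 1;

        char *p = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
                return 1;

        /* Bytes past EOF but within the mapped page read as zero... */
        printf("byte at offset 200: %d\n", p[200]);

        /* ...and scribbling there does not grow or change the file itself. */
        p[200] = 'X';
        msync(p, page, MS_SYNC);

        munmap(p, page);
        close(fd);
        return 0;
}
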
2 changes: 1 addition & 1 deletion trunk/fs/mpage.c
@@ -561,7 +561,7 @@ static int __mpage_writepage(struct page *page, struct writeback_control *wbc,
if (page->index >= end_index) {
/*
* The page straddles i_size. It must be zeroed out on each
-* and every writepage invokation because it may be mmapped.
+* and every writepage invocation because it may be mmapped.
* "A file is mapped in multiples of the page size. For a file
* that is not a multiple of the page size, the remaining memory
* is zeroed when mapped, and writes to that region are not
2 changes: 1 addition & 1 deletion trunk/include/linux/mmzone.h
@@ -349,7 +349,7 @@ struct zone {
* prev_priority holds the scanning priority for this zone. It is
* defined as the scanning priority at which we achieved our reclaim
* target at the previous try_to_free_pages() or balance_pgdat()
-* invokation.
+* invocation.
*
* We use prev_priority as a measure of how much stress page reclaim is
* under - it drives the swappiness decision: whether to unmap mapped
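
To make the prev_priority idea concrete, a small self-contained model (invented numbers and names, not kernel code) can scan at progressively lower priority until the reclaim target is met and record the priority it needed, so the next pass can judge how much stress reclaim was under.

/*
 * Hypothetical, simplified model of the prev_priority bookkeeping
 * described above. The reclaim amounts are made up; only the shape of
 * the logic (record how deep we had to scan) mirrors the comment.
 */
#include <stdio.h>

#define DEF_PRIORITY 12

static int prev_priority = DEF_PRIORITY;

/* Pretend each pass at a lower priority reclaims a few more pages. */
static int shrink_zone_model(int priority, int target)
{
        int reclaimed = (DEF_PRIORITY - priority + 1) * 8;  /* made-up model */

        return reclaimed >= target;
}

int main(void)
{
        int target = 32;

        for (int priority = DEF_PRIORITY; priority >= 0; priority--) {
                if (shrink_zone_model(priority, target)) {
                        /* Remember how deep we had to scan to meet the target. */
                        prev_priority = priority;
                        break;
                }
        }

        /*
         * A low prev_priority means reclaim was under stress last time,
         * which feeds the decision on whether to unmap mapped pages.
         */
        printf("prev_priority = %d\n", prev_priority);
        return 0;
}
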
2 changes: 1 addition & 1 deletion trunk/kernel/sched_cpupri.c
@@ -58,7 +58,7 @@ static int convert_prio(int prio)
* @lowest_mask: A mask to fill in with selected CPUs (or NULL)
*
* Note: This function returns the recommended CPUs as calculated during the
-* current invokation. By the time the call returns, the CPUs may have in
+* current invocation. By the time the call returns, the CPUs may have in
* fact changed priorities any number of times. While not ideal, it is not
* an issue of correctness since the normal rebalancer logic will correct
* any discrepancies created by racing against the uncertainty of the current
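
The staleness that comment warns about can be sketched in miniature. Assuming illustrative names (this is not the cpupri API): the query computes its answer from the priorities as they stand during the call, and the caller must treat it as a recommendation that may already be out of date by the time it acts on it.

/*
 * Hypothetical single-threaded model of the "advisory snapshot" idea.
 * Names and numbers are illustrative only; in the kernel, the later
 * rebalancer pass corrects any mismatch caused by such staleness.
 */
#include <stdio.h>

#define NR_CPUS_MODEL 4

static int cpu_prio[NR_CPUS_MODEL] = { 3, 1, 4, 1 };

/* Return the lowest-priority CPU as seen during this invocation. */
static int find_lowest_cpu_model(void)
{
        int best = 0;

        for (int cpu = 1; cpu < NR_CPUS_MODEL; cpu++)
                if (cpu_prio[cpu] < cpu_prio[best])
                        best = cpu;
        return best;
}

int main(void)
{
        int cpu = find_lowest_cpu_model();

        /*
         * Priorities may change right after the call returns; the answer
         * is only a recommendation computed from the snapshot just scanned.
         */
        cpu_prio[cpu] = 9;
        printf("recommended cpu: %d (now prio %d)\n", cpu, cpu_prio[cpu]);
        return 0;
}
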
