MIPS: Random whitespace clean-ups
Another whitespace clean-up; this removes tabs from between sentences in
some comments.

Signed-off-by: Maciej W. Rozycki <macro@codesourcery.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6103/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Maciej W. Rozycki authored and Ralf Baechle committed Nov 4, 2013
1 parent dc73e4c commit edf7b93
Showing 3 changed files with 6 additions and 6 deletions.
arch/mips/include/asm/addrspace.h (4 changes: 2 additions & 2 deletions)
@@ -58,7 +58,7 @@

 /*
  * Memory segments (64bit kernel mode addresses)
- * The compatibility segments use the full 64-bit sign extended value.	Note
+ * The compatibility segments use the full 64-bit sign extended value.  Note
  * the R8000 doesn't have them so don't reference these in generic MIPS code.
  */
 #define XKUSEG _CONST64_(0x0000000000000000)
@@ -131,7 +131,7 @@

 /*
  * The ultimate limited of the 64-bit MIPS architecture: 2 bits for selecting
- * the region, 3 bits for the CCA mode.	This leaves 59 bits of which the
+ * the region, 3 bits for the CCA mode.  This leaves 59 bits of which the
  * R8000 implements most with its 48-bit physical address space.
  */
 #define TO_PHYS_MASK _CONST64_(0x07ffffffffffffff) /* 2^^59 - 1 */
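As background on the comment touched by this hunk: 2 region-select bits plus 3 CCA bits leave 59 bits for the physical address, which is exactly what TO_PHYS_MASK (2^59 - 1) extracts. A minimal stand-alone sketch of that decomposition follows; only TO_PHYS_MASK comes from the header above, and the example address value is chosen purely for illustration.

#include <stdint.h>
#include <stdio.h>

/* Value taken from the header above; everything else is illustrative. */
#define TO_PHYS_MASK 0x07ffffffffffffffULL    /* 2^59 - 1 */

int main(void)
{
    /* An example XKPHYS-style address: region 2 (binary 10) in bits
     * 63:62, CCA 3 in bits 61:59, and an arbitrary physical address. */
    uint64_t va = (2ULL << 62) | (3ULL << 59) | 0x1fc00010ULL;

    printf("region = %llu\n", (unsigned long long)(va >> 62));        /* 2 */
    printf("cca    = %llu\n", (unsigned long long)((va >> 59) & 7));  /* 3 */
    printf("phys   = %#llx\n", (unsigned long long)(va & TO_PHYS_MASK));

    /* The "compatibility segments" are the 32-bit segments sign-extended
     * to 64 bits, e.g. the 32-bit KSEG0 base 0x80000000 becomes
     * 0xffffffff80000000: */
    printf("%#llx\n",
           (unsigned long long)(uint64_t)(int64_t)(int32_t)0x80000000);
    return 0;
}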
arch/mips/include/asm/atomic.h (2 changes: 1 addition & 1 deletion)
@@ -1,5 +1,5 @@
 /*
- * Atomic operations that C can't guarantee us.	Useful for
+ * Atomic operations that C can't guarantee us.  Useful for
  * resource counting etc..
  *
  * But use these as seldom as possible since they are much more slower
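"Resource counting" in the comment above means reference counts and similar bookkeeping. A rough user-space analogue of that pattern using C11 atomics; the kernel itself would use atomic_t with helpers such as atomic_inc() and atomic_dec_and_test(), and the function names below are made up for the example.

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative refcount; the kernel would use atomic_t here. */
static atomic_int refcount = 1;

static void get_ref(void)
{
    atomic_fetch_add(&refcount, 1);
}

static void put_ref(void)
{
    /* atomic_fetch_sub returns the old value, so 1 means this call
     * dropped the last reference. */
    if (atomic_fetch_sub(&refcount, 1) == 1)
        printf("last reference gone, release the resource\n");
}

int main(void)
{
    get_ref();
    put_ref();
    put_ref();    /* triggers the release message */
    return 0;
}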
arch/mips/include/asm/barrier.h (6 changes: 3 additions & 3 deletions)
@@ -18,7 +18,7 @@
  * over this barrier. All reads preceding this primitive are guaranteed
  * to access memory (but not necessarily other CPUs' caches) before any
  * reads following this primitive that depend on the data return by
- * any of the preceding reads.	This primitive is much lighter weight than
+ * any of the preceding reads.  This primitive is much lighter weight than
  * rmb() on most CPUs, and is never heavier weight than is
  * rmb().
  *
@@ -43,7 +43,7 @@
  * </programlisting>
  *
  * because the read of "*q" depends on the read of "p" and these
- * two reads are separated by a read_barrier_depends().	However,
+ * two reads are separated by a read_barrier_depends().  However,
  * the following code, with the same initial values for "a" and "b":
  *
  * <programlisting>
@@ -57,7 +57,7 @@
  * </programlisting>
  *
  * does not enforce ordering, since there is no data dependency between
- * the read of "a" and the read of "b".	Therefore, on some CPUs, such
+ * the read of "a" and the read of "b".  Therefore, on some CPUs, such
  * as Alpha, "y" could be set to 3 and "x" to 0.  Use rmb()
  * in cases like this where there are no data dependencies.
  */
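The <programlisting> bodies are elided by the diff, but the surrounding prose describes the classic pointer-publication scenario. A hedged user-space sketch of the dependent-read case, using GCC/Clang atomic builtins as stand-ins for the kernel's wmb()/read_barrier_depends(); variable names mirror the quoted comment, and everything else is illustrative.

#include <pthread.h>
#include <stdio.h>

static int b;
static int *p;    /* shared pointer, initially NULL */

static void *cpu0(void *unused)
{
    b = 2;
    __atomic_thread_fence(__ATOMIC_RELEASE);    /* plays the role of wmb() */
    __atomic_store_n(&p, &b, __ATOMIC_RELAXED);
    return NULL;
}

static void *cpu1(void *unused)
{
    int *q = __atomic_load_n(&p, __ATOMIC_RELAXED);

    if (q) {
        /* read_barrier_depends() would sit here: the read of *q
         * depends on the read of p, so on MIPS (and everything
         * except Alpha) the address dependency alone orders the
         * two reads and d is guaranteed to be 2. */
        int d = *q;
        printf("d = %d\n", d);
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t0, NULL, cpu0, NULL);
    pthread_create(&t1, NULL, cpu1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}

The second example in the comment, where "x = a" has no data dependency on "y = b", is exactly the case this trick cannot cover: with no dependency between the reads, a full rmb() is required.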
