powerpc: Rearrange SLB preload code
With the new top-down layout it is likely that the pc and stack will be in the
same segment, because the pc is most likely in a library mapped via a top-down
mmap. Right now we bail out early if these two segments match.

Rearrange the SLB preload code to first sanity check that none of the preload
addresses (pc, stack, unmapped_base) are kernel addresses, and then check the
addresses against each other for segment (ESID) conflicts.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Anton Blanchard authored and Benjamin Herrenschmidt committed Aug 20, 2009
1 parent 30d0b36 commit 5eb9bac
Showing 1 changed file with 8 additions and 13 deletions.
21 changes: 8 additions & 13 deletions arch/powerpc/mm/slb.c
```diff
@@ -218,23 +218,18 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
 	else
 		unmapped_base = TASK_UNMAPPED_BASE_USER64;
 
-	if (is_kernel_addr(pc))
-		return;
-	slb_allocate(pc);
-
-	if (esids_match(pc,stack))
+	if (is_kernel_addr(pc) || is_kernel_addr(stack) ||
+	    is_kernel_addr(unmapped_base))
 		return;
 
-	if (is_kernel_addr(stack))
-		return;
-	slb_allocate(stack);
+	slb_allocate(pc);
 
-	if (esids_match(pc,unmapped_base) || esids_match(stack,unmapped_base))
-		return;
+	if (!esids_match(pc, stack))
+		slb_allocate(stack);
 
-	if (is_kernel_addr(unmapped_base))
-		return;
-	slb_allocate(unmapped_base);
+	if (!esids_match(pc, unmapped_base) &&
+	    !esids_match(stack, unmapped_base))
+		slb_allocate(unmapped_base);
 }
 
 static inline void patch_slb_encoding(unsigned int *insn_addr,
```
