arm/arm64: KVM: Enforce Break-Before-Make on Stage-2 page tables
The ARM architecture mandates that when changing a page table entry
from a valid entry to another valid entry, an invalid entry must first
be written, the TLB invalidated, and only then the new entry written.

The current code doesn't respect this, directly writing the new
entry and only then invalidating TLBs. Let's fix it up.
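For illustration, the required ordering on the stage-2 PTE path looks
roughly like the sketch below. This is a minimal, commented sketch using
the same helpers the patch touches, not the patch itself:

	old_pte = *pte;
	if (pte_present(old_pte)) {
		/* Break: replace the valid entry with an invalid one */
		kvm_set_pte(pte, __pte(0));
		/* Invalidate the stale translation for this IPA */
		kvm_tlb_flush_vmid_ipa(kvm, addr);
	} else {
		/* First mapping here: take a reference on the table page */
		get_page(virt_to_page(pte));
	}
	/* Make: only now install the new, valid entry */
	kvm_set_pte(pte, *new_pte);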

Cc: <stable@vger.kernel.org>
Reported-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Marc Zyngier authored and Christoffer Dall committed Apr 29, 2016
1 parent 02e0b76 commit d4b9e07
Showing 1 changed file with 11 additions and 6 deletions.
17 changes: 11 additions & 6 deletions arch/arm/kvm/mmu.c
@@ -910,11 +910,14 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
 	VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
 
 	old_pmd = *pmd;
-	kvm_set_pmd(pmd, *new_pmd);
-	if (pmd_present(old_pmd))
+	if (pmd_present(old_pmd)) {
+		pmd_clear(pmd);
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pmd));
+	}
 
+	kvm_set_pmd(pmd, *new_pmd);
 	return 0;
 }

@@ -963,12 +966,14 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
 
 	/* Create 2nd stage page table mapping - Level 3 */
 	old_pte = *pte;
-	kvm_set_pte(pte, *new_pte);
-	if (pte_present(old_pte))
+	if (pte_present(old_pte)) {
+		kvm_set_pte(pte, __pte(0));
 		kvm_tlb_flush_vmid_ipa(kvm, addr);
-	else
+	} else {
 		get_page(virt_to_page(pte));
+	}
 
+	kvm_set_pte(pte, *new_pte);
 	return 0;
 }
