kvm: x86: mmu: Move pgtbl walk inside retry loop in fast_page_fault
Redo the page table walk in fast_page_fault when retrying so that we are
working on the latest PTE even if the hierarchy changes.

Signed-off-by: Junaid Shahid <junaids@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Junaid Shahid authored and Paolo Bonzini committed Jan 27, 2017
1 parent 20d6523 commit d162f30
Showing 1 changed file with 5 additions and 5 deletions.

arch/x86/kvm/mmu.c
@@ -3088,14 +3088,16 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 		return false;
 
 	walk_shadow_page_lockless_begin(vcpu);
-	for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
-		if (!is_shadow_present_pte(spte) || iterator.level < level)
-			break;
 
 	do {
 		bool remove_write_prot = false;
 		bool remove_acc_track;
 
+		for_each_shadow_entry_lockless(vcpu, gva, iterator, spte)
+			if (!is_shadow_present_pte(spte) ||
+			    iterator.level < level)
+				break;
+
 		sp = page_header(__pa(iterator.sptep));
 		if (!is_last_spte(spte, sp->role.level))
 			break;
@@ -3176,8 +3178,6 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 			break;
 		}
 
-		spte = mmu_spte_get_lockless(iterator.sptep);
-
 	} while (true);
 
 	trace_fast_page_fault(vcpu, gva, error_code, iterator.sptep,
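A minimal standalone sketch of the pattern this commit applies (hypothetical names, not the kernel code): the walk from the root is redone on every retry, so the cmpxchg always targets the PTE installed in the current hierarchy, even if a concurrent update swapped out an intermediate table between attempts.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical two-level structure standing in for the shadow page
 * table hierarchy: a concurrent thread may replace 'lower', so a leaf
 * pointer cached before the retry loop can go stale. */
struct table { _Atomic uint64_t slot[512]; };
struct root  { _Atomic(struct table *) lower; };

/* Redo the walk inside the retry loop, as the commit does, so each
 * cmpxchg attempt operates on the live PTE. fast_fix() and fix_bits
 * are illustrative stand-ins. */
static bool fast_fix(struct root *r, unsigned int idx, uint64_t fix_bits)
{
	for (int tries = 0; tries < 4; tries++) {
		struct table *t = atomic_load(&r->lower);   /* re-walk */
		uint64_t old = atomic_load(&t->slot[idx]);

		if (!(old & 1))             /* not present: give up */
			return false;
		if (atomic_compare_exchange_strong(&t->slot[idx], &old,
						   old | fix_bits))
			return true;        /* fixed the live PTE */
		/* Lost the race; loop and walk again from the root. */
	}
	return false;
}

Had the load of 't' sat above the loop, as the page table walk did before this patch, a retry after a lost race could cmpxchg into a table that is no longer part of the hierarchy; re-walking on each iteration is what moving the walk inside the loop guarantees.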
