KVM: MMU: avoid double write protected in sync page path
The sync page is already write-protected in mmu_sync_children(); don't
write-protect it again.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Xiao Guangrong authored and Avi Kivity committed Aug 1, 2010
1 parent cb83cad commit f918b44
Showing 1 changed file with 2 additions and 4 deletions.
arch/x86/kvm/mmu.c (2 additions, 4 deletions)
@@ -1216,6 +1216,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 		if ((sp)->gfn != (gfn) || (sp)->role.direct ||		\
 			(sp)->role.invalid) {} else
 
+/* @sp->gfn should be write-protected at the call site */
 static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 			   struct list_head *invalid_list, bool clear_unsync)
 {
@@ -1224,11 +1225,8 @@ static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 		return 1;
 	}
 
-	if (clear_unsync) {
-		if (rmap_write_protect(vcpu->kvm, sp->gfn))
-			kvm_flush_remote_tlbs(vcpu->kvm);
+	if (clear_unsync)
 		kvm_unlink_unsync_page(vcpu->kvm, sp);
-	}
 
 	if (vcpu->arch.mmu.sync_page(vcpu, sp)) {
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
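
For context, a rough sketch of the caller that makes this deletion safe. It is
not part of this commit; it is reconstructed from the KVM MMU code of roughly
the same era, so names and details may differ slightly. mmu_sync_children()
already write-protects each unsync page via rmap_write_protect() and flushes
remote TLBs once before kvm_sync_page() reaches __kvm_sync_page(), which is
why the duplicate write-protect and flush inside __kvm_sync_page() can go:

	/* Approximate shape of mmu_sync_children(); declarations and
	 * surrounding loop bookkeeping omitted. */
	while (mmu_unsync_walk(parent, &pages)) {
		int protected = 0;

		/* Write-protect every page that is about to be synced... */
		for_each_sp(pages, sp, parents, i)
			protected |= rmap_write_protect(vcpu->kvm, sp->gfn);

		/* ...and flush remote TLBs once, before syncing. */
		if (protected)
			kvm_flush_remote_tlbs(vcpu->kvm);

		for_each_sp(pages, sp, parents, i) {
			/* Eventually reaches __kvm_sync_page(), patched above,
			 * so the page is already write-protected here. */
			kvm_sync_page(vcpu, sp, &invalid_list);
			mmu_pages_clear_parents(&parents);
		}
	}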
