KVM: MMU: Explicitly set D-bit for writable spte.
This patch avoids unnecessary dirty-GPA logging to the PML buffer in the EPT
violation path by setting the D-bit manually before the write from the guest
actually happens.

We only set the D-bit manually in set_spte, and leave the fast_page_fault path
unchanged, as fast_page_fault is very unlikely to happen when PML is enabled.

For the hva <-> pa change case, the spte is either made read-only (if the host
pte is read-only) or dropped (if the host pte is writable); both cases are
handled by the changes above, so no further change is needed there.

Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
Reviewed-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Kai Huang authored and Paolo Bonzini committed Jan 29, 2015
1 parent f4b4b18 commit 9b51a63
Showing 1 changed file with 15 additions and 1 deletion.
16 changes: 15 additions & 1 deletion arch/x86/kvm/mmu.c
@@ -2597,8 +2597,10 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		}
 	}
 
-	if (pte_access & ACC_WRITE_MASK)
+	if (pte_access & ACC_WRITE_MASK) {
 		mark_page_dirty(vcpu->kvm, gfn);
+		spte |= shadow_dirty_mask;
+	}
 
 set_pte:
 	if (mmu_spte_update(sptep, spte))
@@ -2914,6 +2916,18 @@ fast_pf_fix_direct_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	 */
 	gfn = kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
 
+	/*
+	 * Theoretically we could also set dirty bit (and flush TLB) here in
+	 * order to eliminate unnecessary PML logging. See comments in
+	 * set_spte. But fast_page_fault is very unlikely to happen with PML
+	 * enabled, so we do not do this. This might result in the same GPA
+	 * to be logged in PML buffer again when the write really happens, and
+	 * eventually to be called by mark_page_dirty twice. But it's also no
+	 * harm. This also avoids the TLB flush needed after setting dirty bit
+	 * so non-PML cases won't be impacted.
+	 *
+	 * Compare with set_spte where instead shadow_dirty_mask is set.
+	 */
 	if (cmpxchg64(sptep, spte, spte | PT_WRITABLE_MASK) == spte)
 		mark_page_dirty(vcpu->kvm, gfn);
 
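Editor's note on why "spte |= shadow_dirty_mask" in the set_spte hunk above is enough to set the hardware D-bit: the generic MMU code does not hard-code the EPT bit layout; the vendor module registers at init time which spte bit positions act as accessed/dirty bits, and the dirty mask is zero when EPT A/D bits (and therefore PML) are unavailable, so the new OR is a no-op in that case. The sketch below illustrates that wiring under assumed names; mmu_set_ad_masks_sketch, vmx_setup_spte_masks_sketch and ept_ad_bits_supported are hypothetical stand-ins for the real KVM functions, and this is a minimal sketch assuming the VMX behavior of this era, not a quote of mmu.c/vmx.c.

/*
 * Illustrative sketch only -- NOT part of this commit.  With EPT A/D bits,
 * the accessed bit is bit 8 and the dirty bit is bit 9 of an EPT entry;
 * the vendor module hands those positions to the generic MMU, which is
 * what makes "spte |= shadow_dirty_mask" mark the page dirty in hardware.
 */
#include <stdbool.h>

typedef unsigned long long u64;            /* stand-in for the kernel's u64 */

#define VMX_EPT_ACCESS_BIT (1ull << 8)     /* EPT "accessed" bit */
#define VMX_EPT_DIRTY_BIT  (1ull << 9)     /* EPT "dirty" bit */

static u64 shadow_accessed_mask;
static u64 shadow_dirty_mask;

/* Generic MMU side: remember which spte bits the vendor module uses as A/D. */
static void mmu_set_ad_masks_sketch(u64 accessed_mask, u64 dirty_mask)
{
	shadow_accessed_mask = accessed_mask;
	shadow_dirty_mask = dirty_mask;
}

/* VMX side: advertise the EPT A/D bits only when the CPU supports them. */
static void vmx_setup_spte_masks_sketch(bool ept_ad_bits_supported)
{
	mmu_set_ad_masks_sketch(ept_ad_bits_supported ? VMX_EPT_ACCESS_BIT : 0ull,
				ept_ad_bits_supported ? VMX_EPT_DIRTY_BIT : 0ull);
}

In other words, when PML is in use the dirty bit is pre-set at fault time so the later guest write is not logged to the PML buffer again, and when A/D bits are off the added line changes nothing.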
