kvm: x86/mmu: Support disabling dirty logging for the tdp MMU
Dirty logging ultimately breaks down MMU mappings to 4k granularity.
When dirty logging is no longer needed, these granular mappings
represent a useless performance penalty. When dirty logging is disabled,
search the paging structure for mappings that could be re-constituted
into a large page mapping. Zap those mappings so that they can be
faulted in again at a higher mapping level.

Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
machine. This series introduced no new failures.

This series can be viewed in Gerrit at:
	https://linux-review.googlesource.com/c/virt/kvm/kvm/+/2538

Signed-off-by: Ben Gardon <bgardon@google.com>
Message-Id: <20201014182700.2888246-17-bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Ben Gardon authored and Paolo Bonzini committed Oct 23, 2020
1 parent a6a0b05 commit 1488199
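
For context, a VMM turns dirty logging off for a memslot by re-registering the slot through the KVM_SET_USER_MEMORY_REGION ioctl with the KVM_MEM_LOG_DIRTY_PAGES flag cleared; that update is what ultimately reaches the zap path added in this commit. A minimal userspace sketch of that step, assuming a VM fd obtained from /dev/kvm (the slot number, addresses, and sizes are illustrative, not taken from this commit):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Sketch only: stop dirty logging on an existing memslot.  With logging
 * off, KVM can zap the slot's 4k mappings so that later faults install
 * large pages again.  All parameter values are assumptions for
 * illustration.
 */
static int disable_dirty_logging(int vm_fd, __u32 slot, __u64 gpa,
                                 __u64 size, __u64 hva)
{
        struct kvm_userspace_memory_region region = {
                .slot = slot,
                .flags = 0,     /* KVM_MEM_LOG_DIRTY_PAGES cleared */
                .guest_phys_addr = gpa,
                .memory_size = size,
                .userspace_addr = hva,
        };

        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}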
Showing 3 changed files with 63 additions and 0 deletions.
3 changes: 3 additions & 0 deletions arch/x86/kvm/mmu/mmu.c
@@ -5544,6 +5544,9 @@ void kvm_mmu_zap_collapsible_sptes(struct kvm *kvm,
        spin_lock(&kvm->mmu_lock);
        slot_handle_leaf(kvm, (struct kvm_memory_slot *)memslot,
                         kvm_mmu_zap_collapsible_spte, true);

        if (kvm->arch.tdp_mmu_enabled)
                kvm_tdp_mmu_zap_collapsible_sptes(kvm, memslot);
        spin_unlock(&kvm->mmu_lock);
}

58 changes: 58 additions & 0 deletions arch/x86/kvm/mmu/tdp_mmu.c
@@ -1020,3 +1020,61 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
        return spte_set;
}

/*
 * Clear non-leaf entries (and free associated page tables) which could
 * be replaced by large mappings, for GFNs within the slot.
 */
static void zap_collapsible_spte_range(struct kvm *kvm,
                                       struct kvm_mmu_page *root,
                                       gfn_t start, gfn_t end)
{
        struct tdp_iter iter;
        kvm_pfn_t pfn;
        bool spte_set = false;

        tdp_root_for_each_pte(iter, root, start, end) {
                if (!is_shadow_present_pte(iter.old_spte) ||
                    is_last_spte(iter.old_spte, iter.level))
                        continue;

                pfn = spte_to_pfn(iter.old_spte);
                if (kvm_is_reserved_pfn(pfn) ||
                    !PageTransCompoundMap(pfn_to_page(pfn)))
                        continue;

                tdp_mmu_set_spte(kvm, &iter, 0);

                spte_set = tdp_mmu_iter_flush_cond_resched(kvm, &iter);
        }

        if (spte_set)
                kvm_flush_remote_tlbs(kvm);
}

/*
 * Clear non-leaf entries (and free associated page tables) which could
 * be replaced by large mappings, for GFNs within the slot.
 */
void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
                                       const struct kvm_memory_slot *slot)
{
        struct kvm_mmu_page *root;
        int root_as_id;

        for_each_tdp_mmu_root(kvm, root) {
                root_as_id = kvm_mmu_page_as_id(root);
                if (root_as_id != slot->as_id)
                        continue;

                /*
                 * Take a reference on the root so that it cannot be freed if
                 * this thread releases the MMU lock and yields in this loop.
                 */
                kvm_mmu_get_root(kvm, root);

                zap_collapsible_spte_range(kvm, root, slot->base_gfn,
                                           slot->base_gfn + slot->npages);

                kvm_mmu_put_root(kvm, root);
        }
}
2 changes: 2 additions & 0 deletions arch/x86/kvm/mmu/tdp_mmu.h
@@ -38,4 +38,6 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
                                       gfn_t gfn, unsigned long mask,
                                       bool wrprot);
bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot);
void kvm_tdp_mmu_zap_collapsible_sptes(struct kvm *kvm,
                                       const struct kvm_memory_slot *slot);
#endif /* __KVM_X86_MMU_TDP_MMU_H */
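
The PageTransCompoundMap() check in zap_collapsible_spte_range() above means the zap only helps when the guest memory behind the slot is mapped with transparent huge pages; the subsequent re-fault can then install a large mapping. A minimal sketch of how a VMM might allocate such backing memory (the 2M alignment and madvise() call are illustrative assumptions, not part of this commit):

#include <stdlib.h>
#include <sys/mman.h>

/*
 * Sketch only: allocate 2M-aligned memory and ask for THP backing so that
 * zapped 4k guest mappings can later be re-faulted as large pages.
 */
static void *alloc_thp_backed_memory(size_t size)
{
        void *mem = NULL;

        /* 2M alignment lets THP back the region with huge pages. */
        if (posix_memalign(&mem, 2UL * 1024 * 1024, size))
                return NULL;

        madvise(mem, size, MADV_HUGEPAGE);
        return mem;
}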
