KVM: PPC: Make invalidation code more reliable
There is a race condition in the pte invalidation code path: without holding
the lock, we can't be sure whether a pte has already been invalidated. So
let's move the check under the spin lock to get rid of the race.

Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf authored and Avi Kivity committed Oct 24, 2010
1 parent 2e60284 commit e7c1d14
Showing 1 changed file with 8 additions and 6 deletions.
arch/powerpc/kvm/book3s_mmu_hpte.c (14 changes: 8 additions & 6 deletions)
@@ -92,29 +92,31 @@ static void free_pte_rcu(struct rcu_head *head)
 
 static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
-	/* pte already invalidated? */
-	if (hlist_unhashed(&pte->list_pte))
-		return;
-
 	trace_kvm_book3s_mmu_invalidate(pte);
 
 	/* Different for 32 and 64 bit */
 	kvmppc_mmu_invalidate_pte(vcpu, pte);
 
 	spin_lock(&vcpu->arch.mmu_lock);
 
+	/* pte already invalidated in between? */
+	if (hlist_unhashed(&pte->list_pte)) {
+		spin_unlock(&vcpu->arch.mmu_lock);
+		return;
+	}
+
 	hlist_del_init_rcu(&pte->list_pte);
 	hlist_del_init_rcu(&pte->list_pte_long);
 	hlist_del_init_rcu(&pte->list_vpte);
 	hlist_del_init_rcu(&pte->list_vpte_long);
 
+	spin_unlock(&vcpu->arch.mmu_lock);
+
 	if (pte->pte.may_write)
 		kvm_release_pfn_dirty(pte->pfn);
 	else
 		kvm_release_pfn_clean(pte->pfn);
 
-	spin_unlock(&vcpu->arch.mmu_lock);
-
 	vcpu->arch.hpte_cache_count--;
 	call_rcu(&pte->rcu_head, free_pte_rcu);
 }
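
For readers outside the kernel tree, the shape of the fix is the classic
"re-check under the lock" pattern: an unlocked fast-path test proves nothing
once another CPU can race with us, so the decisive test has to happen after
the lock is taken, and the function must drop the lock and bail out if it
lost the race. The sketch below shows that pattern in isolation as a
user-space C program using pthreads; the names (cache_lock, struct entry,
unhashed) are hypothetical illustrations, not kernel code.

/*
 * Minimal user-space sketch of the "re-check under the lock" pattern
 * this patch applies.  All names here are hypothetical; this is not
 * kernel code.
 *
 * Build: cc -pthread recheck.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

struct entry {
	bool unhashed;		/* true once the entry has been torn down */
};

static void invalidate_entry(struct entry *e)
{
	pthread_mutex_lock(&cache_lock);

	/*
	 * Decisive check, done under the lock: another thread may have
	 * invalidated the entry between our caller's decision to call
	 * us and this point (the race the patch fixes).
	 */
	if (e->unhashed) {
		pthread_mutex_unlock(&cache_lock);
		return;
	}
	e->unhashed = true;

	/* ... unlink from all lookup structures here, still locked ... */

	pthread_mutex_unlock(&cache_lock);

	/*
	 * Expensive cleanup (the kernel's page release and RCU free)
	 * runs after the lock is dropped, mirroring the patch.
	 */
	printf("entry %p invalidated\n", (void *)e);
}

int main(void)
{
	struct entry e = { .unhashed = false };

	invalidate_entry(&e);	/* does the work */
	invalidate_entry(&e);	/* hits the re-check and bails out */
	return 0;
}

Note the second effect visible in the diff: spin_unlock() now happens before
kvm_release_pfn_dirty()/kvm_release_pfn_clean(), so the page release no
longer runs with mmu_lock held.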