KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
We are traversing the linked list invalid_list and deleting each entry with
kvm_mmu_free_page().  The _safe iterator variant exists for exactly such a case.

Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Takuya Yoshikawa authored and Marcelo Tosatti committed Mar 7, 2013
1 parent 1044b03 commit 945315b
Showing 1 changed file with 3 additions and 4 deletions.
arch/x86/kvm/mmu.c: 3 additions & 4 deletions
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2087,7 +2087,7 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list)
 {
-	struct kvm_mmu_page *sp;
+	struct kvm_mmu_page *sp, *nsp;
 
 	if (list_empty(invalid_list))
 		return;
@@ -2104,11 +2104,10 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	 */
 	kvm_flush_remote_tlbs(kvm);
 
-	do {
-		sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
+	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
 		kvm_mmu_free_page(sp);
-	} while (!list_empty(invalid_list));
+	}
 }
 
 /*
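For context, here is a minimal userspace sketch of the pattern that
list_for_each_entry_safe() implements; it assumes nothing from the kernel tree,
with plain pointers and free() standing in for the list_head machinery and
kvm_mmu_free_page(). The point is that the successor is cached in a second
cursor (nsp in the patch, n below) before the loop body frees the current
element.

/*
 * Illustrative sketch only, not kernel code: the idea behind a
 * "_safe" list traversal.  Every element is freed inside the loop
 * body, so the successor must be read *before* the body runs.
 */
#include <stdio.h>
#include <stdlib.h>

struct item {
	int val;
	struct item *next;
};

int main(void)
{
	struct item *head = NULL;

	/* Build a small list: 2 -> 1 -> 0 */
	for (int i = 0; i < 3; i++) {
		struct item *it = malloc(sizeof(*it));
		it->val = i;
		it->next = head;
		head = it;
	}

	/* "Safe" traversal: n caches the successor before pos is freed. */
	struct item *pos, *n;
	for (pos = head, n = pos ? pos->next : NULL;
	     pos;
	     pos = n, n = pos ? pos->next : NULL) {
		printf("freeing %d\n", pos->val);
		free(pos);
	}

	return 0;
}

Note that the do/while loop being replaced was also correct, since it re-read
the list head on every iteration; the _safe iterator simply expresses the same
intent more directly, which is what the commit message refers to.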
