KVM: MMU: simplify folding of dirty bit into accessed_dirty
MMU code tries to avoid if()s that HW is not able to predict reliably by using
bitwise operations to streamline code execution, but in the case of dirty bit
folding this gains us nothing, since write_fault is checked right before
the folding code. Let's just piggyback onto the if() to make the code clearer.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Gleb Natapov authored and Marcelo Tosatti committed Jan 7, 2013
1 parent ee04e0c commit 908e7d7
Showing 1 changed file with 6 additions and 10 deletions.
arch/x86/kvm/paging_tmpl.h (6 additions, 10 deletions)

--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -249,16 +249,12 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 
 	if (!write_fault)
 		protect_clean_gpte(&pte_access, pte);
-
-	/*
-	 * On a write fault, fold the dirty bit into accessed_dirty by shifting it one
-	 * place right.
-	 *
-	 * On a read fault, do nothing.
-	 */
-	shift = write_fault >> ilog2(PFERR_WRITE_MASK);
-	shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
-	accessed_dirty &= pte >> shift;
+	else
+		/*
+		 * On a write fault, fold the dirty bit into accessed_dirty
+		 * by shifting it one place right.
+		 */
+		accessed_dirty &= pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);
 
 	if (unlikely(!accessed_dirty)) {
 		ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker, write_fault);
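
For context, here is a minimal stand-alone sketch of the two equivalent foldings the commit message contrasts: the old branch-free version, which derives the shift amount from write_fault with bitwise arithmetic, and the new version, which simply hangs the fold off the existing if(). The constants are redefined locally as assumptions so the example compiles outside the kernel; the equivalence relies on write_fault being either 0 or PFERR_WRITE_MASK, and on accessed_dirty already being a subset of the pte bits when the fold runs, as it is in the real walker.

/*
 * Stand-alone sketch (not kernel code) of the two foldings discussed above.
 * The constants mirror the x86 definitions but are redefined here as
 * assumptions so the example compiles on its own.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PFERR_WRITE_MASK  (1u << 1)  /* page-fault error code: write access */
#define PT_ACCESSED_SHIFT 5          /* accessed bit position in a PTE */
#define PT_DIRTY_SHIFT    6          /* dirty bit position in a PTE */
#define PT_ACCESSED_MASK  (1u << PT_ACCESSED_SHIFT)
#define PT_DIRTY_MASK     (1u << PT_DIRTY_SHIFT)

/* Old style: derive the shift from write_fault (0 or PFERR_WRITE_MASK)
 * with bitwise arithmetic, avoiding a branch.  ilog2(PFERR_WRITE_MASK) == 1. */
static uint32_t fold_branchless(uint32_t accessed_dirty, uint64_t pte,
                                uint32_t write_fault)
{
        uint32_t shift = write_fault >> 1;            /* 0 or 1 */
        shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;  /* 0 or 1 */
        return accessed_dirty & (uint32_t)(pte >> shift);
}

/* New style (this commit): reuse the if() that is there anyway. */
static uint32_t fold_with_if(uint32_t accessed_dirty, uint64_t pte,
                             uint32_t write_fault)
{
        if (write_fault)
                /* shifting by one puts the dirty bit on the accessed position */
                accessed_dirty &= (uint32_t)(pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT));
        return accessed_dirty;
}

int main(void)
{
        uint64_t ptes[] = { 0, PT_ACCESSED_MASK, PT_DIRTY_MASK,
                            PT_ACCESSED_MASK | PT_DIRTY_MASK };
        uint32_t faults[] = { 0, PFERR_WRITE_MASK };

        for (unsigned i = 0; i < 4; i++) {
                for (unsigned j = 0; j < 2; j++) {
                        /* in the real walker, accessed_dirty is already a
                         * subset of the pte bits at this point */
                        uint32_t ad = (uint32_t)ptes[i] & PT_ACCESSED_MASK;

                        assert(fold_branchless(ad, ptes[i], faults[j]) ==
                               fold_with_if(ad, ptes[i], faults[j]));
                }
        }
        puts("both foldings agree");
        return 0;
}

Since the if (!write_fault) branch already exists for protect_clean_gpte(), the branch-free shift saved nothing here; hanging the fold off the else keeps the single branch and reads more clearly, which is exactly the point of the commit.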
