KVM: x86: fix RSM into 64-bit protected mode
In order to get into 64-bit protected mode, you need to enable
paging while EFER.LMA=1.  For this to work, CS.L must be 0.
Currently, we load the segments before CR0 and CR4, which means
that if RSM returns into 64-bit protected mode, CS.L is already 1
and everything breaks.

Luckily, CS.L=0 is always the case when executing RSM, because it
is forbidden to execute RSM from 64-bit protected mode.  Hence it
is enough to load CR0 and CR4 first, and only then the segments.

Fixes: 660a5d5
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Paolo Bonzini committed Oct 14, 2015
1 parent 25188b9 commit b10d92a
Showing 1 changed file with 7 additions and 3 deletions: arch/x86/kvm/emulate.c
@@ -2418,7 +2418,7 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	u64 val, cr0, cr4;
 	u32 base3;
 	u16 selector;
-	int i;
+	int i, r;
 
 	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smbase, 0x7ff8 - i * 8);
@@ -2460,13 +2460,17 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
 	dt.address = GET_SMSTATE(u64, smbase, 0x7e68);
 	ctxt->ops->set_gdt(ctxt, &dt);
 
+	r = rsm_enter_protected_mode(ctxt, cr0, cr4);
+	if (r != X86EMUL_CONTINUE)
+		return r;
+
 	for (i = 0; i < 6; i++) {
-		int r = rsm_load_seg_64(ctxt, smbase, i);
+		r = rsm_load_seg_64(ctxt, smbase, i);
 		if (r != X86EMUL_CONTINUE)
 			return r;
 	}
 
-	return rsm_enter_protected_mode(ctxt, cr0, cr4);
+	return X86EMUL_CONTINUE;
 }
 
 static int em_rsm(struct x86_emulate_ctxt *ctxt)
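For orientation, here is a condensed sketch of how rsm_load_state_64() reads after this change. It is not the full upstream function: the earlier SMRAM loads (general-purpose registers, EFER, IDT/GDT, and the cr0/cr4 values used below) are abbreviated to a comment, and only the reordered tail shown in the diff above is spelled out.

/*
 * Condensed sketch of rsm_load_state_64() after this patch.
 * Elided parts are marked; only the reordering matters here.
 */
static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, u64 smbase)
{
	u64 cr0, cr4;
	int i, r;

	/* ... read GPRs, EFER, IDT/GDT, cr0 and cr4 from SMRAM at smbase ... */

	/*
	 * Restore CR0/CR4 (and thereby paging) while CS.L is still 0;
	 * RSM cannot be executed from 64-bit mode, so this always holds.
	 */
	r = rsm_enter_protected_mode(ctxt, cr0, cr4);
	if (r != X86EMUL_CONTINUE)
		return r;

	/* Only now reload the six segment registers, which may set CS.L=1. */
	for (i = 0; i < 6; i++) {
		r = rsm_load_seg_64(ctxt, smbase, i);
		if (r != X86EMUL_CONTINUE)
			return r;
	}

	return X86EMUL_CONTINUE;
}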
