KVM: arm64: Synchronize sysreg state on injecting an AArch32 exception
commit 0370964 upstream.

On a VHE system, the EL1 state is left in the CPU most of the time,
and only synchronized back to memory when vcpu_put() is called (most
of the time on preemption).

This means that when injecting an exception, we need a way to either:
(1) write directly to the EL1 sysregs, or
(2) synchronize the state back to memory and make the changes there

For AArch64, we already do (1), so we are safe. Unfortunately,
doing the same thing for AArch32 would be pretty invasive. Instead,
we can easily implement (2) by calling the put/load architectural
backends while keeping preemption disabled, and then reload the
state back into EL1.

Cc: stable@vger.kernel.org
Reported-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
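
To make option (2) concrete, here is a rough, self-contained userspace
model of the put/modify/load dance (toy code, not from the kernel:
hw_reg stands in for the live EL1 register state, and preemption is
not modelled):

#include <stdbool.h>
#include <stdio.h>

static unsigned int hw_reg;	/* state resident "in the CPU" */

struct vcpu {
	unsigned int ctxt;	/* in-memory copy of the state */
	bool loaded;
};

static void vcpu_load(struct vcpu *v) { hw_reg = v->ctxt; v->loaded = true; }
static void vcpu_put(struct vcpu *v)  { v->ctxt = hw_reg; v->loaded = false; }

/* Option (2): flush CPU state to memory, change it there, reload. */
static void inject_fault(struct vcpu *v)
{
	bool was_loaded = v->loaded;

	if (was_loaded)
		vcpu_put(v);	/* v->ctxt is now authoritative */
	v->ctxt |= 0x10;	/* the exception-injection change */
	if (was_loaded)
		vcpu_load(v);	/* changed state goes back "into EL1" */
}

int main(void)
{
	struct vcpu v = { .ctxt = 1 };

	vcpu_load(&v);
	inject_fault(&v);
	vcpu_put(&v);
	printf("ctxt = %#x\n", v.ctxt);	/* 0x11: the change survived */
	return 0;
}

Had inject_fault() written v->ctxt while the state was still loaded,
the final vcpu_put() would have overwritten the change with the stale
hw_reg value; that is exactly the hazard the patch closes for AArch32.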
Marc Zyngier authored and Greg Kroah-Hartman committed Jun 17, 2020
1 parent a688d4d commit 1e311a1
Showing 3 changed files with 32 additions and 0 deletions.
2 changes: 2 additions & 0 deletions arch/arm/include/asm/kvm_host.h
@@ -453,4 +453,6 @@ static inline bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+#define kvm_arm_vcpu_loaded(vcpu) (false)
+
 #endif /* __ARM_KVM_HOST_H__ */
2 changes: 2 additions & 0 deletions arch/arm64/include/asm/kvm_host.h
@@ -685,4 +685,6 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 #define kvm_arm_vcpu_sve_finalized(vcpu) \
 	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
 
+#define kvm_arm_vcpu_loaded(vcpu) ((vcpu)->arch.sysregs_loaded_on_cpu)
+
 #endif /* __ARM64_KVM_HOST_H__ */
28 changes: 28 additions & 0 deletions virt/kvm/arm/aarch32.c
@@ -33,6 +33,26 @@ static const u8 return_offsets[8][2] = {
 	[7] = { 4, 4 },		/* FIQ, unused */
 };
 
+static bool pre_fault_synchronize(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+	if (kvm_arm_vcpu_loaded(vcpu)) {
+		kvm_arch_vcpu_put(vcpu);
+		return true;
+	}
+
+	preempt_enable();
+	return false;
+}
+
+static void post_fault_synchronize(struct kvm_vcpu *vcpu, bool loaded)
+{
+	if (loaded) {
+		kvm_arch_vcpu_load(vcpu, smp_processor_id());
+		preempt_enable();
+	}
+}
+
 /*
  * When an exception is taken, most CPSR fields are left unchanged in the
  * handler. However, some are explicitly overridden (e.g. M[4:0]).
@@ -155,7 +175,10 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 
 void kvm_inject_undef32(struct kvm_vcpu *vcpu)
 {
+	bool loaded = pre_fault_synchronize(vcpu);
+
 	prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);
+	post_fault_synchronize(vcpu, loaded);
 }
 
 /*
@@ -168,6 +191,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
 	u32 vect_offset;
 	u32 *far, *fsr;
 	bool is_lpae;
+	bool loaded;
+
+	loaded = pre_fault_synchronize(vcpu);
 
 	if (is_pabt) {
 		vect_offset = 12;
@@ -191,6 +217,8 @@
 		/* no need to shuffle FS[4] into DFSR[10] as its 0 */
 		*fsr = DFSR_FSC_EXTABT_nLPAE;
 	}
+
+	post_fault_synchronize(vcpu, loaded);
 }
 
 void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr)
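
The two helpers are deliberately asymmetric: pre_fault_synchronize()
returns with preemption still disabled whenever it had to flush live
state, so nothing can put the vCPU (and clobber the in-memory changes
with stale CPU state) before post_fault_synchronize() reloads EL1 and
re-enables preemption. Any new AArch32 injection path needs the same
bracketing; a minimal sketch mirroring kvm_inject_undef32() above
(kvm_inject_example32 is a hypothetical name, not part of this patch):

void kvm_inject_example32(struct kvm_vcpu *vcpu)
{
	bool loaded = pre_fault_synchronize(vcpu);

	/* While flushed, vcpu->arch.ctxt is authoritative: make all
	 * AArch32 sysreg/CPSR updates here, in memory. */
	prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);

	post_fault_synchronize(vcpu, loaded);
}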
