Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm updates from Paolo Bonzini:
 "Loongarch:

   - Clear LLBCTL if secondary mmu mapping changes

   - Add hypercall service support for usermode VMM

  x86:

   - Add a comment to kvm_mmu_do_page_fault() to explain why KVM
     performs a direct call to kvm_tdp_page_fault() when RETPOLINE is
     enabled

   - Ensure that all SEV code is compiled out when disabled in Kconfig,
     even if building with less brilliant compilers

   - Remove a redundant TLB flush on AMD processors when guest CR4.PGE
     changes

   - Use str_enabled_disabled() to replace open coded strings

   - Drop kvm_x86_ops.hwapic_irr_update() as KVM updates hardware's
     APICv cache prior to every VM-Enter

   - Overhaul KVM's CPUID feature infrastructure to track all vCPU
     capabilities instead of just those where KVM needs to manage state
     and/or explicitly enable the feature in hardware. Along the way,
     refactor the code to make it easier to add features, and to make it
     more self-documenting how KVM is handling each feature

   - Rework KVM's handling of VM-Exits during event vectoring; this
     plugs holes where KVM unintentionally puts the vCPU into infinite
     loops in some scenarios (e.g. if emulation is triggered by the
     exit), and brings parity between VMX and SVM

   - Add pending request and interrupt injection information to the
     kvm_exit and kvm_entry tracepoints respectively

   - Fix a relatively benign flaw where KVM would end up redoing RDPKRU
     when loading guest/host PKRU, due to a refactoring of the kernel
     helpers that didn't account for KVM's pre-checking of the need to
     do WRPKRU
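
     The shape of the fix, roughly (a condensed sketch, not the literal
     patch; KVM has already compared the two values, so the raw write
     suffices):

       if (vcpu->arch.host_pkru != vcpu->arch.pkru)
               wrpkru(vcpu->arch.pkru); /* raw WRPKRU, no redundant RDPKRU */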

   - Make the completion of hypercalls go through the complete_hypercall
     function pointer argument, whether or not the hypercall exits to
     userspace.

     Previously, the code assumed that KVM_HC_MAP_GPA_RANGE specifically
     went to userspace, and all the others did not; the new code need
     not special case KVM_HC_MAP_GPA_RANGE and in fact does not care at
     all whether there was an exit to userspace or not
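
     Conceptually, completion now funnels through one callback whether or
     not the hypercall bounced out to userspace (a condensed sketch, not
     the exact x86 code):

       static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
       {
               /* run->hypercall.ret was filled by KVM or by userspace */
               kvm_rax_write(vcpu, vcpu->run->hypercall.ret);
               return kvm_skip_emulated_instruction(vcpu);
       }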

   - As part of enabling TDX virtual machines, support separation of
     private/shared EPT into separate roots.

     When TDX is enabled, operations on private pages will need to
     go through the privileged TDX Module via SEAMCALLs; as a result,
     they are limited and relatively slow compared to reading a PTE.

     The patches included in 6.14 allow KVM to keep a mirror of the
     private EPT in host memory, and define entries in kvm_x86_ops to
     operate on external page tables such as the TDX private EPT
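
     The flow, as a hedged sketch (hook and helper names illustrative of
     the new kvm_x86_ops entries, not their exact signatures):

       /* Update the mirror SPTE in ordinary host memory first... */
       WRITE_ONCE(*mirror_sptep, new_spte);
       /* ...then have the TDX module update the real private EPT. */
       ret = kvm_x86_call(set_external_spte)(kvm, gfn, level, new_spte);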

   - The recently introduced conversion of the NX-page reclamation
     kthread to vhost_task moved the task under the main process. The
     task is created as soon as KVM_CREATE_VM is invoked and this, of
     course, broke userspace that didn't expect to see any child task of
     the VM process until it started creating its own userspace threads.

     In particular crosvm refuses to fork() if procfs shows any child
     task, so unbreak it by creating the task lazily. This is arguably a
     userspace bug, as there can be other kinds of legitimate worker
     tasks and they wouldn't impede fork(); but it's not like userspace
     has a way to distinguish kernel worker tasks right now. Should they
     show as "Kthread: 1" in proc/.../status?

  x86 - Intel:

   - Fix a bug where KVM updates hardware's APICv cache of the highest
     ISR bit while L2 is active, which ultimately results in a
     hardware-accelerated L1 EOI effectively being lost

   - Honor event priority when emulating Posted Interrupt delivery
     during nested VM-Enter by queueing KVM_REQ_EVENT instead of
     immediately handling the interrupt

   - Rework KVM's processing of the Page-Modification Logging buffer to
     reap entries in the same order they were created, i.e. to mark gfns
     dirty in the same order that hardware marked the page/PTE dirty
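
     The reaping loop now walks the buffer the way hardware fills it,
     i.e. from the last slot downward (a sketch; constant and tail-index
     names illustrative):

       /* The CPU writes entry 511 first and decrements the PML index. */
       for (i = PML_LOG_NR_ENTRIES - 1; i >= pml_tail; i--)
               kvm_vcpu_mark_page_dirty(vcpu, pml_buf[i] >> PAGE_SHIFT);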

   - Misc cleanups

  Generic:

   - Cleanup and harden kvm_set_memory_region(); add proper lockdep
     assertions when setting memory regions and add a dedicated API for
     setting KVM-internal memory regions. The API can then explicitly
     disallow all flags for KVM-internal memory regions
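
     A condensed sketch of the dedicated API's shape (per the shortlog
     below; body abbreviated):

       int kvm_set_internal_memslot(struct kvm *kvm,
                                    const struct kvm_userspace_memory_region2 *region)
       {
               if (WARN_ON_ONCE(region->slot < KVM_USER_MEM_SLOTS))
                       return -EINVAL;

               if (WARN_ON_ONCE(region->flags)) /* all flags disallowed */
                       return -EINVAL;

               return kvm_set_memory_region(kvm, region); /* asserts slots_lock */
       }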

   - Explicitly verify the target vCPU is online in kvm_get_vcpu() to
     fix a bug where KVM would return a pointer to a vCPU prior to it
     being fully online, and give kvm_for_each_vcpu() similar treatment
     to fix a similar flaw
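
     The hardened lookup, roughly (condensed; the real helper also pairs
     a memory barrier with vCPU creation):

       static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
       {
               int num_vcpus = atomic_read(&kvm->online_vcpus);

               /* Never hand out a vCPU that isn't fully online yet. */
               if (i < 0 || i >= num_vcpus)
                       return NULL;

               i = array_index_nospec(i, num_vcpus);
               return xa_load(&kvm->vcpu_array, i);
       }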

   - Wait for a vCPU to come online prior to executing a vCPU ioctl, to
     fix a bug where userspace could coerce KVM into handling the ioctl
     on a vCPU that isn't yet onlined

   - Gracefully handle xarray insertion failures; even though such
     failures are impossible in practice after xa_reserve(), reserving
     an entry is always followed by xa_store() which does not know (or
     differentiate) whether there was an xa_reserve() before or not
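
     The pattern in question, sketched (xa_store() cannot tell that the
     slot was reserved, so its result is now checked even though failure
     is "impossible" here):

       r = xa_reserve(&kvm->vcpu_array, idx, GFP_KERNEL_ACCOUNT);
       if (r)
               return r;

       /* ... create and initialize the vCPU ... */

       r = xa_err(xa_store(&kvm->vcpu_array, idx, vcpu, GFP_KERNEL_ACCOUNT));
       if (WARN_ON_ONCE(r))
               goto out_release;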

  RISC-V:

   - Zabha, Svvptc, and Ziccrse extension support for guests. None of
     them require anything in KVM except for detecting them and marking
     them as supported; Zabha adds byte and halfword atomic operations,
     while the others are markers for specific operation of the TLB and
     of LL/SC instructions respectively

   - Virtualize SBI system suspend extension for Guest/VM

   - Support firmware counters which can be used by the guests to
     collect statistics about traps that occur in the host

  Selftests:

   - Rework vcpu_get_reg() to return a value instead of using an
     out-param, and update all affected arch code accordingly
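
     Before and after, roughly (reg_id being any KVM_GET_ONE_REG id):

       uint64_t val;

       vcpu_get_reg(vcpu, reg_id, &val); /* old out-param style */
       val = vcpu_get_reg(vcpu, reg_id); /* new value-returning style */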

   - Convert the max_guest_memory_test into a more generic
     mmu_stress_test. The basic gist of the "conversion" is to have the
     test do mprotect() on guest memory while vCPUs are accessing said
     memory, e.g. to verify KVM and mmu_notifiers are working as
     intended

   - Play nice with treewide builds of unsupported architectures, e.g.
     arm (32-bit), as KVM selftests' Makefile doesn't do anything to
     ensure the target architecture is actually one KVM selftests
     supports

   - Use the kernel's $(ARCH) definition instead of the target triple
     for arch specific directories, e.g. arm64 instead of aarch64,
     mainly so as not to be different from the rest of the kernel

   - Ensure that format strings for logging statements are checked by
     the compiler even when the logging statement itself is disabled
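
     The usual trick for that, sketched with an illustrative macro name:

       #define test_dbg(fmt, ...)                                        \
       do {                                                              \
               if (0) /* type-checked by the compiler, never executed */ \
                       printf(fmt, ##__VA_ARGS__);                       \
       } while (0)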

   - Attempt to whack the last LLC references/misses mole in the Intel
     PMU counters test by adding a data load and doing CLFLUSH{OPT} on
     the data instead of the code being executed. It seems that modern
     Intel CPUs have learned new code prefetching tricks that bypass the
     PMU counters
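
     The gist of the change (not the test's literal code): touch the data
     so there is a demand load to count, then flush it so the next
     iteration misses:

       sum += data[0];                             /* data load -> LLC reference */
       asm volatile("clflush %0" : "+m"(data[0])); /* evict it -> next load misses */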

   - Fix a flaw in the Intel PMU counters test where it asserts that
     events are counting correctly without actually knowing what the
     events count given the underlying hardware; this can happen if
     Intel reuses a formerly microarchitecture-specific event encoding
     as an architectural event, as was the case for Top-Down Slots"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (151 commits)
  kvm: defer huge page recovery vhost task to later
  KVM: x86/mmu: Return RET_PF* instead of 1 in kvm_mmu_page_fault()
  KVM: Disallow all flags for KVM-internal memslots
  KVM: x86: Drop double-underscores from __kvm_set_memory_region()
  KVM: Add a dedicated API for setting KVM-internal memslots
  KVM: Assert slots_lock is held when setting memory regions
  KVM: Open code kvm_set_memory_region() into its sole caller (ioctl() API)
  LoongArch: KVM: Add hypercall service support for usermode VMM
  LoongArch: KVM: Clear LLBCTL if secondary mmu mapping is changed
  KVM: SVM: Use str_enabled_disabled() helper in svm_hardware_setup()
  KVM: VMX: read the PML log in the same order as it was written
  KVM: VMX: refactor PML terminology
  KVM: VMX: Fix comment of handle_vmx_instruction()
  KVM: VMX: Reinstate __exit attribute for vmx_exit()
  KVM: SVM: Use str_enabled_disabled() helper in sev_hardware_setup()
  KVM: x86: Avoid double RDPKRU when loading host/guest PKRU
  KVM: x86: Use LVT_TIMER instead of an open coded literal
  RISC-V: KVM: Add new exit statistics for redirected traps
  RISC-V: KVM: Update firmware counters for various events
  RISC-V: KVM: Redirect instruction access fault trap to guest
  ...
Linus Torvalds committed Jan 25, 2025
2 parents 382e391 + 931656b commit 0f8e26b
Showing 222 changed files with 2,893 additions and 1,530 deletions.
10 changes: 7 additions & 3 deletions Documentation/virt/kvm/api.rst
@@ -1825,15 +1825,18 @@ emulate them efficiently. The fields in each entry are defined as follows:
the values returned by the cpuid instruction for
this function/index combination

The TSC deadline timer feature (CPUID leaf 1, ecx[24]) is always returned
as false, since the feature depends on KVM_CREATE_IRQCHIP for local APIC
support. Instead it is reported via::
x2APIC (CPUID leaf 1, ecx[21]) and TSC deadline timer (CPUID leaf 1, ecx[24])
may be returned as true, but they depend on KVM_CREATE_IRQCHIP for in-kernel
emulation of the local APIC. TSC deadline timer support is also reported via::

ioctl(KVM_CHECK_EXTENSION, KVM_CAP_TSC_DEADLINE_TIMER)

if that returns true and you use KVM_CREATE_IRQCHIP, or if you emulate the
feature in userspace, then you can enable the feature for KVM_SET_CPUID2.

Enabling x2APIC in KVM_SET_CPUID2 requires KVM_CREATE_IRQCHIP as KVM doesn't
support forwarding x2APIC MSR accesses to userspace, i.e. KVM does not support
emulating x2APIC in userspace.
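
For reference, the sequence described above looks roughly like this from
userspace (illustrative, error handling omitted)::

  if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_TSC_DEADLINE_TIMER) > 0) {
          ioctl(vm_fd, KVM_CREATE_IRQCHIP, 0);  /* in-kernel local APIC */
          /* the TSC deadline bit may now be set via KVM_SET_CPUID2 */
  }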

4.47 KVM_PPC_GET_PVINFO
-----------------------
@@ -7673,6 +7676,7 @@ branch to guests' 0x200 interrupt vector.
:Architectures: x86
:Parameters: args[0] defines which exits are disabled
:Returns: 0 on success, -EINVAL when args[0] contains invalid exits
or if any vCPUs have already been created

Valid bits in args[0] are::

12 changes: 6 additions & 6 deletions MAINTAINERS
@@ -12686,8 +12686,8 @@ F: arch/arm64/include/asm/kvm*
F: arch/arm64/include/uapi/asm/kvm*
F: arch/arm64/kvm/
F: include/kvm/arm_*
F: tools/testing/selftests/kvm/*/aarch64/
F: tools/testing/selftests/kvm/aarch64/
F: tools/testing/selftests/kvm/*/arm64/
F: tools/testing/selftests/kvm/arm64/

KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch)
M: Tianrui Zhao <zhaotianrui@loongson.cn>
@@ -12758,8 +12758,8 @@ F: arch/s390/kvm/
F: arch/s390/mm/gmap.c
F: drivers/s390/char/uvdevice.c
F: tools/testing/selftests/drivers/s390x/uvdevice/
F: tools/testing/selftests/kvm/*/s390x/
F: tools/testing/selftests/kvm/s390x/
F: tools/testing/selftests/kvm/*/s390/
F: tools/testing/selftests/kvm/s390/

KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
M: Sean Christopherson <seanjc@google.com>
@@ -12776,8 +12776,8 @@ F: arch/x86/include/uapi/asm/svm.h
F: arch/x86/include/uapi/asm/vmx.h
F: arch/x86/kvm/
F: arch/x86/kvm/*/
F: tools/testing/selftests/kvm/*/x86_64/
F: tools/testing/selftests/kvm/x86_64/
F: tools/testing/selftests/kvm/*/x86/
F: tools/testing/selftests/kvm/x86/

KERNFS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
3 changes: 0 additions & 3 deletions arch/arm64/include/uapi/asm/kvm.h
@@ -43,9 +43,6 @@
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
#define KVM_DIRTY_LOG_PAGE_OFFSET 64

#define KVM_REG_SIZE(id) \
(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))

struct kvm_regs {
struct user_pt_regs regs; /* sp = sp_el0 */

1 change: 1 addition & 0 deletions arch/loongarch/include/asm/kvm_host.h
@@ -162,6 +162,7 @@ enum emulation_result {
#define LOONGARCH_PV_FEAT_UPDATED BIT_ULL(63)
#define LOONGARCH_PV_FEAT_MASK (BIT(KVM_FEATURE_IPI) | \
BIT(KVM_FEATURE_STEAL_TIME) | \
BIT(KVM_FEATURE_USER_HCALL) | \
BIT(KVM_FEATURE_VIRT_EXTIOI))

struct kvm_vcpu_arch {
3 changes: 3 additions & 0 deletions arch/loongarch/include/asm/kvm_para.h
@@ -13,13 +13,16 @@

#define KVM_HCALL_CODE_SERVICE 0
#define KVM_HCALL_CODE_SWDBG 1
#define KVM_HCALL_CODE_USER_SERVICE 2

#define KVM_HCALL_SERVICE HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SERVICE)
#define KVM_HCALL_FUNC_IPI 1
#define KVM_HCALL_FUNC_NOTIFY 2

#define KVM_HCALL_SWDBG HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_SWDBG)

#define KVM_HCALL_USER_SERVICE HYPERCALL_ENCODE(HYPERVISOR_KVM, KVM_HCALL_CODE_USER_SERVICE)

/*
* LoongArch hypercall return code
*/
1 change: 1 addition & 0 deletions arch/loongarch/include/asm/kvm_vcpu.h
@@ -43,6 +43,7 @@ int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);
int kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_emu_idle(struct kvm_vcpu *vcpu);
int kvm_pending_timer(struct kvm_vcpu *vcpu);
int kvm_handle_fault(struct kvm_vcpu *vcpu, int fault);
1 change: 1 addition & 0 deletions arch/loongarch/include/uapi/asm/kvm_para.h
@@ -17,5 +17,6 @@
#define KVM_FEATURE_STEAL_TIME 2
/* BIT 24 - 31 are features configurable by user space vmm */
#define KVM_FEATURE_VIRT_EXTIOI 24
#define KVM_FEATURE_USER_HCALL 25

#endif /* _UAPI_ASM_KVM_PARA_H */
30 changes: 30 additions & 0 deletions arch/loongarch/kvm/exit.c
@@ -709,6 +709,14 @@ static int kvm_handle_write_fault(struct kvm_vcpu *vcpu)
return kvm_handle_rdwr_fault(vcpu, true);
}

int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
update_pc(&vcpu->arch);
kvm_write_reg(vcpu, LOONGARCH_GPR_A0, run->hypercall.ret);

return 0;
}

/**
* kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host
* @vcpu: Virtual CPU context.
@@ -873,6 +881,28 @@ static int kvm_handle_hypercall(struct kvm_vcpu *vcpu)
vcpu->stat.hypercall_exits++;
kvm_handle_service(vcpu);
break;
case KVM_HCALL_USER_SERVICE:
if (!kvm_guest_has_pv_feature(vcpu, KVM_FEATURE_USER_HCALL)) {
kvm_write_reg(vcpu, LOONGARCH_GPR_A0, KVM_HCALL_INVALID_CODE);
break;
}

vcpu->stat.hypercall_exits++;
vcpu->run->exit_reason = KVM_EXIT_HYPERCALL;
vcpu->run->hypercall.nr = KVM_HCALL_USER_SERVICE;
vcpu->run->hypercall.args[0] = kvm_read_reg(vcpu, LOONGARCH_GPR_A0);
vcpu->run->hypercall.args[1] = kvm_read_reg(vcpu, LOONGARCH_GPR_A1);
vcpu->run->hypercall.args[2] = kvm_read_reg(vcpu, LOONGARCH_GPR_A2);
vcpu->run->hypercall.args[3] = kvm_read_reg(vcpu, LOONGARCH_GPR_A3);
vcpu->run->hypercall.args[4] = kvm_read_reg(vcpu, LOONGARCH_GPR_A4);
vcpu->run->hypercall.args[5] = kvm_read_reg(vcpu, LOONGARCH_GPR_A5);
vcpu->run->hypercall.flags = 0;
/*
* Set invalid return value by default, let user-mode VMM modify it.
*/
vcpu->run->hypercall.ret = KVM_HCALL_INVALID_CODE;
ret = RESUME_HOST;
break;
case KVM_HCALL_SWDBG:
/* KVM_HCALL_SWDBG is only effective when SW_BP is enabled */
if (vcpu->guest_debug & KVM_GUESTDBG_SW_BP_MASK) {
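
On the VMM side the handling is symmetric: read the arguments, perform the
service, and write the result into run->hypercall.ret before the next
KVM_RUN, where kvm_complete_user_service() above copies it back to the
guest (a hedged sketch; handle_user_hcall() is a hypothetical VMM
dispatcher):

  struct kvm_run *run; /* previously mmap()ed from the vcpu fd */

  if (run->exit_reason == KVM_EXIT_HYPERCALL &&
      run->hypercall.nr == KVM_HCALL_USER_SERVICE) {
          run->hypercall.ret = handle_user_hcall(run->hypercall.args);
          ioctl(vcpu_fd, KVM_RUN, 0); /* ret lands in guest A0 */
  }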
18 changes: 18 additions & 0 deletions arch/loongarch/kvm/main.c
@@ -245,6 +245,24 @@ void kvm_check_vpid(struct kvm_vcpu *vcpu)
trace_kvm_vpid_change(vcpu, vcpu->arch.vpid);
vcpu->cpu = cpu;
kvm_clear_request(KVM_REQ_TLB_FLUSH_GPA, vcpu);

/*
 * LLBCTL is a guest CSR register separate from the host's; a general
 * exception ERET instruction clears the host LLBCTL register in
 * host mode, and clears the guest LLBCTL register in guest mode.
 * ERET in a tlb refill exception does not clear the LLBCTL register.
 *
 * When the secondary mmu mapping changes, the guest OS has no way
 * of knowing, even though the underlying content may have changed.
 *
 * Clear WCLLB of the guest LLBCTL register here whenever the mapping
 * changes. Otherwise, if the mmu mapping changes while the guest is
 * executing an LL/SC pair, LL loads from the old address and sets
 * the LLBCTL flag, then SC checks the LLBCTL flag and stores to the
 * new address successfully since LLBCTL_WCLLB is on, even if the
 * memory at the new address was changed on other vCPUs.
 */
set_gcsr_llbctl(CSR_LLBCTL_WCLLB);
}

/* Restore GSTAT(0x50).vpid */
7 changes: 6 additions & 1 deletion arch/loongarch/kvm/vcpu.c
@@ -1732,9 +1732,14 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
vcpu->mmio_needed = 0;
}

if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) {
switch (run->exit_reason) {
case KVM_EXIT_HYPERCALL:
kvm_complete_user_service(vcpu, run);
break;
case KVM_EXIT_LOONGARCH_IOCSR:
if (!run->iocsr_io.is_write)
kvm_complete_iocsr_read(vcpu, run);
break;
}

if (!vcpu->wants_to_run)
5 changes: 5 additions & 0 deletions arch/riscv/include/asm/kvm_host.h
@@ -87,6 +87,11 @@ struct kvm_vcpu_stat {
u64 csr_exit_kernel;
u64 signal_exits;
u64 exits;
u64 instr_illegal_exits;
u64 load_misaligned_exits;
u64 store_misaligned_exits;
u64 load_access_exits;
u64 store_access_exits;
};

struct kvm_arch_memory_slot {
1 change: 1 addition & 0 deletions arch/riscv/include/asm/kvm_vcpu_sbi.h
@@ -85,6 +85,7 @@ extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence;
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst;
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm;
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_dbcn;
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_susp;
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_sta;
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental;
extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor;
7 changes: 4 additions & 3 deletions arch/riscv/include/uapi/asm/kvm.h
@@ -179,6 +179,9 @@ enum KVM_RISCV_ISA_EXT_ID {
KVM_RISCV_ISA_EXT_SSNPM,
KVM_RISCV_ISA_EXT_SVADE,
KVM_RISCV_ISA_EXT_SVADU,
KVM_RISCV_ISA_EXT_SVVPTC,
KVM_RISCV_ISA_EXT_ZABHA,
KVM_RISCV_ISA_EXT_ZICCRSE,
KVM_RISCV_ISA_EXT_MAX,
};

@@ -198,6 +201,7 @@ enum KVM_RISCV_SBI_EXT_ID {
KVM_RISCV_SBI_EXT_VENDOR,
KVM_RISCV_SBI_EXT_DBCN,
KVM_RISCV_SBI_EXT_STA,
KVM_RISCV_SBI_EXT_SUSP,
KVM_RISCV_SBI_EXT_MAX,
};

@@ -211,9 +215,6 @@ struct kvm_riscv_sbi_sta {
#define KVM_RISCV_TIMER_STATE_OFF 0
#define KVM_RISCV_TIMER_STATE_ON 1

#define KVM_REG_SIZE(id) \
(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))

/* If you need to interpret the index values, here is the key: */
#define KVM_REG_RISCV_TYPE_MASK 0x00000000FF000000
#define KVM_REG_RISCV_TYPE_SHIFT 24
1 change: 1 addition & 0 deletions arch/riscv/kvm/Makefile
@@ -30,6 +30,7 @@ kvm-y += vcpu_sbi_hsm.o
kvm-$(CONFIG_RISCV_PMU_SBI) += vcpu_sbi_pmu.o
kvm-y += vcpu_sbi_replace.o
kvm-y += vcpu_sbi_sta.o
kvm-y += vcpu_sbi_system.o
kvm-$(CONFIG_RISCV_SBI_V01) += vcpu_sbi_v01.o
kvm-y += vcpu_switch.o
kvm-y += vcpu_timer.o
7 changes: 6 additions & 1 deletion arch/riscv/kvm/vcpu.c
@@ -34,7 +34,12 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
STATS_DESC_COUNTER(VCPU, csr_exit_user),
STATS_DESC_COUNTER(VCPU, csr_exit_kernel),
STATS_DESC_COUNTER(VCPU, signal_exits),
STATS_DESC_COUNTER(VCPU, exits)
STATS_DESC_COUNTER(VCPU, exits),
STATS_DESC_COUNTER(VCPU, instr_illegal_exits),
STATS_DESC_COUNTER(VCPU, load_misaligned_exits),
STATS_DESC_COUNTER(VCPU, store_misaligned_exits),
STATS_DESC_COUNTER(VCPU, load_access_exits),
STATS_DESC_COUNTER(VCPU, store_access_exits),
};

const struct kvm_stats_header kvm_vcpu_stats_header = {
37 changes: 33 additions & 4 deletions arch/riscv/kvm/vcpu_exit.c
@@ -165,6 +165,17 @@ void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
vcpu->arch.guest_context.sstatus |= SR_SPP;
}

static inline int vcpu_redirect(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap)
{
int ret = -EFAULT;

if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) {
kvm_riscv_vcpu_trap_redirect(vcpu, trap);
ret = 1;
}
return ret;
}

/*
* Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
* proper exit to userspace.
@@ -183,14 +194,32 @@ int kvm_riscv_vcpu_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
run->exit_reason = KVM_EXIT_UNKNOWN;
switch (trap->scause) {
case EXC_INST_ILLEGAL:
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ILLEGAL_INSN);
vcpu->stat.instr_illegal_exits++;
ret = vcpu_redirect(vcpu, trap);
break;
case EXC_LOAD_MISALIGNED:
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_MISALIGNED_LOAD);
vcpu->stat.load_misaligned_exits++;
ret = vcpu_redirect(vcpu, trap);
break;
case EXC_STORE_MISALIGNED:
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_MISALIGNED_STORE);
vcpu->stat.store_misaligned_exits++;
ret = vcpu_redirect(vcpu, trap);
break;
case EXC_LOAD_ACCESS:
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ACCESS_LOAD);
vcpu->stat.load_access_exits++;
ret = vcpu_redirect(vcpu, trap);
break;
case EXC_STORE_ACCESS:
if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV) {
kvm_riscv_vcpu_trap_redirect(vcpu, trap);
ret = 1;
}
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_ACCESS_STORE);
vcpu->stat.store_access_exits++;
ret = vcpu_redirect(vcpu, trap);
break;
case EXC_INST_ACCESS:
ret = vcpu_redirect(vcpu, trap);
break;
case EXC_VIRTUAL_INST_FAULT:
if (vcpu->arch.guest_context.hstatus & HSTATUS_SPV)
6 changes: 6 additions & 0 deletions arch/riscv/kvm/vcpu_onereg.c
@@ -46,6 +46,8 @@ static const unsigned long kvm_isa_ext_arr[] = {
KVM_ISA_EXT_ARR(SVINVAL),
KVM_ISA_EXT_ARR(SVNAPOT),
KVM_ISA_EXT_ARR(SVPBMT),
KVM_ISA_EXT_ARR(SVVPTC),
KVM_ISA_EXT_ARR(ZABHA),
KVM_ISA_EXT_ARR(ZACAS),
KVM_ISA_EXT_ARR(ZAWRS),
KVM_ISA_EXT_ARR(ZBA),
@@ -65,6 +67,7 @@ static const unsigned long kvm_isa_ext_arr[] = {
KVM_ISA_EXT_ARR(ZFHMIN),
KVM_ISA_EXT_ARR(ZICBOM),
KVM_ISA_EXT_ARR(ZICBOZ),
KVM_ISA_EXT_ARR(ZICCRSE),
KVM_ISA_EXT_ARR(ZICNTR),
KVM_ISA_EXT_ARR(ZICOND),
KVM_ISA_EXT_ARR(ZICSR),
@@ -145,6 +148,8 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
case KVM_RISCV_ISA_EXT_SSTC:
case KVM_RISCV_ISA_EXT_SVINVAL:
case KVM_RISCV_ISA_EXT_SVNAPOT:
case KVM_RISCV_ISA_EXT_SVVPTC:
case KVM_RISCV_ISA_EXT_ZABHA:
case KVM_RISCV_ISA_EXT_ZACAS:
case KVM_RISCV_ISA_EXT_ZAWRS:
case KVM_RISCV_ISA_EXT_ZBA:
@@ -162,6 +167,7 @@ static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext)
case KVM_RISCV_ISA_EXT_ZFA:
case KVM_RISCV_ISA_EXT_ZFH:
case KVM_RISCV_ISA_EXT_ZFHMIN:
case KVM_RISCV_ISA_EXT_ZICCRSE:
case KVM_RISCV_ISA_EXT_ZICNTR:
case KVM_RISCV_ISA_EXT_ZICOND:
case KVM_RISCV_ISA_EXT_ZICSR:
4 changes: 4 additions & 0 deletions arch/riscv/kvm/vcpu_sbi.c
@@ -70,6 +70,10 @@ static const struct kvm_riscv_sbi_extension_entry sbi_ext[] = {
.ext_idx = KVM_RISCV_SBI_EXT_DBCN,
.ext_ptr = &vcpu_sbi_ext_dbcn,
},
{
.ext_idx = KVM_RISCV_SBI_EXT_SUSP,
.ext_ptr = &vcpu_sbi_ext_susp,
},
{
.ext_idx = KVM_RISCV_SBI_EXT_STA,
.ext_ptr = &vcpu_sbi_ext_sta,