5 years ago  KVM: x86: Set intercept for Intel PT MSRs read/write
Chao Peng [Wed, 24 Oct 2018 08:05:15 +0000 (16:05 +0800)]
KVM: x86: Set intercept for Intel PT MSRs read/write

To reduce performance overhead, disable interception of Intel PT MSR
reads/writes when Intel PT is enabled in the guest.
MSR_IA32_RTIT_CTL is an exception that will always be intercepted.
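
A condensed sketch of the approach; the helper and field names below
approximate the VMX MSR-bitmap code of this era and the call sites are
simplified, so treat this as illustrative rather than the verbatim patch:

  static void pt_update_intercept_for_msr(struct vcpu_vmx *vmx)
  {
          unsigned long *bitmap = vmx->vmcs01.msr_bitmap;
          /* Pass the MSRs through while guest tracing is enabled. */
          bool flag = !(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN);
          u32 i;

          vmx_set_intercept_for_msr(bitmap, MSR_IA32_RTIT_STATUS,
                                    MSR_TYPE_RW, flag);
          vmx_set_intercept_for_msr(bitmap, MSR_IA32_RTIT_OUTPUT_BASE,
                                    MSR_TYPE_RW, flag);
          vmx_set_intercept_for_msr(bitmap, MSR_IA32_RTIT_OUTPUT_MASK,
                                    MSR_TYPE_RW, flag);
          vmx_set_intercept_for_msr(bitmap, MSR_IA32_RTIT_CR3_MATCH,
                                    MSR_TYPE_RW, flag);
          for (i = 0; i < vmx->pt_desc.addr_range; i++) {
                  vmx_set_intercept_for_msr(bitmap,
                          MSR_IA32_RTIT_ADDR0_A + i * 2, MSR_TYPE_RW, flag);
                  vmx_set_intercept_for_msr(bitmap,
                          MSR_IA32_RTIT_ADDR0_B + i * 2, MSR_TYPE_RW, flag);
          }
          /* MSR_IA32_RTIT_CTL is deliberately left intercepted. */
  }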

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: x86: Implement Intel PT MSRs read/write emulation
Chao Peng [Wed, 24 Oct 2018 08:05:14 +0000 (16:05 +0800)]
KVM: x86: Implement Intel PT MSRs read/write emulation

This patch implements Intel Processor Trace MSR read/write
emulation.
Intel PT MSR reads/writes need to be emulated when the Intel PT
MSRs are intercepted in the guest and during live migration.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: x86: Introduce a function to initialize the PT configuration
Luwei Kang [Wed, 24 Oct 2018 08:05:13 +0000 (16:05 +0800)]
KVM: x86: Introduce a function to initialize the PT configuration

Initialize the Intel PT configuration on CPUID update. This
includes the CPUID information, the rtit_ctl bit mask and the number of
address ranges.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: x86: Add Intel PT context switch for each vcpu
Chao Peng [Wed, 24 Oct 2018 08:05:12 +0000 (16:05 +0800)]
KVM: x86: Add Intel PT context switch for each vcpu

Load/store the Intel Processor Trace registers on context switch.
MSR IA32_RTIT_CTL is loaded/stored automatically from the VMCS.
In Host-Guest mode, we need to load/restore the PT MSRs only when PT
is enabled in the guest.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: x86: Add Intel Processor Trace cpuid emulation
Chao Peng [Wed, 24 Oct 2018 08:05:11 +0000 (16:05 +0800)]
KVM: x86: Add Intel Processor Trace cpuid emulation

Expose Intel Processor Trace to the guest only when
PT works in Host-Guest mode.

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: x86: Add Intel PT virtualization work mode
Chao Peng [Wed, 24 Oct 2018 08:05:10 +0000 (16:05 +0800)]
KVM: x86: Add Intel PT virtualization work mode

Intel Processor Trace virtualization can work in one
of two possible modes:

a. System-Wide mode (default):
   When the host configures Intel PT to collect trace packets
   of the entire system, it can leave the relevant VMX controls
   clear to allow VMX-specific packets to provide information
   across VMX transitions.
   The KVM guest is not aware of this feature in this mode, and both
   the host and KVM guest traces are output to the host buffer.

b. Host-Guest mode:
   The host can configure trace-packet generation while in
   VMX non-root operation for guests and in root operation
   for normal native execution.
   Intel PT is exposed to the KVM guest in this mode, and
   the trace is output to the respective buffers of host and guest.
   In this mode, the PT state is saved and tracing is disabled
   before VM-entry and restored after VM-exit when tracing
   a virtual machine.
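
As a sketch, the two modes map naturally onto an enum; PT_MODE_SYSTEM and
PT_MODE_HOST_GUEST follow the naming used by the PT virtualization patches,
while the load-time selection logic is elided here:

  enum pt_mode {
          PT_MODE_SYSTEM,         /* host and guest trace both land in host buffers */
          PT_MODE_HOST_GUEST,     /* PT context is switched; guest owns its own trace */
  };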

Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  perf/x86/intel/pt: add new capability for Intel PT
Luwei Kang [Wed, 24 Oct 2018 08:05:09 +0000 (16:05 +0800)]
perf/x86/intel/pt: add new capability for Intel PT

This adds support for the "output to Trace Transport subsystem"
capability of Intel PT. It means that PT can output its
trace to an MMIO address range rather than to a system memory buffer.

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  perf/x86/intel/pt: Add new bit definitions for PT MSRs
Luwei Kang [Wed, 24 Oct 2018 08:05:08 +0000 (16:05 +0800)]
perf/x86/intel/pt: Add new bit definitions for PT MSRs

Add bit definitions for the Intel PT MSRs that support trace output
directed to the memory subsystem and that hold a count of packet
bytes that have been sent out.

These are required by the upcoming PT support in KVM guests
for MSRs read/write emulation.

Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  perf/x86/intel/pt: Introduce intel_pt_validate_cap()
Luwei Kang [Wed, 24 Oct 2018 08:05:07 +0000 (16:05 +0800)]
perf/x86/intel/pt: Introduce intel_pt_validate_cap()

intel_pt_validate_hw_cap() validates whether a given PT capability is
supported by the hardware. It checks the PT capability array which
reflects the capabilities of the hardware on which the code is executed.

For setting up PT for KVM guests this is not correct as the capability
array for the guest can be different from the host array.

Provide a new function to check against a given capability array.
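
A minimal sketch of such a function, assuming the driver's descriptor table
of per-capability CPUID leaf, register and mask (names approximated from
the PT driver):

  u32 intel_pt_validate_cap(u32 *caps, enum pt_capabilities capability)
  {
          struct pt_cap_desc *cd = &pt_caps[capability];
          u32 c = caps[cd->leaf * PT_CPUID_REGS_NUM + cd->reg];

          return (c & cd->mask) >> __ffs(cd->mask);
  }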

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  perf/x86/intel/pt: Export pt_cap_get()
Chao Peng [Wed, 24 Oct 2018 08:05:06 +0000 (16:05 +0800)]
perf/x86/intel/pt: Export pt_cap_get()

pt_cap_get() is required by the upcoming PT support in KVM guests.

Export it and move the capabilities enum to a global header.

As a prefix for global functions, "pt_*" is already used for ptrace and
other things, so it makes sense to use "intel_pt_*" instead.

Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  perf/x86/intel/pt: Move Intel PT MSRs bit defines to global header
Chao Peng [Wed, 24 Oct 2018 08:05:05 +0000 (16:05 +0800)]
perf/x86/intel/pt: Move Intel PT MSRs bit defines to global header

The Intel Processor Trace (PT) MSR bit defines are in a private
header. The upcoming support for PT virtualization requires these defines
to be accessible from KVM code.

Move them to the global MSR header file.

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Signed-off-by: Luwei Kang <luwei.kang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: selftests: aarch64: dirty_log_test: support greater than 40-bit IPAs
Andrew Jones [Tue, 6 Nov 2018 13:57:12 +0000 (14:57 +0100)]
kvm: selftests: aarch64: dirty_log_test: support greater than 40-bit IPAs

When KVM has KVM_CAP_ARM_VM_IPA_SIZE we can test with > 40-bit IPAs by
using the 'type' field of KVM_CREATE_VM.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: selftests: add pa-48/va-48 VM modes
Andrew Jones [Tue, 6 Nov 2018 13:57:11 +0000 (14:57 +0100)]
kvm: selftests: add pa-48/va-48 VM modes

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: selftests: dirty_log_test: improve mode param management
Andrew Jones [Tue, 6 Nov 2018 13:57:10 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: improve mode param management

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: selftests: dirty_log_test: reset guest test phys offset
Andrew Jones [Tue, 6 Nov 2018 13:57:09 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: reset guest test phys offset

We need to reset the offset for each mode as it will change
depending on the number of guest physical address bits.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: selftests: dirty_log_test: always use -t
Andrew Jones [Tue, 6 Nov 2018 13:57:08 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: always use -t

There's no reason not to always test the topmost physical
addresses, and if the user wants to try lower addresses
then '-p' (which used to be '-o' before this patch) can be used.
Let's remove the '-t' option and just always do what it did.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: selftests: dirty_log_test: don't identity map the test mem
Andrew Jones [Tue, 6 Nov 2018 13:57:07 +0000 (14:57 +0100)]
kvm: selftests: dirty_log_test: don't identity map the test mem

It isn't necessary and can even cause problems when testing high
guest physical addresses. This patch leaves the test memory
id-mapped by default, but when using '-t' the test memory virtual
addresses stay the same even though the physical addresses switch
to the topmost valid addresses.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: selftests: x86_64: dirty_log_test: fix -t
Andrew Jones [Tue, 6 Nov 2018 13:57:06 +0000 (14:57 +0100)]
kvm: selftests: x86_64: dirty_log_test: fix -t

Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  KVM: fix some typos
Wei Yang [Mon, 5 Nov 2018 06:45:03 +0000 (14:45 +0800)]
KVM: fix some typos

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
[Preserved the iff and a probably intentional weird bracket notation.
 Also dropped the style change to make a single-purpose patch. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  x86/kvmclock: convert to SPDX identifiers
Peng Hao [Fri, 2 Nov 2018 09:05:17 +0000 (17:05 +0800)]
x86/kvmclock: convert to SPDX identifiers

Update the verbose license text with the matching SPDX
license identifier.

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
[Changed deprecated GPL-2.0+ to GPL-2.0-or-later. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  KVM: x86: Remove KF() macro placeholder
Sean Christopherson [Mon, 5 Nov 2018 18:44:32 +0000 (10:44 -0800)]
KVM: x86: Remove KF() macro placeholder

Although well-intentioned, keeping the KF() definition as a hint for
handling scattered CPUID features may be counter-productive.  Simply
redefining the bit position only works for directly manipulating the
guest's CPUID leafs, e.g. it doesn't make guest_cpuid_has() magically
work.  Taking an alternative approach, e.g. ensuring the bit position
is identical between the Linux-defined and hardware-defined features,
may be a simpler and/or more effective method of exposing scattered
features to the guest.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: vmx: Allow guest read access to IA32_TSC
Jim Mattson [Fri, 9 Nov 2018 17:35:11 +0000 (09:35 -0800)]
kvm: vmx: Allow guest read access to IA32_TSC

Let the guest read the IA32_TSC MSR with the generic RDMSR instruction
as well as the specific RDTSC(P) instructions. Note that the hardware
applies the TSC multiplier and offset (when applicable) to the result of
RDMSR(IA32_TSC), just as it does to the result of RDTSC(P).

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: nVMX: NMI-window and interrupt-window exiting should wake L2 from HLT
Jim Mattson [Mon, 26 Nov 2018 19:22:32 +0000 (11:22 -0800)]
kvm: nVMX: NMI-window and interrupt-window exiting should wake L2 from HLT

According to the SDM, "NMI-window exiting" VM-exits wake a logical
processor from the same inactive states as would an NMI and
"interrupt-window exiting" VM-exits wake a logical processor from the
same inactive states as would an external interrupt. Specifically, they
wake a logical processor from the shutdown state and from the states
entered using the HLT and MWAIT instructions.
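
In essence (control-field names as in the VMX headers of this era; the
surrounding hook is simplified), the L2 halted-state check must treat a
pending window exit as a wake event:

  if (is_guest_mode(vcpu) &&
      (nested_cpu_has(vmcs12, CPU_BASED_VIRTUAL_NMI_PENDING) ||
       nested_cpu_has(vmcs12, CPU_BASED_VIRTUAL_INTR_PENDING)))
          return true;    /* a *-window VM-exit is pending: leave HLT */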

Fixes: 6dfacadd5858 ("KVM: nVMX: Add support for activity state HLT")
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
[Squashed comments of two Jim's patches and used the simplified code
 hunk provided by Sean. - Radim]
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  KVM: nSVM: Fix nested guest support for PAUSE filtering.
Tambe, William [Tue, 13 Nov 2018 16:51:20 +0000 (16:51 +0000)]
KVM: nSVM: Fix nested guest support for PAUSE filtering.

Currently, the nested guest's PAUSE intercept intentions are not being
honored.  Instead, since the L0 hypervisor's pause_filter_count and
pause_filter_thresh values are still in place, these values are used
instead of those programmed in the VMCB by the L1 hypervisor.

To honor the desired PAUSE intercept support of the L1 hypervisor, the L0
hypervisor must use the PAUSE filtering fields of the L1 hypervisor. This
requires saving and restoring of both the L0 and L1 hypervisor's PAUSE
filtering fields.

Signed-off-by: William Tambe <william.tambe@amd.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: Change offset in kvm_write_guest_offset_cached to unsigned
Jim Mattson [Fri, 14 Dec 2018 22:34:43 +0000 (14:34 -0800)]
kvm: Change offset in kvm_write_guest_offset_cached to unsigned

Since the offset is added directly to the hva from the
gfn_to_hva_cache, a negative offset could result in an out of bounds
write. The existing BUG_ON only checks for addresses beyond the end of
the gfn_to_hva_cache, not for addresses before the start of the
gfn_to_hva_cache.

Note that all current call sites have non-negative offsets.
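
A simplified sketch of the hardened signature; the real function also
revalidates the memslot generation and handles the slow path, both
omitted here:

  int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                                    void *data, unsigned int offset,
                                    unsigned long len)
  {
          BUG_ON(len + offset > ghc->len);  /* unsigned offset: no way back past the start */

          if (__copy_to_user((void __user *)ghc->hva + offset, data, len))
                  return -EFAULT;
          return 0;
  }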

Fixes: 4ec6e8636256 ("kvm: Introduce kvm_write_guest_offset_cached()")
Reported-by: Cfir Cohen <cfir@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Cfir Cohen <cfir@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  kvm: Disallow wraparound in kvm_gfn_to_hva_cache_init
Jim Mattson [Mon, 17 Dec 2018 21:53:33 +0000 (13:53 -0800)]
kvm: Disallow wraparound in kvm_gfn_to_hva_cache_init

Previously, in the case where (gpa + len) wrapped around, the entire
region was not validated, as the comment claimed. It doesn't actually
seem that wraparound should be allowed here at all.

Furthermore, since some callers don't check the return code from this
function, it seems prudent to clear ghc->memslot in the event of an
error.
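
The added guard, roughly (variable names approximate):

  gfn_t start_gfn = gpa >> PAGE_SHIFT;
  gfn_t end_gfn = (gpa + len - 1) >> PAGE_SHIFT;

  if (start_gfn > end_gfn) {      /* gpa + len wrapped around */
          ghc->memslot = NULL;    /* some callers never check the return code */
          return -EINVAL;
  }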

Fixes: 8f964525a121f ("KVM: Allow cross page reads and writes from cached translations.")
Reported-by: Cfir Cohen <cfir@google.com>
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Cfir Cohen <cfir@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Cc: Andrew Honig <ahonig@google.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  KVM: VMX: Remove duplicated include from vmx.c
YueHaibing [Tue, 18 Dec 2018 01:01:49 +0000 (01:01 +0000)]
KVM: VMX: Remove duplicated include from vmx.c

Remove duplicated include.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  selftests: kvm: report failed stage when exit reason is unexpected
Vitaly Kuznetsov [Wed, 19 Dec 2018 11:15:18 +0000 (12:15 +0100)]
selftests: kvm: report failed stage when exit reason is unexpected

When we get a report like

==== Test Assertion Failure ====
  x86_64/state_test.c:157: run->exit_reason == KVM_EXIT_IO
  pid=955 tid=955 - Success
     1 0x0000000000401350: main at state_test.c:154
     2 0x00007fc31c9e9412: ?? ??:0
     3 0x000000000040159d: _start at ??:?
  Unexpected exit reason: 8 (SHUTDOWN),

it is not obvious which particular stage failed. Add the info.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  KVM: x86: svm: report MSR_IA32_MCG_EXT_CTL as unsupported
Vitaly Kuznetsov [Wed, 19 Dec 2018 11:06:13 +0000 (12:06 +0100)]
KVM: x86: svm: report MSR_IA32_MCG_EXT_CTL as unsupported

AMD doesn't seem to implement MSR_IA32_MCG_EXT_CTL and the svm code in
kvm knows nothing about it; however, this MSR is among emulated_msrs and
is thus returned by KVM_GET_MSR_INDEX_LIST. The subsequent KVM_GET_MSRS,
of course, fails.

Report the MSR as unsupported so as not to confuse userspace.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
5 years ago  KVM: x86: fix size of x86_fpu_cache objects
Paolo Bonzini [Fri, 21 Dec 2018 10:25:59 +0000 (11:25 +0100)]
KVM: x86: fix size of x86_fpu_cache objects

The memory allocation in b666a4b69739 ("kvm: x86: Dynamically allocate
guest_fpu", 2018-11-06) is wrong, there are other members in struct fpu
before the fpregs_state union and the patch should be doing something
similar to the code in fpu__init_task_struct_size.  It's enough to run
a guest and then rmmod kvm to see slub errors which are actually caused
by memory corruption.

For now let's revert it to sizeof(struct fpu), which is conservative.
I have plans to move fsave/fxsave/xsave directly in KVM, without using
the kernel FPU helpers, and once it's done, the size of the object in
the cache will be something like kvm_xstate_size.
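
The conservative sizing in one call, as a sketch (cache-creation flags
approximate):

  x86_fpu_cache = kmem_cache_create("x86_fpu", sizeof(struct fpu),
                                    __alignof__(struct fpu),
                                    SLAB_ACCOUNT, NULL);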

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  Merge tag 'kvm-ppc-next-4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
Radim Krčmář [Thu, 20 Dec 2018 13:54:09 +0000 (14:54 +0100)]
Merge tag 'kvm-ppc-next-4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc

PPC KVM update for 4.21 from Paul Mackerras

The main new feature this time is support in HV nested KVM for passing
a device that is emulated by a level 0 hypervisor and presented to
level 1 as a PCI device through to a level 2 guest using VFIO.

Apart from that there are improvements for migration of radix guests
under HV KVM and some other fixes and cleanups.

5 years ago  Merge tag 'kvm-s390-next-4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD
Paolo Bonzini [Wed, 19 Dec 2018 21:17:09 +0000 (22:17 +0100)]
Merge tag 'kvm-s390-next-4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Fixes for 4.21

Just two small fixes.

5 years ago  Merge tag 'kvmarm-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
Paolo Bonzini [Wed, 19 Dec 2018 19:33:55 +0000 (20:33 +0100)]
Merge tag 'kvmarm-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm updates for 4.21

- Large PUD support for HugeTLB
- Single-stepping fixes
- Improved tracing
- Various timer and vgic fixups

5 years ago  arm: KVM: Add S2_PMD_{MASK,SIZE} constants
Marc Zyngier [Wed, 19 Dec 2018 08:31:54 +0000 (08:31 +0000)]
arm: KVM: Add S2_PMD_{MASK,SIZE} constants

They were missing, and it turns out that we do need them now.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm/arm64: KVM: Add ARM_EXCEPTION_IS_TRAP macro
Marc Zyngier [Wed, 19 Dec 2018 08:28:38 +0000 (08:28 +0000)]
arm/arm64: KVM: Add ARM_EXCEPTION_IS_TRAP macro

32 and 64bit use different symbols to identify the traps.
32bit has a fine grained approach (prefetch abort, data abort and HVC),
while 64bit is pretty happy with just "trap".

This has been fine so far, except that we now need to decode some
of that in tracepoints that are common to both architectures.

Introduce ARM_EXCEPTION_IS_TRAP which abstracts the trap symbols
and make the tracepoint use it.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1
Will Deacon [Thu, 13 Dec 2018 16:06:14 +0000 (16:06 +0000)]
arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1

Although bit 31 of VTCR_EL2 is RES1, we inadvertently end up setting all
of the upper 32 bits to 1 as well because we define VTCR_EL2_RES1 as
signed, which is sign-extended when assigning to kvm->arch.vtcr.

Lucky for us, the architecture currently treats these upper bits as RES0
so, whilst we've been naughty, we haven't set fire to anything yet.
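
A standalone demonstration of the mechanism (plain C, not kernel code):
widening a signed 32-bit constant with bit 31 set sign-extends, which is
exactly what happened to kvm->arch.vtcr:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t res1_signed   = (1 << 31);     /* int constant: sign-extended */
          uint64_t res1_unsigned = (1U << 31);    /* unsigned constant: zero-extended */

          printf("signed:   0x%016" PRIx64 "\n", res1_signed);    /* 0xffffffff80000000 */
          printf("unsigned: 0x%016" PRIx64 "\n", res1_unsigned);  /* 0x0000000080000000 */
          return 0;
  }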

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Fix unintended stage 2 PMD mappings
Christoffer Dall [Fri, 2 Nov 2018 07:53:22 +0000 (08:53 +0100)]
KVM: arm/arm64: Fix unintended stage 2 PMD mappings

There are two things we need to take care of when we create block
mappings in the stage 2 page tables:

  (1) The alignment within a PMD between the host address range and the
  guest IPA range must be the same, since otherwise we end up mapping
  pages with the wrong offset.

  (2) The head and tail of a memory slot may not cover a full block
  size, and we have to take care to not map those with block
  descriptors, since we could expose memory to the guest that the host
  did not intend to expose.

So far, we have been taking care of (1), but not (2), and our commentary
describing (1) was somewhat confusing.

This commit attempts to factor out the checks of both into a common
function, and if we don't pass the check, we won't attempt any PMD
mappings for either hugetlbfs or THP.

Note that we used to only check the alignment for THP, not for
hugetlbfs, but as far as I can tell the check needs to be applied to
both scenarios.

Cc: Ralph Palutke <ralph.palutke@fau.de>
Cc: Lukas Braun <koomi@moshbit.net>
Reported-by: Lukas Braun <koomi@moshbit.net>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm/arm64: KVM: vgic: Force VM halt when changing the active state of GICv3 PPIs/SGIs
Marc Zyngier [Tue, 18 Dec 2018 14:59:09 +0000 (14:59 +0000)]
arm/arm64: KVM: vgic: Force VM halt when changing the active state of GICv3 PPIs/SGIs

We currently only halt the guest when a vCPU messes with the active
state of an SPI. This is perfectly fine for GICv2, but isn't enough
for GICv3, where all vCPUs can access the state of any other vCPU.

Let's broaden the condition to include any GICv3 interrupt that
has an active state (i.e. all but LPIs).

Cc: stable@vger.kernel.org
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Add trapped system register access tracepoint
Marc Zyngier [Tue, 4 Dec 2018 10:44:22 +0000 (10:44 +0000)]
arm64: KVM: Add trapped system register access tracepoint

We're pretty blind when it comes to system register tracing,
and rely on the ESR value displayed by kvm_handle_sys, which
isn't much.

Instead, let's add an actual name to the sysreg entries, so that
we can finally print it as we're about to perform the access
itself.

The new tracepoint is conveniently called kvm_sys_access.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Make vcpu const in vcpu_read_sys_reg
Christoffer Dall [Thu, 29 Nov 2018 11:20:01 +0000 (12:20 +0100)]
KVM: arm64: Make vcpu const in vcpu_read_sys_reg

vcpu_read_sys_reg should not be modifying the VCPU structure.
Eventually, to handle EL2 sysregs for nested virtualization, we will
call vcpu_read_sys_reg from places that have a const vcpu pointer, which
will complain about the lack of the const modifier on the read path.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: arch_timer: Simplify kvm_timer_vcpu_terminate
Christoffer Dall [Tue, 11 Dec 2018 08:54:11 +0000 (09:54 +0100)]
KVM: arm/arm64: arch_timer: Simplify kvm_timer_vcpu_terminate

kvm_timer_vcpu_terminate can only be called in two scenarios:

 1. As part of cleanup during a failed VCPU create
 2. As part of freeing the whole VM (struct kvm refcount == 0)

In the first case, we cannot have programmed any timers or mapped any
IRQs, and therefore we do not have to cancel anything or unmap anything.

In the second case, the VCPU will have gone through kvm_timer_vcpu_put,
which will have canceled the emulated physical timer's hrtimer, and we
do not need to do that here again.  We also do not care if the irq is
recorded as mapped or not in the VGIC data structure, because the whole
VM is going away.  That leaves us only with having to ensure that we
cancel the bg_timer if we were blocking the last time we called
kvm_timer_vcpu_put().
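
Given the analysis above, the function collapses to cancelling the
background timer; a sketch (the exact field path and cancel helper differ
slightly across these patches):

  void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
  {
          /* bg_timer is only armed while the VCPU was blocked. */
          hrtimer_cancel(&vcpu->arch.timer_cpu.bg_timer);
  }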

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Remove arch timer workqueue
Christoffer Dall [Tue, 27 Nov 2018 12:48:08 +0000 (13:48 +0100)]
KVM: arm/arm64: Remove arch timer workqueue

The use of a work queue in the hrtimer expire function for the bg_timer
is a leftover from the time when we would inject interrupts when the
bg_timer expired.

Since we are no longer doing that, we can instead call
kvm_vcpu_wake_up() directly from the hrtimer function and remove all
workqueue functionality from the arch timer code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Fixup the kvm_exit tracepoint
Christoffer Dall [Mon, 3 Dec 2018 20:31:24 +0000 (21:31 +0100)]
KVM: arm/arm64: Fixup the kvm_exit tracepoint

The kvm_exit tracepoint strangely always reported exits as being IRQs.
This seems to be because either the __print_symbolic or the tracepoint
macros use a variable named idx.

Take this chance to update the fields in the tracepoint to reflect the
concepts in the arm64 architecture that we pass to the tracepoint and
move the exception type table to the same location and header files as
the exits code.

We also clear out the exception code to 0 for IRQ exits (which
translates to UNKNOWN in text) to make it slightly less confusing to
parse the trace output.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Consider priority and active state for pending irq
Christoffer Dall [Sat, 1 Dec 2018 21:21:47 +0000 (13:21 -0800)]
KVM: arm/arm64: vgic: Consider priority and active state for pending irq

When checking if there are any pending IRQs for the VM, consider the
active state and priority of the IRQs as well.

Otherwise we could be continuously scheduling a guest hypervisor without
it seeing an IRQ.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Fix off-by-one bug in vgic_get_irq()
Gustavo A. R. Silva [Wed, 12 Dec 2018 20:11:23 +0000 (14:11 -0600)]
KVM: arm/arm64: vgic: Fix off-by-one bug in vgic_get_irq()

When using the nospec API, it should be taken into account that:

"...if the CPU speculates past the bounds check then
 * array_index_nospec() will clamp the index within the range of [0,
 * size)."

The above is part of the header for macro array_index_nospec() in
linux/nospec.h

Now, in this particular case, if intid evaluates to exactly VGIC_MAX_SPI
or to exactly VGIC_MAX_PRIVATE, the array_index_nospec() macro ends up
returning VGIC_MAX_SPI - 1 or VGIC_MAX_PRIVATE - 1 respectively, instead
of VGIC_MAX_SPI or VGIC_MAX_PRIVATE, which, based on the original logic:

/* SGIs and PPIs */
if (intid <= VGIC_MAX_PRIVATE)
  return &vcpu->arch.vgic_cpu.private_irqs[intid];

  /* SPIs */
if (intid <= VGIC_MAX_SPI)
  return &kvm->arch.vgic.spis[intid - VGIC_NR_PRIVATE_IRQS];

are valid values for intid.

Fix this by calling array_index_nospec() macro with VGIC_MAX_PRIVATE + 1
and VGIC_MAX_SPI + 1 as arguments for its parameter size.
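
Concretely, for the private-interrupt path the fixed clamp looks like this
(the SPI path is analogous):

  /* size = bound + 1 keeps intid == VGIC_MAX_PRIVATE reachable */
  intid = array_index_nospec(intid, VGIC_MAX_PRIVATE + 1);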

Fixes: 41b87599c743 ("KVM: arm/arm64: vgic: fix possible spectre-v1 in vgic_get_irq()")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
[dropped the SPI part which was fixed separately]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Cap SPIs to the VM-defined maximum
Marc Zyngier [Tue, 4 Dec 2018 17:11:19 +0000 (17:11 +0000)]
KVM: arm/arm64: vgic: Cap SPIs to the VM-defined maximum

SPIs should be checked against the VM's specific configuration, and
not the architectural maximum.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Clarify explanation of STAGE2_PGTABLE_LEVELS
Christoffer Dall [Tue, 6 Nov 2018 12:33:38 +0000 (13:33 +0100)]
KVM: arm64: Clarify explanation of STAGE2_PGTABLE_LEVELS

In attempting to re-construct the logic for our stage 2 page table
layout I found the reasoning in the comment explaining how we calculate
the number of levels used for stage 2 page tables a bit backwards.

This commit attempts to clarify the comment, to make it slightly easier
to read without having the Arm ARM open on the right page.

While we're at it, fixup a typo in a comment that was recently changed.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
Julien Thierry [Mon, 26 Nov 2018 18:26:44 +0000 (18:26 +0000)]
KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled

To change the active state of an MMIO, halt is requested for all vcpus of
the affected guest before modifying the IRQ state. This is done by calling
cond_resched_lock() in vgic_mmio_change_active(). However interrupts are
disabled at this point and we cannot reschedule a vcpu.

We actually don't need any of this, as kvm_arm_halt_guest ensures that
all the other vcpus are out of the guest. Let's just drop that useless
code.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Add support for creating PUD hugepages at stage 2
Punit Agrawal [Tue, 11 Dec 2018 17:10:41 +0000 (17:10 +0000)]
KVM: arm64: Add support for creating PUD hugepages at stage 2

KVM only supports PMD hugepages at stage 2. Now that the various page
handling routines are updated, extend the stage 2 fault handling to
map in PUD hugepages.

Addition of PUD hugepage support enables additional page sizes (e.g.,
1G with 4K granule) which can be useful on cores that support mapping
larger block sizes in the TLB entries.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replace BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Update age handlers to support PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:40 +0000 (17:10 +0000)]
KVM: arm64: Update age handlers to support PUD hugepages

In preparation for creating larger hugepages at Stage 2, add support
to the age handling notifiers for PUD hugepages when encountered.

Provide trivial helpers for arm32 to allow sharing code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Support handling access faults for PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:39 +0000 (17:10 +0000)]
KVM: arm64: Support handling access faults for PUD hugepages

In preparation for creating larger hugepages at Stage 2, extend the
access fault handling at Stage 2 to support PUD hugepages when
encountered.

Provide trivial helpers for arm32 to allow sharing of code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Support PUD hugepage in stage2_is_exec()
Punit Agrawal [Tue, 11 Dec 2018 17:10:38 +0000 (17:10 +0000)]
KVM: arm64: Support PUD hugepage in stage2_is_exec()

In preparation for creating PUD hugepages at stage 2, add support for
detecting execute permissions on PUD page table entries. Faults due to
lack of execute permissions on page table entries is used to perform
i-cache invalidation on first execute.

Provide trivial implementations of arm32 helpers to allow sharing of
code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Support dirty page tracking for PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:37 +0000 (17:10 +0000)]
KVM: arm64: Support dirty page tracking for PUD hugepages

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest tables is used to track dirty pages when migrating
VMs.

Also, provide trivial implementations of required kvm_s2pud_* helpers
to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON() in arm32 pud helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Introduce helpers to manipulate page table entries
Punit Agrawal [Tue, 11 Dec 2018 17:10:36 +0000 (17:10 +0000)]
KVM: arm/arm64: Introduce helpers to manipulate page table entries

Introduce helpers to abstract architectural handling of the conversion
of pfn to page table entries and marking a PMD page table entry as a
block entry.

The helpers are introduced in preparation for supporting PUD hugepages
at stage 2 - which are supported on arm64 but do not exist on arm.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault
Punit Agrawal [Tue, 11 Dec 2018 17:10:35 +0000 (17:10 +0000)]
KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault

Stage 2 fault handler marks a page as executable if it is handling an
execution fault or if it was a permission fault in which case the
executable bit needs to be preserved.

The logic to decide if the page should be marked executable is
duplicated for PMD and PTE entries. To avoid creating another copy
when support for PUD hugepages is introduced refactor the code to
share the checks needed to mark a page table entry as executable.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Share common code in user_mem_abort()
Punit Agrawal [Tue, 11 Dec 2018 17:10:34 +0000 (17:10 +0000)]
KVM: arm/arm64: Share common code in user_mem_abort()

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different pagesizes.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic-v2: Set active_source to 0 when restoring state
Christoffer Dall [Tue, 11 Dec 2018 11:51:03 +0000 (12:51 +0100)]
KVM: arm/arm64: vgic-v2: Set active_source to 0 when restoring state

When restoring the active state from userspace, we don't know which CPU
was the source for the active state, and this is not architecturally
exposed in any of the register state.

Set the active_source to 0 in this case.  In the future, we can expand
on this and expose it as additional information to
userspace for GICv2 if anyone cares.

Cc: stable@vger.kernel.org
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Log PSTATE for unhandled sysregs
Mark Rutland [Thu, 6 Dec 2018 12:31:44 +0000 (12:31 +0000)]
KVM: arm/arm64: Log PSTATE for unhandled sysregs

When KVM traps an unhandled sysreg/coproc access from a guest, it logs
the guest PC. To aid debugging, it would be helpful to know which
exception level the trap came from, along with other PSTATE/CPSR bits,
so let's log the PSTATE/CPSR too.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Fix VMID alloc race by reverting to lock-less
Christoffer Dall [Tue, 11 Dec 2018 12:23:57 +0000 (13:23 +0100)]
KVM: arm/arm64: Fix VMID alloc race by reverting to lock-less

We recently addressed a VMID generation race by introducing a read/write
lock around accesses and updates to the vmid generation values.

However, kvm_arch_vcpu_ioctl_run() also calls need_new_vmid_gen() but
does so without taking the read lock.

As far as I can tell, this can lead to the same kind of race:

  VM 0, VCPU 0                     VM 0, VCPU 1
  ------------                     ------------
  update_vttbr (vmid 254)
                                   update_vttbr (vmid 1) // roll over
                                   read_lock(kvm_vmid_lock);
                                   force_vm_exit()
  local_irq_disable
  need_new_vmid_gen == false // because vmid gen matches

  enter_guest (vmid 254)
                                   kvm_arch.vttbr = <PGD>:<VMID 1>
                                   read_unlock(kvm_vmid_lock);

                                   enter_guest (vmid 1)

Which results in running two VCPUs in the same VM with different VMIDs
and (even worse) other VCPUs from other VMs could now allocate clashing
VMID 254 from the new generation as long as VCPU 0 is not exiting.

Attempt to solve this by making sure vttbr is updated before another CPU
can observe the updated VMID generation.

Cc: stable@vger.kernel.org
Fixes: f0cf47d939d0 ("KVM: arm/arm64: Close VMID generation race")
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Consistently advance singlestep when emulating instructions
Mark Rutland [Fri, 9 Nov 2018 15:07:11 +0000 (15:07 +0000)]
arm64: KVM: Consistently advance singlestep when emulating instructions

When we emulate a guest instruction, we don't advance the hardware
singlestep state machine, and thus the guest will receive a software
step exception after a next instruction which is not emulated by the
host.

We bodge around this in an ad-hoc fashion. Sometimes we explicitly check
whether userspace requested a single step, and fake a debug exception
from within the kernel. Other times, we advance the HW singlestep state
and rely on the HW to generate the exception for us. Thus, the observed step
behaviour differs for host and guest.

Let's make this simpler and consistent by always advancing the HW
singlestep state machine when we skip an instruction. Thus we can rely
on the hardware to generate the singlestep exception for us, and never
need to explicitly check for an active-pending step, nor do we need to
fake a debug exception from the guest.

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Skip MMIO insn after emulation
Mark Rutland [Fri, 9 Nov 2018 15:07:10 +0000 (15:07 +0000)]
arm64: KVM: Skip MMIO insn after emulation

When we emulate an MMIO instruction, we advance the CPU state within
decode_hsr(), before emulating the instruction effects.

Having this logic in decode_hsr() is opaque, and advancing the state
before emulation is problematic. It gets in the way of applying
consistent single-step logic, and it prevents us from being able to fail
an MMIO instruction with a synchronous exception.

Clean this up by only advancing the CPU state *after* the effects of the
instruction are emulated.

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: s390: fix kmsg component kvm-s390
Michael Mueller [Mon, 3 Dec 2018 09:20:22 +0000 (10:20 +0100)]
KVM: s390: fix kmsg component kvm-s390

Relocate the #define statement for kvm-related kernel messages
before the include of printk so that it becomes effective.

Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years ago  KVM: s390: unregister debug feature on failing arch init
Michael Mueller [Fri, 30 Nov 2018 14:32:06 +0000 (15:32 +0100)]
KVM: s390: unregister debug feature on failing arch init

Make sure the debug feature and its allocated resources get
released upon unsuccessful architecture initialization.

A related indication of the issue will be reported as a kernel
message.

Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20181130143215.69496-2-mimu@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years ago  KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L3 guest
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:10 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L3 guest

Previously when a device was being emulated by an L1 guest for an L2
guest, that device couldn't then be passed through to an L3 guest. This
was because the L1 guest had no method for accessing L3 memory.

The hcall H_COPY_TOFROM_GUEST provides this access. Thus this setup for
passthrough can now be allowed.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S: Introduce new hcall H_COPY_TOFROM_GUEST to access quadrants 1 & 2
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:09 +0000 (16:29 +1100)]
KVM: PPC: Book3S: Introduce new hcall H_COPY_TOFROM_GUEST to access quadrants 1 & 2

A guest cannot access quadrants 1 or 2 as this would result in an
exception. Thus introduce the hcall H_COPY_TOFROM_GUEST to be used by a
guest when it wants to perform an access to quadrants 1 or 2, for
example when it wants to access memory for one of its nested guests.

Also provide an implementation for the kvm-hv module.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L2 guest
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:08 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L2 guest

Allow for a device which is being emulated at L0 (the host) for an L1
guest to be passed through to a nested (L2) guest.

The existing kvmppc_hv_emulate_mmio function can be used here. The main
challenge is that for a load the result must be stored into the L2 gpr,
not an L1 gpr as would normally be the case after going out to qemu to
complete the operation. This presents a challenge as at this point the
L2 gpr state has been written back into L1 memory.

To work around this we store the address in L1 memory of the L2 gpr
where the result of the load is to be stored and use the new io_gpr
value KVM_MMIO_REG_NESTED_GPR to indicate that this is a nested load for
which completion must be done when returning back into the kernel. Then
in kvmppc_complete_mmio_load() the resultant value is written into L1
memory at the location of the indicated L2 gpr.

Note that we don't currently let an L1 guest emulate a device for an L2
guest which is then passed through to an L3 guest.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Update kvmppc_st and kvmppc_ld to use quadrants
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:07 +0000 (16:29 +1100)]
KVM: PPC: Update kvmppc_st and kvmppc_ld to use quadrants

The functions kvmppc_st and kvmppc_ld are used to access guest memory
from the host using a guest effective address. They do so by translating
through the process table to obtain a guest real address and then using
kvm_read_guest or kvm_write_guest to make the access with the guest real
address.

This method of access however only works for L1 guests and will give the
incorrect results for a nested guest.

We can however use the store_to_eaddr and load_from_eaddr kvmppc_ops to
perform the access for a nested guest (and an L1 guest). So attempt this
method first and fall back to the old method if this fails and we aren't
running a nested guest.

At this stage there is no fall back method to perform the access for a
nested guest and this is left as a future improvement. For now we will
return to the nested guest and rely on the fact that a translation
should be faulted in before retrying the access.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Add load_from_eaddr and store_to_eaddr to the kvmppc_ops struct
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:06 +0000 (16:29 +1100)]
KVM: PPC: Add load_from_eaddr and store_to_eaddr to the kvmppc_ops struct

The kvmppc_ops struct is used to store function pointers to kvm
implementation specific functions.

Introduce two new functions load_from_eaddr and store_to_eaddr to be
used to load from and store to a guest effective address respectively.

Also implement these for the kvm-hv module. If we are using the radix
mmu then we can call the functions to access quadrant 1 and 2.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S HV: Implement functions to access quadrants 1 & 2
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:05 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Implement functions to access quadrants 1 & 2

The POWER9 radix mmu has the concept of quadrants. The quadrant number
is the two high bits of the effective address and determines the fully
qualified address to be used for the translation. The fully qualified
address consists of the effective lpid, the effective pid and the
effective address. This gives then 4 possible quadrants 0, 1, 2, and 3.

When accessing these quadrants the fully qualified address is obtained
as follows:

Quadrant | Hypervisor       | Guest
------------------------------------------------
         | EA[0:1] = 0b00   | EA[0:1] = 0b00
    0    | effLPID = 0      | effLPID = LPIDR
         | effPID  = PIDR   | effPID  = PIDR
------------------------------------------------
         | EA[0:1] = 0b01   |
    1    | effLPID = LPIDR  | Invalid Access
         | effPID  = PIDR   |
------------------------------------------------
         | EA[0:1] = 0b10   |
    2    | effLPID = LPIDR  | Invalid Access
         | effPID  = 0      |
------------------------------------------------
         | EA[0:1] = 0b11   | EA[0:1] = 0b11
    3    | effLPID = 0      | effLPID = LPIDR
         | effPID  = 0      | effPID  = 0
------------------------------------------------

In the Guest:
Quadrant 3 is normally used to address the operating system since this
uses effPID=0 and effLPID=LPIDR, meaning the PID register doesn't need to
be switched.
Quadrant 0 is normally used to address user space since the effLPID and
effPID are taken from the corresponding registers.

In the Host:
Quadrants 0 and 3 are used as above, however the effLPID is always 0 to
address the host.

Quadrants 1 and 2 can be used by the host to address guest memory using
a guest effective address. Since the effLPID comes from the LPID register,
the host loads the LPID of the guest it would like to access (and the
PID of the process) and can perform accesses to a guest effective
address.

This means quadrant 1 can be used to address the guest user space and
quadrant 2 can be used to address the guest operating system from the
hypervisor, using a guest effective address.
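
As a sketch, selecting the quadrant is just a matter of the two most
significant effective-address bits, i.e. EA[0:1] in IBM bit numbering
(hypothetical helper):

  static inline unsigned int ea_quadrant(unsigned long ea)
  {
          return ea >> 62;        /* 0b00 .. 0b11, per the table above */
  }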

Access to the quadrants can cause a Hypervisor Data Storage Interrupt
(HDSI) due to being unable to perform partition scoped translation.
Previously this could only be generated from a guest and so the code
path expects us to take the KVM trampoline in the interrupt handler.
This is no longer the case so we modify the handler to call
bad_page_fault() to check if we were expecting this fault so we can
handle it gracefully and just return with an error code. In the hash mmu
case we still raise an unknown exception since quadrants aren't defined
for the hash mmu.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S HV: Add function kvmhv_vcpu_is_radix()
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:04 +0000 (16:29 +1100)]
KVM: PPC: Book3S HV: Add function kvmhv_vcpu_is_radix()

There exists a function kvm_is_radix() which is used to determine if a
kvm instance is using the radix mmu. However this only applies to the
first level (L1) guest. Add a function kvmhv_vcpu_is_radix() which can
be used to determine if the current execution context of the vcpu is
radix, accounting for if the vcpu is running a nested guest.

Currently all nested guests must be radix but this may change in the
future.
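
A sketch of the helper as described (the nested-state field name is
approximate):

  static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
  {
          /* All nested guests are radix today; otherwise ask the L1 kvm. */
          return vcpu->arch.nested ? true : kvm_is_radix(vcpu->kvm);
  }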

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S: Only report KVM_CAP_SPAPR_TCE_VFIO on powernv machines
Suraj Jitindar Singh [Fri, 14 Dec 2018 05:29:03 +0000 (16:29 +1100)]
KVM: PPC: Book3S: Only report KVM_CAP_SPAPR_TCE_VFIO on powernv machines

The kvm capability KVM_CAP_SPAPR_TCE_VFIO is used to indicate the
availability of in-kernel TCE acceleration for VFIO. However it is
currently the case that this is only available on a powernv machine,
not on a pseries machine.

Thus make this capability dependent on having the cpu feature
CPU_FTR_HVMODE.

[paulus@ozlabs.org - fixed compilation for Book E.]

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S HV: Flush guest mappings when turning dirty tracking on/off
Paul Mackerras [Wed, 12 Dec 2018 04:17:17 +0000 (15:17 +1100)]
KVM: PPC: Book3S HV: Flush guest mappings when turning dirty tracking on/off

This adds code to flush the partition-scoped page tables for a radix
guest when dirty tracking is turned on or off for a memslot.  Only the
guest real addresses covered by the memslot are flushed.  The reason
for this is to get rid of any 2M PTEs in the partition-scoped page
tables that correspond to host transparent huge pages, so that page
dirtiness is tracked at a system page (4k or 64k) granularity rather
than a 2M granularity.  The page tables are also flushed when turning
dirty tracking off so that the memslot's address space can be
repopulated with THPs if possible.

To do this, we add a new function kvmppc_radix_flush_memslot().  Since
this does what's needed for kvmppc_core_flush_memslot_hv() on a radix
guest, we now make kvmppc_core_flush_memslot_hv() call the new
kvmppc_radix_flush_memslot() rather than calling kvm_unmap_radix()
for each page in the memslot.  This has the effect of fixing a bug in
that kvmppc_core_flush_memslot_hv() was previously calling
kvm_unmap_radix() without holding the kvm->mmu_lock spinlock, which
is required to be held.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S HV: Cleanups - constify memslots, fix comments
Paul Mackerras [Wed, 12 Dec 2018 04:16:48 +0000 (15:16 +1100)]
KVM: PPC: Book3S HV: Cleanups - constify memslots, fix comments

This adds 'const' to the declarations for the struct kvm_memory_slot
pointer parameters of some functions, which will make it possible to
call those functions from kvmppc_core_commit_memory_region_hv()
in the next patch.

This also fixes some comments about locking.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Book3S HV: Map single pages when doing dirty page logging
Paul Mackerras [Wed, 12 Dec 2018 04:16:17 +0000 (15:16 +1100)]
KVM: PPC: Book3S HV: Map single pages when doing dirty page logging

For radix guests, this makes KVM map guest memory as individual pages
when dirty page logging is enabled for the memslot corresponding to the
guest real address.  Having a separate partition-scoped PTE for each
system page mapped to the guest means that we have a separate dirty
bit for each page, thus making the reported dirty bitmap more accurate.
Without this, if part of guest memory is backed by transparent huge
pages, the dirty status is reported at a 2MB granularity rather than
a 64kB (or 4kB) granularity for that part, causing userspace to have
to transmit more data when migrating the guest.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  KVM: PPC: Pass change type down to memslot commit function
Bharata B Rao [Wed, 12 Dec 2018 04:15:30 +0000 (15:15 +1100)]
KVM: PPC: Pass change type down to memslot commit function

Currently, kvm_arch_commit_memory_region() gets called with a
parameter indicating what type of change is being made to the memslot,
but it doesn't pass it down to the platform-specific memslot commit
functions.  This adds the `change' parameter to the lower-level
functions so that they can use it in future.

[paulus@ozlabs.org - fix book E also.]

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
5 years ago  kvm: selftests: ucall: improve ucall placement in memory, fix unsigned comparison
Paolo Bonzini [Fri, 14 Dec 2018 11:29:43 +0000 (12:29 +0100)]
kvm: selftests: ucall: improve ucall placement in memory, fix unsigned comparison

Based on a patch by Andrew Jones.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: x86: Dynamically allocate guest_fpu
Marc Orr [Tue, 6 Nov 2018 22:53:56 +0000 (14:53 -0800)]
kvm: x86: Dynamically allocate guest_fpu

Previously, the guest_fpu field was embedded in the kvm_vcpu_arch
struct. Unfortunately, the field is quite large (e.g., 4352 bytes on my
current setup). This bloats the kvm_vcpu_arch struct for x86 into an
order 3 memory allocation, which can become a problem on overcommitted
machines. Thus, this patch moves the fpu state outside of the
kvm_vcpu_arch struct.

With this patch applied, the kvm_vcpu_arch struct is reduced to 15168
bytes for vmx on my setup when building the kernel with kvmconfig.

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
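
A sketch of what the change implies, assuming a dedicated kmem cache
for the fpu state (error handling abridged):

    /* Sketch: guest_fpu becomes a pointer allocated at vcpu creation. */
    struct kvm_vcpu_arch {
            /* ... */
            struct fpu *guest_fpu;          /* was: struct fpu guest_fpu; */
    };

    vcpu->arch.guest_fpu = kmem_cache_zalloc(x86_fpu_cache, GFP_KERNEL);
    if (!vcpu->arch.guest_fpu)
            return -ENOMEM;                 /* kvm_vcpu_arch stays small */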
5 years agokvm: x86: Use task structs fpu field for user
Marc Orr [Tue, 6 Nov 2018 22:53:55 +0000 (14:53 -0800)]
kvm: x86: Use task structs fpu field for user

Previously, x86's instantiation of 'struct kvm_vcpu_arch' added an fpu
field to save/restore fpu-related architectural state, which can differ
from kvm's fpu state. However, this is redundant with the 'struct fpu'
field, called fpu, embedded in the task struct via the thread field.
Thus, this patch removes the user_fpu field from the kvm_vcpu_arch
struct and uses the task struct's fpu field in its place.

This change is significant because the fpu struct is actually quite
large. For example, on the system used to develop this patch, this
change reduces the size of the vcpu_vmx struct from 23680 bytes down to
19520 bytes, when building the kernel with kvmconfig. This reduction in
the size of the vcpu_vmx struct moves us closer to being able to
allocate the struct at order 2, rather than order 3.

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
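
Conceptually the swap looks like this (a simplified sketch; the real
code goes through the fpu core helpers):

    /* Sketch: save user state into the task's own fpu, not a copy. */
    void kvm_load_guest_fpu(struct kvm_vcpu *vcpu)
    {
            /* was: copy_fpregs_to_fpstate(&vcpu->arch.user_fpu); */
            copy_fpregs_to_fpstate(&current->thread.fpu);
            /* ... then load vcpu->arch.guest_fpu state ... */
    }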
5 years agoKVM: nVMX: Move the checks for Guest Non-Register States to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:12 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for Guest Non-Register States to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
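
This patch and its siblings below all follow the same extraction
pattern; schematically (check set abridged):

    /* Sketch: one SDM check group hoisted into its own helper. */
    static int nested_check_guest_non_reg_state(struct vmcs12 *vmcs12)
    {
            if (vmcs12->guest_activity_state != GUEST_ACTIVITY_ACTIVE &&
                vmcs12->guest_activity_state != GUEST_ACTIVITY_HLT)
                    return -EINVAL;
            return 0;
    }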
5 years agoKVM: nVMX: Move the checks for Host Control Registers and MSRs to a separate helper...
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:11 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for Host Control Registers and MSRs to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Move the checks for VM-Entry Control Fields to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:10 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for VM-Entry Control Fields to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Move the checks for VM-Exit Control Fields to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:09 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for VM-Exit Control Fields to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Remove param indirection from nested_vmx_check_msr_switch()
Sean Christopherson [Wed, 12 Dec 2018 18:30:08 +0000 (13:30 -0500)]
KVM: nVMX: Remove param indirection from nested_vmx_check_msr_switch()

Passing the enum and doing an indirect lookup is silly when we can
simply pass the field directly.  Remove the "fast path" code in
nested_vmx_check_msr_switch_controls() as it's now nothing more than a
redundant check.

Remove the debug message rather than continue passing the enum for the
address field.  Having debug messages for the MSRs themselves is useful
as MSR legality is a huge space, whereas messing up a physical address
means the VMM is fundamentally broken.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
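
The shape of the simplification (the 'before' enum name is
illustrative):

    /* Before: an enum selects the VMCS fields via a lookup table. */
    nested_vmx_check_msr_switch(vcpu, VM_EXIT_MSR_STORE_IDX);

    /* After: the count and address fields are passed directly. */
    nested_vmx_check_msr_switch(vcpu, vmcs12->vm_exit_msr_store_count,
                                vmcs12->vm_exit_msr_store_addr);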
5 years agoKVM: nVMX: Move the checks for VM-Execution Control Fields to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:07 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for VM-Execution Control Fields to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Prepend "nested_vmx_" to check_vmentry_{pre,post}reqs()
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:06 +0000 (13:30 -0500)]
KVM: nVMX: Prepend "nested_vmx_" to check_vmentry_{pre,post}reqs()

.. as they are used only in nested vmx context.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM/VMX: Check ept_pointer before flushing ept tlb
Lan Tianyu [Thu, 6 Dec 2018 07:34:36 +0000 (15:34 +0800)]
KVM/VMX: Check ept_pointer before flushing ept tlb

This patch initializes ept_pointer to INVALID_PAGE and checks it before
flushing the EPT TLB. If ept_pointer is invalid, the flush request is
bypassed.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
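
A minimal sketch of the guard; VALID_PAGE() is the existing
INVALID_PAGE comparison helper:

    /* Sketch: initialize at vcpu setup... */
    vmx->ept_pointer = INVALID_PAGE;

    /* ...then bail out early in the remote TLB flush path. */
    if (!VALID_PAGE(vmx->ept_pointer))
            return 0;       /* nothing mapped yet, bypass the flush */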
5 years agoKVM nVMX: MSRs should not be stored if VM-entry fails during or after loading guest...
Krish Sadhukhan [Wed, 5 Dec 2018 00:00:13 +0000 (19:00 -0500)]
KVM nVMX: MSRs should not be stored if VM-entry fails during or after loading guest state

According to section "VM-entry Failures During or After Loading Guest State"
in Intel SDM vol 3C,

"No MSRs are saved into the VM-exit MSR-store area."

when bit 31 of the exit reason is set.

Reported-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
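
Schematically, the fix gates the MSR store on a genuine VM-exit;
VMX_EXIT_REASONS_FAILED_VMENTRY is bit 31 of the exit reason (helper
placement assumed):

    /* Sketch: skip the VM-exit MSR-store area on a failed VM-entry. */
    if (!(exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
            nested_vmx_store_msr(vcpu, vmcs12->vm_exit_msr_store_addr,
                                 vmcs12->vm_exit_msr_store_count);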
5 years agokvm: x86: Don't modify MSR_PLATFORM_INFO on vCPU reset
Jim Mattson [Tue, 30 Oct 2018 19:20:21 +0000 (12:20 -0700)]
kvm: x86: Don't modify MSR_PLATFORM_INFO on vCPU reset

If userspace has provided a different value for this MSR (e.g. with the
turbo bits set), the userspace-provided value should survive a vCPU
reset. For backwards compatibility, MSR_PLATFORM_INFO is initialized
in kvm_arch_vcpu_setup.

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Drew Schmitt <dasch@google.com>
Cc: Abhiroop Dabral <adabral@paloaltonetworks.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
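
In sketch form (surrounding code abridged):

    /* Sketch: set the default once at setup; reset leaves it alone. */
    int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
    {
            vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
            /*
             * kvm_vcpu_reset() no longer touches msr_platform_info, so
             * a userspace-written value survives a vCPU reset.
             */
            return 0;       /* abridged */
    }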
5 years agokvm: vmx: add cpu into VMX preemption timer bug list
Wei Huang [Mon, 3 Dec 2018 20:13:32 +0000 (14:13 -0600)]
kvm: vmx: add cpu into VMX preemption timer bug list

This patch adds the Intel "Xeon CPU E3-1220 V2", with
CPUID.01H.EAX=0x000306A8, to the list of known broken CPUs that fail to
support the VMX preemption timer. This bug was found while running the
APIC timer test of kvm-unit-tests on this specific CPU, even though the
errata info can't be located in the public domain for this CPU.

Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
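
The change itself is one table entry; the list of affected
family/model/stepping signatures looks roughly like:

    /* Sketch: CPUs whose VMX preemption timer is known to misbehave. */
    static u32 vmx_preemption_cpu_tfms[] = {
            /* ... existing entries ... */
            0x000306A8,     /* Xeon E3-1220 V2 (this patch) */
    };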
5 years agokvm: x86: Report STIBP on GET_SUPPORTED_CPUID
Eduardo Habkost [Wed, 5 Dec 2018 19:19:56 +0000 (17:19 -0200)]
kvm: x86: Report STIBP on GET_SUPPORTED_CPUID

Months ago, we added code to allow direct access to MSR_IA32_SPEC_CTRL
to the guest, which makes STIBP available to guests.  This was implemented
by commits d28b387fb74d ("KVM/VMX: Allow direct access to
MSR_IA32_SPEC_CTRL") and b2ac58f90540 ("KVM/SVM: Allow direct access to
MSR_IA32_SPEC_CTRL").

However, we never updated GET_SUPPORTED_CPUID to let userspace know that
STIBP can be enabled in CPUID.  Fix that by updating
kvm_cpuid_8000_0008_ebx_x86_features and kvm_cpuid_7_0_edx_x86_features.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
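
Schematically (surrounding feature bits elided), the two masks gain the
STIBP bits:

    /* Sketch: advertise STIBP in the supported-CPUID feature masks. */
    const u32 kvm_cpuid_8000_0008_ebx_x86_features =
            F(AMD_IBPB) | F(AMD_IBRS) | F(AMD_STIBP) /* | ... */;

    const u32 kvm_cpuid_7_0_edx_x86_features =
            F(SPEC_CTRL) | F(INTEL_STIBP) /* | ... */;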
5 years agox86/hyper-v: Stop caring about EOI for direct stimers
Vitaly Kuznetsov [Wed, 5 Dec 2018 15:36:21 +0000 (16:36 +0100)]
x86/hyper-v: Stop caring about EOI for direct stimers

Turns out we over-engineered Direct Mode for stimers a bit: unlike
traditional stimers where we may want to try to re-inject the message upon
EOI, Direct Mode stimers just set the irq in APIC and kvm_apic_set_irq()
fails only when APIC is disabled (see APIC_DM_FIXED case in
__apic_accept_irq()). Remove the redundant part.

Suggested-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agox86/kvm/hyper-v: avoid open-coding stimer_mark_pending() in kvm_hv_notify_acked_sint()
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:32 +0000 (16:47 +0100)]
x86/kvm/hyper-v: avoid open-coding stimer_mark_pending() in kvm_hv_notify_acked_sint()

The stimers_pending optimization only helps us avoid multiple
kvm_make_request() calls. This doesn't happen very often and these
calls are very cheap in the first place, so remove the open-coded
version of stimer_mark_pending() from kvm_hv_notify_acked_sint().

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
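
After the cleanup the EOI handler reduces to the helper call; a sketch
(condition abridged):

    /* Sketch: re-arm matching stimers via the existing helper. */
    for (idx = 0; idx < ARRAY_SIZE(hv_vcpu->stimer); idx++) {
            struct kvm_vcpu_hv_stimer *stimer = &hv_vcpu->stimer[idx];

            if (stimer->msg_pending && stimer->config.enable &&
                stimer->config.sintx == sint)
                    stimer_mark_pending(stimer, false);
    }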
5 years agox86/kvm/hyper-v: direct mode for synthetic timers
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:31 +0000 (16:47 +0100)]
x86/kvm/hyper-v: direct mode for synthetic timers

Turns out Hyper-V on KVM (as of WS2016) will only use synthetic timers
if direct mode is available. With direct mode we notify the guest by
asserting APIC irq instead of sending a SynIC message.

The implementation uses the existing vec_bitmap to let the lapic code
know that we're interested in the particular IRQ's EOI request. We
assume that the guest won't use the same APIC irq both for a direct
mode stimer and as a sint source (especially with AutoEOI semantics).
It is unclear how things should be handled if that's not true.

Direct mode is also somewhat less expensive; in my testing
stimer_send_msg() takes not less than 1500 cpu cycles and
stimer_notify_direct() can usually be done in 300-400. WS2016 without
Hyper-V, however, always sticks to the non-direct version.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
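
The notification path in sketch form, mirroring the description above
(helper names assumed):

    /* Sketch: direct mode just asserts the configured APIC vector. */
    static int stimer_notify_direct(struct kvm_vcpu_hv_stimer *stimer)
    {
            struct kvm_vcpu *vcpu = stimer_to_vcpu(stimer);
            struct kvm_lapic_irq irq = {
                    .delivery_mode = APIC_DM_FIXED,
                    .vector        = stimer->config.apic_vector,
            };

            return !kvm_apic_set_irq(vcpu, &irq, NULL);
    }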
5 years agox86/kvm/hyper-v: use stimer config definition from hyperv-tlfs.h
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:30 +0000 (16:47 +0100)]
x86/kvm/hyper-v: use stimer config definition from hyperv-tlfs.h

As a preparation to implementing Direct Mode for Hyper-V synthetic
timers switch to using stimer config definition from hyperv-tlfs.h.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
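
The hyperv-tlfs.h definition being switched to is, roughly, a bitfield
union over the 64-bit config MSR:

    union hv_stimer_config {
            u64 as_uint64;
            struct {
                    u64 enable:1;
                    u64 periodic:1;
                    u64 lazy:1;
                    u64 auto_enable:1;
                    u64 apic_vector:8;
                    u64 direct_mode:1;
                    u64 reserved_z0:3;
                    u64 sintx:4;
                    u64 reserved_z1:44;
            } __packed;
    };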
5 years agox86/hyper-v: move synic/stimer control structures definitions to hyperv-tlfs.h
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:29 +0000 (16:47 +0100)]
x86/hyper-v: move synic/stimer control structures definitions to hyperv-tlfs.h

We implement Hyper-V SynIC and synthetic timers in KVM too so there's some
room for code sharing.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: selftests: Add hyperv_cpuid test
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:59 +0000 (18:21 +0100)]
KVM: selftests: Add hyperv_cpuid test

Add a simple (and stupid) hyperv_cpuid test: check that we got the
expected number of entries with and without Enlightened VMCS enabled
and that all currently reserved fields are zeroed.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
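
The reserved-field check boils down to something like this (selftest
style, names approximate):

    /* Sketch: every returned entry must have its padding zeroed. */
    for (i = 0; i < hv_cpuid_entries->nent; i++) {
            struct kvm_cpuid_entry2 *e = &hv_cpuid_entries->entries[i];

            TEST_ASSERT(!e->padding[0] && !e->padding[1] && !e->padding[2],
                        "padding of entry %d should be zero", i);
    }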
5 years agoKVM: selftests: implement an unchecked version of vcpu_ioctl()
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:58 +0000 (18:21 +0100)]
KVM: selftests: implement an unchecked version of vcpu_ioctl()

To test ioctls that are expected to fail, we need a variant that does
not assert on success. Following the _vcpu_run() precedent, implement
_vcpu_ioctl().

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
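
A sketch of the split, following the stated _vcpu_run() precedent
(bodies abridged):

    /* Sketch: the unchecked variant returns the raw ioctl result... */
    int _vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid,
                    unsigned long cmd, void *arg)
    {
            struct vcpu *vcpu = vcpu_find(vm, vcpuid);

            return ioctl(vcpu->fd, cmd, arg);
    }

    /* ...while the checked wrapper keeps asserting success. */
    void vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid,
                    unsigned long cmd, void *arg)
    {
            int ret = _vcpu_ioctl(vm, vcpuid, cmd, arg);

            TEST_ASSERT(ret == 0, "vcpu ioctl %lu failed, rc: %i errno: %i",
                        cmd, ret, errno);
    }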
5 years agox86/kvm/hyper-v: Introduce KVM_GET_SUPPORTED_HV_CPUID
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:56 +0000 (18:21 +0100)]
x86/kvm/hyper-v: Introduce KVM_GET_SUPPORTED_HV_CPUID

With every new Hyper-V Enlightenment we implement we're forced to add a
KVM_CAP_HYPERV_* capability. While this approach works it is fairly
inconvenient: the majority of the enlightenments have corresponding
CPUID feature bit(s), and userspace has to know about them anyway to be
able to expose the feature to the guest.

Add KVM_GET_SUPPORTED_HV_CPUID ioctl (backed by KVM_CAP_HYPERV_CPUID, "one
cap to rule them all!") returning all Hyper-V CPUID feature leaves.

Using the existing KVM_GET_SUPPORTED_CPUID doesn't seem to be possible:
Hyper-V CPUID feature leaves intersect with KVM's (e.g. 0x40000000,
0x40000001) and we would probably confuse userspace in case we decide to
return these twice.

KVM_CAP_HYPERV_CPUID's number is interim: the intent is to drop
KVM_CAP_HYPERV_STIMER_DIRECT and use its number instead.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
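
Expected userspace usage, as a sketch (buffer sizing is illustrative):

    /* Sketch: query all Hyper-V CPUID leaves in one call. */
    struct kvm_cpuid2 *cpuid = calloc(1, sizeof(*cpuid) +
                                  64 * sizeof(struct kvm_cpuid_entry2));

    cpuid->nent = 64;
    if (ioctl(vcpu_fd, KVM_GET_SUPPORTED_HV_CPUID, cpuid) == 0)
            printf("got %u Hyper-V feature leaves\n", cpuid->nent);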
5 years agox86/kvm/hyper-v: Introduce nested_get_evmcs_version() helper
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:55 +0000 (18:21 +0100)]
x86/kvm/hyper-v: Introduce nested_get_evmcs_version() helper

The upcoming KVM_GET_SUPPORTED_HV_CPUID ioctl will need to return
Enlightened VMCS version in HYPERV_CPUID_NESTED_FEATURES.EAX when
it has been enabled.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
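
In sketch form (constants per the existing eVMCS support code):

    /* Sketch: report the supported eVMCS version range, if enabled. */
    static u32 nested_get_evmcs_version(struct kvm_vcpu *vcpu)
    {
            /* low 8 bits: minimum version; high 8 bits: maximum */
            if (to_vmx(vcpu)->nested.enlightened_vmcs_enabled)
                    return (KVM_EVMCS_VERSION << 8) | 1;
            return 0;
    }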
5 years agox86/hyper-v: Drop HV_X64_CONFIGURE_PROFILER definition
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:54 +0000 (18:21 +0100)]
x86/hyper-v: Drop HV_X64_CONFIGURE_PROFILER definition

BIT(13) in HYPERV_CPUID_FEATURES.EBX is described as "ConfigureProfiler" in
TLFS v4.0, but starting with v5.0 it is replaced with 'Reserved'. As we
don't currently use it in the kernel, it can simply be dropped.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>