5 years ago  Merge tag 'kvm-s390-next-4.21-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Wed, 19 Dec 2018 21:17:09 +0000 (22:17 +0100)]
Merge tag 'kvm-s390-next-4.21-1' of git://git./linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Fixes for 4.21

Just two small fixes.

5 years ago  Merge tag 'kvmarm-for-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm...
Paolo Bonzini [Wed, 19 Dec 2018 19:33:55 +0000 (20:33 +0100)]
Merge tag 'kvmarm-for-v4.21' of git://git./linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm updates for 4.21

- Large PUD support for HugeTLB
- Single-stepping fixes
- Improved tracing
- Various timer and vgic fixups

5 years ago  arm: KVM: Add S2_PMD_{MASK,SIZE} constants
Marc Zyngier [Wed, 19 Dec 2018 08:31:54 +0000 (08:31 +0000)]
arm: KVM: Add S2_PMD_{MASK,SIZE} constants

They were missing, and it turns out that we do need them now.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm/arm64: KVM: Add ARM_EXCEPTION_IS_TRAP macro
Marc Zyngier [Wed, 19 Dec 2018 08:28:38 +0000 (08:28 +0000)]
arm/arm64: KVM: Add ARM_EXCEPTION_IS_TRAP macro

32bit and 64bit use different symbols to identify the traps.
32bit has a fine-grained approach (prefetch abort, data abort and HVC),
while 64bit is pretty happy with just "trap".

This has been fine so far, except that we now need to decode some
of that in tracepoints that are common to both architectures.

Introduce ARM_EXCEPTION_IS_TRAP which abstracts the trap symbols
and make the tracepoint use it.
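
For illustration, the abstraction could look roughly like this (a sketch
based on the symbols named above; the exact definitions in the patch may
differ):

  /* arm64: a single trap symbol */
  #define ARM_EXCEPTION_IS_TRAP(x)  ((x) == ARM_EXCEPTION_TRAP)

  /* arm (32bit): several fine-grained trap symbols */
  #define ARM_EXCEPTION_IS_TRAP(x)  ((x) == ARM_EXCEPTION_PREF_ABORT || \
                                     (x) == ARM_EXCEPTION_DATA_ABORT || \
                                     (x) == ARM_EXCEPTION_HVC)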

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1
Will Deacon [Thu, 13 Dec 2018 16:06:14 +0000 (16:06 +0000)]
arm64: KVM: Avoid setting the upper 32 bits of VTCR_EL2 to 1

Although bit 31 of VTCR_EL2 is RES1, we inadvertently end up setting all
of the upper 32 bits to 1 as well because we define VTCR_EL2_RES1 as
signed, which is sign-extended when assigning to kvm->arch.vtcr.

Lucky for us, the architecture currently treats these upper bits as RES0
so, whilst we've been naughty, we haven't set fire to anything yet.
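
The underlying C pitfall is easy to reproduce in isolation (a standalone
illustration, not the kernel code; the RES1_* names are made up here):

  #include <stdint.h>
  #include <stdio.h>

  #define RES1_SIGNED    (1 << 31)   /* int-typed, negative value     */
  #define RES1_UNSIGNED  (1U << 31)  /* unsigned, value is 0x80000000 */

  int main(void)
  {
          uint64_t bad  = RES1_SIGNED;   /* sign-extends: 0xffffffff80000000 */
          uint64_t good = RES1_UNSIGNED; /* zero-extends: 0x0000000080000000 */

          printf("bad=%#llx good=%#llx\n",
                 (unsigned long long)bad, (unsigned long long)good);
          return 0;
  }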

Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Fix unintended stage 2 PMD mappings
Christoffer Dall [Fri, 2 Nov 2018 07:53:22 +0000 (08:53 +0100)]
KVM: arm/arm64: Fix unintended stage 2 PMD mappings

There are two things we need to take care of when we create block
mappings in the stage 2 page tables:

  (1) The alignment within a PMD between the host address range and the
  guest IPA range must be the same, since otherwise we end up mapping
  pages with the wrong offset.

  (2) The head and tail of a memory slot may not cover a full block
  size, and we have to take care to not map those with block
  descriptors, since we could expose memory to the guest that the host
  did not intend to expose.

So far, we have been taking care of (1), but not (2), and our commentary
describing (1) was somewhat confusing.

This commit attempts to factor out the checks of both into a common
function, and if we don't pass the check, we won't attempt any PMD
mappings for either hugetlbfs or THP.

Note that we used to only check the alignment for THP, not for
hugetlbfs, but as far as I can tell the check needs to be applied to
both scenarios.
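
A rough sketch of what such a common check might look like (names and
details are illustrative, not a copy of the patch):

  static bool fault_supports_stage2_pmd_mappings(struct kvm_memory_slot *memslot,
                                                 unsigned long hva)
  {
          gpa_t gpa_start = memslot->base_gfn << PAGE_SHIFT;
          hva_t uaddr_start = memslot->userspace_addr;
          size_t size = memslot->npages * PAGE_SIZE;
          hva_t uaddr_end = uaddr_start + size;

          /* (1) host VA and guest IPA must share the same offset within a PMD */
          if ((gpa_start & ~S2_PMD_MASK) != (uaddr_start & ~S2_PMD_MASK))
                  return false;

          /* (2) the block around hva must lie entirely within the memslot */
          return (hva & S2_PMD_MASK) >= uaddr_start &&
                 (hva & S2_PMD_MASK) + S2_PMD_SIZE <= uaddr_end;
  }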

Cc: Ralph Palutke <ralph.palutke@fau.de>
Cc: Lukas Braun <koomi@moshbit.net>
Reported-by: Lukas Braun <koomi@moshbit.net>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm/arm64: KVM: vgic: Force VM halt when changing the active state of GICv3 PPIs...
Marc Zyngier [Tue, 18 Dec 2018 14:59:09 +0000 (14:59 +0000)]
arm/arm64: KVM: vgic: Force VM halt when changing the active state of GICv3 PPIs/SGIs

We currently only halt the guest when a vCPU messes with the active
state of an SPI. This is perfectly fine for GICv2, but isn't enough
for GICv3, where all vCPUs can access the state of any other vCPU.

Let's broaden the condition to include any GICv3 interrupt that
has an active state (i.e. all but LPIs).

Cc: stable@vger.kernel.org
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Add trapped system register access tracepoint
Marc Zyngier [Tue, 4 Dec 2018 10:44:22 +0000 (10:44 +0000)]
arm64: KVM: Add trapped system register access tracepoint

We're pretty blind when it comes to system register tracing,
and rely on the ESR value displayed by kvm_handle_sys, which
isn't much.

Instead, let's add an actual name to the sysreg entries, so that
we can finally print it as we're about to perform the access
itself.

The new tracepoint is conveniently called kvm_sys_access.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Make vcpu const in vcpu_read_sys_reg
Christoffer Dall [Thu, 29 Nov 2018 11:20:01 +0000 (12:20 +0100)]
KVM: arm64: Make vcpu const in vcpu_read_sys_reg

vcpu_read_sys_reg should not be modifying the VCPU structure.
Eventually, to handle EL2 sysregs for nested virtualization, we will
call vcpu_read_sys_reg from places that have a const vcpu pointer, which
will complain about the lack of the const modifier on the read path.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: arch_timer: Simplify kvm_timer_vcpu_terminate
Christoffer Dall [Tue, 11 Dec 2018 08:54:11 +0000 (09:54 +0100)]
KVM: arm/arm64: arch_timer: Simplify kvm_timer_vcpu_terminate

kvm_timer_vcpu_terminate can only be called in two scenarios:

 1. As part of cleanup during a failed VCPU create
 2. As part of freeing the whole VM (struct kvm refcount == 0)

In the first case, we cannot have programmed any timers or mapped any
IRQs, and therefore we do not have to cancel anything or unmap anything.

In the second case, the VCPU will have gone through kvm_timer_vcpu_put,
which will have canceled the emulated physical timer's hrtimer, and we
do not need to do that here as well.  We also do not care if the irq is
recorded as mapped or not in the VGIC data structure, because the whole
VM is going away.  That leaves us only with having to ensure that we
cancel the bg_timer if we were blocking the last time we called
kvm_timer_vcpu_put().
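
After the cleanup the function boils down to roughly this (illustrative
sketch, not necessarily the verbatim result of the patch):

  void kvm_timer_vcpu_terminate(struct kvm_vcpu *vcpu)
  {
          struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

          soft_timer_cancel(&timer->bg_timer);
  }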

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Remove arch timer workqueue
Christoffer Dall [Tue, 27 Nov 2018 12:48:08 +0000 (13:48 +0100)]
KVM: arm/arm64: Remove arch timer workqueue

The use of a work queue in the hrtimer expire function for the bg_timer
is a leftover from the time when we would inject interrupts when the
bg_timer expired.

Since we are no longer doing that, we can instead call
kvm_vcpu_wake_up() directly from the hrtimer function and remove all
workqueue functionality from the arch timer code.
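
The resulting expiry handler then has roughly this shape (a simplified
sketch; the real handler also re-arms itself if it fired early):

  static enum hrtimer_restart kvm_bg_timer_expire(struct hrtimer *hrt)
  {
          struct arch_timer_cpu *timer;
          struct kvm_vcpu *vcpu;

          timer = container_of(hrt, struct arch_timer_cpu, bg_timer);
          vcpu = container_of(timer, struct kvm_vcpu, arch.timer_cpu);

          kvm_vcpu_wake_up(vcpu);
          return HRTIMER_NORESTART;
  }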

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Fixup the kvm_exit tracepoint
Christoffer Dall [Mon, 3 Dec 2018 20:31:24 +0000 (21:31 +0100)]
KVM: arm/arm64: Fixup the kvm_exit tracepoint

The kvm_exit tracepoint strangely always reported exits as being IRQs.
This seems to be because either the __print_symbolic or the tracepoint
macros use a variable named idx.

Take this chance to update the fields in the tracepoint to reflect the
concepts in the arm64 architecture that we pass to the tracepoint and
move the exception type table to the same location and header files as
the exits code.

We also clear out the exception code to 0 for IRQ exits (which
translates to UNKNOWN in text) to make it slightly less confusing to
parse the trace output.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Consider priority and active state for pending irq
Christoffer Dall [Sat, 1 Dec 2018 21:21:47 +0000 (13:21 -0800)]
KVM: arm/arm64: vgic: Consider priority and active state for pending irq

When checking if there are any pending IRQs for the VM, consider the
active state and priority of the IRQs as well.

Otherwise we could be continuously scheduling a guest hypervisor without
it seeing an IRQ.

Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Fix off-by-one bug in vgic_get_irq()
Gustavo A. R. Silva [Wed, 12 Dec 2018 20:11:23 +0000 (14:11 -0600)]
KVM: arm/arm64: vgic: Fix off-by-one bug in vgic_get_irq()

When using the nospec API, it should be taken into account that:

"...if the CPU speculates past the bounds check then
 * array_index_nospec() will clamp the index within the range of [0,
 * size)."

The above is part of the header for macro array_index_nospec() in
linux/nospec.h

Now, in this particular case, if intid evaluates to exactly VGIC_MAX_SPI
or to exactly VGIC_MAX_PRIVATE, the array_index_nospec() macro ends up
returning VGIC_MAX_SPI - 1 or VGIC_MAX_PRIVATE - 1 respectively, instead
of VGIC_MAX_SPI or VGIC_MAX_PRIVATE, which, based on the original logic:

  /* SGIs and PPIs */
  if (intid <= VGIC_MAX_PRIVATE)
          return &vcpu->arch.vgic_cpu.private_irqs[intid];

  /* SPIs */
  if (intid <= VGIC_MAX_SPI)
          return &kvm->arch.vgic.spis[intid - VGIC_NR_PRIVATE_IRQS];

are valid values for intid.

Fix this by calling array_index_nospec() macro with VGIC_MAX_PRIVATE + 1
and VGIC_MAX_SPI + 1 as arguments for its parameter size.
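
In other words, the size argument must be "last valid index + 1"; for the
private interrupts the fixed lookup looks roughly like:

  /* SGIs and PPIs */
  if (intid <= VGIC_MAX_PRIVATE) {
          intid = array_index_nospec(intid, VGIC_MAX_PRIVATE + 1);
          return &vcpu->arch.vgic_cpu.private_irqs[intid];
  }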

Fixes: 41b87599c743 ("KVM: arm/arm64: vgic: fix possible spectre-v1 in vgic_get_irq()")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
[dropped the SPI part which was fixed separately]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Cap SPIs to the VM-defined maximum
Marc Zyngier [Tue, 4 Dec 2018 17:11:19 +0000 (17:11 +0000)]
KVM: arm/arm64: vgic: Cap SPIs to the VM-defined maximum

SPIs should be checked against the VM's specific configuration, and
not the architectural maximum.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Clarify explanation of STAGE2_PGTABLE_LEVELS
Christoffer Dall [Tue, 6 Nov 2018 12:33:38 +0000 (13:33 +0100)]
KVM: arm64: Clarify explanation of STAGE2_PGTABLE_LEVELS

In attempting to re-construct the logic for our stage 2 page table
layout I found the reasoning in the comment explaining how we calculate
the number of levels used for stage 2 page tables a bit backwards.

This commit attempts to clarify the comment, to make it slightly easier
to read without having the Arm ARM open on the right page.

While we're at it, fixup a typo in a comment that was recently changed.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled
Julien Thierry [Mon, 26 Nov 2018 18:26:44 +0000 (18:26 +0000)]
KVM: arm/arm64: vgic: Do not cond_resched_lock() with IRQs disabled

To change the active state of an interrupt via MMIO, a halt is requested for
all vcpus of the affected guest before modifying the IRQ state. This is done
by calling cond_resched_lock() in vgic_mmio_change_active(). However,
interrupts are disabled at this point and we cannot reschedule a vcpu.

We actually don't need any of this, as kvm_arm_halt_guest ensures that
all the other vcpus are out of the guest. Let's just drop that useless
code.

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Add support for creating PUD hugepages at stage 2
Punit Agrawal [Tue, 11 Dec 2018 17:10:41 +0000 (17:10 +0000)]
KVM: arm64: Add support for creating PUD hugepages at stage 2

KVM only supports PMD hugepages at stage 2. Now that the various page
handling routines are updated, extend the stage 2 fault handling to
map in PUD hugepages.

Addition of PUD hugepage support enables additional page sizes (e.g.,
1G with 4K granule) which can be useful on cores that support mapping
larger block sizes in the TLB entries.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replace BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Update age handlers to support PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:40 +0000 (17:10 +0000)]
KVM: arm64: Update age handlers to support PUD hugepages

In preparation for creating larger hugepages at Stage 2, add support
to the age handling notifiers for PUD hugepages when encountered.

Provide trivial helpers for arm32 to allow sharing code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) for arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Support handling access faults for PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:39 +0000 (17:10 +0000)]
KVM: arm64: Support handling access faults for PUD hugepages

In preparation for creating larger hugepages at Stage 2, extend the
access fault handling at Stage 2 to support PUD hugepages when
encountered.

Provide trivial helpers for arm32 to allow sharing of code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Support PUD hugepage in stage2_is_exec()
Punit Agrawal [Tue, 11 Dec 2018 17:10:38 +0000 (17:10 +0000)]
KVM: arm64: Support PUD hugepage in stage2_is_exec()

In preparation for creating PUD hugepages at stage 2, add support for
detecting execute permissions on PUD page table entries. Faults due to
lack of execute permissions on page table entries is used to perform
i-cache invalidation on first execute.

Provide trivial implementations of arm32 helpers to allow sharing of
code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON(1) in arm32 PUD helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm64: Support dirty page tracking for PUD hugepages
Punit Agrawal [Tue, 11 Dec 2018 17:10:37 +0000 (17:10 +0000)]
KVM: arm64: Support dirty page tracking for PUD hugepages

In preparation for creating PUD hugepages at stage 2, add support for
write protecting PUD hugepages when they are encountered. Write
protecting guest tables is used to track dirty pages when migrating
VMs.

Also, provide trivial implementations of required kvm_s2pud_* helpers
to allow sharing of code with arm32.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
[ Replaced BUG() => WARN_ON() in arm32 pud helpers ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Introduce helpers to manipulate page table entries
Punit Agrawal [Tue, 11 Dec 2018 17:10:36 +0000 (17:10 +0000)]
KVM: arm/arm64: Introduce helpers to manipulate page table entries

Introduce helpers to abstract architectural handling of the conversion
of pfn to page table entries and marking a PMD page table entry as a
block entry.

The helpers are introduced in preparation for supporting PUD hugepages
at stage 2 - which are supported on arm64 but do not exist on arm.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault
Punit Agrawal [Tue, 11 Dec 2018 17:10:35 +0000 (17:10 +0000)]
KVM: arm/arm64: Re-factor setting the Stage 2 entry to exec on fault

The stage 2 fault handler marks a page as executable if it is handling
an execution fault, or if it was a permission fault, in which case the
executable bit needs to be preserved.

The logic to decide if the page should be marked executable is
duplicated for PMD and PTE entries. To avoid creating another copy
when support for PUD hugepages is introduced, refactor the code to
share the checks needed to mark a page table entry as executable.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Share common code in user_mem_abort()
Punit Agrawal [Tue, 11 Dec 2018 17:10:34 +0000 (17:10 +0000)]
KVM: arm/arm64: Share common code in user_mem_abort()

The code for operations such as marking the pfn as dirty, and
dcache/icache maintenance during stage 2 fault handling is duplicated
between normal pages and PMD hugepages.

Instead of creating another copy of the operations when we introduce
PUD hugepages, let's share them across the different pagesizes.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: vgic-v2: Set active_source to 0 when restoring state
Christoffer Dall [Tue, 11 Dec 2018 11:51:03 +0000 (12:51 +0100)]
KVM: arm/arm64: vgic-v2: Set active_source to 0 when restoring state

When restoring the active state from userspace, we don't know which CPU
was the source for the active state, and this is not architecturally
exposed in any of the register state.

Set the active_source to 0 in this case.  In the future, we can expand
on this and expose it as additional information to userspace for GICv2
if anyone cares.

Cc: stable@vger.kernel.org
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Log PSTATE for unhandled sysregs
Mark Rutland [Thu, 6 Dec 2018 12:31:44 +0000 (12:31 +0000)]
KVM: arm/arm64: Log PSTATE for unhandled sysregs

When KVM traps an unhandled sysreg/coproc access from a guest, it logs
the guest PC. To aid debugging, it would be helpful to know which
exception level the trap came from, along with other PSTATE/CPSR bits,
so let's log the PSTATE/CPSR too.

Acked-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: arm/arm64: Fix VMID alloc race by reverting to lock-less
Christoffer Dall [Tue, 11 Dec 2018 12:23:57 +0000 (13:23 +0100)]
KVM: arm/arm64: Fix VMID alloc race by reverting to lock-less

We recently addressed a VMID generation race by introducing a read/write
lock around accesses and updates to the vmid generation values.

However, kvm_arch_vcpu_ioctl_run() also calls need_new_vmid_gen() but
does so without taking the read lock.

As far as I can tell, this can lead to the same kind of race:

  VM 0, VCPU 0                          VM 0, VCPU 1
  ------------                          ------------
  update_vttbr (vmid 254)
                                        update_vttbr (vmid 1) // roll over
                                        read_lock(kvm_vmid_lock);
                                        force_vm_exit()
  local_irq_disable
  need_new_vmid_gen == false //because vmid gen matches

  enter_guest (vmid 254)
                                        kvm_arch.vttbr = <PGD>:<VMID 1>
                                        read_unlock(kvm_vmid_lock);

                                        enter_guest (vmid 1)

This results in running two VCPUs in the same VM with different VMIDs,
and (even worse) other VCPUs from other VMs could now allocate the clashing
VMID 254 from the new generation as long as VCPU 0 is not exiting.

Attempt to solve this by making sure vttbr is updated before another CPU
can observe the updated VMID generation.
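
Concretely, the idea is to publish the new generation only after the vttbr
write is visible, roughly along these lines (sketch, not the verbatim patch):

  /* update vttbr first, using the freshly allocated vmid ... */
  kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid;

  /* ... and only then let other CPUs observe the new generation */
  smp_wmb();
  WRITE_ONCE(kvm->arch.vmid_gen, atomic64_read(&kvm_vmid_gen));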

Cc: stable@vger.kernel.org
Fixes: f0cf47d939d0 "KVM: arm/arm64: Close VMID generation race"
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Consistently advance singlestep when emulating instructions
Mark Rutland [Fri, 9 Nov 2018 15:07:11 +0000 (15:07 +0000)]
arm64: KVM: Consistently advance singlestep when emulating instructions

When we emulate a guest instruction, we don't advance the hardware
singlestep state machine, and thus the guest will receive a software
step exception after a next instruction which is not emulated by the
host.

We bodge around this in an ad-hoc fashion. Sometimes we explicitly check
whether userspace requested a single step, and fake a debug exception
from within the kernel. Other times, we advance the HW singlestep state and
rely on the HW to generate the exception for us. Thus, the observed step
behaviour differs for host and guest.

Let's make this simpler and consistent by always advancing the HW
singlestep state machine when we skip an instruction. Thus we can rely
on the hardware to generate the singlestep exception for us, and never
need to explicitly check for an active-pending step, nor do we need to
fake a debug exception from the guest.
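
Conceptually the skip helper then always does both steps, along the lines
of this sketch (illustrative only):

  static inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
  {
          if (vcpu_mode_is_32bit(vcpu))
                  kvm_skip_instr32(vcpu, is_wide_instr);
          else
                  *vcpu_pc(vcpu) += 4;

          /* advance the HW singlestep state machine */
          *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
  }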

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  arm64: KVM: Skip MMIO insn after emulation
Mark Rutland [Fri, 9 Nov 2018 15:07:10 +0000 (15:07 +0000)]
arm64: KVM: Skip MMIO insn after emulation

When we emulate an MMIO instruction, we advance the CPU state within
decode_hsr(), before emulating the instruction effects.

Having this logic in decode_hsr() is opaque, and advancing the state
before emulation is problematic. It gets in the way of applying
consistent single-step logic, and it prevents us from being able to fail
an MMIO instruction with a synchronous exception.

Clean this up by only advancing the CPU state *after* the effects of the
instruction are emulated.

Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
5 years ago  KVM: s390: fix kmsg component kvm-s390
Michael Mueller [Mon, 3 Dec 2018 09:20:22 +0000 (10:20 +0100)]
KVM: s390: fix kmsg component kvm-s390

Relocate the #define statement for kvm-related kernel messages to
before the include of printk so that it becomes effective.
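
The usual pattern is that the component define has to appear before any
header that pulls in printk, e.g.:

  #define KMSG_COMPONENT "kvm-s390"
  #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt

  #include <linux/kvm_host.h>   /* pulls in printk.h, so it must come after */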

Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years ago  KVM: s390: unregister debug feature on failing arch init
Michael Mueller [Fri, 30 Nov 2018 14:32:06 +0000 (15:32 +0100)]
KVM: s390: unregister debug feature on failing arch init

Make sure the debug feature and its allocated resources get
released upon unsuccessful architecture initialization.

A related indication of the issue will be reported as a kernel
message.

Signed-off-by: Michael Mueller <mimu@linux.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20181130143215.69496-2-mimu@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
5 years ago  kvm: selftests: ucall: improve ucall placement in memory, fix unsigned comparison
Paolo Bonzini [Fri, 14 Dec 2018 11:29:43 +0000 (12:29 +0100)]
kvm: selftests: ucall: improve ucall placement in memory, fix unsigned comparison

Based on a patch by Andrew Jones.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: x86: Dynamically allocate guest_fpu
Marc Orr [Tue, 6 Nov 2018 22:53:56 +0000 (14:53 -0800)]
kvm: x86: Dynamically allocate guest_fpu

Previously, the guest_fpu field was embedded in the kvm_vcpu_arch
struct. Unfortunately, the field is quite large (e.g., 4352 bytes on my
current setup). This bloats the kvm_vcpu_arch struct for x86 into an
order 3 memory allocation, which can become a problem on overcommitted
machines. Thus, this patch moves the fpu state outside of the
kvm_vcpu_arch struct.

With this patch applied, the kvm_vcpu_arch struct is reduced to 15168
bytes for vmx on my setup when building the kernel with kvmconfig.
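
The size effect can be illustrated with a small standalone program (toy
structures, not the kernel's):

  #include <stdio.h>
  #include <stdlib.h>

  struct fpu_state { unsigned char regs[4352]; };

  struct vcpu_embedded { long other[64]; struct fpu_state fpu; };  /* big   */
  struct vcpu_pointer  { long other[64]; struct fpu_state *fpu; }; /* small */

  int main(void)
  {
          printf("embedded: %zu bytes, pointer: %zu bytes\n",
                 sizeof(struct vcpu_embedded), sizeof(struct vcpu_pointer));

          /* the large state is allocated separately, on demand */
          struct vcpu_pointer v = { .fpu = calloc(1, sizeof(struct fpu_state)) };
          free(v.fpu);
          return 0;
  }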

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: x86: Use task structs fpu field for user
Marc Orr [Tue, 6 Nov 2018 22:53:55 +0000 (14:53 -0800)]
kvm: x86: Use task structs fpu field for user

Previously, x86's instantiation of 'struct kvm_vcpu_arch' added an fpu
field to save/restore fpu-related architectural state, which will differ
from kvm's fpu state. However, this is redundant to the 'struct fpu'
field, called fpu, embedded in the task struct, via the thread field.
Thus, this patch removes the user_fpu field from the kvm_vcpu_arch
struct and replaces it with the task struct's fpu field.

This change is significant because the fpu struct is actually quite
large. For example, on the system used to develop this patch, this
change reduces the size of the vcpu_vmx struct from 23680 bytes down to
19520 bytes, when building the kernel with kvmconfig. This reduction in
the size of the vcpu_vmx struct moves us closer to being able to
allocate the struct at order 2, rather than order 3.

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Marc Orr <marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move the checks for Guest Non-Register States to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:12 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for Guest Non-Register States to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move the checks for Host Control Registers and MSRs to a separate helper...
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:11 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for Host Control Registers and MSRs to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move the checks for VM-Entry Control Fields to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:10 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for VM-Entry Control Fields to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move the checks for VM-Exit Control Fields to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:09 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for VM-Exit Control Fields to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Remove param indirection from nested_vmx_check_msr_switch()
Sean Christopherson [Wed, 12 Dec 2018 18:30:08 +0000 (13:30 -0500)]
KVM: nVMX: Remove param indirection from nested_vmx_check_msr_switch()

Passing the enum and doing an indirect lookup is silly when we can
simply pass the field directly.  Remove the "fast path" code in
nested_vmx_check_msr_switch_controls() as it's now nothing more than a
redundant check.

Remove the debug message rather than continue passing the enum for the
address field.  Having debug messages for the MSRs themselves is useful
as MSR legality is a huge space, whereas messing up a physical address
means the VMM is fundamentally broken.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move the checks for VM-Execution Control Fields to a separate helper function
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:07 +0000 (13:30 -0500)]
KVM: nVMX: Move the checks for VM-Execution Control Fields to a separate helper function

.. to improve readability and maintainability, and to align the code as per
the layout of the checks in chapter "VM Entries" in Intel SDM vol 3C.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Prepend "nested_vmx_" to check_vmentry_{pre,post}reqs()
Krish Sadhukhan [Wed, 12 Dec 2018 18:30:06 +0000 (13:30 -0500)]
KVM: nVMX: Prepend "nested_vmx_" to check_vmentry_{pre,post}reqs()

.. as they are used only in nested vmx context.

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Mark Kanda <mark.kanda@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM/VMX: Check ept_pointer before flushing ept tlb
Lan Tianyu [Thu, 6 Dec 2018 07:34:36 +0000 (15:34 +0800)]
KVM/VMX: Check ept_pointer before flushing ept tlb

Initialize ept_pointer to INVALID_PAGE and check it before flushing the
EPT TLB. If ept_pointer is invalid, bypass the flush request.

Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM nVMX: MSRs should not be stored if VM-entry fails during or after loading guest...
Krish Sadhukhan [Wed, 5 Dec 2018 00:00:13 +0000 (19:00 -0500)]
KVM nVMX: MSRs should not be stored if VM-entry fails during or after loading guest state

According to section "VM-entry Failures During or After Loading Guest State"
in Intel SDM vol 3C,

"No MSRs are saved into the VM-exit MSR-store area."

when bit 31 of the exit reason is set.

Reported-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Suggested-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: x86: Don't modify MSR_PLATFORM_INFO on vCPU reset
Jim Mattson [Tue, 30 Oct 2018 19:20:21 +0000 (12:20 -0700)]
kvm: x86: Don't modify MSR_PLATFORM_INFO on vCPU reset

If userspace has provided a different value for this MSR (e.g with the
turbo bits set), the userspace-provided value should survive a vCPU
reset. For backwards compatibility, MSR_PLATFORM_INFO is initialized
in kvm_arch_vcpu_setup.

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Drew Schmitt <dasch@google.com>
Cc: Abhiroop Dabral <adabral@paloaltonetworks.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: vmx: add cpu into VMX preemption timer bug list
Wei Huang [Mon, 3 Dec 2018 20:13:32 +0000 (14:13 -0600)]
kvm: vmx: add cpu into VMX preemption timer bug list

This patch adds Intel "Xeon CPU E3-1220 V2", with CPUID.01H.EAX=0x000306A8,
into the list of known broken CPUs which fail to support VMX preemption
timer. This bug was found while running the APIC timer test of
kvm-unit-test on this specific CPU, even though the errata info can't be
located in the public domain for this CPU.

Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: x86: Report STIBP on GET_SUPPORTED_CPUID
Eduardo Habkost [Wed, 5 Dec 2018 19:19:56 +0000 (17:19 -0200)]
kvm: x86: Report STIBP on GET_SUPPORTED_CPUID

Months ago, we added code to allow direct access to MSR_IA32_SPEC_CTRL
to the guest, which makes STIBP available to guests.  This was implemented
by commits d28b387fb74d ("KVM/VMX: Allow direct access to
MSR_IA32_SPEC_CTRL") and b2ac58f90540 ("KVM/SVM: Allow direct access to
MSR_IA32_SPEC_CTRL").

However, we never updated GET_SUPPORTED_CPUID to let userspace know that
STIBP can be enabled in CPUID.  Fix that by updating
kvm_cpuid_8000_0008_ebx_x86_features and kvm_cpuid_7_0_edx_x86_features.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/hyper-v: Stop caring about EOI for direct stimers
Vitaly Kuznetsov [Wed, 5 Dec 2018 15:36:21 +0000 (16:36 +0100)]
x86/hyper-v: Stop caring about EOI for direct stimers

Turns out we over-engineered Direct Mode for stimers a bit: unlike
traditional stimers where we may want to try to re-inject the message upon
EOI, Direct Mode stimers just set the irq in APIC and kvm_apic_set_irq()
fails only when APIC is disabled (see APIC_DM_FIXED case in
__apic_accept_irq()). Remove the redundant part.

Suggested-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/kvm/hyper-v: avoid open-coding stimer_mark_pending() in kvm_hv_notify_acked_sint()
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:32 +0000 (16:47 +0100)]
x86/kvm/hyper-v: avoid open-coding stimer_mark_pending() in kvm_hv_notify_acked_sint()

The stimers_pending optimization only helps us to avoid multiple
kvm_make_request() calls. This doesn't happen very often and these
calls are very cheap in the first place, so remove the open-coded
version of stimer_mark_pending() from kvm_hv_notify_acked_sint().

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/kvm/hyper-v: direct mode for synthetic timers
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:31 +0000 (16:47 +0100)]
x86/kvm/hyper-v: direct mode for synthetic timers

Turns out Hyper-V on KVM (as of 2016) will only use synthetic timers
if direct mode is available. With direct mode we notify the guest by
asserting APIC irq instead of sending a SynIC message.

The implementation uses existing vec_bitmap for letting lapic code
know that we're interested in the particular IRQ's EOI request. We assume
that the same APIC irq won't be used by the guest for both direct mode
stimer and as sint source (especially with AutoEOI semantics). It is
unclear how things should be handled if that's not true.

Direct mode is also somewhat less expensive; in my testing
stimer_send_msg() takes not less than 1500 cpu cycles and
stimer_notify_direct() can usually be done in 300-400. WS2016 without
Hyper-V, however, always sticks to the non-direct version.
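
A sketch of the direct-mode notification path (function and field names
follow the description above but may not match the patch exactly):

  static int stimer_notify_direct(struct kvm_vcpu_hv_stimer *stimer)
  {
          struct kvm_vcpu *vcpu = stimer_to_vcpu(stimer);
          struct kvm_lapic_irq irq = {
                  .delivery_mode = APIC_DM_FIXED,
                  .vector = stimer->config.apic_vector,
          };

          return !kvm_apic_set_irq(vcpu->arch.apic, &irq, NULL);
  }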

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/kvm/hyper-v: use stimer config definition from hyperv-tlfs.h
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:30 +0000 (16:47 +0100)]
x86/kvm/hyper-v: use stimer config definition from hyperv-tlfs.h

As a preparation to implementing Direct Mode for Hyper-V synthetic
timers switch to using stimer config definition from hyperv-tlfs.h.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/hyper-v: move synic/stimer control structures definitions to hyperv-tlfs.h
Vitaly Kuznetsov [Mon, 26 Nov 2018 15:47:29 +0000 (16:47 +0100)]
x86/hyper-v: move synic/stimer control structures definitions to hyperv-tlfs.h

We implement Hyper-V SynIC and synthetic timers in KVM too so there's some
room for code sharing.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: selftests: Add hyperv_cpuid test
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:59 +0000 (18:21 +0100)]
KVM: selftests: Add hyperv_cpuid test

Add a simple (and stupid) hyperv_cpuid test: check that we got the
expected number of entries with and without Enlightened VMCS enabled
and that all currently reserved fields are zeroed.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: selftests: implement an unchecked version of vcpu_ioctl()
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:58 +0000 (18:21 +0100)]
KVM: selftests: implement an unchecked version of vcpu_ioctl()

In case we want to test failing ioctls, we need an option to not assert
on failure. Following the _vcpu_run() precedent, implement _vcpu_ioctl().

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/kvm/hyper-v: Introduce KVM_GET_SUPPORTED_HV_CPUID
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:56 +0000 (18:21 +0100)]
x86/kvm/hyper-v: Introduce KVM_GET_SUPPORTED_HV_CPUID

With every new Hyper-V Enlightenment we implement we're forced to add a
KVM_CAP_HYPERV_* capability. While this approach works it is fairly
inconvenient: the majority of the enlightenments we implement have
corresponding CPUID feature bit(s), and userspace has to know this anyway
to be able to expose the feature to the guest.

Add KVM_GET_SUPPORTED_HV_CPUID ioctl (backed by KVM_CAP_HYPERV_CPUID, "one
cap to rule them all!") returning all Hyper-V CPUID feature leaves.

Using the existing KVM_GET_SUPPORTED_CPUID doesn't seem to be possible:
Hyper-V CPUID feature leaves intersect with KVM's (e.g. 0x40000000,
0x40000001) and we would probably confuse userspace in case we decide to
return these twice.

KVM_CAP_HYPERV_CPUID's number is interim: we intend to drop
KVM_CAP_HYPERV_STIMER_DIRECT and use its number instead.
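
From userspace the new ioctl is used like the other CPUID queries; a hedged
sketch (vcpu_fd is an open vcpu descriptor, nent = 64 is an arbitrary upper
bound, and <linux/kvm.h>, <sys/ioctl.h>, <err.h>, <stdio.h>, <stdlib.h> are
assumed):

  static void dump_hv_cpuid(int vcpu_fd)
  {
          int nent = 64;
          struct kvm_cpuid2 *cpuid;

          cpuid = calloc(1, sizeof(*cpuid) + nent * sizeof(cpuid->entries[0]));
          cpuid->nent = nent;

          if (ioctl(vcpu_fd, KVM_GET_SUPPORTED_HV_CPUID, cpuid) < 0)
                  err(1, "KVM_GET_SUPPORTED_HV_CPUID");

          for (unsigned int i = 0; i < cpuid->nent; i++)
                  printf("0x%08x: eax=%08x ebx=%08x ecx=%08x edx=%08x\n",
                         cpuid->entries[i].function, cpuid->entries[i].eax,
                         cpuid->entries[i].ebx, cpuid->entries[i].ecx,
                         cpuid->entries[i].edx);
          free(cpuid);
  }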

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/kvm/hyper-v: Introduce nested_get_evmcs_version() helper
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:55 +0000 (18:21 +0100)]
x86/kvm/hyper-v: Introduce nested_get_evmcs_version() helper

The upcoming KVM_GET_SUPPORTED_HV_CPUID ioctl will need to return
Enlightened VMCS version in HYPERV_CPUID_NESTED_FEATURES.EAX when
it was enabled.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/hyper-v: Drop HV_X64_CONFIGURE_PROFILER definition
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:54 +0000 (18:21 +0100)]
x86/hyper-v: Drop HV_X64_CONFIGURE_PROFILER definition

BIT(13) in HYPERV_CPUID_FEATURES.EBX is described as "ConfigureProfiler" in
TLFS v4.0, but starting with 5.0 it is replaced with 'Reserved'. As we don't
currently use it in the kernel, it can just be dropped.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/hyper-v: Do some housekeeping in hyperv-tlfs.h
Vitaly Kuznetsov [Mon, 10 Dec 2018 17:21:53 +0000 (18:21 +0100)]
x86/hyper-v: Do some housekeeping in hyperv-tlfs.h

hyperv-tlfs.h is a bit messy: CPUID feature bits are not always sorted,
it's hard to tell which CPUID leaf they belong to, and some items are
duplicated (e.g. HV_X64_MSR_CRASH_CTL_NOTIFY/HV_CRASH_CTL_CRASH_NOTIFY).

Do some housekeeping work. While at it, replace all (1 << X) with the
BIT(X) macro.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86/hyper-v: Mark TLFS structures packed
Vitaly Kuznetsov [Wed, 12 Dec 2018 17:57:01 +0000 (18:57 +0100)]
x86/hyper-v: Mark TLFS structures packed

The TLFS structures are used for hypervisor-guest communication and must
exactly meet the specification.

Compilers can add alignment padding to structures or reorder struct members
for randomization and optimization, which would break the hypervisor ABI.

Mark the structures as packed to prevent this. 'struct hv_vp_assist_page'
and 'struct hv_enlightened_vmcs' need to be properly padded to support the
change.

Suggested-by: Nadav Amit <nadav.amit@gmail.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Nadav Amit <nadav.amit@gmail.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86: kvm: hyperv: don't retry message delivery for periodic timers
Roman Kagan [Mon, 10 Dec 2018 18:47:27 +0000 (18:47 +0000)]
x86: kvm: hyperv: don't retry message delivery for periodic timers

The SynIC message delivery protocol allows the message originator to
request, should the message slot be busy, to be notified when it's free.

However, this is unnecessary and even undesirable for messages generated
by SynIC timers in periodic mode: if the period is short compared to the
time the guest spends in the timer interrupt handler, the timer ticks start
piling up, and the excessive interactions due to this notification and
retried message delivery only make things worse.

[This was observed, in particular, with Windows L2 guests setting
(temporarily) the periodic timer to 2 kHz, and spending hundreds of
microseconds in the timer interrupt handler due to several L2->L1 exits;
under some load in L0 this could exceed 500 us so the timer ticks
started to pile up and the guest livelocked.]

Relieve the situation somewhat by not retrying message delivery for
periodic SynIC timers.  This appears to remain within the "lazy" lost
ticks policy for SynIC timers as implemented in KVM.

Note that it doesn't solve the fundamental problem of livelocking the
guest with a periodic timer whose period is smaller than the time needed
to process a tick, but it makes it a bit less likely to be triggered.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  x86: kvm: hyperv: simplify SynIC message delivery
Roman Kagan [Mon, 10 Dec 2018 18:47:26 +0000 (18:47 +0000)]
x86: kvm: hyperv: simplify SynIC message delivery

SynIC message delivery is somewhat overengineered: it pretends to follow
the ordering rules when grabbing the message slot, using atomic
operations and all that, but does it incorrectly and unnecessarily.

The correct order would be to first set .msg_pending, then atomically
replace .message_type if it was zero, and then clear .msg_pending if
the previous step was successful.  But this all is done in vcpu context
so the whole update looks atomic to the guest (it's assumed to only
access the message page from this cpu), and therefore can be done in
whatever order is most convenient (and is also the reason why the
incorrect order didn't trigger any bugs so far).
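
For reference, the "correct" ordering described above would look roughly
like this (sketch only; the point of the patch is that it is not needed
here):

  msg->header.message_flags.msg_pending = 1;

  if (cmpxchg(&msg->header.message_type, HVMSG_NONE,
              src_msg->header.message_type) == HVMSG_NONE) {
          /* the slot was free and is now ours: no retry notification needed */
          msg->header.message_flags.msg_pending = 0;
  }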

While at this, also switch to kvm_vcpu_{read,write}_guest_page, and drop
the no longer needed synic_clear_sint_msg_pending.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: x86: remove unnecessary recalculate_apic_map
Peng Hao [Tue, 4 Dec 2018 09:42:50 +0000 (17:42 +0800)]
kvm: x86: remove unnecessary recalculate_apic_map

In the previous code, the variable apic_sw_disabled influenced
recalculate_apic_map. But in "KVM: x86: simplify kvm_apic_map"
(commit 3b5a5ffa928a3f875b0d5dd284eeb7c322e1688a) the access to
apic_sw_disabled in recalculate_apic_map was deleted, so this
recalculation is no longer necessary.

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: svm: remove unused struct definition
Peng Hao [Fri, 14 Dec 2018 08:18:03 +0000 (16:18 +0800)]
kvm: svm: remove unused struct definition

The structure svm_init_data is never used, so remove it.

Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: vmx: Skip all SYSCALL MSRs in setup_msrs() when !EFER.SCE
Jim Mattson [Wed, 5 Dec 2018 23:29:01 +0000 (15:29 -0800)]
kvm: vmx: Skip all SYSCALL MSRs in setup_msrs() when !EFER.SCE

Like IA32_STAR, IA32_LSTAR and IA32_FMASK only need to contain guest
values on VM-entry when the guest is in long mode and EFER.SCE is set.

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: vmx: Don't set hardware IA32_CSTAR MSR on VM-entry
Jim Mattson [Wed, 5 Dec 2018 23:29:00 +0000 (15:29 -0800)]
kvm: vmx: Don't set hardware IA32_CSTAR MSR on VM-entry

SYSCALL raises #UD in compatibility mode on Intel CPUs, so it's
pointless to load the guest's IA32_CSTAR value into the hardware MSR.

IA32_CSTAR still provides 48 bits of storage on Intel CPUs that have
CPUID.80000001:EDX.LM[bit 29] set, so we cannot remove it from the
vmx_msr_index[] array.

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: vmx: Document the need for MSR_STAR in i386 builds
Jim Mattson [Wed, 5 Dec 2018 23:28:59 +0000 (15:28 -0800)]
kvm: vmx: Document the need for MSR_STAR in i386 builds

Add a comment explaining why MSR_STAR must be included in
vmx_msr_index[] even for i386 builds.

The elided comment has not been relevant since move_msr_up() was
introduced in commit a75beee6e4f5d ("KVM: VMX: Avoid saving and
restoring msrs on lightweight vmexit").

Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  kvm: vmx: Set IA32_TSC_AUX for legacy mode guests
Jim Mattson [Wed, 5 Dec 2018 23:28:58 +0000 (15:28 -0800)]
kvm: vmx: Set IA32_TSC_AUX for legacy mode guests

RDTSCP is supported in legacy mode as well as long mode. The
IA32_TSC_AUX MSR should be set to the correct guest value before
entering any guest that supports RDTSCP.

Fixes: 4e47c7a6d714 ("KVM: VMX: Add instruction rdtscp support for guest")
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Marc Orr <marcorr@google.com>
Reviewed-by: Liran Alon <liran.alon@oracle.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move nested code to dedicated files
Sean Christopherson [Mon, 3 Dec 2018 21:53:18 +0000 (13:53 -0800)]
KVM: nVMX: Move nested code to dedicated files

From a functional perspective, this is (supposed to be) a straight
copy-paste of code.  Code was moved piecemeal to nested.c as not all
code that could/should be moved was obviously nested-only.  The nested
code was then re-ordered as needed to compile, i.e. stats may not show
this is being a "pure" move despite there not being any intended changes
in functionality.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Expose nested_vmx_allowed() to nested VMX as a non-inline
Sean Christopherson [Mon, 3 Dec 2018 21:53:17 +0000 (13:53 -0800)]
KVM: VMX: Expose nested_vmx_allowed() to nested VMX as a non-inline

Exposing only the function allows @nested, i.e. the module param, to be
statically defined in vmx.c, ensuring we aren't unnecessarily checking
said variable in the nested code.  nested_vmx_allowed() is exposed due
to the need to verify nested support in vmx_{get,set}_nested_state().
The downside is that nested_vmx_allowed() likely won't be inlined in
vmx_{get,set}_nested_state(), but that should be a non-issue as they're
not a hot path.  Keeping vmx_{get,set}_nested_state() in vmx.c isn't a
viable option as they need access to several nested-only functions.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Expose various getters and setters to nested VMX
Sean Christopherson [Mon, 3 Dec 2018 21:53:16 +0000 (13:53 -0800)]
KVM: VMX: Expose various getters and setters to nested VMX

...as they're used directly by the nested code.  This will allow
moving the bulk of the nested code out of vmx.c without concurrent
changes to vmx.h.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Expose misc variables needed for nested VMX
Sean Christopherson [Mon, 3 Dec 2018 21:53:15 +0000 (13:53 -0800)]
KVM: VMX: Expose misc variables needed for nested VMX

Expose vmx_msr_index, vmx_return and host_efer via vmx.h so that the
nested code can be moved out of vmx.c.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move "vmcs12 to shadow/evmcs sync" to helper function
Sean Christopherson [Mon, 3 Dec 2018 21:53:14 +0000 (13:53 -0800)]
KVM: nVMX: Move "vmcs12 to shadow/evmcs sync" to helper function

...so that the function doesn't need to be created when moving the
nested code out of vmx.c.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Call nested_vmx_setup_ctls_msrs() iff @nested is true
Sean Christopherson [Mon, 3 Dec 2018 21:53:13 +0000 (13:53 -0800)]
KVM: nVMX: Call nested_vmx_setup_ctls_msrs() iff @nested is true

...so that it doesn't need access to @nested. The only case where the
provided struct isn't already zeroed is the call from vmx_create_vcpu()
as setup_vmcs_config() zeroes the struct in the other use cases.  This
will allow @nested to be statically defined in vmx.c, i.e. this removes
the last direct reference from nested code.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Set callbacks for nested functions during hardware setup
Sean Christopherson [Mon, 3 Dec 2018 21:53:12 +0000 (13:53 -0800)]
KVM: nVMX: Set callbacks for nested functions during hardware setup

...in nested-specific code so that they can eventually be moved out of
vmx.c, e.g. into nested.c.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Move the hardware {un}setup functions to the bottom
Sean Christopherson [Mon, 3 Dec 2018 21:53:11 +0000 (13:53 -0800)]
KVM: VMX: Move the hardware {un}setup functions to the bottom

...so that future patches can reference e.g. @kvm_vmx_exit_handlers
without having to simultaneously move a big chunk of code.  Speaking
from experience, resolving merge conflicts is an absolute nightmare
without pre-moving the code.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: x86: nVMX: Allow nested_enable_evmcs to be NULL
Sean Christopherson [Mon, 3 Dec 2018 21:53:10 +0000 (13:53 -0800)]
KVM: x86: nVMX: Allow nested_enable_evmcs to be NULL

...so that it can be conditionally set by the VMX code, i.e. iff @nested is
true.  This will in turn allow it to be moved out of vmx.c and into a
nested-specified file.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Move nested hardware/vcpu {un}setup to helper functions
Sean Christopherson [Mon, 3 Dec 2018 21:53:09 +0000 (13:53 -0800)]
KVM: VMX: Move nested hardware/vcpu {un}setup to helper functions

Eventually this will allow us to move the nested VMX code out of vmx.c.
Note that this also effectively wraps @enable_shadow_vmcs with @nested
so that it too can be moved out of vmx.c.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Move VMX instruction wrappers to a dedicated header file
Sean Christopherson [Mon, 3 Dec 2018 21:53:07 +0000 (13:53 -0800)]
KVM: VMX: Move VMX instruction wrappers to a dedicated header file

VMX has a few hundred lines of code just to wrap various VMX specific
instructions, e.g. VMWREAD, INVVPID, etc...  Move them to a dedicated
header so it's easier to find/isolate the boilerplate.

With this change, more inlines can be moved from vmx.c to vmx.h.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Move eVMCS code to dedicated files
Sean Christopherson [Mon, 3 Dec 2018 21:53:06 +0000 (13:53 -0800)]
KVM: VMX: Move eVMCS code to dedicated files

The header, evmcs.h, already exists and contains a fair amount of code,
but there are a few pieces in vmx.c that can be moved verbatim.  In
addition, move an array definition to evmcs.c to prepare for multiple
consumers of evmcs.h.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Add vmx.h to hold VMX definitions
Sean Christopherson [Mon, 3 Dec 2018 21:53:08 +0000 (13:53 -0800)]
KVM: VMX: Add vmx.h to hold VMX definitions

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: nVMX: Move vmcs12 code to dedicated files
Sean Christopherson [Mon, 3 Dec 2018 21:53:05 +0000 (13:53 -0800)]
KVM: nVMX: Move vmcs12 code to dedicated files

vmcs12 is the KVM-defined struct used to track a nested VMCS, e.g. a
VMCS created by L1 for L2.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years ago  KVM: VMX: Move VMCS definitions to dedicated file
Sean Christopherson [Mon, 3 Dec 2018 21:53:04 +0000 (13:53 -0800)]
KVM: VMX: Move VMCS definitions to dedicated file

This isn't intended to be a pure reflection of hardware, e.g. struct
loaded_vmcs and struct vmcs_host_state are KVM-defined constructs.
Similar to capabilities.h, this is a standalone file to avoid circular
dependencies between yet-to-be-created vmx.h and nested.h files.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Expose various module param vars via capabilities.h
Sean Christopherson [Mon, 3 Dec 2018 21:53:03 +0000 (13:53 -0800)]
KVM: VMX: Expose various module param vars via capabilities.h

Expose the variables associated with various module params that are
needed by the nested VMX code.  There is no ulterior logic for what
variables are/aren't exposed; this is purely "what's needed by the
nested code".

Note that @nested is intentionally not exposed.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Move capabilities structs and helpers to dedicated file
Sean Christopherson [Mon, 3 Dec 2018 21:53:02 +0000 (13:53 -0800)]
KVM: VMX: Move capabilities structs and helpers to dedicated file

Defining a separate capabilities.h as opposed to putting this code in
e.g. vmx.h avoids circular dependencies between (the yet-to-be-added)
vmx.h and nested.h.  The aforementioned circular dependencies are why
struct nested_vmx_msrs also resides in capabilities instead of e.g.
nested.h.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Pass vmx_capability struct to setup_vmcs_config()
Sean Christopherson [Mon, 3 Dec 2018 21:53:01 +0000 (13:53 -0800)]
KVM: VMX: Pass vmx_capability struct to setup_vmcs_config()

...instead of referencing the global struct.  This will allow moving
setup_vmcs_config() to a separate file that may not have access to
the global variable.  Modify nested_vmx_setup_ctls_msrs() appropriately
since vmx_capability.ept may not be accurate when called by
vmx_check_processor_compat().

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Properly handle dynamic VM Entry/Exit controls
Sean Christopherson [Mon, 3 Dec 2018 21:53:00 +0000 (13:53 -0800)]
KVM: VMX: Properly handle dynamic VM Entry/Exit controls

EFER and PERF_GLOBAL_CTRL MSRs have dedicated VM Entry/Exit controls
that KVM dynamically toggles based on whether or not the guest's value
for each MSR differs from the host.  Handle the dynamic behavior by
adding a helper that clears the dynamic bits so the bits aren't set
when initializing the VMCS field outside of the dynamic toggling flow.
This makes the handling consistent with similar behavior for other
controls, e.g. pin, exec and sec_exec.  More importantly, it eliminates
two global bools that are stealthily modified by setup_vmcs_config.

Opportunistically clean up a comment and print related to errata for
IA32_PERF_GLOBAL_CTRL.
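
A minimal sketch of the masking helper described above (constants shown
with the SDM's VM-entry control bit positions; names here are
illustrative, not the kernel's):

  #include <stdint.h>

  /* VM-entry controls that KVM toggles dynamically (SDM bits 13 and 15). */
  #define SKETCH_VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL  0x00002000u
  #define SKETCH_VM_ENTRY_LOAD_IA32_EFER              0x00008000u

  /* Strip the dynamic bits so the initial VMCS write never sets them. */
  static inline uint32_t vmx_vmentry_ctrl_sketch(uint32_t cached_vmentry_ctrl)
  {
          return cached_vmentry_ctrl &
                 ~(SKETCH_VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL |
                   SKETCH_VM_ENTRY_LOAD_IA32_EFER);
  }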

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Move caching of MSR_IA32_XSS to hardware_setup()
Sean Christopherson [Mon, 3 Dec 2018 21:52:59 +0000 (13:52 -0800)]
KVM: VMX: Move caching of MSR_IA32_XSS to hardware_setup()

MSR_IA32_XSS has no relation to the VMCS whatsoever, it doesn't belong
in setup_vmcs_config() and its reference to host_xss prevents moving
setup_vmcs_config() to a dedicated file.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Drop the "vmx" prefix from vmx_evmcs.h
Sean Christopherson [Mon, 3 Dec 2018 21:52:58 +0000 (13:52 -0800)]
KVM: VMX: Drop the "vmx" prefix from vmx_evmcs.h

VMX specific files now reside in a dedicated subdirectory, i.e. the
file name prefix is redundant.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: rename vmx_shadow_fields.h to vmcs_shadow_fields.h
Sean Christopherson [Mon, 3 Dec 2018 21:52:57 +0000 (13:52 -0800)]
KVM: VMX: rename vmx_shadow_fields.h to vmcs_shadow_fields.h

VMX specific files now reside in a dedicated subdirectory.  Drop the
"vmx" prefix, which is redundant, and add a "vmcs" prefix to clarify
that the file is referring to VMCS shadow fields.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Move VMX specific files to a "vmx" subdirectory
Sean Christopherson [Mon, 3 Dec 2018 21:52:56 +0000 (13:52 -0800)]
KVM: VMX: Move VMX specific files to a "vmx" subdirectory

...to prepare for shattering vmx.c into multiple files without having
to prepend "vmx_" to all new files.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Add requisite includes to hyperv.h
Sean Christopherson [Mon, 3 Dec 2018 21:52:55 +0000 (13:52 -0800)]
KVM: x86: Add requisite includes to hyperv.h

Until this point vmx.c has been the only consumer and included the
file after many others.  Prepare for multiple consumers, i.e. the
shattering of vmx.c.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: x86: Add requisite includes to kvm_cache_regs.h
Sean Christopherson [Mon, 3 Dec 2018 21:52:54 +0000 (13:52 -0800)]
KVM: x86: Add requisite includes to kvm_cache_regs.h

Until this point vmx.c has been the only consumer and included the
file after many others.  Prepare for multiple consumers, i.e. the
shattering of vmx.c.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: VMX: Alphabetize the includes in vmx.c
Sean Christopherson [Mon, 3 Dec 2018 21:52:53 +0000 (13:52 -0800)]
KVM: VMX: Alphabetize the includes in vmx.c

...to prepare for the creation of a "vmx" subdirectory that will contain
a variety of headers.  Clean things up now to avoid making a bigger mess
in the future.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Allocate and configure VM{READ,WRITE} bitmaps iff enable_shadow_vmcs
Sean Christopherson [Mon, 3 Dec 2018 21:52:52 +0000 (13:52 -0800)]
KVM: nVMX: Allocate and configure VM{READ,WRITE} bitmaps iff enable_shadow_vmcs

...and make enable_shadow_vmcs depend on nested.  Aside from the obvious
memory savings, this will allow moving the relevant code out of vmx.c in
the future, e.g. to a nested specific file.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoKVM: nVMX: Free the VMREAD/VMWRITE bitmaps if alloc_kvm_area() fails
Sean Christopherson [Mon, 3 Dec 2018 21:52:51 +0000 (13:52 -0800)]
KVM: nVMX: Free the VMREAD/VMWRITE bitmaps if alloc_kvm_area() fails

Fixes: 34a1cd60d17f ("kvm: x86: vmx: move some vmx setting from vmx_init() to hardware_setup()")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm: introduce manual dirty log reprotect
Paolo Bonzini [Tue, 23 Oct 2018 00:36:47 +0000 (02:36 +0200)]
kvm: introduce manual dirty log reprotect

There are two problems with KVM_GET_DIRTY_LOG.  First, and less important,
it can take kvm->mmu_lock for an extended period of time.  Second, its user
can actually see many false positives in some cases.  The latter is due
to a benign race like this:

  1. KVM_GET_DIRTY_LOG returns a set of dirty pages and write protects
     them.
  2. The guest modifies the pages, causing them to be marked dirty.
  3. Userspace actually copies the pages.
  4. KVM_GET_DIRTY_LOG returns those pages as dirty again, even though
     they were not written to since (3).

This is especially a problem for large guests, where the time between
(1) and (3) can be substantial.  This patch introduces a new
capability which, when enabled, makes KVM_GET_DIRTY_LOG not
write-protect the pages it returns.  Instead, userspace has to
explicitly clear the dirty log bits just before using the content
of the page.  The new KVM_CLEAR_DIRTY_LOG ioctl can also operate at a
64-page granularity rather than requiring a full memslot to be synced;
this way, the mmu_lock is taken for small amounts of time, and
only a small amount of time will pass between write protection
of pages and the sending of their content.
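
A rough userspace sketch of the resulting flow (struct and ioctl names
as merged by this series; error handling omitted, and the new capability
is assumed to already be enabled on the VM):

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /*
   * Sync one memslot: fetch the dirty bitmap (which, with the capability
   * enabled, no longer write-protects), re-protect the dirty pages with
   * KVM_CLEAR_DIRTY_LOG, then copy their contents.  first_page/num_pages
   * allow clearing sub-ranges of the slot at 64-page granularity.
   */
  static void sync_slot_sketch(int vm_fd, __u32 slot, void *bitmap, __u64 npages)
  {
          struct kvm_dirty_log get = { .slot = slot, .dirty_bitmap = bitmap };
          struct kvm_clear_dirty_log clear = {
                  .slot = slot,
                  .first_page = 0,
                  .num_pages = npages,
                  .dirty_bitmap = bitmap,
          };

          ioctl(vm_fd, KVM_GET_DIRTY_LOG, &get);
          ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
          /* ...now copy out the pages whose bits are set in the bitmap... */
  }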

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm: rename last argument to kvm_get_dirty_log_protect
Paolo Bonzini [Tue, 23 Oct 2018 00:18:42 +0000 (02:18 +0200)]
kvm: rename last argument to kvm_get_dirty_log_protect

When manual dirty log reprotection is enabled, kvm_get_dirty_log_protect's
pointer argument will always be false on exit, because no TLB flush is needed
until the manual re-protection operation.  Rename it from "is_dirty" to "flush",
which more accurately tells the caller what they have to do with it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agokvm: make KVM_CAP_ENABLE_CAP_VM architecture agnostic
Paolo Bonzini [Thu, 16 Feb 2017 09:40:56 +0000 (10:40 +0100)]
kvm: make KVM_CAP_ENABLE_CAP_VM architecture agnostic

The first such capability to be handled in virt/kvm/ will be manual
dirty page reprotection.
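
As a usage sketch, enabling the dirty-log capability described above now
goes through the same VM-level ioctl on every architecture (capability
name as introduced by this series):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int enable_manual_reprotect_sketch(int vm_fd)
  {
          struct kvm_enable_cap cap;

          memset(&cap, 0, sizeof(cap));
          cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT;  /* name as merged here */
          /* KVM_ENABLE_CAP on the VM fd, handled in virt/kvm/ for all archs */
          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }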

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
5 years agoMerge branch 'khdr_fix' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux...
Paolo Bonzini [Fri, 14 Dec 2018 11:33:31 +0000 (12:33 +0100)]
Merge branch 'khdr_fix' of git://git./linux/kernel/git/shuah/linux-kselftest into HEAD

Merge topic branch from Shuah.

5 years agoselftests: Fix test errors related to lib.mk khdr target
Shuah Khan [Thu, 13 Dec 2018 03:25:14 +0000 (20:25 -0700)]
selftests: Fix test errors related to lib.mk khdr target

Commit b2d35fa5fc80 ("selftests: add headers_install to lib.mk") added
khdr target to run headers_install target from the main Makefile. The
logic uses KSFT_KHDR_INSTALL and top_srcdir as controls to initialize
variables and include files to run headers_install from the top level
Makefile. There are a few problems with this logic.

1. Exposes top_srcdir to all tests
2. Common logic impacts all tests
3. Uses KSFT_KHDR_INSTALL, top_srcdir, and khdr in an ad hoc way. Tests
   add a "khdr" dependency in their Makefiles to TEST_PROGS_EXTENDED in
   some cases, and to STATIC_LIBS in others. This makes the framework
   confusing to use.

The common logic runs for all tests even when KSFT_KHDR_INSTALL isn't
defined by the test, and top_srcdir is initialized to a default value
when the test doesn't set it.  This works for tests without a sub-dir
structure, but tests with a sub-dir structure fail to build.

e.g.: make -C sparc64/drivers/ or make -C drivers/dma-buf

../../lib.mk:20: ../../../../scripts/subarch.include: No such file or directory
make: *** No rule to make target '../../../../scripts/subarch.include'.  Stop.

There is no reason to require all tests to define top_srcdir, and there
is no need to require tests to add a khdr dependency through ad hoc
changes to TEST_* and other variables.

Fix it by using KSFT_KHDR_INSTALL and top_srcdir consistently in the
tests that depend on headers_install.

Change the common logic to define the khdr target, and an "all" target
that depends on khdr, only when KSFT_KHDR_INSTALL is defined.

Only tests that depend on headers_install have to define the
KSFT_KHDR_INSTALL and top_srcdir variables; there is no need to specify
a khdr dependency in the test Makefiles.

Fixes: b2d35fa5fc80 ("selftests: add headers_install to lib.mk")
Cc: stable@vger.kernel.org
Signed-off-by: Shuah Khan <shuah@kernel.org>