Xiao Guangrong [Tue, 28 Sep 2010 09:03:14 +0000 (17:03 +0800)]
KVM: MMU: move access code parsing to FNAME(walk_addr) function
Move access code parsing from caller site to FNAME(walk_addr) function
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 27 Sep 2010 10:09:29 +0000 (18:09 +0800)]
KVM: MMU: audit: check whether have unsync sps after root sync
After the roots are synced, all unsync sps should be synced too; this patch adds a check to make
sure there are no unsync sps left in the vcpu's page tables
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 27 Sep 2010 10:07:59 +0000 (18:07 +0800)]
KVM: MMU: audit: introduce audit_printk to cleanup audit code
Introduce audit_printk, and record the audit point instead of the audit name
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 27 Sep 2010 10:07:07 +0000 (18:07 +0800)]
KVM: MMU: audit: unregister audit tracepoints before module unloaded
fix:
Call Trace:
 [<ffffffffa01e46ba>] ? kvm_mmu_pte_write+0x229/0x911 [kvm]
 [<ffffffffa01c6ba9>] ? gfn_to_memslot+0x39/0xa0 [kvm]
 [<ffffffffa01c6c26>] ? mark_page_dirty+0x16/0x2e [kvm]
 [<ffffffffa01c6d6f>] ? kvm_write_guest_page+0x67/0x7f [kvm]
 [<ffffffff81066fbd>] ? local_clock+0x2a/0x3b
 [<ffffffffa01d52ce>] emulator_write_phys+0x46/0x54 [kvm]
......
Code: Bad RIP value.
RIP [<ffffffffa0172056>] 0xffffffffa0172056
 RSP <ffff880134f69a70>
CR2: ffffffffa0172056
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 27 Sep 2010 10:06:16 +0000 (18:06 +0800)]
KVM: MMU: audit: fix vcpu's spte walking
With nested nested paging, long mode may be used to shadow a 32-bit/PAE paging
guest, so this patch fixes the vcpu spte walking to handle that case
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 27 Sep 2010 10:05:00 +0000 (18:05 +0800)]
KVM: MMU: set access bit for direct mapping
Set the accessed bit while setting up direct page tables (non-paging mode or NPT enabled);
this is good for the CPU's speculative accesses
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 27 Sep 2010 10:03:27 +0000 (18:03 +0800)]
KVM: MMU: cleanup for error mask set while walk guest page table
Small cleanup of how the page fault error code is set
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Xiao Guangrong [Mon, 27 Sep 2010 10:02:12 +0000 (18:02 +0800)]
KVM: MMU: update 'root_hpa' out of loop in PAE shadow path
The value of 'vcpu->arch.mmu.pae_root' is not modified, so we can update
'root_hpa' out of the loop.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Sheng Yang [Tue, 28 Sep 2010 08:33:32 +0000 (16:33 +0800)]
KVM: x86 emulator: Eliminate compilation warning in x86_decode_insn()
Eliminate:
arch/x86/kvm/emulate.c:801: warning: ‘sv’ may be used uninitialized in this function
on gcc 4.1.2
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Sun, 26 Sep 2010 11:00:53 +0000 (13:00 +0200)]
KVM: x86: Fix constant type in kvm_get_time_scale
Older gcc versions complain about the improper type (for x86-32), 4.5
seems to fix this silently. Still, we should use the right type
from the start.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Jan Kiszka [Tue, 28 Sep 2010 14:37:42 +0000 (16:37 +0200)]
KVM: VMX: Add AX to list of registers clobbered by guest switch
By chance this caused no harm so far. We overwrite AX during switch
to/from guest context, so we must declare this.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
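As a hedged illustration of the rule behind this fix (not the actual vmx.c guest-switch
assembly, which is far larger): any register an asm statement overwrites behind the
compiler's back must appear in its clobber list, otherwise gcc is free to keep a live
value there across the statement.

/* 64-bit-only sketch; 'val' and the function name are illustrative */
static inline void clobbers_ax(unsigned long val)
{
	asm volatile("mov %0, %%rax"		/* rax is overwritten here ... */
		     : : "r" (val)
		     : "rax", "cc", "memory");	/* ... so it must be declared */
}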
Arjan Koers [Mon, 2 Aug 2010 21:35:28 +0000 (23:35 +0200)]
KVM guest: Move a printk that's using the clock before it's ready
Fix a hang during SMP kernel boot on KVM that showed up after commit
489fb490dbf8dab0249ad82b56688ae3842a79e8 (2.6.35) and
59aab522154a2f17b25335b63c1cf68a51fb6ae0 (2.6.34.1). The problem only
occurs when CONFIG_PRINTK_TIME is set.
KVM-Stable-Tag.
Signed-off-by: Arjan Koers <0h61vkll2ly8@xutrox.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Zachary Amsden [Sun, 19 Sep 2010 00:38:15 +0000 (14:38 -1000)]
KVM: x86: TSC catchup mode
Negate the effects of AN TYM spell while kvm thread is preempted by tracking
conversion factor to the highest TSC rate and catching the TSC up when it has
fallen behind the kernel view of time. Note that once triggered, we don't
turn off catchup mode.
A slightly more clever version of this is possible, which only does catchup
when TSC rate drops, and which specifically targets only CPUs with broken
TSC, but since these all are considered unstable_tsc(), this patch covers
all necessary cases.
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
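As a hedged sketch of the catchup idea only (the helpers named below are assumptions for
illustration, not the actual x86.c interfaces):

/* compare where the guest-visible TSC should be, according to the kernel
 * clock and the highest observed TSC rate, with where it actually is, and
 * pull it forward if preemption let it fall behind */
u64 expected_tsc = nsec_to_guest_tsc(vcpu, kernel_ns);		/* assumed helper */
u64 current_tsc  = read_guest_tsc(vcpu);			/* assumed helper */

if (current_tsc < expected_tsc)
	adjust_tsc_offset(vcpu, expected_tsc - current_tsc);	/* assumed helper */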
Zachary Amsden [Sun, 19 Sep 2010 00:38:14 +0000 (14:38 -1000)]
KVM: x86: Rename timer function
This just changes some names to better reflect the usage they
will be given. Separated out to keep confusion to a minimum.
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Zachary Amsden [Sun, 19 Sep 2010 00:38:13 +0000 (14:38 -1000)]
KVM: x86: Make math work for other scales
The math in kvm_get_time_scale relies on the fact that
NSEC_PER_SEC < 2^32. To use the same function to compute
arbitrary time scales, we must extend the first reduction
step to shrink the base rate to a 32-bit value, and
possibly reduce the scaled rate to a 32-bit value as well.
Note we must take care to avoid an arithmetic overflow
when scaling up the tps32 value (this could not happen
with the fixed scaled value of NSEC_PER_SEC, but can
happen with scaled rates above 2^31).
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
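A standalone, hedged sketch of the reduction described above, written as user-space C for
clarity; it shows the idea, not the kernel's kvm_get_time_scale():

#include <stdint.h>
#include <stdio.h>

/* derive a 32.32 fixed-point multiplier for converting ticks at base_hz
 * into ticks at scaled_hz, keeping the operands small enough for a
 * 32x32->64 multiply in the fast path */
static uint64_t scale_ratio_sketch(uint64_t scaled_hz, uint64_t base_hz)
{
	/* shift both rates down together: the ratio is preserved and the
	 * base rate ends up fitting in 32 bits (the first reduction step) */
	while (base_hz > 0xffffffffULL || scaled_hz > 0xffffffffULL) {
		base_hz >>= 1;
		scaled_hz >>= 1;
	}
	if (!base_hz)
		return 0;	/* degenerate input, not handled here */

	/* (scaled_hz << 32) cannot overflow because scaled_hz now fits in
	 * 32 bits; this is the overflow the note above warns about when the
	 * scaled rate is not reduced first */
	return (scaled_hz << 32) / base_hz;
}

int main(void)
{
	/* e.g. a 2.5 GHz guest rate expressed against a 1 GHz base rate */
	uint64_t mult = scale_ratio_sketch(2500000000ULL, 1000000000ULL);
	printf("multiplier = %#llx\n", (unsigned long long)mult);	/* 0x280000000 */
	return 0;
}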
Avi Kivity [Tue, 21 Sep 2010 17:59:44 +0000 (19:59 +0200)]
KVM: cpu_relax() during spin waiting for reboot
It doesn't really matter, but if we spin, we should spin in a more relaxed
manner. This way, if something goes wrong at least it won't contribute to
global warming.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
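The pattern in question, sketched; the loop condition is a stand-in for the real spin
site in KVM's reboot handling:

while (!other_cpu_finished)	/* illustrative condition */
	cpu_relax();		/* PAUSE on x86: a cheaper, cooler busy-wait */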
Avi Kivity [Sun, 19 Sep 2010 12:34:08 +0000 (14:34 +0200)]
KVM: VMX: Respect interrupt window in big real mode
If an interrupt is pending, we need to stop emulation so we
can inject it.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Mohammed Gamal [Sun, 19 Sep 2010 12:34:07 +0000 (14:34 +0200)]
KVM: VMX: Emulated real mode interrupt injection
Replace the inject-as-software-interrupt hack we currently have with
emulated injection.
Signed-off-by: Mohammed Gamal <m.gamal005@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Mohammed Gamal [Sun, 19 Sep 2010 12:34:06 +0000 (14:34 +0200)]
KVM: Add kvm_inject_realmode_interrupt() wrapper
This adds a wrapper function kvm_inject_realmode_interrupt() around the
emulator function emulate_int_real() to allow real mode interrupt injection.
[avi: initialize operand and address sizes before emulating interrupts]
[avi: initialize rip for real mode interrupt injection]
[avi: clear interrupt pending flag after emulating interrupt injection]
Signed-off-by: Mohammed Gamal <m.gamal005@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Mohammed Gamal [Sun, 19 Sep 2010 12:34:05 +0000 (14:34 +0200)]
KVM: x86 emulator: Expose emulate_int_real()
Signed-off-by: Mohammed Gamal <m.gamal005@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Hillf Danton [Sat, 18 Sep 2010 00:41:02 +0000 (08:41 +0800)]
KVM: MMU: fix counting of rmap entries in rmap_add()
It seems that rmap entries are undercounted.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Xiao Guangrong [Mon, 20 Sep 2010 14:17:48 +0000 (22:17 +0800)]
KVM: document 'kvm.mmu_audit' parameter
Document this parameter in Documentation/kernel-parameters.txt
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Xiao Guangrong [Mon, 20 Sep 2010 14:16:45 +0000 (22:16 +0800)]
KVM: fix the description of kvm-amd.nested in documentation
'kvm-amd.nested' is now enabled by default, so fix the documentation accordingly
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Gleb Natapov [Mon, 20 Sep 2010 08:15:32 +0000 (10:15 +0200)]
KVM: SVM: do not generate "external interrupt exit" if other exit is pending
Nested SVM checks for an external interrupt after injecting a nested exception.
If an external interrupt is pending, the code generates an "external
interrupt exit" and overwrites the previous exit info. If the previously injected
exception already generated an exit, that exit is lost.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Avi Kivity [Sun, 19 Sep 2010 16:44:07 +0000 (18:44 +0200)]
KVM: Convert PIC lock from raw spinlock to ordinary spinlock
The PIC code used to be called from preempt_disable() context, which
wasn't very good for PREEMPT_RT. That is no longer the case, so move
back from raw_spinlock_t to spinlock_t.
Signed-off-by: Avi Kivity <avi@redhat.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Zachary Amsden [Sun, 19 Sep 2010 00:38:12 +0000 (14:38 -1000)]
KVM: x86: Fix kvmclock bug
If preempted after kvmclock values are updated, but before hardware
virtualization is entered, the last tsc time as read by the guest is
never set. It underflows the next time kvmclock is updated if there
has not yet been a successful entry / exit into hardware virt.
Fix this by simply setting last_tsc to the newly read tsc value so
that any computed nsec advance of kvmclock is nulled.
Signed-off-by: Zachary Amsden <zamsden@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Joerg Roedel [Tue, 14 Sep 2010 15:46:12 +0000 (17:46 +0200)]
KVM: MMU: Don't track nested fault info in error-code
This patch moves the detection whether a page-fault was
nested or not out of the error code and moves it into a
separate variable in the fault struct.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Avi Kivity [Thu, 22 Jul 2010 10:09:54 +0000 (13:09 +0300)]
KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration
No code changes.
Signed-off-by: Avi Kivity <avi@redhat.com>
Avi Kivity [Tue, 20 Jul 2010 12:06:17 +0000 (15:06 +0300)]
KVM: Non-atomic interrupt injection
Change the interrupt injection code to work from preemptible, interrupts
enabled context. This works by adding a ->cancel_injection() operation
that undoes an injection in case we were not able to actually enter the guest
(this condition could never happen with atomic injection).
Signed-off-by: Avi Kivity <avi@redhat.com>
Avi Kivity [Tue, 20 Jul 2010 11:43:23 +0000 (14:43 +0300)]
KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry
Currently vmx_complete_interrupts() can decode event information from vmx
exit fields into the generic kvm event queues. Make it able to decode
the information from the entry fields as well by parametrizing it.
Signed-off-by: Avi Kivity <avi@redhat.com>
Avi Kivity [Thu, 22 Jul 2010 09:54:21 +0000 (12:54 +0300)]
KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts()
This allows reuse of vmx_complete_interrupts() for cancelling injections.
Signed-off-by: Avi Kivity <avi@redhat.com>
Avi Kivity [Tue, 20 Jul 2010 11:31:20 +0000 (14:31 +0300)]
KVM: VMX: Split up vmx_complete_interrupts()
vmx_complete_interrupts() does too much, split it up:
- vmx_vcpu_run() gets the "cache important vmcs fields" part
- a new vmx_complete_atomic_exit() gets the parts that must be done atomically
- a new vmx_recover_nmi_blocking() does what its name says
- vmx_complete_interrupts() retains the event injection recovery code
This helps in reducing the work done in atomic context.
Signed-off-by: Avi Kivity <avi@redhat.com>
Avi Kivity [Tue, 27 Jul 2010 09:30:24 +0000 (12:30 +0300)]
KVM: Check for pending events before attempting injection
Instead of blindly attempting to inject an event before each guest entry,
check for a possible event first in vcpu->requests. Sites that can trigger
event injection are modified to set KVM_REQ_EVENT:
- interrupt, nmi window opening
- ppr updates
- i8259 output changes
- local apic irr changes
- rflags updates
- gif flag set
- event set on exit
This improves non-injecting entry performance, and sets the stage for
non-atomic injection.
Signed-off-by: Avi Kivity <avi@redhat.com>
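A hedged sketch of the request-driven pattern: kvm_make_request()/kvm_check_request() are
KVM's real vcpu->requests mini-API, while the surrounding code is paraphrased for
illustration.

/* at a site that may have opened an injection window, e.g. an i8259
 * output change or an rflags update: */
kvm_make_request(KVM_REQ_EVENT, vcpu);

/* in the entry path, only attempt injection if something actually
 * changed since the last entry: */
if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win)
	inject_pending_event(vcpu);	/* helper name assumed here */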
Avi Kivity [Mon, 13 Sep 2010 14:45:28 +0000 (16:45 +0200)]
KVM: MMU: Fix regression with ept memory types merged into non-ept page tables
Commit "KVM: MMU: Make tdp_enabled a mmu-context parameter" made real-mode
set ->direct_map, and changed the code that merges in the memory type to depend
on direct_map instead of tdp_enabled. However, in this case what really
matters is tdp, not direct_map, since tdp changes the pte format regardless
of whether the mapping is direct or not.
As a result, real-mode shadow mappings got corrupted with ept memory types.
The result was a huge slowdown, likely due to the cache being disabled.
Change it back as the simplest fix for the regression (real fix is to move
all that to vmx code, and not use tdp_enabled as a synonym for ept).
Signed-off-by: Avi Kivity <avi@redhat.com>
Avi Kivity [Sun, 12 Sep 2010 14:39:11 +0000 (16:39 +0200)]
KVM: Document that KVM_GET_SUPPORTED_CPUID may return emulated values
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:31:06 +0000 (17:31 +0200)]
KVM: X86: Report SVM bit to userspace only when supported
This patch fixes a bug in KVM where it _always_ reports the
support of the SVM feature to userspace. But KVM only
supports SVM on AMD hardware and only when it is enabled in
the kernel module. This patch fixes the wrong reporting.
Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:31:05 +0000 (17:31 +0200)]
KVM: SVM: Report Nested Paging support to userspace
This patch implements the reporting of the nested paging
feature support to userspace.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:31:04 +0000 (17:31 +0200)]
KVM: SVM: Expect two more candidates for exit_int_info
This patch adds INTR and NMI intercepts to the list of
expected intercepts with an exit_int_info set. While this
can't happen on bare metal, it is architecturally legal and may
happen with KVM's SVM emulation.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:31:03 +0000 (17:31 +0200)]
KVM: SVM: Initialize Nested Nested MMU context on VMRUN
This patch adds code to initialize the Nested Nested Paging
MMU context when the L1 guest executes a VMRUN instruction
and has nested paging enabled in its VMCB.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:31:02 +0000 (17:31 +0200)]
KVM: SVM: Implement MMU helper functions for Nested Nested Paging
This patch adds the helper functions which will be used in
the mmu context for handling nested nested page faults.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:31:01 +0000 (17:31 +0200)]
KVM: MMU: Track NX state in struct kvm_mmu
With Nested Paging emulation the NX state between the two
MMU contexts may differ. To make sure that always the right
fault error code is recorded this patch moves the NX state
into struct kvm_mmu so that the code can distinguish between
L1 and L2 NX state.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:31:00 +0000 (17:31 +0200)]
KVM: MMU: Allow long mode shadows for legacy page tables
Currently the KVM softmmu implementation can not shadow a 32
bit legacy or PAE page table with a long mode page table.
This is a required feature for nested paging emulation
because the nested page table must always be in host format.
So this patch implements the missing pieces to allow long
mode page tables for these page table types.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:59 +0000 (17:30 +0200)]
KVM: MMU: Refactor mmu_alloc_roots function
This patch factors out the direct-mapping paths of the
mmu_alloc_roots function into a separate function. This
makes it a lot easier to avoid all the unnecessary checks
done in the shadow path which may break when running direct.
In fact, this patch already fixes a problem when running PAE
guests on a PAE shadow page table.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:58 +0000 (17:30 +0200)]
KVM: MMU: Introduce kvm_pdptr_read_mmu
This function is implemented to load the pdptr pointers of
the currently running guest (l1 or l2 guest). Therefore it
takes the current paging mode into account and can read pdptrs
out of l2 guest physical memory.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:57 +0000 (17:30 +0200)]
KVM: MMU: Add kvm_mmu parameter to load_pdptrs function
This function needs to be able to load the pdptrs from any
mmu context currently in use. So change this function to
take a kvm_mmu parameter to fit these needs.
As a side effect this patch also moves the cached pdptrs
from vcpu_arch into the kvm_mmu struct.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:56 +0000 (17:30 +0200)]
KVM: X86: Propagate fetch faults
KVM currently ignores fetch faults in the instruction
emulator. With nested-npt we could have such faults. This
patch adds the code to handle these.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:55 +0000 (17:30 +0200)]
KVM: MMU: Propagate the right fault back to the guest after gva_to_gpa
This patch implements logic to make sure that either a
page-fault/page-fault-vmexit or a nested-page-fault-vmexit
is propagated back to the guest.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
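A hedged sketch of the dispatch being described; the helper and field names are
illustrative, building on the fault struct introduced earlier in this series:

static void propagate_fault_sketch(struct kvm_vcpu *vcpu)
{
	if (mmu_is_nested(vcpu))
		/* the fault hit while walking L2's page tables: reflect it
		 * to L1 as a nested-page-fault vmexit */
		vcpu->arch.nested_mmu.inject_page_fault(vcpu);
	else
		/* ordinary case: deliver a #PF to the running guest */
		vcpu->arch.mmu.inject_page_fault(vcpu);
}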
Joerg Roedel [Fri, 10 Sep 2010 15:30:54 +0000 (17:30 +0200)]
KVM: MMU: Introduce init_kvm_nested_mmu()
This patch introduces the init_kvm_nested_mmu() function
which is used to re-initialize the nested mmu when the l2
guest changes its paging mode.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:53 +0000 (17:30 +0200)]
KVM: MMU: Introduce kvm_read_nested_guest_page()
This patch introduces the kvm_read_guest_page_x86 function
which reads from the physical memory of the guest. If the
guest is running in guest-mode itself with nested paging
enabled it will read from the guest's guest physical memory
instead.
The patch also changes the code to use this function
where it is necessary.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:52 +0000 (17:30 +0200)]
KVM: MMU: Make walk_addr_generic capable for two-level walking
This patch uses kvm_read_guest_page_tdp to make the
walk_addr_generic functions suitable for two-level page
table walking.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:51 +0000 (17:30 +0200)]
KVM: X86: Add kvm_read_guest_page_mmu function
This patch adds a function which can read from the guest's
physical memory or from the guest's guest physical memory.
This will be used in the two-dimensional page table walker.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:50 +0000 (17:30 +0200)]
KVM: MMU: Implement nested gva_to_gpa functions
This patch adds the functions to do a nested l2_gva to
l1_gpa page table walk.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:49 +0000 (17:30 +0200)]
KVM: X86: Introduce pointer to mmu context used for gva_to_gpa
This patch introduces the walk_mmu pointer which points to
the mmu-context currently used for gva_to_gpa translations.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:48 +0000 (17:30 +0200)]
KVM: MMU: Add infrastructure for two-level page walker
This patch introduces a mmu-callback to translate gpa
addresses in the walk_addr code. This is later used to
translate l2_gpa addresses into l1_gpa addresses.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
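Sketched shape of the callback, with assumed names and signature (the real field added to
kvm_mmu may differ):

/* one walker step: for an ordinary guest translate_gpa is the identity,
 * for a nested guest it is itself a walk of L1's nested page table */
gpa_t pte_gpa = gpte_addr_sketch(walker);		/* L2 gpa of the pte (assumed helper) */
pte_gpa = mmu->translate_gpa(vcpu, pte_gpa);		/* L2 gpa -> L1 gpa */
kvm_read_guest(vcpu->kvm, pte_gpa, &pte, sizeof(pte));	/* read through the L1 gpa */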
Joerg Roedel [Fri, 10 Sep 2010 15:30:47 +0000 (17:30 +0200)]
KVM: MMU: Introduce generic walk_addr function
This is the first patch in the series towards a generic
walk_addr implementation which could walk two-dimensional
page tables in the end. In this first step the walk_addr
function is renamed into walk_addr_generic which takes a
mmu context as an additional parameter.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:46 +0000 (17:30 +0200)]
KVM: MMU: Track page fault data in struct vcpu
This patch introduces a struct with two new fields in
vcpu_arch for x86:
* fault.address
* fault.error_code
This will be used to correctly propagate page faults back
into the guest when we could have either an ordinary page
fault or a nested page fault. In the case of a nested page
fault the fault-address is different from the original
address that should be walked. So we need to keep track
of the real fault address.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
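The shape of the addition, sketched from the description above; the exact layout inside
struct kvm_vcpu_arch is not reproduced here:

struct {
	u64 address;	/* address to report to the guest; for a nested page
			 * fault this differs from the gva the walk started at */
	u32 error_code;	/* page-fault error code that goes with it */
} fault;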
Joerg Roedel [Fri, 10 Sep 2010 15:30:45 +0000 (17:30 +0200)]
KVM: MMU: Let is_rsvd_bits_set take mmu context instead of vcpu
This patch changes is_rsvd_bits_set() function prototype to
take only a kvm_mmu context instead of a full vcpu.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:44 +0000 (17:30 +0200)]
KVM: MMU: Introduce kvm_init_shadow_mmu helper function
Some logic of the init_kvm_softmmu function is required to
build the Nested Nested Paging context. So factor the
required logic into a separate function and export it.
Also make the whole init path suitable for more than one mmu
context.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:43 +0000 (17:30 +0200)]
KVM: MMU: Introduce inject_page_fault function pointer
This patch introduces an inject_page_fault function pointer
into struct kvm_mmu which will be used to inject a page
fault. This will be used later when Nested Nested Paging is
implemented.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
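A hedged sketch of the wiring this enables; both callback targets below are illustrative
names, not the real initialization code:

struct kvm_mmu_sketch {
	void (*inject_page_fault)(struct kvm_vcpu *vcpu);
	/* ... rest of the mmu context ... */
};

/* a plain shadow mmu reports faults as a #PF into the guest, while a
 * nested-npt mmu reports them as a nested-page-fault vmexit to L1 */
mmu->inject_page_fault = nested ? npf_vmexit_sketch : guest_pf_sketch;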
Joerg Roedel [Fri, 10 Sep 2010 15:30:42 +0000 (17:30 +0200)]
KVM: MMU: Introduce get_cr3 function pointer
This function pointer in the MMU context is required to
implement Nested Nested Paging.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:41 +0000 (17:30 +0200)]
KVM: X86: Introduce a tdp_set_cr3 function
This patch introduces a special set_tdp_cr3 function pointer
in kvm_x86_ops which is only used for tdp enabled mmu
contexts. This allows removing some hacks from the svm code.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:40 +0000 (17:30 +0200)]
KVM: MMU: Make set_cr3 a function pointer in kvm_mmu
This is necessary to implement Nested Nested Paging. As a
side effect this allows some cleanups in the SVM nested
paging code.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:39 +0000 (17:30 +0200)]
KVM: MMU: Make tdp_enabled a mmu-context parameter
This patch changes the tdp_enabled flag from its global
meaning to the mmu-context and renames it to direct_map
there. This is necessary for Nested SVM with emulation of
Nested Paging where we need an extra MMU context to shadow
the Nested Nested Page Table.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Joerg Roedel [Fri, 10 Sep 2010 15:30:38 +0000 (17:30 +0200)]
KVM: MMU: Check for root_level instead of long mode
The walk_addr function checks for !is_long_mode in its 64
bit version. But what is meant here is a check for pae
paging. Change the condition to really check for pae paging
so that it also works with nested nested paging.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Jes Sorensen [Thu, 9 Sep 2010 10:06:46 +0000 (12:06 +0200)]
KVM: x86: Emulate MSR_EBC_FREQUENCY_ID
Some operating systems store data about the host processor at the
time of installation, and when booted on a more up-to-date CPU they try
to read MSR_EBC_FREQUENCY_ID. This has been found with XP.
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
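The shape of such an MSR emulation, sketched; the value returned below is a placeholder,
not the one the real patch reports:

case MSR_EBC_FREQUENCY_ID:
	/* return a fixed, benign value so guests like XP that cached this
	 * MSR at install time don't take a #GP on newer CPUs */
	data = 0;	/* placeholder value */
	break;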
Jes Sorensen [Thu, 9 Sep 2010 10:06:45 +0000 (12:06 +0200)]
x86: Define MSR_EBC_FREQUENCY_ID
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Roedel, Joerg [Fri, 3 Sep 2010 12:21:40 +0000 (14:21 +0200)]
KVM: SVM: Clean up rip handling in vmrun emulation
This patch changes the rip handling in the vmrun emulation
path from using next_rip to the generic kvm register access
functions.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Joerg Roedel [Thu, 2 Sep 2010 15:29:46 +0000 (17:29 +0200)]
KVM: SVM: Restore correct registers after sel_cr0 intercept emulation
This patch implements restoring of the correct rip, rsp, and
rax after the svm emulation in KVM injected a selective_cr0
write intercept into the guest hypervisor. The problem was
that the vmexit is emulated in the instruction emulation
which later commits the registers right after the write-cr0
instruction. So the l1 guest will continue to run with the
l2 rip, rsp and rax resulting in unpredictable behavior.
This patch is not the final word, it is just an easy patch
to fix the issue. The real fix will be done when the
instruction emulator is made aware of nested virtualization.
Until this is done this patch fixes the issue and provides
an easy way to fix this in -stable too.
Cc: stable@kernel.org
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Joerg Roedel [Thu, 2 Sep 2010 15:29:45 +0000 (17:29 +0200)]
KVM: MMU: Fix 32 bit legacy paging with NPT
This patch fixes 32 bit legacy paging with NPT enabled. The
mmu_check_root call on the top-level of the loop causes
root_gfn to take values (in the tdp_enabled path) which are
outside of guest memory. So the mmu_check_root call fails at
some point in the loop iteration, causing the guest to
triple-fault.
This patch changes the mmu_check_root calls to the places
where they are really necessary. As a side-effect it
introduces a check for the root of a pae page table too.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Alexander Graf [Fri, 3 Sep 2010 08:22:19 +0000 (10:22 +0200)]
KVM: PPC: Move of include to __KERNEL__ section
We have to protect the include for linux/of.h by __KERNEL__ so it doesn't
accidentally get referenced outside.
This patch fixes this and makes the tree compile again.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
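The fix pattern, sketched; the affected header is not named in this entry, so treat the
placement as illustrative:

#ifdef __KERNEL__
#include <linux/of.h>	/* kernel-only header: must not leak into the
			 * user-space view of this file */
#endif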
Alexander Graf [Tue, 31 Aug 2010 02:25:39 +0000 (04:25 +0200)]
KVM: PPC: Add documentation for magic page enhancements
This documents how to detect additional features inside the magic
page when a guest maps it.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Tue, 31 Aug 2010 01:45:39 +0000 (03:45 +0200)]
KVM: PPC: Fix compile error in e500_tlb.c
The e500_tlb.c file didn't compile for me due to the following error:
arch/powerpc/kvm/e500_tlb.c: In function ‘kvmppc_e500_shadow_map’:
arch/powerpc/kvm/e500_tlb.c:300: error: format ‘%lx’ expects type ‘long unsigned int’, but argument 2 has type ‘gfn_t’
So let's explicitly cast the argument to make printk happy.
Signed-off-by: Alexander Graf <agraf@suse.de>
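The usual shape of such a fix, sketched (the actual format string in e500_tlb.c is longer
than this):

/* gfn_t may be 32 or 64 bits depending on configuration, so cast to a
 * known type before handing it to a %lx conversion */
printk(KERN_DEBUG "shadow map: gfn=%lx\n", (unsigned long)gfn);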
Kyle Moffett [Mon, 30 Aug 2010 15:38:39 +0000 (11:38 -0400)]
KVM: PPC: e500_tlb: Fix a minor copy-paste tracing bug
The kvmppc_e500_stlbe_invalidate() function was trying to pass too many
parameters to trace_kvm_stlb_inval(). This appears to be a bad
copy-paste from a call to trace_kvm_stlb_write().
Signed-off-by: Kyle Moffett <Kyle.D.Moffett@boeing.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Tue, 31 Aug 2010 00:03:32 +0000 (02:03 +0200)]
KVM: PPC: Document KVM_INTERRUPT ioctl
This adds some documentation for the KVM_INTERRUPT special cases that
PowerPC now implements.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 30 Aug 2010 12:03:24 +0000 (14:03 +0200)]
KVM: PPC: Implement level interrupts for BookE
BookE also wants to support level based interrupts, so let's implement
all the necessary logic there. We need to trick a bit here because the
irqprios are 1:1 assigned to architecture defined values. But since there
is some space left there, we can just pick a random one and move it later
on - it's internal anyways.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 30 Aug 2010 11:50:45 +0000 (13:50 +0200)]
KVM: PPC: Expose level based interrupt cap
Now that we have all the level interrupt magic in place, let's
expose the capability to user space, so it can make use of it!
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 30 Aug 2010 08:44:15 +0000 (10:44 +0200)]
KVM: PPC: Implement Level interrupts on Book3S
The current interrupt logic is just completely broken. We get a notification
from user space, telling us that an interrupt is there. But then user space
expects us to just acknowledge the interrupt once we deliver it to the
guest.
This is not how real hardware works though. On real hardware, the interrupt
controller pulls the external interrupt line until it gets notified that the
interrupt was received.
So in reality we have two events: pulling and letting go of the interrupt line.
To maintain backwards compatibility, I added a new request for the pulling
part. The letting go part was implemented earlier already.
With this in place, we can now finally start guests that do not randomly stall
and stop working at random times.
This patch implements above logic for Book3S.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Tue, 17 Aug 2010 20:08:39 +0000 (22:08 +0200)]
KVM: PPC: Enable napping only for Book3s_64
Previously I incorrectly enabled napping also for BookE, which would result in
needless dcache flushes. Since we only need to force-enable napping on
Book3s_64 (because it doesn't go into MSR_POW otherwise), we can just #ifdef
that code for this particular platform.
Reported-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Hollis Blanchard [Sat, 7 Aug 2010 17:33:58 +0000 (10:33 -0700)]
KVM: PPC: allow ppc440gp to pass the compatibility check
Match only the first part of cur_cpu_spec->platform.
440GP (the first 440 processor) is identified by the string "ppc440gp", while
all later 440 processors use simply "ppc440".
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Hollis Blanchard [Sat, 7 Aug 2010 17:33:57 +0000 (10:33 -0700)]
KVM: PPC: fix compilation of "dump tlbs" debug function
Missing local variable.
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Hollis Blanchard [Sat, 7 Aug 2010 17:33:56 +0000 (10:33 -0700)]
KVM: PPC: initialize IVORs in addition to IVPR
Developers can now tell at a glance the exact type of the premature interrupt,
instead of just knowing that there was some premature interrupt.
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Sun, 15 Aug 2010 06:39:19 +0000 (08:39 +0200)]
KVM: PPC: Don't put MSR_POW in MSR
On Book3S a mtmsr with the MSR_POW bit set indicates that the OS is
idle and only needs to be woken up on the next interrupt.
Now, unfortunately we let that bit slip into the stored MSR value which
is not what the real CPU does, so that we ended up executing code like
this:
r = mfmsr();
/* r contains MSR_POW */
mtmsr(r | MSR_EE);
This obviously breaks, as we're going into idle mode in code sections that
don't expect to be idling.
This patch masks MSR_POW out of the stored MSR value on wakeup, making
guests happy again.
Signed-off-by: Alexander Graf <agraf@suse.de>
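A minimal sketch of the masking described above; the accessor names and the wakeup hook
around them are assumptions:

/* on wakeup, drop the power-save request from the stored MSR so a later
 * mfmsr/mtmsr round-trip cannot accidentally re-enter idle */
ulong msr = get_shared_msr_sketch(vcpu);	/* assumed accessor */
set_shared_msr_sketch(vcpu, msr & ~MSR_POW);	/* assumed accessor */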
Alexander Graf [Sun, 15 Aug 2010 06:04:24 +0000 (08:04 +0200)]
KVM: PPC: Implement correct SID mapping on Book3s_32
Up until now we were doing segment mappings wrong on Book3s_32. For Book3s_64
we were using a trick where we know that a single mmu_context gives us 16 bits
of context ids.
The mm system on Book3s_32 instead uses a clever algorithm to distribute VSIDs
across the available range, so a context id really only gives us 16 available
VSIDs.
To keep at least a few guest processes in the SID shadow, let's map a number of
contexts that we can use as VSID pool. This makes the code be actually correct
and shouldn't hurt performance too much.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Tue, 17 Aug 2010 09:41:44 +0000 (11:41 +0200)]
KVM: PPC: Force enable nap on KVM
There are some heuristics in the PPC power management code that try to find
out if the particular hardware we're running on supports proper power management
or just hangs the machine when going into nap mode.
Since we know that KVM is safe with nap, let's force enable it in the PV code
once we're certain that we are on a KVM VM.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Thu, 5 Aug 2010 13:44:41 +0000 (15:44 +0200)]
KVM: PPC: Make PV mtmsrd L=1 work with r30 and r31
We had an arbitrary limitation in mtmsrd L=1 that kept us from using r30 and
r31 as input registers. Let's get rid of that and get more potential speedups!
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Thu, 5 Aug 2010 10:24:40 +0000 (12:24 +0200)]
KVM: PPC: Update int_pending also on dequeue
When having a decrementor interrupt pending, the dequeuing happens manually
through an mtdec instruction. This instruction simply calls dequeue on that
interrupt, so the int_pending hint doesn't get updated.
This patch enables updating the int_pending hint also on dequeue, thus
correctly enabling guests to stay in guest contexts more often.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Thu, 5 Aug 2010 09:26:04 +0000 (11:26 +0200)]
KVM: PPC: Make PV mtmsr work with r30 and r31
So far we've been restricting ourselves to r0-r29 as registers an mtmsr
instruction could use. This was bad, as there are some code paths in
Linux actually using r30.
So let's instead handle all registers gracefully and get rid of that
stupid limitation.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Tue, 3 Aug 2010 08:39:35 +0000 (10:39 +0200)]
KVM: PPC: Add mtsrin PV code
This is the guest side of the mtsr acceleration. Using this a guest can now
call mtsrin with almost no overhead as long as it ensures that it only uses
it with (MSR_IR|MSR_DR) == 0. Linux does that, so we're good.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Tue, 3 Aug 2010 00:29:27 +0000 (02:29 +0200)]
KVM: PPC: Put segment registers in shared page
Now that the actual mtsr doesn't do anything anymore, we can move the sr
contents over to the shared page, so a guest can directly read and write
its sr contents from guest context.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 23:06:11 +0000 (01:06 +0200)]
KVM: PPC: Interpret SR registers on demand
Right now we examine the contents of Book3s_32's segment registers when
a register is written and put the interpreted contents into a struct.
There are two reasons this is bad. For starters, the struct has worse real-time
performance, as it occupies more RAM.
segment registers being interpreted from their raw values, we can put them in
the shared page, allowing guests to mess with them directly.
This patch makes the internal representation of SRs be u32s.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 21:23:04 +0000 (23:23 +0200)]
KVM: PPC: Move BAT handling code into spr handler
The current approach duplicates the spr->bat finding logic and makes it harder
to reuse the actually used variables. So let's move everything down to the spr
handler.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Tue, 3 Aug 2010 09:32:56 +0000 (11:32 +0200)]
KVM: PPC: Add feature bitmap for magic page
We will soon add SR PV support to the shared page, so we need some
infrastructure that allows the guest to query for features KVM exports.
This patch adds a second return value to the magic mapping that
indicates to the guest which features are available.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 20:05:00 +0000 (22:05 +0200)]
KVM: PPC: Remove unused define
The define VSID_ALL is unused. Let's remove it.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 19:48:53 +0000 (21:48 +0200)]
KVM: PPC: Revert "KVM: PPC: Use kernel hash function"
It turns out the in-kernel hash function is sub-optimal for our subtle
hash inputs where every bit is significant. So let's revert to the original
hash functions.
This reverts commit 05340ab4f9a6626f7a2e8f9fe5397c61d494f445.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 19:25:33 +0000 (21:25 +0200)]
KVM: PPC: Move slb debugging to tracepoints
This patch moves debugging printks for shadow SLB debugging over to tracepoints.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 19:24:48 +0000 (21:24 +0200)]
KVM: PPC: Make invalidation code more reliable
There is a race condition in the pte invalidation code path where we can't
be sure if a pte was invalidated already. So let's move the spin lock around
to get rid of the race.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 18:11:39 +0000 (20:11 +0200)]
KVM: PPC: Don't flush PTEs on NX/RO hit
When hitting a no-execute or read-only data/inst storage interrupt we were
flushing the respective PTE so we're sure it gets properly overwritten next time.
According to the spec, this is unnecessary though. The guest issues a tlbie
anyways, so we're safe to just keep the PTE around and have it manually removed
from the guest, saving us a flush.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 14:08:22 +0000 (16:08 +0200)]
KVM: PPC: Preload magic page when in kernel mode
When the guest jumps into kernel mode and has the magic page mapped, there's a
very high chance that it will also use it. So let's detect that scenario and
map the segment accordingly.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 11:40:30 +0000 (13:40 +0200)]
KVM: PPC: Add tracepoints for generic spte flushes
The different ways of flushing shadow ptes have their own debug prints which use
stupid old printk.
Let's move them to tracepoints, making them more easily available, faster and
possible to activate on demand.
Signed-off-by: Alexander Graf <agraf@suse.de>
Alexander Graf [Mon, 2 Aug 2010 11:38:18 +0000 (13:38 +0200)]
KVM: PPC: Fix sid map search after flush
After a flush the sid map contained lots of entries with 0 for their gvsid and
hvsid value. Unfortunately, 0 can be a real value the guest searches for when
looking up a vsid so it would incorrectly find the host's 0 hvsid mapping which
doesn't belong to our sid space.
So let's also check the valid bit that indicates that the sid entry we're
looking at actually contains useful data.
Signed-off-by: Alexander Graf <agraf@suse.de>