x86/kvm/lapic: preserve gfn_to_hva_cache len on cache reinit
author		Vitaly Kuznetsov <vkuznets@redhat.com>
		Tue, 16 Oct 2018 16:50:06 +0000 (18:50 +0200)
committer	Paolo Bonzini <pbonzini@redhat.com>
		Tue, 16 Oct 2018 22:30:17 +0000 (00:30 +0200)
vcpu->arch.pv_eoi is accessible through both HV_X64_MSR_VP_ASSIST_PAGE and
MSR_KVM_PV_EOI_EN, so on migration userspace may restore them in either
order. The values match, but kvm_lapic_enable_pv_eoi() uses a different
length for each: for the Hyper-V MSR it is the whole struct
hv_vp_assist_page, for the KVM-native MSR it is 8. If the KVM-native MSR
is restored last, the cache is re-initialized with len=8, so a subsequent
attempt to access the VP assist page beyond 8 bytes with
kvm_read_guest_cached() fails.
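
For illustration only, a simplified sketch of the two paths; the helper
names below are made up and this is not an exact quote of the upstream
MSR handlers. Both paths end up in kvm_lapic_enable_pv_eoi(), but with
different lengths:

  /* Illustrative only; not the exact upstream call sites. */
  static int restore_vp_assist_page_msr(struct kvm_vcpu *vcpu, u64 data)
  {
          /* Hyper-V path: cache length covers the whole VP assist page. */
          return kvm_lapic_enable_pv_eoi(vcpu, data,
                                         sizeof(struct hv_vp_assist_page));
  }

  static int restore_pv_eoi_msr(struct kvm_vcpu *vcpu, u64 data)
  {
          /* KVM-native path: cache length covers only 8 bytes. */
          return kvm_lapic_enable_pv_eoi(vcpu, data, 8);
  }

Restoring the KVM-native MSR after the Hyper-V one therefore shrinks the
already-initialized cache.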

Check whether the cache is being re-initialized for the same address and
preserve the previous length in case it was greater.
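
The access that must keep working is reading the whole VP assist page
through the same cache. A minimal sketch, assuming a consumer along these
lines (not an exact quote of the upstream reader):

  /* Sketch of a consumer that reads the full VP assist page via the cache. */
  static bool read_vp_assist_page(struct kvm_vcpu *vcpu,
                                  struct hv_vp_assist_page *page)
  {
          /*
           * kvm_read_guest_cached() returns non-zero if the request exceeds
           * the cached length, i.e. if the cache was last initialized with
           * len=8; with this patch the larger length is preserved and the
           * read succeeds.
           */
          return !kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.pv_eoi.data,
                                        page, sizeof(*page));
  }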

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/lapic.c

index 79358fd6a71c68b94253affd2752a0161f06b31c..3cd227ff807fadd27284e6bc81ef2c6f8a60f3d1 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2647,14 +2647,22 @@ int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 reg, u64 *data)
 int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data, unsigned long len)
 {
        u64 addr = data & ~KVM_MSR_ENABLED;
+       struct gfn_to_hva_cache *ghc = &vcpu->arch.pv_eoi.data;
+       unsigned long new_len;
+
        if (!IS_ALIGNED(addr, 4))
                return 1;
 
        vcpu->arch.pv_eoi.msr_val = data;
        if (!pv_eoi_enabled(vcpu))
                return 0;
-       return kvm_gfn_to_hva_cache_init(vcpu->kvm, &vcpu->arch.pv_eoi.data,
-                                        addr, len);
+
+       if (addr == ghc->gpa && len <= ghc->len)
+               new_len = ghc->len;
+       else
+               new_len = len;
+
+       return kvm_gfn_to_hva_cache_init(vcpu->kvm, ghc, addr, new_len);
 }
 
 void kvm_apic_accept_events(struct kvm_vcpu *vcpu)