
[Xen-devel] [PATCH] Patches to make PVHVM VCPU hotplug work with VCPUOP_register_info (v1).


The first patch is a good candidate for stable; the others are
cleanups that I spotted.

The patch:
 [PATCH 1/3] xen/vcpu/pvhvm: Fix vcpu hotplugging hanging.

along with the two earlier patches:
  66ff0fe xen/smp/spinlock: Fix leakage of the spinlock interrupt line for 
every CPU online/offline
  888b65b xen/smp: Fix leakage of timer interrupt line for every CPU 

make it possible to do VCPU hotplug in a PVHVM guest. This means that
with Xen 4.1 it works. Xen 4.2 and Xen 4.3 had a regression wherein the
VCPUOP_register_vcpu_info hypercall did not work in HVM mode, which meant:
 - No events delivered to VCPUs beyond the first 32, since shared_info
   only carries 32 vcpu_info slots. In practice I think this means that
   IPIs would stop working in guests with more than 32 VCPUs.

 - Could not take advantage of the per-CPU page allocation for events
   offered by the hypercall.

Anyhow, the regression is fixed in Xen 4.3 (and should appear in a
Xen 4.2.? point release), and with these attached patches the VCPU
hotplug mechanism works.

There are also miscellaneous cleanup patches here.

Note that during testing I found that running a PVHVM guest with
maxvcpus >= vcpus on v3.9 hits a generic bug: a deadlock in the
microcode driver. I have asked the x86 folks for assistance, as it
would seem to appear on non-Xen platforms too.

 arch/x86/xen/enlighten.c | 39 ++++++++++++++++++++++++++++++++++++++-
 arch/x86/xen/spinlock.c  |  2 +-
 2 files changed, 39 insertions(+), 2 deletions(-)

Konrad Rzeszutek Wilk (4):
      xen/vcpu/pvhvm: Fix vcpu hotplugging hanging.
      xen/vcpu: Document the xen_vcpu_info and xen_vcpu
      xen/smp/pvhvm: Don't point per_cpu(xen_vcpu, 33 and larger) to shared_info
      xen/spinlock: Fix check from greater than to be greater than or equal
