
Re: [Xen-devel] [PATCH][QEMU] Fix HVM guest hang in save/restore with the network load



Hi Keir,

I am attaching a new patch which saves only irq_state[] and rebuilds
irq_count[] by replaying pci_set_irq() at load time.
I have also added version-check code to pci_device_save()/load() for
backward compatibility.
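
For reference, here is why replaying irq_state[] through pci_set_irq()
is enough to rebuild irq_count[]: pci_set_irq() accounts the bus-level
counters from per-device level *changes*, and a freshly created device
starts from the all-zero reset state. A minimal standalone sketch of
that accounting (a paraphrase for illustration, not the hw/pci.c code;
the real pci_set_irq() also routes irq_num through bus->map_irq() and
raises the interrupt via bus->set_irq()):

    #include <stdio.h>

    struct bus { int irq_count[4]; };                  /* stand-in for PCIBus    */
    struct dev { int irq_state[4]; struct bus *bus; }; /* stand-in for PCIDevice */

    static void set_irq(struct dev *d, int irq_num, int level)
    {
        int change = level - d->irq_state[irq_num];
        if (!change)
            return;                     /* no transition: nothing to count */
        d->irq_state[irq_num] = level;
        d->bus->irq_count[irq_num] += change;
    }

    int main(void)
    {
        struct bus b = { { 0 } };
        struct dev d = { { 0 }, &b };   /* reset state: all lines deasserted */
        int saved[4] = { 1, 0, 0, 0 };  /* irq_state[] as read back on load  */
        for (int i = 0; i < 4; i++)
            set_irq(&d, i, saved[i]);   /* replay rebuilds the counters      */
        printf("irq_count[0] = %d\n", b.irq_count[0]);  /* prints 1 */
        return 0;
    }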

Thanks,
KAZ

Signed-off-by: Kazuhiro Suzuki <kaz@xxxxxxxxxxxxxx>

From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][QEMU] Fix HVM guest hang in save/restore with the network load
Date: Tue, 22 Apr 2008 10:14:38 +0100

> I like this in principle, but I see no need to save irq_count[]. That should
> be derivable from the individual devices' irq_state[] values, shouldn't it?
> For example, call pci_set_irq() for each irq_state[] entry during pci-device
> load?
> 
> Or... Could we avoid changing the qemu-dm save format by doing the following
> in i440fx_load():
>  for ( dev = 0; dev < 32; dev++ )
>     for ( intx = 0; intx < 4; intx++ )
>        xc_hvm_set_pci_intx_level(..., dev, intx, 0);
> 
> Which forcibly resets INTx levels down in the hypervisor? Would work as long
> as hypervisor state is loaded before qemu-dm state, and does avoid an
> annoying state format change which would prevent loading old state on new
> qemu-dm (unless we add some versioning to your patch).
> 
>  -- Keir
> 
> On 22/4/08 05:52, "SUZUKI Kazuhiro" <kaz@xxxxxxxxxxxxxx> wrote:
> 
> > Hi all,
> > 
> > When we repeatedly migrate (save/restore) an HVM guest under network
> > load, the guest hangs. The attached patch fixes this.
> > 
> > We need to save PCIDevice.irq_state[] and PCIBus.irq_count[] in QEMU.
> > Otherwise, after a restore, xc_hvm_set_pci_intx_level() in
> > tools/ioemu/target-i386-dm/piix_pci-dm.c:59 is never called with
> > level = 0, so the hypercall that invokes hvm_pci_intx_deassert() in
> > xen/arch/x86/hvm/irq.c:83 is never made.
> > Because gsi_assert_count[] is then never decremented,
> > vioapic_deliver() in xen/arch/x86/hvm/vioapic.c:473 is called over
> > and over, so the guest enters an endless IRQ loop and hangs.
> > 
> > Thanks,
> > KAZ
> > 
> > Signed-off-by: Kazuhiro Suzuki <kaz@xxxxxxxxxxxxxx>
> > diff -r a464af87c9db tools/ioemu/hw/pci.c
> > --- a/tools/ioemu/hw/pci.c Fri Apr 11 17:29:26 2008 +0100
> > +++ b/tools/ioemu/hw/pci.c Mon Apr 14 16:15:33 2008 +0900
> > @@ -81,6 +81,7 @@ void pci_device_save(PCIDevice *s, QEMUF
> >  {
> >      qemu_put_be32(f, 1); /* PCI device version */
> >      qemu_put_buffer(f, s->config, 256);
> > +    qemu_put_buffer(f, (uint8_t*)s->irq_state, sizeof(s->irq_state));
> >  }
> >  
> >  int pci_device_load(PCIDevice *s, QEMUFile *f)
> > @@ -91,6 +92,18 @@ int pci_device_load(PCIDevice *s, QEMUFi
> >          return -EINVAL;
> >      qemu_get_buffer(f, s->config, 256);
> >      pci_update_mappings(s);
> > +    qemu_get_buffer(f, (uint8_t*)s->irq_state, sizeof(s->irq_state));
> > +    return 0;
> > +}
> > +
> > +void pci_bus_save(PCIBus *bus, QEMUFile *f, int nirq)
> > +{
> > +    qemu_put_buffer(f, (uint8_t*)bus->irq_count, nirq * sizeof(int));
> > +}
> > +
> > +int pci_bus_load(PCIBus *bus, QEMUFile *f, int nirq)
> > +{
> > +    qemu_get_buffer(f, (uint8_t*)bus->irq_count, nirq * sizeof(int));
> >      return 0;
> >  }
> >  
> > diff -r a464af87c9db tools/ioemu/target-i386-dm/piix_pci-dm.c
> > --- a/tools/ioemu/target-i386-dm/piix_pci-dm.c Fri Apr 11 17:29:26 2008 +0100
> > +++ b/tools/ioemu/target-i386-dm/piix_pci-dm.c Mon Apr 14 16:15:33 2008 +0900
> > @@ -64,6 +64,7 @@ static void i440fx_save(QEMUFile* f, voi
> >  {
> >      PCIDevice *d = opaque;
> >      pci_device_save(d, f);
> > +    pci_bus_save(d->bus, f, 128);
> >  #ifndef CONFIG_DM
> >      qemu_put_8s(f, &smm_enabled);
> >  #endif /* !CONFIG_DM */
> > @@ -79,6 +80,7 @@ static int i440fx_load(QEMUFile* f, void
> >      ret = pci_device_load(d, f);
> >      if (ret < 0)
> >          return ret;
> > +    pci_bus_load(d->bus, f, 128);
> >  #ifndef CONFIG_DM
> >      i440fx_update_memory_mappings(d);
> >      qemu_get_8s(f, &smm_enabled);
> > diff -r a464af87c9db tools/ioemu/vl.h
> > --- a/tools/ioemu/vl.h Fri Apr 11 17:29:26 2008 +0100
> > +++ b/tools/ioemu/vl.h Mon Apr 14 16:15:33 2008 +0900
> > @@ -843,6 +843,9 @@ void pci_device_save(PCIDevice *s, QEMUF
> >  void pci_device_save(PCIDevice *s, QEMUFile *f);
> >  int pci_device_load(PCIDevice *s, QEMUFile *f);
> >  
> > +void pci_bus_save(PCIBus *bus, QEMUFile *f, int nirq);
> > +int pci_bus_load(PCIBus *bus, QEMUFile *f, int nirq);
> > +
> >  typedef void (*pci_set_irq_fn)(void *pic, int irq_num, int level);
> >  typedef int (*pci_map_irq_fn)(PCIDevice *pci_dev, int irq_num);
> >  PCIBus *pci_register_bus(pci_set_irq_fn set_irq, pci_map_irq_fn map_irq,
diff -r 2ebb7f79e3bb tools/ioemu/hw/pci.c
--- a/tools/ioemu/hw/pci.c      Tue Apr 22 19:07:48 2008 +0100
+++ b/tools/ioemu/hw/pci.c      Wed Apr 23 16:03:27 2008 +0900
@@ -79,18 +79,25 @@ int pci_bus_num(PCIBus *s)
 
 void pci_device_save(PCIDevice *s, QEMUFile *f)
 {
-    qemu_put_be32(f, 1); /* PCI device version */
+    qemu_put_be32(f, 2); /* PCI device version */
     qemu_put_buffer(f, s->config, 256);
+    qemu_put_buffer(f, (uint8_t*)s->irq_state, sizeof(int)*4);
 }
 
 int pci_device_load(PCIDevice *s, QEMUFile *f)
 {
     uint32_t version_id;
     version_id = qemu_get_be32(f);
-    if (version_id != 1)
+    if (version_id != 1 && version_id != 2)
         return -EINVAL;
     qemu_get_buffer(f, s->config, 256);
     pci_update_mappings(s);
+    if (version_id == 2) {
+        int i, irq_state[4];
+        qemu_get_buffer(f, (uint8_t*)irq_state, sizeof(int)*4);
+        for (i = 0; i < 4; i++)
+            pci_set_irq(s, i, irq_state[i]);
+    }
     return 0;
 }
 
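
For completeness, a toy model (not qemu or Xen code) of the failure mode
this fixes: qemu-dm only forwards INTx *transitions* to the hypervisor,
and gsi_assert_count[] only drops when a 1->0 transition arrives. If the
device model forgets irq_state[] across a restore, the later deassert
looks like "no change", the count stays positive, and vioapic_deliver()
keeps injecting the vector:

    #include <stdio.h>

    static int qemu_irq_state;      /* device model's view of the INTx line */
    static int gsi_assert_count;    /* hypervisor-side assertion counter    */

    /* stands in for pci_set_irq() forwarding a level change via
       xc_hvm_set_pci_intx_level() to the hypervisor's (de)assert path */
    static void guest_intx(int level)
    {
        if (level == qemu_irq_state)
            return;                          /* no transition: no hypercall */
        qemu_irq_state = level;
        gsi_assert_count += level ? 1 : -1;
    }

    int main(void)
    {
        guest_intx(1);        /* NIC asserts INTx under network load       */
        qemu_irq_state = 0;   /* restore without this patch: qemu-dm's     */
                              /* irq_state resets while the hypervisor     */
                              /* counter is restored as asserted           */
        guest_intx(0);        /* the deassert is swallowed...              */
        printf("gsi_assert_count = %d\n", gsi_assert_count); /* stuck at 1 */
        return 0;
    }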
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel