To: SUZUKI Kazuhiro <kaz@xxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][QEMU] Fix HVM guest hang in save/restore with the network load
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 22 Apr 2008 10:14:38 +0100
In-reply-to: <20080422.135211.98585988.kaz@xxxxxxxxxxxxxxxxxx>
I like this in principle, but I see no need to save irq_count[]. That should
be derivable from the individual devices' irq_state[] values, shouldn't it?
For example, call pci_set_irq() for each irq_state[] entry during pci-device
load?
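
For concreteness, a minimal sketch of that replay (this assumes the
qemu 0.9-era helper pci_set_irq(PCIDevice *, int, int), which applies
the delta against the device's current irq_state[] and adjusts the
owning bus's irq_count[] as a side effect):

    int pci_device_load(PCIDevice *s, QEMUFile *f)
    {
        int i;
        int saved_state[4];     /* one saved level per INTx pin */
        uint32_t version_id;

        version_id = qemu_get_be32(f);
        if (version_id != 1)
            return -EINVAL;
        qemu_get_buffer(f, s->config, 256);
        pci_update_mappings(s);

        /* Read the saved levels into a scratch array rather than into
         * s->irq_state[] directly: pci_set_irq() is a no-op when the
         * new level equals the current one, so irq_state[] must still
         * hold its post-reset zeroes when we replay. */
        qemu_get_buffer(f, (uint8_t *)saved_state, sizeof(saved_state));
        for (i = 0; i < 4; i++)
            pci_set_irq(s, i, saved_state[i]);
        return 0;
    }

The bus state is then reconstructed rather than serialised, so only the
per-device irq_state[] needs a slot in the save format.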

Or... Could we avoid changing the qemu-dm save format by doing the following
in i440fx_load():
 for ( dev = 0; dev < 32; dev++ )
    for ( intx = 0; intx < 4; intx++ )
       xc_hvm_set_pci_intx_level(..., dev, intx, 0);

That would forcibly reset the INTx levels down in the hypervisor. It
would work as long as hypervisor state is loaded before qemu-dm state,
and it avoids an annoying save-format change which would prevent
loading old state on a new qemu-dm (unless we add some versioning to
your patch).
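
Roughly, that variant could look like the following; the
xc_hvm_set_pci_intx_level() argument order below (handle, domid, PCI
segment, bus, device, intx, level) and the qemu-dm globals xc_handle
and domid are assumptions worth checking against xenctrl.h:

    static int i440fx_load(QEMUFile *f, void *opaque, int version_id)
    {
        PCIDevice *d = opaque;
        int ret, dev, intx;

        if (version_id != 1)    /* keep whatever check the real code uses */
            return -EINVAL;
        ret = pci_device_load(d, f);
        if (ret < 0)
            return ret;

        /* Force every INTx line low in the hypervisor.  Stale
         * gsi_assert_count[] entries inherited from the saved image
         * are cleared, and level-triggered devices simply re-assert
         * their lines the next time they raise an interrupt.
         * (The function's CONFIG_DM-conditional SMM handling is
         * omitted here.) */
        for (dev = 0; dev < 32; dev++)
            for (intx = 0; intx < 4; intx++)
                xc_hvm_set_pci_intx_level(xc_handle, domid, 0, 0,
                                          dev, intx, 0);
        return 0;
    }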

 -- Keir

On 22/4/08 05:52, "SUZUKI Kazuhiro" <kaz@xxxxxxxxxxxxxx> wrote:

> Hi all,
> 
> When we repeatedly migrate (save/restore) an HVM guest under network
> load, the guest hangs. The attached patch fixes this.
> 
> We need to save PCIDevice.irq_state[] and PCIBus.irq_count[] in QEMU.
> Otherwise, after a restore, xc_hvm_set_pci_intx_level() in
> tools/ioemu/target-i386-dm/piix_pci-dm.c:59 is never called with
> level = 0, so the hypercall which reaches hvm_pci_intx_deassert() in
> xen/arch/x86/hvm/irq.c:83 is never made.
> Because gsi_assert_count[] is then never decremented,
> vioapic_deliver() in xen/arch/x86/hvm/vioapic.c:473 is called on
> every interrupt, so the guest enters an endless IRQ loop and hangs.
> 
> Thanks,
> KAZ
> 
> Signed-off-by: Kazuhiro Suzuki <kaz@xxxxxxxxxxxxxx>
> diff -r a464af87c9db tools/ioemu/hw/pci.c
> --- a/tools/ioemu/hw/pci.c Fri Apr 11 17:29:26 2008 +0100
> +++ b/tools/ioemu/hw/pci.c Mon Apr 14 16:15:33 2008 +0900
> @@ -81,6 +81,7 @@ void pci_device_save(PCIDevice *s, QEMUF
>  {
>      qemu_put_be32(f, 1); /* PCI device version */
>      qemu_put_buffer(f, s->config, 256);
> +    qemu_put_buffer(f, (uint8_t*)s->irq_state, sizeof(s->irq_state));
>  }
>  
>  int pci_device_load(PCIDevice *s, QEMUFile *f)
> @@ -91,6 +92,18 @@ int pci_device_load(PCIDevice *s, QEMUFi
>          return -EINVAL;
>      qemu_get_buffer(f, s->config, 256);
>      pci_update_mappings(s);
> +    qemu_get_buffer(f, (uint8_t*)s->irq_state, sizeof(s->irq_state));
> +    return 0;
> +}
> +
> +void pci_bus_save(PCIBus *bus, QEMUFile *f, int nirq)
> +{
> +    qemu_put_buffer(f, (uint8_t*)bus->irq_count, nirq * sizeof(int));
> +}
> +
> +int pci_bus_load(PCIBus *bus, QEMUFile *f, int nirq)
> +{
> +    qemu_get_buffer(f, (uint8_t*)bus->irq_count, nirq * sizeof(int));
>      return 0;
>  }
>  
> diff -r a464af87c9db tools/ioemu/target-i386-dm/piix_pci-dm.c
> --- a/tools/ioemu/target-i386-dm/piix_pci-dm.c Fri Apr 11 17:29:26 2008 +0100
> +++ b/tools/ioemu/target-i386-dm/piix_pci-dm.c Mon Apr 14 16:15:33 2008 +0900
> @@ -64,6 +64,7 @@ static void i440fx_save(QEMUFile* f, voi
>  {
>      PCIDevice *d = opaque;
>      pci_device_save(d, f);
> +    pci_bus_save(d->bus, f, 128);
>  #ifndef CONFIG_DM
>      qemu_put_8s(f, &smm_enabled);
>  #endif /* !CONFIG_DM */
> @@ -79,6 +80,7 @@ static int i440fx_load(QEMUFile* f, void
>      ret = pci_device_load(d, f);
>      if (ret < 0)
>          return ret;
> +    pci_bus_load(d->bus, f, 128);
>  #ifndef CONFIG_DM
>      i440fx_update_memory_mappings(d);
>      qemu_get_8s(f, &smm_enabled);
> diff -r a464af87c9db tools/ioemu/vl.h
> --- a/tools/ioemu/vl.h Fri Apr 11 17:29:26 2008 +0100
> +++ b/tools/ioemu/vl.h Mon Apr 14 16:15:33 2008 +0900
> @@ -843,6 +843,9 @@ void pci_device_save(PCIDevice *s, QEMUF
>  void pci_device_save(PCIDevice *s, QEMUFile *f);
>  int pci_device_load(PCIDevice *s, QEMUFile *f);
>  
> +void pci_bus_save(PCIBus *bus, QEMUFile *f, int nirq);
> +int pci_bus_load(PCIBus *bus, QEMUFile *f, int nirq);
> +
>  typedef void (*pci_set_irq_fn)(void *pic, int irq_num, int level);
>  typedef int (*pci_map_irq_fn)(PCIDevice *pci_dev, int irq_num);
>  PCIBus *pci_register_bus(pci_set_irq_fn set_irq, pci_map_irq_fn map_irq,



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
