
RE: [Xen-devel] [PATCH]vtd: Fix for irq bind failure after PCI attaching 32 times



>        |                           +--> iommu_update_ire_from_msi
>        |                               (should clean the vtd binding, like pt_irq_destroy_bind_vtd)

Stefano, iommu_update_ire_from_msi() maps to msi_msg_write_remap_rte() in 
intremap.c on VT-d hardware.  That function deals strictly with the VT-d 
interrupt remapping hardware; it does not do any IRQ cleanup for Xen.
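
For reference, iommu_update_ire_from_msi() itself is only a thin dispatch
through the IOMMU ops table.  Roughly (sketched from memory, so the guard
and ops-table field names are approximate rather than verbatim from the
tree):

    /* xen/drivers/passthrough/iommu.c -- approximate sketch, not verbatim */
    void iommu_update_ire_from_msi(struct msi_desc *msi_desc,
                                   struct msi_msg *msg)
    {
        const struct iommu_ops *ops = iommu_get_ops();

        /*
         * On VT-d this hook is intremap.c:msi_msg_write_remap_rte().  It
         * only rewrites the remapped MSI message/IRTE for the device; it
         * never touches the domain's pirq or emuirq bookkeeping, so it is
         * not a cleanup path.
         */
        if ( iommu_enabled && ops && ops->update_ire_from_msi )
            ops->update_ire_from_msi(msi_desc, msg);
    }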

Allen

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Stefano Stabellini
Sent: Thursday, February 03, 2011 7:23 AM
To: Stefano Stabellini
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Zhang, Xiantao; Zhang, Fengzhe
Subject: RE: [Xen-devel] [PATCH]vtd: Fix for irq bind failure after PCI 
attaching 32 times

On Wed, 2 Feb 2011, Stefano Stabellini wrote:
> On Thu, 27 Jan 2011, Zhang, Fengzhe wrote:
> > Hi, Stefano,
> > 
> > Here is the calling graph that cause the bug:
> > 
> > unregister_real_device (ioemu)
> >     |
> >     +----> pt_msix_disable (ioemu)
> >             |
> >             +----> xc_domain_unbind_msi_irq (ioemu)
> >             |       |
> >             |       +----> do_domctl (xen) ----> arch_do_domctl (xen) ----> pt_irq_destroy_bind_vtd (xen)
> >             |              |
> >             |              +----> unmap_domain_pirq_emuirq (xen)  //freed pirq_to_emuirq
> >             |
> >             +----> xc_physdev_unmap_pirq (ioemu)
> >                    |
> >                    +----> do_physdev_op (xen) 
> >                            |
> >                            +----> physdev_unmap_pirq (xen)
> >                                    |
> >                                    +----> unmap_domain_pirq_emuirq (xen)  //found pirq_to_emuirq already freed, abort
> >                                    |
> >                                    +----> unmap_domain_pirq (xen)    //not called
> > 
> > The code path you mentioned is not taken for a VF device, as its
> > ptdev->machine_irq is 0.
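
To make the ordering problem above concrete, it boils down to something
like this (a simplified, hypothetical rendering -- the real error handling
in physdev.c:physdev_unmap_pirq() is more involved):

    /* Hypothetical sketch of the failing unmap order, not the real code. */
    static int unmap_pirq_sketch(struct domain *d, int pirq)
    {
        int ret;

        if ( is_hvm_domain(d) )
        {
            /*
             * pt_irq_destroy_bind_vtd already freed the pirq_to_emuirq
             * mapping, so this lookup fails...
             */
            ret = unmap_domain_pirq_emuirq(d, pirq);
            if ( ret )
                return ret;                 /* ...and we bail out here, */
        }

        return unmap_domain_pirq(d, pirq);  /* so the pirq itself is never
                                               unmapped and is leaked. */
    }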
> 
> It has just occurred to me that all this only happens with guest
> cooperation: unregister_real_device is called on PCI hot-unplug in response
> to the guest's action.
> That means that a guest that doesn't support PCI hot-unplug (or a
> malicious guest) won't do anything in response to the ACPI SCI interrupt
> we send, therefore unregister_real_device will never be called and we
> will be leaking MSIs in the host!
> 
> Of course we could solve it by adding a new xenstore command to qemu that
> calls unregister_real_device directly, but it seems to me that relying
> on qemu to free hypervisor/dom0 resources is not a good idea.
> 
> Xen knows all the pirqs remapped to this domain, so wouldn't it be
> possible for Xen to call pt_irq_destroy_bind_vtd and physdev_unmap_pirq
> on domain_kill?
> I think that Xen shouldn't leak pirqs no matter what the toolstack or
> qemu do.
> 

Actually it looks like Xen is cleaning up after itself:

arch_domain_destroy
        |
        +--> pci_release_devices
        |            |
        |            +--> pci_cleanup_msi
        |                   |
        |                   +--> msi_free_irq
        |                           |
        |                           +--> iommu_update_ire_from_msi
        |                               (should clean the vtd binding, like pt_irq_destroy_bind_vtd)
        |            
        |            
        +--> free_domain_pirqs
                     |
                     +--> unmap_domain_pirq


So it doesn't actually matter whether the guest supports PCI hotplug or not:
even if it doesn't, Xen won't leak any resources anyway.
Am I right? Could you please confirm this?
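
If I read irq.c right, that last arrow amounts to walking every pirq still
mapped into the dying domain and unmapping it, roughly like this (a sketch
from memory, the exact field names may not match the tree):

    /* Rough sketch of free_domain_pirqs() -- field names from memory. */
    void free_domain_pirqs(struct domain *d)
    {
        int pirq;

        spin_lock(&d->event_lock);

        /* Unmap every pirq still bound to the dying domain, regardless of
         * whether the toolstack or qemu did their part of the teardown. */
        for ( pirq = 0; pirq < d->nr_pirqs; pirq++ )
            if ( d->arch.pirq_irq[pirq] > 0 )
                unmap_domain_pirq(d, pirq);

        spin_unlock(&d->event_lock);
    }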

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

