RE: [Xen-devel] [PATCH]vtd: Fix for irq bind failure after PCI attaching
On Thu, 27 Jan 2011, Zhang, Fengzhe wrote:
> Hi, Stefano,
>
> Here is the calling graph that causes the bug:
>
> unregister_real_device (ioemu)
>  |
>  +----> pt_msix_disable (ioemu)
>  |
>  +----> xc_domain_unbind_msi_irq (ioemu)
>  |       |
>  |       +----> do_domctl (xen) ----> arch_do_domctl (xen) ----> pt_irq_destroy_bind_vtd (xen)
>  |                |
>  |                +----> unmap_domain_pirq_emuirq (xen)  // freed pirq_to_emuirq
>  |
>  +----> xc_physdev_unmap_pirq (ioemu)
>          |
>          +----> do_physdev_op (xen)
>                   |
>                   +----> physdev_unmap_pirq (xen)
>                            |
>                            +----> unmap_domain_pirq_emuirq (xen)  // found pirq_to_emuirq already freed, abort
>                            |
>                            +----> unmap_domain_pirq (xen)  // not called
> The code path you mentioned is not taken for VF dev as its ptdev->machine_irq
> is 0.
It has just occurred to me that all this only happens with guest
cooperation: unregister_real_device is called on PCI hot-unplug in
response to the guest's action.
That means that a guest that doesn't support PCI hot-unplug (or a
malicious guest) won't do anything in response to the ACPI SCI interrupt
we send, so unregister_real_device will never be called and we will
leak MSIs in the host!
Of course we could solve it by adding a new xenstore command to qemu
that calls unregister_real_device directly, but it seems to me that
relying on qemu to free hypervisor/dom0 resources is not a good idea.
Xen knows all the pirqs remapped to this domain, so wouldn't it be
possible for Xen to call pt_irq_destroy_bind_vtd and physdev_unmap_pirq
on domain_kill?
I think that Xen shouldn't leak pirqs no matter what the toolstack or
qemu do.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
- RE: [Xen-devel] [PATCH]vtd: Fix for irq bind failure after PCI attaching, Stefano Stabellini