[Xen-devel] [Bugfix 3/3] xen/irq: Override ACPI IRQ management callback __acpi_unregister_gsi

Xen overrides __acpi_register_gsi but leaves __acpi_unregister_gsi as is.
That means an IRQ allocated by acpi_register_gsi_xen_hvm() or
acpi_register_gsi_xen() will be freed by acpi_unregister_gsi_ioapic(),
which may cause undesired effects. So override __acpi_unregister_gsi to
NULL for safety.
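
For reviewers unfamiliar with these hooks: the generic x86 code dispatches
through the __acpi_*_gsi function pointers, and acpi_unregister_gsi() in
arch/x86/kernel/acpi/boot.c only invokes the hook when it is non-NULL,
which is what makes the NULL override in this patch safe. Below is a
standalone sketch (not part of the patch) modeling that dispatch; the
names are hypothetical stand-ins, not the real kernel symbols:

/* gsi_hooks_demo.c -- standalone model of the __acpi_*_gsi hook
 * dispatch; all names are hypothetical stand-ins, not kernel code. */
#include <stdio.h>
#include <stdint.h>

typedef uint32_t u32;

/* Stand-in for acpi_register_gsi_xen_hvm(): Xen allocates the IRQ. */
static int xen_hvm_register(u32 gsi)
{
	printf("xen: GSI %u bound to an event channel\n", gsi);
	return (int)gsi;
}

/* Stand-in for acpi_unregister_gsi_ioapic(): the wrong free path. */
static void ioapic_unregister(u32 gsi)
{
	printf("ioapic: freeing GSI %u it never allocated -- bug\n", gsi);
}

/* Overridable hooks, mirroring __acpi_register_gsi and
 * __acpi_unregister_gsi in arch/x86/kernel/acpi/boot.c. */
static int (*register_hook)(u32) = xen_hvm_register;
static void (*unregister_hook)(u32) = ioapic_unregister;

/* Mirrors acpi_unregister_gsi(): the NULL check is what turns an
 * overridden-to-NULL hook into a harmless no-op. */
static void unregister_gsi(u32 gsi)
{
	if (unregister_hook)
		unregister_hook(gsi);
}

int main(void)
{
	register_hook(9);
	unregister_gsi(9);	/* before the patch: wrong ioapic path */

	unregister_hook = NULL;	/* what this patch does at init time */
	unregister_gsi(9);	/* after the patch: no-op, no wrong free */
	return 0;
}

Compiled with any C compiler, the first unregister call prints the bogus
ioapic free and the second prints nothing, matching the intent of the
override below.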

Signed-off-by: Jiang Liu <jiang.liu@xxxxxxxxxxxxxxx>
Tested-by: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
---
 arch/x86/include/asm/acpi.h |    1 +
 arch/x86/pci/xen.c          |    2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
index 0ab4f9fd2687..3a45668f6dc3 100644
--- a/arch/x86/include/asm/acpi.h
+++ b/arch/x86/include/asm/acpi.h
@@ -50,6 +50,7 @@ void acpi_pic_sci_set_trigger(unsigned int, u16);
 
 extern int (*__acpi_register_gsi)(struct device *dev, u32 gsi,
                                  int trigger, int polarity);
+extern void (*__acpi_unregister_gsi)(u32 gsi);
 
 static inline void disable_acpi(void)
 {
diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index 6e5e89c3c644..9098d880c476 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -458,6 +458,7 @@ int __init pci_xen_hvm_init(void)
         * just how GSIs get registered.
         */
        __acpi_register_gsi = acpi_register_gsi_xen_hvm;
+       __acpi_unregister_gsi = NULL;
 #endif
 
 #ifdef CONFIG_PCI_MSI
@@ -482,6 +483,7 @@ int __init pci_xen_initial_domain(void)
        pci_msi_ignore_mask = 1;
 #endif
        __acpi_register_gsi = acpi_register_gsi_xen;
+       __acpi_unregister_gsi = NULL;
        /* Pre-allocate legacy irqs */
        for (irq = 0; irq < nr_legacy_irqs(); irq++) {
                int trigger, polarity;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel