[Xen-devel] [PATCH] [RFC] VT-d: always clean up dpci timers.

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] [RFC] VT-d: always clean up dpci timers.
From: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Date: Mon, 18 Jul 2011 17:38:48 +0100
Cc: Allen Kay <allen.m.kay@xxxxxxxxx>, keir@xxxxxxx
Delivery-date: Mon, 18 Jul 2011 09:43:36 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.21 (2010-09-15)

If a VM has all its PCI devices deassigned, need_iommu(d) becomes false
but it might still have DPCI EOI timers that were init_timer()d but not
yet kill_timer()d.  That causes Xen to crash later because the linked
list of inactive timers gets corrupted, e.g.:

(XEN) Xen call trace:
(XEN)    [<ffff82c480126256>] set_timer+0x1c2/0x24f
(XEN)    [<ffff82c48011fbf8>] schedule+0x129/0x5dd
(XEN)    [<ffff82c480122c1e>] __do_softirq+0x7e/0x89
(XEN)    [<ffff82c480122c9d>] do_softirq+0x26/0x28
(XEN)    [<ffff82c480153c85>] idle_loop+0x5a/0x5c
(XEN)    
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'entry->next->prev == entry' failed at /local/scratch/tdeegan/xen-unstable.hg/xen/include:172
(XEN) ****************************************

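To make the failure mode concrete, here is a minimal standalone C sketch.
This is not Xen code: the list and timer structures are simplified
stand-ins for Xen's timer/list internals, and the helper names are only
illustrative.  It shows how freeing a structure that is still linked into
a doubly-linked list leaves a dangling entry, which a later walk catches
with the same kind of 'entry->next->prev == entry' check that fired above:

#include <assert.h>
#include <stdlib.h>

struct list_head {
    struct list_head *next, *prev;
};

struct timer {
    struct list_head inactive;   /* link on the "inactive timers" list */
};

static struct list_head inactive_timers =
    { &inactive_timers, &inactive_timers };

static void list_add(struct list_head *entry, struct list_head *head)
{
    entry->next = head->next;
    entry->prev = head;
    head->next->prev = entry;
    head->next = entry;
}

int main(void)
{
    struct timer *t = malloc(sizeof(*t));

    /* "init_timer": the new timer is linked onto the inactive list. */
    list_add(&t->inactive, &inactive_timers);

    /* Teardown frees the timer without a matching "kill_timer", i.e.
     * without unlinking it, so the list still points at freed memory. */
    free(t);

    /* The next walk of the list (e.g. from a later set_timer) trips the
     * consistency check -- the assertion in the crash above. */
    for ( struct list_head *e = inactive_timers.next;
          e != &inactive_timers; e = e->next )
        assert(e->next->prev == e);   /* reads freed memory; may fire */

    return 0;
}
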
The following patch makes sure that the domain destruction path always
cleans up the DPCI state even if !need_iommu(d).

Although it fixes the crash for me, I'm sufficiently confused by this
code that I don't know whether it's enough.  If the dpci timer state
gets freed before pci_clean_dpci_irqs() runs then there's still a race,
and some other function (reassign_device_ownership() ?) needs to sort
out the timers when the PCI card is deassigned.

Allen, can you comment?

Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>

diff -r ab6551e30841 xen/drivers/passthrough/pci.c
--- a/xen/drivers/passthrough/pci.c     Mon Jul 18 10:59:44 2011 +0100
+++ b/xen/drivers/passthrough/pci.c     Mon Jul 18 17:22:48 2011 +0100
@@ -269,7 +269,7 @@ static void pci_clean_dpci_irqs(struct d
     if ( !iommu_enabled )
         return;
 
-    if ( !is_hvm_domain(d) || !need_iommu(d) )
+    if ( !is_hvm_domain(d) )
         return;
 
     spin_lock(&d->event_lock);
