
Re: [Xen-devel] [PATCH] x86/hvm: implement save/restore for posted interrupts



Hi Olaf,

I think I hit the same problem when I tried to backport APIC-v to Xen
4.1.2.

With APIC-v enabled, a SUSE 11 guest hangs after restore. I also tried
other OSes; the results are as follows:

SUSE 11 SP1, SUSE 11 SP2 and SUSE 11 SP3 have this problem.
Ubuntu 12, SUSE 10 SP1 and Red Hat 5.5 do not have this problem.

And here is the odd thing:
if you disable all network devices before the save, then SUSE 11 does not hang.

So I think it is a PV driver issue, and after some investigation I am
pretty sure it is an event-channel handling issue in certain kernel
versions, starting with this commit:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=76d2160147f43f982dfe881404cfde9fd0a9da21

I don't understand exactly what happens; it seems that when the PV driver
tries to suspend a device it disables the corresponding IRQ, but because of
the kernel commit above the IRQ disable does not actually take effect.
As a result, the remote_irr field for this IRQ in the guest's vioapic
redirection table is never cleared, and after restore Xen will not deliver
the event-channel IRQ to this guest, because remote_irr says an IRQ is
still pending.
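
To make the mechanism concrete, here is a minimal standalone sketch of how
I understand the vioapic level-triggered logic. This is not Xen's actual
code: struct redir_entry, deliver() and eoi() are made-up names, and the
real union vioapic_redir_entry has more fields. The point is that
remote_irr is set when a level-triggered interrupt is delivered and is only
cleared again by the guest's EOI, so a stale remote_irr blocks any further
injection on that pin.

#include <stdbool.h>

/* Simplified stand-in for a vioapic redirection table entry. */
struct redir_entry {
    bool mask;
    bool level_trig;    /* level-triggered pin */
    bool remote_irr;    /* set on delivery, cleared by guest EOI */
};

/* Delivery: a masked pin, or a level pin whose previous interrupt was
 * never EOIed, cannot be injected again. */
static void deliver(struct redir_entry *e)
{
    if ( e->mask || (e->level_trig && e->remote_irr) )
        return;                 /* the stuck case seen after restore */
    if ( e->level_trig )
        e->remote_irr = true;   /* remembered until the guest EOIs */
    /* ... inject the vector into the guest's vLAPIC ... */
}

/* EOI: the step that never happens when the guest's disable_irq() keeps
 * the handler (and thus the EOI) from running before the save. */
static void eoi(struct redir_entry *e)
{
    e->remote_irr = false;
}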

I'm not sure whether this is the same problem, but you can try my patch. The
patch is based on Xen 4.1.2, and it is only a temporary workaround. Hope it helps.

diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index acc9197..0d064db 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -270,6 +270,9 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
     unsigned int gsi=0, pdev=0, pintx=0;
     uint8_t via_type;

+    struct hvm_hw_vioapic *vioapic = domain_vioapic(d);
+    union vioapic_redir_entry *ent = NULL;
+
     via_type = (uint8_t)(via >> 56) + 1;
     if ( ((via_type == HVMIRQ_callback_gsi) && (via == 0)) ||
          (via_type > HVMIRQ_callback_vector) )
@@ -290,6 +293,14 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
         case HVMIRQ_callback_pci_intx:
             pdev  = hvm_irq->callback_via.pci.dev;
             pintx = hvm_irq->callback_via.pci.intx;
+
+            gsi = hvm_pci_intx_gsi(pdev, pintx);
+            ent = &vioapic->redirtbl[gsi];
+            if ( !ent->fields.mask && ent->fields.trig_mode != VIOAPIC_EDGE_TRIG ) {
+                printk("clear remote_irr when set HVMIRQ callback\n");
+                ent->fields.remote_irr = 0;
+            }
+
             __hvm_pci_intx_deassert(d, pdev, pintx);
             break;
         default:
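
For what it's worth, my understanding of why clearing remote_irr in
hvm_set_callback_via() is enough: as far as I can tell, the guest's PV
drivers re-register the callback (by writing HVM_PARAM_CALLBACK_IRQ) during
resume, so this path runs before any event-channel delivery is attempted.
A cleaner long-term fix would probably clear stale remote_irr bits when the
vioapic state is loaded. As a rough, untested sketch of that idea
(vioapic_clear_stale_irr is a name I made up, and whether clearing
unconditionally rather than only for the callback GSI is safe would need
more thought):

/* Hypothetical cleanup to run after vioapic state has been loaded:
 * drop remote_irr bits left over from before the save so that
 * level-triggered pins can be injected again. */
static void vioapic_clear_stale_irr(struct hvm_hw_vioapic *vioapic)
{
    unsigned int i;

    for ( i = 0; i < VIOAPIC_NUM_PINS; i++ )
        vioapic->redirtbl[i].fields.remote_irr = 0;
}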


> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
> Sent: Monday, July 28, 2014 5:02 PM
> To: Olaf Hering
> Cc: Kevin Tian; Keir Fraser; Eddie Dong; Donald D Dugger;
> xen-devel@xxxxxxxxxxxxx; Dongxiao Xu; Jun Nakajima; Yang Z Zhang
> Subject: Re: [Xen-devel] [PATCH] x86/hvm: implement save/restore for
> posted interrupts
> 
> >>> On 28.07.14 at 10:17, <yang.z.zhang@xxxxxxxxx> wrote:
> > Jan Beulich wrote on 2014-07-28:
> >>>>> On 25.07.14 at 23:31, <kevin.tian@xxxxxxxxx> wrote:
> >>> Well, my read of this patch is that it hides some problem in
> >>> another place by forcing posted injection at restore. As Yang has
> >>> pointed out, it is not necessary: once the notification has been
> >>> synced to IRR it's done (it will be noticed before vmentry). So on
> >>> the restore path, it's just about how the pending IRR is handled.
> >>> Now the problem is that Yang can't reproduce the problem locally
> >>> (let's see whether anything changes with Olaf's further
> >>> information), so we need Olaf's help to figure out the real
> >>> culprit with our input.
> >>
> >> Searching the SDM I can't find any reference to IRR uses during
> >> interrupt recognition. The only reference I can find is that during
> >> delivery the IRR bit gets cleared and the new RVI determined by
> >> scanning IRR. Hence I wonder whether setting RVI post-restore instead of
> >> sync-ing PIR to IRR is what is needed?
> >
> > vmx_intr_assist() will do it.
> 
> Olaf,
> 
> any chance you could check that this is actually happening?
> 
> Jan
> 
> 