[Xen-devel] hang on restore in 3.3.1

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] hang on restore in 3.3.1
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Tue, 10 Feb 2009 12:45:43 +1100
Delivery-date: Mon, 09 Feb 2009 17:46:28 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmLIUlAfaq+PtOOS4GuTmMb1z1qKQ==
Thread-topic: hang on restore in 3.3.1

I am having problems with save/restore under 3.3.1 in the GPLPV
drivers. I call hvm_shutdown(xpdd, SHUTDOWN_suspend), but as soon as I
lower the IRQL (re-enabling interrupts), qemu goes to 100% CPU and the
DomU load goes right up too.
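
For reference, the sequence on the driver side is roughly this (a
minimal sketch, not the actual GPLPV code; HIGH_LEVEL is just
illustrative for "interrupts off"):

KIRQL old_irql;

/* mask interrupts on this CPU before asking Xen to suspend */
KeRaiseIrql(HIGH_LEVEL, &old_irql);

/* the domain suspends inside this call and resumes here on restore */
hvm_shutdown(xpdd, SHUTDOWN_suspend);

/* interrupts come back on here - this is where qemu starts spinning */
KeLowerIrql(old_irql);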

Xentrace is showing a whole lot of this going on:

CPU0  200130258143212 (+     770)  hypercall  [ rip = 0x000000008020632a, eax = 0xffffffff ]
CPU0  200130258151107 (+    7895)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
CPU0  200130258156293 (+    5186)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
CPU0  200130258161233 (+    4940)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
CPU0  200130258165467 (+    4234)  hypercall  [ rip = 0x000000008020640a, eax = 0xffffffff ]
CPU0  200130258167202 (+    1735)  domain_wake       [ domid = 0x00000062, edomid = 0x00000000 ]
CPU0  200130258168511 (+    1309)  switch_infprev    [ old_domid = 0x00000000, runtime = 31143 ]
CPU0  200130258168716 (+     205)  switch_infnext    [ new_domid = 0x00000062, time = 786, r_time = 30000000 ]
CPU0  200130258169338 (+     622)  __enter_scheduler [ prev<domid:edomid> = 0x00000000 : 0x00000000, next<domid:edomid> = 0x00000062 : 0x00000000 ]
CPU0  200130258175532 (+    6194)  VMENTRY     [ dom:vcpu = 0x00000062 ]
CPU0  200130258179633 (+    4101)  VMEXIT      [ dom:vcpu = 0x00000062, exitcode = 0x0000004e, rIP  = 0x0000000080a562b9 ]
CPU0  0 (+       0)  MMIO_AST_WR [ address = 0xfee000b0, data = 0x00000000 ]
CPU0  0 (+       0)  PF_XEN      [ dom:vcpu = 0x00000062, errorcode = 0x0b, virt = 0xfffe00b0 ]
CPU0  0 (+       0)  INJ_VIRQ    [ dom:vcpu = 0x00000062, vector = 0x00, fake = 1 ]
CPU0  200130258185932 (+    6299)  VMENTRY     [ dom:vcpu = 0x00000062 ]
CPU0  200130258189737 (+    3805)  VMEXIT      [ dom:vcpu = 0x00000062, exitcode = 0x00000064, rIP  = 0x0000000080a560ad ]
CPU0  0 (+       0)  INJ_VIRQ    [ dom:vcpu = 0x00000062, vector = 0x83, fake = 0 ]
CPU0  200130258190990 (+    1253)  VMENTRY     [ dom:vcpu = 0x00000062 ]
CPU0  200130258194791 (+    3801)  VMEXIT      [ dom:vcpu = 0x00000062, exitcode = 0x0000007b, rIP  = 0x0000000080a5a29e ]
CPU0  0 (+       0)  IO_ASSIST   [ dom:vcpu = 0x0000c202, data = 0x0000 ]
CPU0  200130258198944 (+    4153)  switch_infprev    [ old_domid = 0x00000062, runtime = 17087 ]
CPU0  200130258199132 (+     188)  switch_infnext    [ new_domid = 0x00000000, time = 17087, r_time = 30000000 ]
CPU0  200130258199702 (+     570)  __enter_scheduler [ prev<domid:edomid> = 0x00000062 : 0x00000000, next<domid:edomid> = 0x00000000 : 0x00000000 ]
CPU0  200130258206470 (+    6768)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
CPU0  200130258210964 (+    4494)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
CPU0  200130258214767 (+    3803)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
CPU0  200130258218019 (+    3252)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]
CPU0  200130258227419 (+    9400)  hypercall  [ rip = 0x00000000802062eb, eax = 0xffffffff ]

It kind of looks like vector 0x83 is being fired over and over, which
would explain why things hang once I enable interrupts again. I will
look into what vector 0x83 is attached to, but does anyone have any
ideas?
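
In the meantime, the sort of thing I'm planning to try is walking the
evtchn_pending bitmap in the shared info page right after resume, to
see if an event channel is stuck pending (throwaway sketch only;
dump_pending_evtchns is made up for illustration, and shared_info is
assumed to be the already-mapped shared info page):

static VOID
dump_pending_evtchns(shared_info_t *shared_info)
{
  unsigned int word, bit, port;
  xen_ulong_t pending;

  for (word = 0; word < sizeof(shared_info->evtchn_pending) / sizeof(xen_ulong_t); word++)
  {
    pending = shared_info->evtchn_pending[word];
    if (!pending)
      continue;
    for (bit = 0; bit < sizeof(xen_ulong_t) * 8; bit++)
    {
      /* report each event channel port whose pending bit is set */
      if (pending & ((xen_ulong_t)1 << bit))
      {
        port = word * sizeof(xen_ulong_t) * 8 + bit;
        KdPrint(("evtchn port %u pending\n", port));
      }
    }
  }
}

If vector 0x83 turns out to be the callback IRQ that delivers event
channel notifications, a port that never clears would explain the
repeated INJ_VIRQ above.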

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel