WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-devel

RE: [Xen-devel] Error restoring DomU when using GPLPV

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Error restoring DomU when using GPLPV
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Tue, 4 Aug 2009 11:41:44 +1000
Cc: Joshua West <jwest@xxxxxxxxxxxx>
Delivery-date: Mon, 03 Aug 2009 18:42:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AEC6C66638C05B468B556EA548C1A77D016DE026@trantor>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D016DE026@trantor>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcoUog43ZhCRzTxFSPuGfTBNvaTQKgAAoXaw
Thread-topic: [Xen-devel] Error restoring DomU when using GPLPV
It seems that somewhere along the line Xen started using an event
channel to trigger a suspend, as opposed to the 'shutdown' xenstore
value. Is there anything else there I need to know about?
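For anyone following along: the older, xenstore-based mechanism works by the toolstack writing a request string into the guest's control/shutdown node; the guest watches that node and dispatches on the value. A minimal sketch of that dispatch logic (illustrative only, not GPLPV source; the node path and the value strings "suspend"/"reboot"/"poweroff" follow the documented Xen control protocol, but the handler bodies are placeholders):

```python
# Illustrative sketch of the xenstore-based shutdown/suspend protocol.
# Dom0 writes a request string to the guest's "control/shutdown" node;
# the guest's watch on that node fires and the driver dispatches on the
# value.  Handler bodies here are stand-ins for real driver work.

def handle_suspend():
    # A real PV driver would quiesce devices and hypercall into Xen here.
    return "suspended"

def handle_reboot():
    return "rebooted"

HANDLERS = {
    "suspend": handle_suspend,
    "reboot": handle_reboot,
    "poweroff": lambda: "powered off",
}

def on_shutdown_watch(value):
    """Called when the watch on control/shutdown fires with the node's value."""
    if not value:            # node cleared or empty: nothing to do
        return None
    handler = HANDLERS.get(value)
    if handler is None:
        raise ValueError("unknown shutdown request: %s" % value)
    return handler()
```

The event-channel variant James asks about replaces the watch trigger, not the dispatch: the guest is signalled on a bound channel instead of (or in addition to) seeing the node change.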

Thanks

James

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of James Harper
> Sent: Tuesday, 4 August 2009 11:23
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Joshua West
> Subject: [Xen-devel] Error restoring DomU when using GPLPV
> 
> A user (Joshua) is reporting that 'xm restore' isn't working when GPLPV
> is involved. I've checked the logs generated by GPLPV and there are no
> problems on the save side of things that I can see. Is there anything
> extra that the suspend or restore needs to do since 3.4.x?
> 
> Joshua has captured the following:
> 
> On the dom0 I initiated a "xm save" of the VM.  No problems here, but
> when I initiate an "xm restore", I receive the following error:
> 
> Error: /usr/lib64/xen/bin/xc_restore 56 103 2 3 1 1 1 failed
> 
> And in /var/log/xen/xend.log, I see (pertaining to this event):
> 
> [2009-08-02 15:12:44 4839] INFO (image:745) Need to create platform
> device.[domid:103]
> [2009-08-02 15:12:44 4839] DEBUG (XendCheckpoint:261)
> restore:shadow=0x9, _static_max=0x40000000, _static_min=0x0,
> [2009-08-02 15:12:44 4839] DEBUG (balloon:166) Balloon: 31589116 KiB
> free; need 1061888; done.
> [2009-08-02 15:12:44 4839] DEBUG (XendCheckpoint:278) [xc_restore]:
> /usr/lib64/xen/bin/xc_restore 56 103 2 3 1 1 1
> [2009-08-02 15:12:44 4839] INFO (XendCheckpoint:417) xc_domain_restore
> start: p2m_size = 100000
> [2009-08-02 15:12:44 4839] INFO (XendCheckpoint:417) Reloading memory
> pages:   0%
> [2009-08-02 15:12:52 4839] INFO (XendCheckpoint:417) Failed allocation
> for dom 103: 1024 extents of order 0
> [2009-08-02 15:12:52 4839] INFO (XendCheckpoint:417) ERROR Internal
> error: Failed to allocate memory for batch.!
> [2009-08-02 15:12:52 4839] INFO (XendCheckpoint:417)
> [2009-08-02 15:12:52 4839] INFO (XendCheckpoint:417) Restore exit with
> rc=1
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:2724)
> XendDomainInfo.destroy: domid=103
> [2009-08-02 15:12:52 4839] ERROR (XendDomainInfo:2738)
> XendDomainInfo.destroy: domain destruction failed.
> Traceback (most recent call last):
>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py",
> line 2731, in destroy
>     xc.domain_pause(self.domid)
> Error: (3, 'No such process')
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:2204) No device model
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:2206) Releasing devices
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:2219) Removing vbd/768
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:1134)
> XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:2219) Removing vfb/0
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:1134)
> XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:2219) Removing
> console/0
> [2009-08-02 15:12:52 4839] DEBUG (XendDomainInfo:1134)
> XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
> [2009-08-02 15:12:52 4839] ERROR (XendDomain:1149) Restore failed
> Traceback (most recent call last):
>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomain.py", line
> 1147, in domain_restore_fd
>     return XendCheckpoint.restore(self, fd, paused=paused,
> relocating=relocating)
>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py",
> line 282, in restore
>     forkHelper(cmd, fd, handler.handler, True)
>   File "/usr/lib64/python2.4/site-packages/xen/xend/XendCheckpoint.py",
> line 405, in forkHelper
>     raise XendError("%s failed" % string.join(cmd))
> XendError: /usr/lib64/xen/bin/xc_restore 56 103 2 3 1 1 1 failed
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
