This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Re: [PATCH][HVM] fix VNIF restore failure on HVMguest with heavy workload

To: "Keir Fraser" <keir@xxxxxxxxxxxxx>, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Subject: RE: [Xen-devel] Re: [PATCH][HVM] fix VNIF restore failure on HVMguest with heavy workload
From: "Zhao, Fan" <fan.zhao@xxxxxxxxx>
Date: Thu, 12 Apr 2007 10:47:21 +0800
Cc: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 11 Apr 2007 19:46:24 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C242DA4C.D2A4%keir@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acd8F1GxkEamROgKEduOGgAX8io7RQAPiQwwAAJ88t8ADvaaYA==
Thread-topic: [Xen-devel] Re: [PATCH][HVM] fix VNIF restore failure on HVMguest with heavy workload
Hi Keir,
To make a correction: the phenomenon is that the guest cannot be saved, and the
guest console prints "PV stuff on HVM resume successfully!" as soon as the xm
save command is typed. This happens with cset 14773 only on an ia32e guest, and
I noticed that it is reproduced once the PV modules have been inserted in the
guest, even without running xm mem-set. With the latest cset 14797, the PV
drivers fail to build on the ia32e platform, so I cannot try it.
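For reference, the reproduction described above can be sketched as the following command sequence. This is a hypothetical outline, not a verified script: the guest name and save path are assumptions taken from the xend.log below, and the `xm` subcommands are the standard Xen 3.x toolstack commands. It only prints the steps rather than running them.

```shell
#!/bin/sh
# Sketch of the reported reproduction on a cset 14773 ia32e dom0.
# DOMAIN and SAVEFILE are assumptions (the log shows ExampleHVMDomain).
DOMAIN=ExampleHVMDomain
SAVEFILE=/var/lib/xen/save/$DOMAIN.chk

echo "xm create /etc/xen/$DOMAIN"    # start the HVM guest, then load PV modules inside it
echo "xm mem-set $DOMAIN 128"        # optional; the report says the bug reproduces without this
echo "xm save $DOMAIN $SAVEFILE"     # fails immediately once PV drivers are loaded
echo "xm list"                       # guest still listed, i.e. the save did not complete
```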

Xend.log shows:
[2007-04-12 11:40:09 4875] DEBUG (XendDomainInfo:824) Storing domain details: 
{'console/port': '6', 'cpu/3/availability': 'online', 'name': 
'migrating-ExampleHVMDomain', 'console/limit': '1048576', 'cpu/2/availability': 
'online', 'vm': '/vm/ba2d6693-56eb-22cc-2b51-cc0643e37d32', 'domid': '1', 
'cpu/0/availability': 'online', 'memory/target': '262144', 
'control/platform-feature-multiprocessor-suspend': '1', 'store/ring-ref': 
'65534', 'cpu/1/availability': 'online', 'store/port': '5'}
[2007-04-12 11:40:09 4875] INFO (XendCheckpoint:81) save hvm domain
[2007-04-12 11:40:09 4875] DEBUG (XendCheckpoint:95) [xc_save]: 
/usr/lib64/xen/bin/xc_save 22 1 0 0 4
[2007-04-12 11:40:09 4875] DEBUG (XendCheckpoint:307) suspend
[2007-04-12 11:40:09 4875] DEBUG (XendCheckpoint:98) In saveInputHandler suspend
[2007-04-12 11:40:09 4875] DEBUG (XendCheckpoint:100) Suspending 1 ...
[2007-04-12 11:40:09 4875] DEBUG (XendDomainInfo:439) 
[2007-04-12 11:40:09 4875] INFO (XendCheckpoint:336) xc_hvm_save: dom=1, 
max_iters=0, max_factor=0, flags=0x4, live=0, debug=0.
[2007-04-12 11:40:09 4875] DEBUG (XendDomainInfo:905) 
[2007-04-12 11:40:09 4875] INFO (XendCheckpoint:336) saved hvm domain info: 
max_memkb=0x44000, nr_pages=0x107e0
[2007-04-12 11:40:09 4875] DEBUG (XendDomainInfo:905) 

Best regards,
-----Original Message-----
From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx] 
Sent: April 12, 2007 1:33
To: Zhao, Fan; Zhai, Edwin
Cc: Tim Deegan; Ian Pratt; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Re: [PATCH][HVM] fix VNIF restore failure on HVMguest 
with heavy workload

On 11/4/07 17:24, "Zhao, Fan" <fan.zhao@xxxxxxxxx> wrote:

> I noticed that with cset 14773, if I use xm mem-set to adjust the memory of
> an HVM guest with the balloon driver and then save the guest, the xm save
> will fail, as does xm migrate. A white window pops up, and the guest still
> shows up in xm list. So will your fixes also include a fix for this issue?
> Thanks!

It works for me (Linux guest, initial alloc 256MB, ballooned to 128MB).

Is it definitely the save that fails for you? Is there any interesting
output in xend.log?

 -- Keir

Xen-devel mailing list