
Re: [Xen-devel] an issue with 'xm save'



Konrad Rzeszutek Wilk wrote:
> On Fri, Sep 21, 2012 at 05:41:27PM +0800, Zhenzhong Duan wrote:
>> Hi maintainers,
>>
>> I found an issue when doing 'xm save' on a PVM guest. See below:
>>
>> When I do a save and then a restore once, CPU(%) in xentop shows around 99%.
>> When I do that a second time, it shows 199%.
>>
>> top in dom0 shows:
>>     PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>   20946 root      18  -2 10984 1284  964 S 19.8  0.3   0:48.93 block
>>    4939 root      18  -2 10984 1288  964 S 19.5  0.3   1:34.68 block
>>
>> I can kill the block processes, and then everything looks normal again.
> What is the 'block' process? If you attach 'perf' to it, do you get an idea
> of what it is spinning at?
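
(For reference, attaching perf to one of the 'block' PIDs from the top output
above could look something like the following; the exact invocation is only a
suggestion and assumes perf is installed in dom0.)

    # PID taken from the top output above
    perf top -p 20946                    # live view of where the process spends time
    # or record a short profile and inspect it afterwards
    perf record -g -p 20946 -- sleep 10
    perf report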
  
It's /etc/xen/scripts/block.
I added 'set -x' to /etc/xen/scripts/block and found it blocked in claim_lock.
When the domU was created the first time, claim_lock/release_lock finished quickly;
when 'xm save' was called, claim_lock spun in its own while loop.
I can confirm that no other domU create/save/etc. was happening while I tested.
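
(To make the symptom concrete, here is a purely illustrative sketch of a
spin-style claim_lock/release_lock pair. It is NOT the actual locking.sh from
either tree; LOCK_BASEDIR and the mkdir-based scheme are assumptions. A loop
like this never returns if the previous holder never calls release_lock, which
matches a block script that never exits.)

    #!/bin/bash
    # Hypothetical sketch only -- not the real xen-unstable locking.sh.
    LOCK_BASEDIR=/var/run/xen-hotplug        # illustrative path

    claim_lock() {
        local lockdir="$LOCK_BASEDIR/$1.lock"
        mkdir -p "$LOCK_BASEDIR"
        # spin until the lock directory can be created, i.e. until the
        # current holder removes it in release_lock
        while ! mkdir "$lockdir" 2>/dev/null; do
            sleep 0.1
        done
    }

    release_lock() {
        rmdir "$LOCK_BASEDIR/$1.lock" 2>/dev/null
    }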
  
>> xen and xen-tools are both built from xen-unstable.
>> I tried xl, but it segfaulted.
    

> It segfaulted? When doing 'xl save' or 'xl resume'? Or just allocating
> the guest?
  
When doing 'xl create vm.cfg'.
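
(If it helps, one way to get a backtrace out of that segfault would be the
following; it assumes gdb is available in dom0 and that xl was built with
debug symbols.)

    gdb -batch -ex run -ex bt --args xl create vm.cfg
    # or allow core dumps and inspect the core afterwards
    ulimit -c unlimited
    xl create vm.cfg
    gdb "$(which xl)" core    # core file name/location depends on kernel.core_pattern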
  
>> I also tried ovm3.1.1 (xen-4.1.2-18.el5.1 and xen-tools-4.1.2-18.el5.1)
>> and couldn't reproduce the issue there.
    

> So the issue is only present with Xen-unstable?
  
Yes. I found that the claim_lock function in /etc/xen/scripts/locking.sh on ovm3.1.1
is quite different from the one in xen-unstable.
Maybe that is why save/restore works on ovm3.1.1.

> Did you clear _any_ older Xen libraries/tools when you installed Xen-unstable?
  
No. I built xen and xen-tools on el5, then installed them to ovm3.1.1 on another partition.
thanks
zduan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

