Hello Xin.
In our case, the disk I/O load was always in the DomU (via a
passthrough block device from Dom0), and was always associated with
high network traffic, as our SAN is Ethernet-attached (AoE, not iSCSI).
We can crash the system through DomU disk I/O and the associated
network traffic alone. Accessing /proc/slabinfo greatly exacerbated
the problem.
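For what it's worth, the kind of DomU load that triggers it for us
looks roughly like the loop below. This is only a sketch: /mnt/san is
a placeholder for wherever your AoE-backed device is mounted, and the
sizes are arbitrary.
#!/bin/sh
# Sustained write-then-read load against the SAN-backed filesystem.
# Because the SAN is Ethernet-attached (AoE), every disk operation
# here is also network traffic.
while [ 1 ]; do
    dd if=/dev/zero of=/mnt/san/loadfile bs=1M count=1024 conv=fsync
    dd if=/mnt/san/loadfile of=/dev/null bs=1M
done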
P.S. You're missing 'cat' in the script you showed:
#!/bin/sh
while [ 1 ]; do
cat /proc/slabinfo >> ./slabinfo.txt
done
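You might also timestamp each sample, so the slab growth can be lined
up against whatever load was running at the time, e.g.:
#!/bin/sh
# Record a timestamp before each slabinfo sample.
while [ 1 ]; do
date >> ./slabinfo.txt
cat /proc/slabinfo >> ./slabinfo.txt
done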
--
-- Tom Mornini, CTO
-- Engine Yard, Ruby on Rails Hosting
-- Support, Scalability, Reliability
-- (866) 518-YARD (9273)
On Jun 27, 2007, at 3:39 AM, Xin Chen wrote:
 
Thanks Tom,
I did these experiments on domain 0 today:
First, I shut down the guest systems running under domain 0, in case
domain 0 reboots.
1, write a script:
#!/bin/sh
while [ 1 ]; do
/proc/slabinfo >> ./slabinfo.txt
done
Then I kept this script running for around 20 mins, until slabinfo.txt
was about 4 GB; domain 0 was still running OK.
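Since the log grows so fast, something like the line below might be
enough to watch just the biggest caches instead (assuming the usual
2.6 slabinfo columns name, active_objs, num_objs, objsize):
# Show the five caches using the most memory (num_objs * objsize).
awk 'NR>2 { printf "%s %d KB\n", $1, $3*$4/1024 }' /proc/slabinfo | sort -k2 -rn | head -5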
2, choose a 10 GB file called temp, and do scp:
#scp temp localhost:/tmp/temp
Copying speed: 25 MB/s; domain 0 was still running OK.
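For reference, a test file like temp can be created with dd:
# Make a 10 GB file of zeros for the copy test.
dd if=/dev/zero of=./temp bs=1M count=10240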
3, again, like yesterday, scp a large file from a remote server to
domain 0. I did it twice, and domain 0 was still running OK.
I am not sure what this tells us; I need to test again. It may
suggest that the problem only happens when there is a guest running?
xin
Tom Mornini wrote:
This sounds very similar to bugs we've seen with Xen versions
before 3.1.
We believe the problem to be disk I/O related, but in our
environment, disk I/O is also network I/O, so it's hard to tell.
We believe this problem to be corrected in 3.1, but we still haven't
done enough testing to satisfy ourselves entirely on this.
See this previous thread:
http://lists.xensource.com/archives/html/xen-users/2007-03/msg00073.html
 
 
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users