xen-users

Re: [Xen-users] Xen Performance

Peter Booth <peter_booth@xxxxxxx> writes:

> Here's more context. The VMs weren't page scanning. They did show non-
> trivial %steal (where non-trivial is > 1%).
> These VMs are commercially hosted on five quad-core hosts with approx
> 14 VMs per host and just under 1GB RAM per VM. That's not a lot of
> memory, but then the workload of one nginx and three mongrels per VM
> is comfortably under 512MB of RSS.

I guess I don't know much about mongrel, but if someone were complaining to me
about the performance of a modern web application in an image with only 1GB of
RAM, CPU would not be the first thing I'd look at.

So steal was >1%?  What was idle?  What was iowait?  If steal was only 10%
and iowait was 50%, I'd still add more RAM before I added more CPU.
(More RAM, if it's not required by the application, will be used as
disk cache, and in most cases that helps mitigate a slow or overused disk.)

> I have heard numerous mentions of similar behavior from users of other
> utility platforms. There is a recent (Feb 2009) report by IBM that
> also describes this behavior once #domU exceeds six.

Yeah, about a month ago I had a customer complaining about this, wanting
more CPU.  I talked him into getting more RAM (based on his iowait numbers)
and his performance improved.  Disk is orders of magnitude slower than
just about anything else (besides maybe network), so whenever you can
exchange disk access for RAM access, you see dramatic performance
improvements.
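A rough way to see how much of a guest's RAM is actually going to disk cache
(the numbers below are made up, just to show where to look):

    free -m
    #              total  used  free  shared  buffers  cached
    # Mem:           960   940    20       0       45     600
    # -/+ buffers/cache:   295   665
    # "cached" is what the kernel can use as disk cache; if that number
    # is tiny and iowait is high, more RAM usually pays off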

> My point, however, is that Xen performance is not well understood in
> general, and there are situations where virtualization doesn't perform
> well.

From what I have seen, the overhead of using phy:// disks is pretty small
when you are the only VM trying to access the disk, but having a bunch of
other guests hitting the same disk as you can really slow you down; it
seems to me that it turns all your sequential access into random access.
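If you suspect that kind of contention, iostat from the sysstat package is
handy, ideally run in dom0 against the backing device so you see what all the
guests together are doing to it:

    # extended per-device stats, 5 second intervals
    iostat -x 5
    # watch await (avg ms per request) and %util on the shared device;
    # high await with only modest throughput usually means other guests
    # are seeking on the same spindles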

Also note, I've seen better worst-case performance by giving each VM fewer
VCPUs, and the Xen guys are not kidding about dedicating a core to the
dom0.  Setting cpus="1-7" in your xm config file (assuming an 8-core box)
and giving dom0 only 1 VCPU makes a world of difference on heavily
loaded boxes; see the example below.
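For example (illustrative values; the runtime commands assume the classic xm
toolstack):

    # in each domU's config file, on an 8-core box, keep guests off cpu 0:
    vcpus = 2
    cpus  = "1-7"

    # and shrink/pin dom0 itself, either with dom0_max_vcpus=1 on the
    # hypervisor boot line, or at runtime:
    xm vcpu-set Domain-0 1
    xm vcpu-pin Domain-0 0 0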


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
