RE: [Xen-users] VCPUs with many (20?) domains

To: "'Robert Hulme'" <rob@xxxxxxxxxxxx>, "'Xen-users'" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] VCPUs with many (20?) domains
From: "Sylvain Coutant" <sco@xxxxxxxxxx>
Date: Tue, 23 May 2006 12:23:33 +0200
Delivery-date: Tue, 23 May 2006 03:24:40 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <e50d039c0605230316l6dffecafsb423ba54c8538a4f@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: ADVISEO
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcZ+UWbfwb6dP5SkQm+IgbH5rxwyPgAAMD9Q
> Anyone have any hints / suggestions / ideas?

Yep. VCPU 0 will take a huge hit for I/Os, as they are not well balanced. So just 
make sure you spread all your VCPU 0s over as many physical CPUs as you can.
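
For example, with the xm tools you could pin VCPU 0 of each guest to a 
different physical CPU (just a rough sketch; the domain names and CPU numbers 
are made up, adapt them to your box):

  xm vcpu-pin guest1 0 0    # VCPU 0 of guest1 -> physical CPU 0
  xm vcpu-pin guest2 0 1    # VCPU 0 of guest2 -> physical CPU 1
  xm vcpu-pin guest3 0 2    # and so on across the physical CPUs
  xm vcpu-pin guest4 0 3

"xm vcpu-list" will show you where each VCPU is currently allowed to run.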

Unless you have huge needs for processing power in your domUs, I'm not sure you'd 
benefit from 4 VCPUs per server. Having two of them would keep the I/O load on 
VCPU 0, and processes will still be able to run in parallel on VCPU 1. Could be a 
nice deal.
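
Something like this in each domU config would do it (again only a sketch; the 
"0-3" range assumes a 4-way box):

  vcpus = 2        # VCPU 0 handles the I/O, VCPU 1 keeps running the processes
  cpus  = "0-3"    # physical CPUs the domain may be scheduled on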


BR,

--
Sylvain COUTANT

ADVISEO
http://www.adviseo.fr/
http://www.open-sp.fr/



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
