This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Release 0.8.9 of GPL PV drivers for Windows
From: jim burns <jim_burn@xxxxxxxxxxxxx>
Date: Tue, 6 May 2008 05:36:48 -0400
Delivery-date: Tue, 06 May 2008 02:41:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20080506072129.GE20425@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D013DC578@trantor> <200805052032.46956.jim_burn@xxxxxxxxxxxxx> <20080506072129.GE20425@xxxxxxxxxxxxxxx> (sfid-20080506_032337_962486_1210061F)
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.9
On Tuesday May 06 2008 03:21:29 am Pasi Kärkkäinen wrote:
> > - dom0 vs. domu: obviously, the standard to match is dom0 performance. (I
> > suspect, tho', that non-xen kernel performance would be even better.)
> > Looking at the 4k pattern numbers above, hvm severely lags dom0.
> > Interestingly enough, for the 32k pattern, hvm is doing better than dom0.
> domU doing better than dom0 usually happens when you use file backed disks
> on dom0.. then the memory cache of dom0 will affect the domU results.

Interesting that that didn't happen with the 4k pattern numbers, tho'.

> I think you should re-test with vcpu=1.
> Configure dom0 for 1 vcpu and domU for 1 vcpu and pin the domains to have a
> dedicated core. This way you're not sharing any pcpu's between the domains.
> I think this is the "recommended" setup from xen developers for getting
> maximum performance.
> I think the performance will be worse when you have more vcpus in use than
> your actual pcpu count..

Will try that later, after I've tested out a new (non-xen) kernel update. 
Having more vcpus than pcpus would be very easy tho', if you have many 
domains. I can try this with just the hvm domain running.
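For reference, the single-vcpu setup Pasi describes might be sketched with the xm toolstack of that era roughly as follows. This is only a sketch: the domain name "winxp" and the core numbers are hypothetical, and for an HVM guest the vcpu count is normally fixed in its config file rather than changed at runtime.

```shell
# dom0: restrict it to one vcpu and pin that vcpu to physical core 0.
# (dom0's vcpu count can also be fixed at boot with the Xen option
# dom0_max_vcpus=1 on the hypervisor line in grub.)
xm vcpu-set Domain-0 1
xm vcpu-pin Domain-0 0 0

# HVM domU: set "vcpus = 1" in its config file, then pin its single
# vcpu to a different core, e.g. core 1. "winxp" is a hypothetical
# domain name.
xm vcpu-pin winxp 0 1

# Check the result - the CPU Affinity column should show the pins:
xm vcpu-list
```

With each domain holding a dedicated physical core, dom0 and the domU never contend for the same pcpu, which is the point of the "recommended" setup above.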

Xen-users mailing list
