This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] redhat native vs. redhat on XCP

To: Boris Quiroz <bquiroz.work@xxxxxxxxx>
Subject: Re: [Xen-users] redhat native vs. redhat on XCP
From: Grant McWilliams <grantmasterflash@xxxxxxxxx>
Date: Mon, 17 Jan 2011 20:54:03 -0800
Cc: Henrik Andersson <henrik.j.andersson@xxxxxxxxx>, xenList <xen-users@xxxxxxxxxxxxxxxxxxx>, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
Delivery-date: Mon, 17 Jan 2011 20:56:03 -0800

On Mon, Jan 17, 2011 at 11:22 AM, Boris Quiroz <bquiroz.work@xxxxxxxxx> wrote:
2011/1/16 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
> On Sun, Jan 16, 2011 at 7:22 AM, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
> wrote:
>> On Sun, Jan 16, 2011 at 1:39 AM, Grant McWilliams
>> <grantmasterflash@xxxxxxxxx> wrote:
>> > As long as I use an LVM volume I get very, very near-native performance,
>> > i.e. mysqlbench comes in at about 99% of native.
>> without any real load on other DomUs, I guess
>> in my settings the biggest 'con' of virtualizing some loads is the
>> sharing of resources, not the hypervisor overhead.  Since it's easier
>> (and cheaper) to get hardware oversized on CPU and RAM than on IO
>> speed (especially on IOPS), that means that I have some database
>> servers that I can't virtualize on the near term.
> But that is the same as just putting more than one service on one box. I
> believe he was wondering what the overhead was to virtualizing as opposed to
> bare metal. Anytime you have more than one process running on a box you have
> to think about the resources they use and how they'll interact with each
> other. This has nothing to do with virtualizing itself unless the hypervisor
> has a bad scheduler.
>> Of course, most of this would be solved by dedicating spindles instead
>> of LVs to VMs;  maybe when (if?) i get most boxes with lots of 2.5"
>> bays, instead of the current 3.5" ones.  Not using LVM is a real
>> drawback, but it still seems to be better than dedicating whole boxes.
>> --
>> Javier
> I've moved all my VMs to running on LVs on SSDs for this purpose. The
> overhead of LV over just bare drives is very very little unless you're doing
> a lot of snapshots.
> Grant McWilliams
> Some people, when confronted with a problem, think "I know, I'll use
> Windows."
> Now they have two problems.

Hi list,

I did a preliminary test using [1], and the results were close to what I
expected. This was a very small test, since I have a lot of things to do
before I can set up a good, representative test, but I think it's a good
start.

Using the stress tool, I started with the default command: stress --cpu
8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s. Here's the output from
both the Xen and non-Xen servers:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [3682] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [3682] successful run completed in 10s

[root@non-xen ~]#  stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
stress: info: [5284] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
stress: info: [5284] successful run completed in 10s

As you can see, the results are the same, but what happens when I add
HDD I/O to the test? Here's the output:

[root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [3700] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [3700] successful run completed in 59s

[root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10 --timeout 10s
stress: info: [5332] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
stress: info: [5332] successful run completed in 37s

With some HDD stress included, the results differ. Both servers (Xen
and non-Xen) are using LVM, but to be honest, I was expecting this
kind of result because of the disk access.

Later this week I'll continue with the tests (well-designed tests :P)
and share the results.
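For those follow-up runs, a simple wall-clock wrapper makes the numbers comparable across hosts, since stress only reports whole seconds. A minimal sketch, where the sleep stands in for the real benchmark command:

```shell
# Time a benchmark run in wall-clock seconds. Replace the sleep with the
# actual invocation, e.g.: stress --cpu 8 --hdd 10 --timeout 10s
start=$(date +%s)
sleep 3   # placeholder for the real benchmark run
end=$(date +%s)
echo "run completed in $((end - start))s"
```

Logging these lines per host and per workload makes it easy to tabulate the Xen vs. non-Xen deltas later.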


1. http://freshmeat.net/projects/stress/


You weren't specific about whether the Xen tests were done in a Dom0 or a DomU. I'd assume DomU, since there should be next to zero overhead for a Xen Dom0 over a non-Xen host. Can you post your DomU config, please?
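For reference, a PV DomU backed by an LVM logical volume is usually configured along these lines (all names, sizes, and paths here are illustrative examples, not taken from this thread):

```
# /etc/xen/rhel-guest.cfg -- illustrative example only
name       = "rhel-guest"
memory     = 2048
vcpus      = 2
bootloader = "/usr/bin/pygrub"
# LVM logical volume exported to the guest via the phy: backend
disk       = [ "phy:/dev/vg0/rhel-guest-disk,xvda,w" ]
vif        = [ "bridge=xenbr0" ]
```

The phy: backend on an LV is what gives the near-native disk numbers discussed earlier; a file-backed disk would typically show a larger gap.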

Grant McWilliams

Xen-users mailing list