WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-users

Re: [Xen-users] redhat native vs. redhat on XCP

To: Grant McWilliams <grantmasterflash@xxxxxxxxx>
Subject: Re: [Xen-users] redhat native vs. redhat on XCP
From: Boris Quiroz <bquiroz.work@xxxxxxxxx>
Date: Tue, 18 Jan 2011 09:31:49 -0300
Cc: Henrik Andersson <henrik.j.andersson@xxxxxxxxx>, xenList <xen-users@xxxxxxxxxxxxxxxxxxx>, Javier Guerra Giraldez <javier@xxxxxxxxxxx>
Delivery-date: Tue, 18 Jan 2011 04:32:57 -0800
In-reply-to: <AANLkTik-6wFwTtxCnu6QFAoPR58nP5Heg62k_C2=NhWh@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTinXdOFE+wtNTOyv9C1C2_x7qKkLj-sh+Kby_yZU@xxxxxxxxxxxxxx> <AANLkTikiX+FgS+RgYrd-OASUUoryDcexnu37UwGS1g+2@xxxxxxxxxxxxxx> <4D2E0D51.9070700@xxxxxxxxxxx> <AANLkTi=UcX-Y+-ymujgtnkF9BDF=1FMcLqa6PLTZtFbB@xxxxxxxxxxxxxx> <AANLkTikFdpna7=u2rpLP99msn4TMYEPjnovfXOZ3m0Y7@xxxxxxxxxxxxxx> <AANLkTi=GFWyWwrnqGtMHYfbCzH1mRHKwU34wQTpQDPq7@xxxxxxxxxxxxxx> <AANLkTik+=EtqStD1hcyB3py+0gPmrh5ev46C4RutHNO1@xxxxxxxxxxxxxx> <AANLkTikFuBq9rqtC-PO5JAFP7w_pR-iw6LCFWKYXz7Rh@xxxxxxxxxxxxxx> <AANLkTi=PE2a-Lp9H2xTCAaq49RPgRHU1O8M5_iQx51ru@xxxxxxxxxxxxxx> <AANLkTi=jSkaD_MtgiF+n7M=zOThUpe-M0JDAnb_JTy_7@xxxxxxxxxxxxxx> <AANLkTik-6wFwTtxCnu6QFAoPR58nP5Heg62k_C2=NhWh@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
2011/1/18 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
>
>
>
> On Mon, Jan 17, 2011 at 11:22 AM, Boris Quiroz <bquiroz.work@xxxxxxxxx>
> wrote:
>>
>> 2011/1/16 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
>> >
>> >
>> > On Sun, Jan 16, 2011 at 7:22 AM, Javier Guerra Giraldez
>> > <javier@xxxxxxxxxxx>
>> > wrote:
>> >>
>> >> On Sun, Jan 16, 2011 at 1:39 AM, Grant McWilliams
>> >> <grantmasterflash@xxxxxxxxx> wrote:
>> >> > As long as I use an LVM volume I get very, very near native
>> >> > performance, i.e. mysqlbench comes in at about 99% of native.
>> >>
>> >> Without any real load on other DomUs, I guess.
>> >>
>> >> In my setup the biggest 'con' of virtualizing some loads is the
>> >> sharing of resources, not the hypervisor overhead. Since it's easier
>> >> (and cheaper) to get hardware oversized on CPU and RAM than on I/O
>> >> speed (especially on IOPS), that means that I have some database
>> >> servers that I can't virtualize in the near term.
>> >>
>> > But that is the same as just putting more than one service on one box. I
>> > believe he was wondering what the overhead of virtualizing was as opposed
>> > to bare metal. Anytime you have more than one process running on a box
>> > you have to think about the resources they use and how they'll interact
>> > with each other. This has nothing to do with virtualizing itself unless
>> > the hypervisor has a bad scheduler.
>> >
>> >> Of course, most of this would be solved by dedicating spindles instead
>> >> of LVs to VMs; maybe when (if?) I get most boxes with lots of 2.5"
>> >> bays, instead of the current 3.5" ones. Not using LVM is a real
>> >> drawback, but it still seems to be better than dedicating whole boxes.
>> >>
>> >> --
>> >> Javier
>> >
>> > I've moved all my VMs to running on LVs on SSDs for this purpose. The
>> > overhead of an LV over just a bare drive is very, very small unless
>> > you're doing a lot of snapshots.
>> >
>> >
>> > Grant McWilliams
>> >
>> > Some people, when confronted with a problem, think "I know, I'll use
>> > Windows."
>> > Now they have two problems.
>> >
>> >
>>
>> Hi list,
>>
>> I did a preliminary test using [1], and the result was close to what I
>> expected. This was a very, very small test, because I have a lot of things
>> to do before I can set up a good and representative test, but I think
>> it is a good start.
>>
>> Using the stress tool, I started with the default command: stress --cpu
>> 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s. Here's the output from
>> both the Xen and non-Xen servers:
>>
>> [root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
>> stress: info: [3682] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
>> stress: info: [3682] successful run completed in 10s
>>
>> [root@non-xen ~]#  stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout
>> 10s
>> stress: info: [5284] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
>> stress: info: [5284] successful run completed in 10s
>>
>> As you can see, the result is the same, but what happens when I add
>> HDD I/O to the test? Here's the output:
>>
>> [root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10
>> --timeout 10s
>> stress: info: [3700] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
>> stress: info: [3700] successful run completed in 59s
>>
>> [root@non-xen ~]#  stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd
>> 10 --timeout 10s
>> stress: info: [5332] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
>> stress: info: [5332] successful run completed in 37s
>>
>> With some HDD stress included, the results differ. Both servers (Xen
>> and non-Xen) are using LVM, but to be honest, I was expecting this
>> kind of result because of the disk access.
>>
>> Later this week I'll continue with the tests (well-designed tests :P)
>> and I'll share the results.
>>
>> Cheers.
>>
>> 1. http://freshmeat.net/projects/stress/
>>
>> --
>> @cereal_bars
>
> You weren't specific about whether the Xen tests were done on a Dom0 or a
> DomU. I would assume DomU, since there should be next to zero overhead for
> a Xen Dom0 over a non-Xen host. Can you post your DomU config, please?
>
> Grant McWilliams
>
>

Sorry, I forgot to include that info.
And yes, the tests were done in a DomU running on XCP 0.5. In [1] you
can find the output of the xe vm-param-list command.
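
To be clear about how that paste was produced (this is just the usual XCP
CLI form, so take the exact invocation as my assumption rather than a
transcript; <vm-uuid> is a placeholder):

[root@xen ~]# xe vm-list                       # list VMs and their uuids
[root@xen ~]# xe vm-param-list uuid=<vm-uuid>  # dump that VM's parameters (the output in [1])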

As I said, later this week or maybe next week I'll start with a
well-designed test (not designed yet, so any comments/advice are
welcome) and prepare a little report about it.
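
For the disk part, a minimal sketch of what I have in mind (file paths and
sizes are placeholders, not the final test plan) is a direct-I/O dd pass run
identically on the DomU and on the bare-metal box, so the page cache doesn't
hide the difference:

[root@xen ~]# dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct   # raw write throughput
[root@xen ~]# dd if=/root/ddtest of=/dev/null bs=1M count=1024 iflag=direct   # raw read throughput
[root@xen ~]# rm -f /root/ddtest                                              # clean up the test file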

Thanks.

1. https://xen.privatepaste.com/2c123b90c1

-- 
@cereal_bars

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users