2011/1/18 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
>
>
>
> On Mon, Jan 17, 2011 at 11:22 AM, Boris Quiroz <bquiroz.work@xxxxxxxxx>
> wrote:
>>
>> 2011/1/16 Grant McWilliams <grantmasterflash@xxxxxxxxx>:
>> >
>> >
>> > On Sun, Jan 16, 2011 at 7:22 AM, Javier Guerra Giraldez
>> > <javier@xxxxxxxxxxx>
>> > wrote:
>> >>
>> >> On Sun, Jan 16, 2011 at 1:39 AM, Grant McWilliams
>> >> <grantmasterflash@xxxxxxxxx> wrote:
>> >> > As long as I use an LVM volume I get very nearly native performance,
>> >> > i.e. mysqlbench comes in at about 99% of native.
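>> >> > For what it's worth, that's just an LV handed to the DomU through the
>> >> > phy: backend, something like this in the guest config (the VG/LV names
>> >> > are only an example):
>> >> >
>> >> > disk = [ 'phy:/dev/vg0/domu1-disk,xvda,w' ]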
>> >>
>> >> Without any real load on the other DomUs, I guess.
>> >>
>> >> In my setup the biggest 'con' of virtualizing some loads is the
>> >> sharing of resources, not the hypervisor overhead. Since it's easier
>> >> (and cheaper) to get hardware oversized on CPU and RAM than on I/O
>> >> speed (especially on IOPS), I have some database servers that I
>> >> can't virtualize in the near term.
>> >>
>> > But that is the same as just putting more than one service on one box.
>> > I believe he was wondering what the overhead of virtualizing is as
>> > opposed to bare metal. Any time you have more than one process running
>> > on a box you have to think about the resources they use and how they'll
>> > interact with each other. That has nothing to do with virtualization
>> > itself unless the hypervisor has a bad scheduler.
>> >
>> >
>> >> Of course, most of this would be solved by dedicating spindles instead
>> >> of LVs to VMs; maybe when (if?) most of my boxes have lots of 2.5"
>> >> bays instead of the current 3.5" ones. Not using LVM is a real
>> >> drawback, but it still seems better than dedicating whole boxes.
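>> >>
>> >> (To make the distinction concrete, with made-up device names: a
>> >> dedicated spindle would be passed as disk = [ 'phy:/dev/sdb,xvda,w' ],
>> >> versus an LV such as disk = [ 'phy:/dev/vg0/db1,xvda,w' ].)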
>> >>
>> >> --
>> >> Javier
>> >
>> > I've moved all my VMs to LVs on SSDs for this reason. The overhead of
>> > an LV over a bare drive is very small unless you're doing a lot of
>> > snapshots.
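>> >
>> > As a rough sketch (the volume group name and sizes are placeholders),
>> > that's just plain LVM underneath: lvcreate -L 20G -n vm1 vg_ssd for the
>> > VM's disk, and lvcreate -s -L 2G -n vm1-snap /dev/vg_ssd/vm1 for a
>> > snapshot. It's the snapshot's copy-on-write bookkeeping that costs you
>> > on writes, not the LV mapping itself.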
>> >
>> >
>> > Grant McWilliams
>> >
>> > Some people, when confronted with a problem, think "I know, I'll use
>> > Windows."
>> > Now they have two problems.
>> >
>> >
>>
>> Hi list,
>>
>> I did a preliminary test using [1], and the result was close to what I
>> expected. This was a very small test, because I have a lot of things
>> to do before I can set up a good and representative test, but I think
>> it is a good start.
>>
>> Using the stress tool, I started with the default command: stress --cpu
>> 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s. Here's the output from
>> both the Xen and non-Xen servers:
>>
>> [root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s
>> stress: info: [3682] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
>> stress: info: [3682] successful run completed in 10s
>>
>> [root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout
>> 10s
>> stress: info: [5284] dispatching hogs: 8 cpu, 4 io, 2 vm, 0 hdd
>> stress: info: [5284] successful run completed in 10s
>>
>> As you can see, the result is the same, but what happens when I add
>> HDD I/O to the test? Here's the output:
>>
>> [root@xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 10
>> --timeout 10s
>> stress: info: [3700] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
>> stress: info: [3700] successful run completed in 59s
>>
>> [root@non-xen ~]# stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd
>> 10 --timeout 10s
>> stress: info: [5332] dispatching hogs: 8 cpu, 4 io, 2 vm, 10 hdd
>> stress: info: [5332] successful run completed in 37s
>>
>> With HDD stress included, the results differ. Both servers (Xen and
>> non-Xen) are using LVM, but to be honest, I was expecting this kind
>> of result because of the disk access.
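>>
>> (One idea for the next round, with the test file path as a placeholder:
>> a direct-I/O write such as
>> dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=1024 oflag=direct
>> run both in the DomU and on bare metal would isolate the disk path from
>> the page cache.)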
>>
>> Later this week I'll continue with the tests (well-designed tests :P)
>> and I'll share the results.
>>
>> Cheers.
>>
>> 1. http://freshmeat.net/projects/stress/
>>
>> --
>> @cereal_bars
>
> You weren't specific about whether the Xen tests were done in a Dom0 or
> a DomU. I'd assume DomU, since there should be next to zero overhead for
> a Xen Dom0 over a non-Xen host. Can you post your DomU config, please?
>
> Grant McWilliams
>
>
Sorry, I forgot to include that info.
Yes, the tests were done in a DomU running on XCP 0.5. In [1] you can
find the output of the xe vm-param-list command.
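(If anyone wants to pull the same data on their own XCP host, something
like the following should do it; the UUID is of course a placeholder:

xe vm-list
xe vm-param-list uuid=<vm-uuid>

The first command lists the VMs and their UUIDs, the second dumps all
parameters for the chosen DomU.)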
As I said, later this week or maybe next week I'll start with a
well-designed test (not designed yet, so any comments/advice are
welcome) and prepare a short report on it.
Thanks.
1. https://xen.privatepaste.com/2c123b90c1
--
@cereal_bars
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users