xen-users

RE: [Xen-users] How many guests

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] How many guests
From: Matej Zary <zary@xxxxxxxxx>
Date: Mon, 7 Jun 2010 09:39:26 +0200
Accept-language: en-US
Delivery-date: Mon, 07 Jun 2010 00:40:57 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C0C9CE5.4090807@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4C0AD5A6.9040307@xxxxxxxxxxx> <4C0BF478.6040903@xxxxxxxxxx> <4C0BF94E.10806@xxxxxxxxxxx> <201006062321.30750.bart.coninckx@xxxxxxxxxx> <4C0C9BE8.7050106@xxxxxxxxxx>,<4C0C9CE5.4090807@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsGEbluj+8SmKS/SgeHRCbzCfbHhQAAZMZt
Thread-topic: [Xen-users] How many guests
Software iSCSI is "free", but all the processing is done by your CPU. The
CPU load itself is not the crucial problem; the problem is speed. With iSCSI
offload via an iSCSI HBA you can get significantly better performance (if
software iSCSI is the bottleneck, of course). It depends on your needs,
though: in many cases software iSCSI is perfectly fine (but there are various
implementations, some better, some worse). iSCSI HBAs (accelerators) are
usually NOT cheap. :)


The good thing is that you can benchmark software iSCSI for "free" (well,
the time is not free :) ), and it might be perfectly suitable for your
application. The important part is to have a quality NIC (fast, with good
offload) and to use network bonding (if not using iSCSI HBAs).
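
If it helps, here is a very rough way to get a first number from the
initiator side. This is only a sketch in Python: /dev/sdb is a placeholder
for wherever your iSCSI LUN shows up, it needs root, and it only measures
streaming reads (drop the page cache first or the result will be inflated):

#!/usr/bin/env python
# Rough sequential-read test of a block device (e.g. an iSCSI LUN).
# /dev/sdb is a placeholder - point it at your own LUN. Run as root and
# do "echo 3 > /proc/sys/vm/drop_caches" beforehand so the page cache
# does not inflate the result. This says nothing about random IOPS.
import os
import sys
import time

DEVICE = "/dev/sdb"            # assumed device node of the iSCSI disk
CHUNK = 1024 * 1024            # read 1 MB at a time
TOTAL = 1024 * 1024 * 1024     # stop after 1 GB

def throughput(device):
    fd = os.open(device, os.O_RDONLY)
    try:
        done = 0
        start = time.time()
        while done < TOTAL:
            data = os.read(fd, CHUNK)
            if not data:           # reached the end of the device
                break
            done += len(data)
        elapsed = time.time() - start
    finally:
        os.close(fd)
    return done / elapsed / (1024.0 * 1024.0)   # MB/s

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else DEVICE
    print("%s: ~%.1f MB/s sequential read" % (dev, throughput(dev)))

For anything serious you would of course use a proper benchmark tool (fio
or similar) and test random I/O as well, since that is what a pile of VMs
actually generates, but this already shows whether the link or the target
is the obvious limit.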



Regards


Matej



________________________________________
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
[xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jonathan Tripathy 
[jonnyt@xxxxxxxxxxx]
Sent: 07 June 2010 09:16
To: Michael Schmidt; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] How many guests

Hi Michael,

You state that iSCSI is reliable but expensive. But isn't iSCSI nearly free?

I agree with you that Fibre Channel systems are very expensive.

Would iSCSI over IP be ok?

Thanks


On 07/06/10 08:12, Michael Schmidt wrote:
> This is not completely correct.
> With a RAID 1, you have the read performance of two disks but only the
> write performance of a single disk.
>
> To the other points raised in this thread:
> If you use network storage, the connection imposes a bandwidth limit.
> But in most cases the raw bandwidth is not the bottleneck (the IOs per
> second are).
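
Just to put rough numbers on that point (back-of-envelope only; the
per-disk and per-link figures below are typical rule-of-thumb values I am
assuming, not measurements):

# Why random IOPS, not raw bandwidth, is usually the limit.
# All figures are rule-of-thumb assumptions, not measurements.
GIGE_MB_S = 110.0        # usable throughput of a single GigE link
IOPS_15K_SAS = 180.0     # random IOPS one 15k rpm SAS disk can sustain
IO_KB = 4.0              # typical small random I/O size

disks = 2                # e.g. a RAID1 pair (reads scale ~2x, writes ~1x)
random_mb_s = disks * IOPS_15K_SAS * IO_KB / 1024.0
print("Random 4K I/O from %d disks: ~%.1f MB/s vs ~%.0f MB/s on the wire"
      % (disks, random_mb_s, GIGE_MB_S))
# -> roughly 1.4 MB/s of random I/O against ~110 MB/s of link capacity:
#    the disks run out of IOPS long before a gigabit link runs out of
#    bandwidth.
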
>
> Network storage using NFS or NBD is not stable enough in my eyes.
> iSCSI and FC SANs, on the other hand, are really stable, but expensive
> as well. But there is another, much less expensive way:
>
> Most servers are available with an external SAS port. Over a SAS link
> you can connect a JBOD with 12 - 16 disk bays (DAS).
> These disks can be managed by the server's RAID controller.
>
> Best Regards
>
> Michael Schmidt
>
>
> Am 06.06.10 23:21, schrieb Bart Coninckx:
>> RAID1 does not perform better than a single disk. It will still
>> depend on what
>> those 5 to 10 VMs would do. It still might be stretching it. For 10
>> webservers
>> visited by 5 users per hour: I would say no problem. For 5 heavily used
>> database servers it will be another story.
>>
>> I guess the only real way to find out is to put your guests on there
>> and try.
>> If you clone them, you will know quite fast.
>>
>>
>> On Sunday 06 June 2010 21:38:54 Jonathan Tripathy wrote:
>>> Thanks Michael,
>>>
>>> I understand what you are saying.
>>>
>>> With a small setup such as a RAID1 array, how many VMs could I rent
>>> out?
>>>
>>> It doesn't matter if it's a small number, it's just to utilise the
>>> server a bit.
>>>
>>> Think it would cope with 5-10?
>>>
>>> Thanks
>>>
>>> Jonathan
>>>
>>> On 06/06/10 20:18, Michael Schmidt wrote:
>>>> Hi Jonathan,
>>>>
>>>> if you plan to migrate existing physical machines to Xen VMs, or you
>>>> have some similar machines for comparison,
>>>> you can easily get runtime statistics and calculate the usage. Look at
>>>> the running iostat and CPU usage.
>>>>
>>>> If you plan to rent generic VMs on this server to customers, your
>>>> disk/RAID setup will absolutely be the bottleneck.
>>>> A solution at this point is not easy. If you have many write IOs, use
>>>> RAID 10 with 4 to 8 disks. With many reads, RAID 6 or 50 with the
>>>> same number of disks.
>>>> In either case I would suggest 15k rpm SAS disks.
>>>>
>>>> Then you can run 29 VMs. Or 60 VMs with 16GB memory and 2 CPUs.
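
For what it's worth, the memory side of those figures checks out against
the 768MB Dom0 reservation mentioned further down and 256MB per guest
(a quick sketch, ignoring the CPU and disk limits):

# Memory-only guest count; CPU and disk IOPS are separate limits.
# 768 MB for Dom0 and 256 MB per guest are the figures from this thread.
def max_guests(ram_mb, dom0_mb=768, guest_mb=256):
    return (ram_mb - dom0_mb) // guest_mb

print(max_guests(8 * 1024))     # 8 GB box  -> 29
print(max_guests(16 * 1024))    # 16 GB box -> 61 (roughly the "60" above)
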
>>>>
>>>> But note: you cannot set disk priority for the VMs. So if one VM does
>>>> heavy disk IO, all of the other VMs are slowed down.
>>>>
>>>> Best Regards
>>>>
>>>> Michael Schmidt
>>>>
>>>> Am 06.06.10 20:45, schrieb Jonathan Tripathy:
>>>>> Hi Michael,
>>>>>
>>>>> Thanks for your email.
>>>>>
>>>>> This is just an idea that I have floating around in my head that
>>>>> maybe I'd like to rent out some VPSs to customers, just to utilise my
>>>>> machine which will be sitting in a co-lo nearly idle.
>>>>>
>>>>> I'd give out VPSs with 256MB RAM and probably 5Mbps connection speed.
>>>>>
>>>>> So the answer is, I don't know what will be running on them, however
>>>>> I could write up an "acceptable use policy", as well as use some
>>>>> throttling/scheduling?
>>>>>
>>>>> Thanks
>>>>>
>>>>> On 06/06/10 19:39, Michael Schmidt wrote:
>>>>>> Hi Jonathan,
>>>>>>
>>>>>> the question is: what kind of VM?
>>>>>> You can over-utilize a much bigger machine with a single VM.
>>>>>> Or, on the other hand, you can run 40 VMs on a smaller machine.
>>>>>>
>>>>>> Each resource can be a bottleneck:
>>>>>>
>>>>>> - Memory - this is really easy to calculate: available minus 768MB
>>>>>> (reserved for Dom0; that should be enough in this case).
>>>>>> - CPU - here we need a VM statistic
>>>>>> - Disk bandwidth - here we need a VM statistic, but in most cases
>>>>>> not the bottleneck
>>>>>> - Disk IOPS - here we need a VM statistic, in most cases the
>>>>>> bottleneck
>>>>>>
>>>>>> What kind of VMs do you plan to run?
>>>>>> Webservers / mailservers / database servers ...?
>>>>>>
>>>>>> Best Regards
>>>>>>
>>>>>> Michael Schmidt
>>>>>>
>>>>>> Am 06.06.10 00:54, schrieb Jonathan Tripathy:
>>>>>>> Hi Everyone,
>>>>>>>
>>>>>>> I have a Dell R210 server which has a Xeon X3430 Quad Core CPU
>>>>>>> (2.4GHz x 4) with 8GB of RAM. I intend to use the H200 controller
>>>>>>> in a RAID1 setup.
>>>>>>>
>>>>>>> How many VMs do you think I'd be able to run on this machine? Is 20
>>>>>>> pushing it?
>>>>>>>
>>>>>>> I'd say most (if not all) guests would be in PV mode.
>>>>>>>
>>>>>>> Thanks
>>>>>>>


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
