xen-users

RE: [Xen-users] XEN - networking and performance

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] XEN - networking and performance
From: <admin@xxxxxxxxxxx>
Date: Fri, 7 Oct 2011 20:27:41 -0500
Delivery-date: Fri, 07 Oct 2011 18:29:12 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
Importance: Normal
In-reply-to: <B1B9801C5CBC954680D0374CC4EEABA50BE227D7@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Reply-to: admin@xxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AQHMhFAN5rOzJb1ggECVvcySiYiOp5Vv8jcA///SucCAAEirAIABIJaQgAB7G7A=
We've used SSD drives as cache devices (L2ARC) in ZFS SAN and NAS
solutions.  It's a cost-effective way to dramatically improve the
performance of those ZFS systems.  We usually put 300GB of SSD into each
storage system for caching.  SSD is cheap compared to RAM.

Here is a link:
http://www.zfsbuild.com/2010/07/30/testing-the-l2arc/
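
Rough back-of-the-envelope numbers, just to illustrate why the SSD tier
helps (the hit rates and latency figures below are assumptions made up
for the example, not measurements from our systems or from that article):

# Read-latency model for a RAM (ARC) / SSD (L2ARC) / disk hierarchy.
# All figures here are illustrative assumptions, not measured values.
LATENCY_US = {
    "arc_ram": 0.2,      # assumed RAM (ARC) hit latency, microseconds
    "l2arc_ssd": 150.0,  # assumed SSD (L2ARC) read latency
    "disk": 8000.0,      # assumed 7.2k SATA random read latency
}

def avg_read_latency_us(arc_hit, l2arc_hit):
    """Weighted average read latency.

    arc_hit   -- fraction of reads served from RAM (ARC)
    l2arc_hit -- fraction of the remaining reads served from SSD (L2ARC)
    """
    miss = 1.0 - arc_hit
    return (arc_hit * LATENCY_US["arc_ram"]
            + miss * l2arc_hit * LATENCY_US["l2arc_ssd"]
            + miss * (1.0 - l2arc_hit) * LATENCY_US["disk"])

# Without L2ARC, every ARC miss goes to spinning disk.
print("no L2ARC  :", avg_read_latency_us(0.70, 0.0), "us")
# With an L2ARC absorbing, say, 80% of ARC misses.
print("with L2ARC:", avg_read_latency_us(0.70, 0.8), "us")

Attaching the cache device itself is a one-liner on the pool
("zpool add <pool> cache <device>"); sizing and hit rates will obviously
vary with your working set.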



-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jeff Sturm
Sent: Friday, October 07, 2011 1:13 PM
To: Simon Hobson; xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] XEN - networking and performance

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Simon Hobson
> Sent: Thursday, October 06, 2011 4:51 PM
>
> Jeff Sturm wrote:
> 
> >One of the traps we've run into when virtualizing moderately I/O-heavy
> >hosts, is not sizing our disk arrays right.  Not in terms of capacity
> >(terabytes) but in spindles.  If each physical host normally has 4
> >dedicated disks, for example, virtualizing 8 of these onto a domU
> >attached to a disk array with 16 drives effectively cuts that ratio
> >from 4:1 down to 2:1.  Latency goes up, throughput goes down.
> 
> Not only that, but you also guarantee that the I/O is across different
> areas of the disk (different partitions/logical volumes) and so you also
> virtually guarantee a lot more seek activity.

Very true, yes.  In such an environment, sequential disk performance means
very little.  You need good random I/O throughput, and that's hard to get
from mechanical disks beyond a few thousand IOPS.  15k disks help, a larger
chassis with more disks helps, but that's just throwing $$$ at the problem
and doesn't really break through the IOPS barrier.
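
Some rough arithmetic on that, and on the spindle-ratio example quoted
further up (the per-device IOPS figures are common rules of thumb I'm
assuming, not numbers measured in this thread):

# Aggregate random-IOPS estimate, assuming load spreads evenly across
# devices.  Per-device figures are rule-of-thumb assumptions.
IOPS_PER_DEVICE = {
    "7.2k SATA": 80,
    "15k SAS": 180,
    "SATA SSD": 30000,   # order-of-magnitude assumption
}

def array_iops(devices, kind):
    return devices * IOPS_PER_DEVICE[kind]

# 8 hosts with 4 dedicated disks each (32 spindles of load) consolidated
# onto a 16-drive array: the spindle ratio drops from 4:1 to 2:1.
hosts, disks_per_host, array_drives = 8, 4, 16
print("spindles per host, before:", disks_per_host)
print("spindles per host, after :", array_drives / hosts)

# Even an all-15k, 16-drive array tops out around a few thousand random
# IOPS, while a single SSD is an order of magnitude beyond that.
print("16 x 15k SAS:", array_iops(16, "15k SAS"), "IOPS")
print(" 1 x SSD    :", array_iops(1, "SATA SSD"), "IOPS")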

Anyone tried SSD with good results?  I'm sure capacity requirements can make
it cost-prohibitive for many.

Jeff



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users