WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-users] Looking for tips about Physical Migration on XEN

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Looking for tips about Physical Migration on XEN
From: Javier Guerra <javier@xxxxxxxxxxx>
Date: Mon, 19 Jun 2006 18:26:23 -0500
Delivery-date: Mon, 19 Jun 2006 16:28:59 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <Pine.LNX.4.44.0606191553370.26751-100000@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <Pine.LNX.4.44.0606191553370.26751-100000@xxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.1
On Monday 19 June 2006 6:01 pm, tbrown@xxxxxxxxxxxxx wrote:
> AoE exports a block device, AFAIK, this means you can _not_ have two nodes
> accessing (mounting) it at the same time or you are basically guaranteed

that's precisely the point of it.  AoE (or NBD, or iSCSI, or FC) gives you a 
block device.  you then partition it (GPT, LVM, EVMS) and _DON'T_ mount those 
LVs on dom0; just hand them to the domUs.  only one domU mounts each LV, so 
no problem there.  when migrating, the 'new' domU must have access to the 
same LV, but by that time the 'old' domU isn't running anymore, so at no 
moment is any LV used by more than one domU.
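the setup above -- one LV per domU carved out of a shared block device, never 
mounted in dom0 -- could be sketched roughly like this.  the device path 
(/dev/etherd/e0.0), the volume group name, and the domU name are all made up 
for illustration; the actual AoE device name depends on your shelf/slot 
numbering and aoe driver version:

```shell
# on each dom0: load the AoE initiator and discover targets on the LAN
modprobe aoe
aoe-discover

# ONCE, on one host only: initialize the shared device for LVM
pvcreate /dev/etherd/e0.0
vgcreate shared_vg /dev/etherd/e0.0
lvcreate -L 10G -n domU1_disk shared_vg   # one LV per guest

# in the domU config (e.g. /etc/xen/domU1), hand the LV over as a raw
# block device -- dom0 itself never mounts it:
#   disk = ['phy:/dev/shared_vg/domU1_disk,xvda,w']

# live migration: the target dom0 sees the same VG over AoE, and the
# source domU has stopped before the destination domU touches the LV
xm migrate --live domU1 target-host
```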

> Also, at least with the version I tested, the vblade write performance
> sucked (5 Mbyte vs 40 Mbyte read) ... and the coraid docs showed similar
> numbers 5 Mbyte/s read/write per drive. That may be perfectly acceptable
for you. It isn't bad. I tried nbd and it was much more symmetric (40 or
> more Mbyte/sec both ways).

i haven't tested vblade yet, but the Coraid 15-bay SATA box easily gives me 
over 45-50 MB/sec for either reads or writes on GbE, no jumbo frames (yet)


-- 
Javier

Attachment: pgpsmGZ4S6ZWw.pgp
Description: PGP signature

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users