
Re: [Xen-users] iscsi vs nfs for xen VMs


  • To: Jia Rao <rickenrao@xxxxxxxxx>
  • From: Christopher Chen <muffaleta@xxxxxxxxx>
  • Date: Fri, 7 Aug 2009 07:00:20 -0700
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 07 Aug 2009 07:01:52 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Jia:

You're partially correct. iSCSI as a protocol has no problem allowing
multiple initiators access to the same block device, but you're almost
certain to run into corruption if you don't set up a higher-level
locking mechanism to keep access consistent across all of the
initiators.

To state it again: iSCSI is not, by itself, a protocol that provides
the features necessary for a shared filesystem.

If you want to do that, you need to look into the shared filesystem
space (OCFS2, GFS, etc).
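
For example, here is a rough sketch of the OCFS2 route (it assumes the
shared LUN shows up as /dev/sdc on every host and that the o2cb cluster
stack is already configured with all of your dom0s as nodes; the names
are made up):

    # on one host only: create the filesystem with enough node slots
    mkfs.ocfs2 -N 4 -L xenimages /dev/sdc

    # on every host: mount it at the same mount point
    mount -t ocfs2 /dev/sdc /var/lib/xen/images

Once that's in place, the img files under /var/lib/xen/images are safely
visible from all hosts and you can keep booting the VMs with tap:aio as
you do today.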

The other option is to set up individual logical volumes on the shared
LUN for each VM. Note that this still requires an inter-machine locking
protocol--in my case, Clustered LVM.
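
Roughly, that looks like this (a sketch only--it assumes clvmd is
running on every host, the LUN appears as /dev/sdc everywhere, and the
VG/LV names are just examples):

    pvcreate /dev/sdc
    vgcreate -cy xenvg /dev/sdc         # -cy marks the VG as clustered
    lvcreate -L 10G -n vm01-disk xenvg  # one logical volume per VM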

There are quite a few of us who have gone ahead and used clustered LVM
with the phy handler--this keeps the LVM metadata consistent across the
machines, while we administratively restrict access to each logical
volume to one machine at a time (unless we're doing a live migration).
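
The corresponding domU config line would look something like this
(again just an illustration, using the example names from above):

    disk = [ 'phy:/dev/xenvg/vm01-disk,xvda,w' ]

Each logical volume is attached to only one dom0 at a time, except
during a live migration, so there is no filesystem-level sharing to
worry about.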

I hope this helps.

Cheers

cc

On Fri, Aug 7, 2009 at 6:48 AM, Jia Rao <rickenrao@xxxxxxxxx> wrote:
> Hi all,
>
> I used to host the disk images of my Xen VMs on an NFS server and am
> considering moving to iSCSI for performance reasons.
> Here is the problem I encountered:
>
> With iSCSI, there are two ways to export the virtual disks to the physical
> machines hosting the VMs.
>
> 1. Export each virtual disk (on the target side it is either an img file or
> an LVM logical volume) as a block device, e.g. sdc, then boot the VM using
> "phy:/dev/sdc".
>
> 2. Export the partition containing the virtual disks (in this case, the
> virtual disks are img files) to each physical machine as a block device,
> and then mount the new device on each physical machine's file system.
> In this way, the img files are accessible from every physical machine
> (similar to NFS), and the VMs are booted using tapdisk:
> "tap:aio:/PATH_TO_IMG_FILES".
>
> I prefer the second approach because I need tapdisk (each virtual disk is
> backed by a process on the host machine) to control the I/O priority among
> VMs.
>
> However, there is a problem when I share the LUN containing all the VM img
> files among multiple hosts.
> It seems that any modification to the LUN (writing some data to the folder
> where the LUN is mounted) is not immediately observable on the other hosts
> sharing the LUN (with NFS, changes are immediately visible to all the NFS
> clients). The changes only become visible after I unmount the LUN and
> remount it on the other physical hosts.
>
> From searching the Internet, it seems that iSCSI is not intended for
> sharing a single LUN between multiple hosts.
> Is that true, or do I need some specific configuration of the target or
> initiator to make changes immediately visible to multiple initiators?
>
> Thanks in advance,
> Jia
>



-- 
Chris Chen <muffaleta@xxxxxxxxx>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

