Re: [Xen-users] Xen and iSCSI

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Xen and iSCSI
From: Markus Hochholdinger <Markus@xxxxxxxxxxxxxxxxx>
Date: Tue, 31 Jan 2006 14:03:55 +0100
Delivery-date: Tue, 31 Jan 2006 13:13:56 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <43DCA940.50401@xxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <200601281724.47989.Markus@xxxxxxxxxxxxxxxxx> <43DCA940.50401@xxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.7.2
Hi,

On Sunday, 29 January 2006 12:38, you wrote:
> Markus Hochholdinger wrote:
> > Background is that I am planning virtual servers within xen and they
> > should get their disks from the network so that I can perform live
> > migration. Does anyone have a setup like this working?
> Sure. I've got a similar setup with gnbd. gnbd works like a charm. I've
> heard of people combining gnbd with drbd to get a true HA setup. I've
> only used mine for test and development.

Hey, you're completely right :-) gnbd seems to be very easy and is exactly 
what I need! According to the docs 
(http://sourceware.org/cluster/gnbd/gnbd_usage.txt):
 server:
  1. Start the gnbd server daemon
  # gnbd_serv
  2. Export the block devices
  # gnbd_export -c -e <unique_gnbd_device_name> -d <local_partition_name>
 client:
  1. Mount sysfs, if not already mounted
  # mount -t sysfs sysfs /sys
  2. Load the gnbd module
  # modprobe gnbd
  3. Import the gnbd devices
  # gnbd_import -i <gnbd_server_machine>
  This imports all of the exported gnbd devices from a server. The gnbd
  devices will appear as /dev/gnbd/<unique_gnbd_device_name>. From this point
  on, continue the setup as if these were regular shared storage.
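
For example (my own sketch, not from the gnbd docs; the export name 
domu1-disk and the guest device hda1 are made up), one of the imported 
devices could then be handed to a guest via the disk line of its domU 
config:
  # /etc/xen/domu1 (excerpt)
  disk = [ 'phy:/dev/gnbd/domu1-disk,hda1,w' ]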

So the cool thing is: if you export block devices, they are all automatically 
visible to the client under defined names. With iSCSI you have to configure 
each block device on the server AND on the client (and the client can be 
multiple dom0s)! With gnbd you only have to configure them on the server. 
That makes management very easy!
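
To illustrate (again just a sketch with made-up names, if I read the docs 
right), adding another volume later should only need two commands on the 
storage server, plus a re-import on each dom0:
 storage server:
  # lvcreate -L 10G -n domu2-disk vg0
  # gnbd_export -c -e domu2-disk -d /dev/vg0/domu2-disk
 each dom0:
  # gnbd_import -i <gnbd_server_machine>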

So the next question for me is where to do my raid1. This multipath approach 
looks complicated to me. You need block device synchronization and heartbeat.

So my ideas now are (a rough command sketch follows the list):
 - two storage servers
 - on each storage server
  ~ lvm2 over all hard disks with striping (fast)
  ~ make logical volumes
  ~ export logical volumes with gnbd
 - on each dom0 import gnbd devices from both storage servers
 - configure domU to use these gnbd devices from dom0
 - in domU make raid1
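
A rough command sketch of this plan (all names like vg0, storage1/storage2 
and domu1-a/domu1-b are placeholders, and the raid1 is plain Linux software 
raid with mdadm, which the gnbd docs don't cover):
 on storage1 (storage2 analogous, but exporting e.g. domu1-b):
  # pvcreate /dev/sda /dev/sdb
  # vgcreate vg0 /dev/sda /dev/sdb
  # lvcreate -i 2 -L 10G -n domu1-a vg0     (striped over both disks)
  # gnbd_serv
  # gnbd_export -c -e domu1-a -d /dev/vg0/domu1-a
 on each dom0:
  # modprobe gnbd
  # gnbd_import -i storage1
  # gnbd_import -i storage2
 in the domU config:
  disk = [ 'phy:/dev/gnbd/domu1-a,hda1,w',
           'phy:/dev/gnbd/domu1-b,hdb1,w' ]
 in the domU (making the raid1):
  # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1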

advantages:
 + easy to set up
 + domU can use block devices as normal disks
 + (live) migration possible

disadvantages:
 + domU has to take care of redundancy (raid1)

things to keep in mind or to test:
 + how will a disconnect of a gnbd device be handled?
 + do you need the fence daemon?
 + does resizing of gnbd devices work transparently?

hint:
My opinion is that if you do the raid1 in dom0, you can't do live 
migration!?


-- 
greetings

eMHa

Attachment: pgpU3vuFs6lvB.pgp
Description: PGP signature

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users