Re: [Xen-users] iSCSI target - run in Dom0 or DomU?
I wonder, instead of drbd, what would happen if you exported both
storage servers' iSCSI targets to your Xen machines and then used Linux
software RAID1 to mirror them and keep them in sync.
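As a rough sketch on the initiator side (assuming both targets are already
logged in with open-iscsi and show up as /dev/sdb and /dev/sdc, which are
only example names):

  # mirror the two imported iSCSI disks into one md device on the dom0
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  # then use /dev/md0 like any local disk (LVM PV, filesystem, phy: disk)
  pvcreate /dev/md0

Presumably md would treat a storage server going away like any other failed
member and resync the mirror over the network once it comes back.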
Matthew Wild wrote:
On Friday 25 August 2006 13:50, Thomas Harold wrote:
Javier Guerra wrote:
On Thursday 24 August 2006 7:10 am, Thomas Harold wrote:
My plan is to lay software RAID over the disks (RAID1 for the first
pair, RAID10 for the second set of 6 disks), then lay LVM on top of
mdadm's software RAID before handing it off to iscsitarget to be
divided up for use by the iSCSI initiators.
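As a sketch of that stack on the storage box (device names and sizes are
only examples):

  # software RAID: a mirror for the first pair, RAID10 over the other six
  mdadm --create /dev/md0 --level=1  --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=10 --raid-devices=6 /dev/sd[c-h]1
  # LVM on top of the md devices
  pvcreate /dev/md0 /dev/md1
  vgcreate vg_san /dev/md0 /dev/md1
  lvcreate -L 20G -n domu1-disk vg_san

and then an /etc/ietd.conf entry per exported LV so that iscsitarget hands
it out as a LUN (the IQN is made up):

  Target iqn.2006-08.com.example:san1.domu1-disk
          Lun 0 Path=/dev/vg_san/domu1-disk,Type=fileio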
Do you plan to export the LVs over iSCSI? I would advise exporting the
PVs (/dev/mdX) and doing CLVM on the dom0s of the iSCSI initiators. That
way you could add more storage boxes (with more PVs) to the same VG.
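In that layout the storage box exports /dev/mdX whole and the volume
management moves onto the dom0s, roughly like this (the portal address,
IQN and /dev/sdb are only example names, and clvmd has to be running on
every dom0 so the LVM metadata stays consistent):

  # on each dom0: log in to the exported md device with open-iscsi
  iscsiadm -m discovery -t sendtargets -p 192.168.30.1
  iscsiadm -m node -T iqn.2006-08.com.example:san1.md0 -p 192.168.30.1 --login
  # the imported disk becomes a PV in a clustered VG
  pvcreate /dev/sdb
  vgcreate -c y vg_xen /dev/sdb
  lvcreate -L 10G -n domu1-disk vg_xen

Adding another storage box later is then just another login, another
pvcreate and a vgextend on the same VG.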
Not sure yet. The goal is to have a Xen setup where we can move DomUs
between multiple head boxes on-the-fly. Having a SAN should make this
easier to do (if I've read correctly).
And phase 2 of the test project would be to have two (roughly) identically
configured SAN units that are either mirrored or fault-tolerant, so that
the Xen DomUs can keep running even if one of the two SAN units is down.
That also includes having 2 physical switches, multiple NICs bonded
together (probably 2 bonded pairs, one for each switch), and multiple
cables going to the switches (both for fault-tolerance and expanded
bandwidth).
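Each bonded pair could be brought up with the bonding driver, something
along these lines (interface names, modes and the address are only
examples; mode=4 needs LACP on the switch, mode=1 active-backup works with
any switch):

  # max_bonds=2 creates bond0 and bond1, one pair per switch
  modprobe bonding mode=4 miimon=100 max_bonds=2
  ip addr add 192.168.30.11/24 dev bond0
  ip link set bond0 up
  ifenslave bond0 eth2 eth3
  # same again for bond1 with the pair cabled to the second switch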
What I've been building is pretty much the same as this. We have 2 storage
servers with 5TB usable storage each, replicating through drbd. These then
run iscsitarget to provide LVM-based iSCSI disks to a set of Xen servers
using open-iscsi. The virtual machines are then set up using these physical
disks. Because the iSCSI devices can have the same /dev/disk/by-id
or /dev/disk/by-path labels on each Xen dom0, you can create generic config
files that will work across all the servers. Also, even though drbd is a
primary/secondary replication agent at the moment, everything is quite happy
for multiple Xen dom0s to connect to the disks, allowing for very quick live
migration.
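A generic domU config built on those by-path names would look roughly like
this (the kernel paths, IQN and addresses here are only illustrative
examples):

  # /etc/xen/domu1 - the by-path device resolves identically on every dom0
  kernel  = "/boot/vmlinuz-2.6-xen"
  ramdisk = "/boot/initrd-2.6-xen.img"
  memory  = 512
  name    = "domu1"
  vif     = [ 'bridge=xenbr1' ]
  disk    = [ 'phy:/dev/disk/by-path/ip-192.168.30.1:3260-iscsi-iqn.2006-08.com.example:san1.domu1-disk-lun-0,xvda,w' ]
  root    = "/dev/xvda1 ro"

Every dom0 that has logged in to the target sees the same path, so the same
file migrates with the guest.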
I haven't gone quite so far with multiple switches etc., but we are using
VLANs to separate the dom0 traffic (eth0), domU traffic (eth1), and iSCSI
(eth2), all on Gb networking. We are also thinking of putting 10Gb links
between the storage servers to keep drbd happy.
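A minimal sketch of that split on a dom0 (Debian-style interfaces file,
addresses are only examples):

  # /etc/network/interfaces - one subnet per role, matching the switch VLANs
  # eth0: dom0 management traffic
  auto eth0
  iface eth0 inet static
      address 192.168.10.11
      netmask 255.255.255.0
  # eth2: iSCSI traffic to the storage servers
  auto eth2
  iface eth2 inet static
      address 192.168.30.11
      netmask 255.255.255.0
  # eth1 carries the domU traffic and is just attached to the Xen bridge,
  # so it gets no address of its own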
Matthew
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users