xen-users

Re: [Xen-users] lenny amd64 and xen.

On Fri, Nov 28, 2008 at 12:12:01AM +0100, Thomas Halinka wrote:
> Hi Pasi,
> 
> > On Thursday, 27.11.2008 at 21:25 +0200, Pasi Kärkkäinen wrote:
> > On Thu, Nov 27, 2008 at 05:10:30PM +0100, Thomas Halinka wrote:
> > > On Saturday, 22.11.2008 at 20:19 +0400, Nicolas Ruiz wrote:
> > > > Hi all,
> > > >  
> > > > I'm pretty interested in studying this kind of solution for my office.
> > > > Following Pasi's questions, I would like to know from you, Thomas,
> > > > whether you are using a SAN for your cluster.
> > > 
> > > I built up my own SAN with mdadm, LVM and vblade.
> > > 
> > > > If so, what kind of data access technology do you use with it?
> > > 
> > > ATA over Ethernet (AoE), which sends ATA commands over Ethernet (layer 2).
> > > It's something like a SAN over Ethernet and much faster than iSCSI, since
> > > no TCP/IP is used. Also, failover was very tricky with iSCSI...
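> > > (A rough sketch of how an AoE export/import looks in general; the
> > > shelf/slot numbers, interfaces and device names here are just examples,
> > > not our real ones:)
> > > 
> > >   # on a storage box: export a block device as AoE shelf 0, slot 1 via eth0
> > >   vbladed 0 1 eth0 /dev/sdb
> > > 
> > >   # on a Xen host: load the AoE driver and discover the blade
> > >   modprobe aoe
> > >   aoe-discover
> > >   ls /dev/etherd/        # the export shows up as e0.1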
> > > 
> > 
> > OK. 
> > 
> > > 
> > > >  
> > > > Last question: how do you manage HA, live migration and snapshots?
> > > > Your own scripts?
> > > 
> > > heartbeat2 with the CRM and constraints; the rest is managed through
> > > openqrm.
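> > > (Purely as a sketch: with a pacemaker-style crm shell, a Xen resource
> > > with live migration and a placement constraint could look roughly like
> > > this; resource names and paths are made up, and the exact syntax depends
> > > on the heartbeat2/pacemaker version:)
> > > 
> > >   primitive vm_web ocf:heartbeat:Xen \
> > >       params xmfile="/etc/xen/vm_web.cfg" \
> > >       meta allow-migrate="true" \
> > >       op monitor interval="10s"
> > >   # prefer to run this domU on xen_1, but allow failover elsewhere
> > >   location vm_web_pref vm_web 100: xen_1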
> > >
> > 
> > Hmm.. so openqrm can take care of locking domU disks in the dom0s for live
> > migration? I.e. making sure only a single dom0 accesses a domU's disks at a time..
> >  
> > > >         
> > > >         This is interesting. Want to tell more about your setup? CLVM?
> > > >         iSCSI?
> > > 
> > > nope, just AoE and LVM
> > > 
> > > >         
> > > >         What kind of physical server hardware? What kind of storage?
> > > 
> > > It's self-built. We had evaluated FC SAN solutions, but they were slow,
> > > inflexible and very expensive. We're using standard servers with bonding
> > > over 10 Gbit NICs.
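> > > (The bonding part, very roughly, done with the plain bonding module and
> > > ifenslave; the mode, addresses and interface names here are just
> > > examples:)
> > > 
> > >   # /etc/modprobe.d/bonding: LACP with link monitoring
> > >   options bonding mode=802.3ad miimon=100
> > > 
> > >   modprobe bonding
> > >   ifconfig bond0 10.0.0.11 netmask 255.255.255.0 up
> > >   ifenslave bond0 eth2 eth3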
> > > 
> > > This setup transfers 1300 MB/s at the moment, is highly scalable and
> > > was about 70% cheaper than an FC solution.
> > > 
> > 
> > Ok. 
> > 
> > > >         
> > > >         What exact kernel and Xen versions?
> > > 
> > > At the moment it's Xen 3.2 and a 2.6.18 kernel. I am evaluating 3.3 and
> > > 2.6.26 right now.
> > > 
> > > >         
> > > >         Thanks!
> > > >         
> > > >         -- Pasi
> > > 
> > > 
> > > If you're interested in this setup, I could give you an overview with a
> > > small abstract of what is managed where and why... you know ;)
> > > 
> > 
> > Yeah.. a picture would be nice :) 
> 
> you can get it here: http://openqrm.com/storage-cluster.png
> 
> A few words about it:
> 
> - The openqrm-server has mdadm running and sees all mdX devices
> - The openqrm-server has a VG "data" containing all mdX devices
> - The openqrm-server exports "lvol" to the LAN (rough sketch below)
> - The openqrm-server provides a boot service (PXE) which deploys a
> Xen image to xen_1-X and puts this resource into a puppet class
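> (The LVM side of that, roughly, assuming the md devices are already
> assembled; the sizes and names are only illustrative:)
> 
>   pvcreate /dev/md0 /dev/md1
>   vgcreate data /dev/md0 /dev/md1
>   lvcreate -L 500G -n lvol data
>   # export the logical volume to the LAN as an AoE blade
>   vbladed 1 1 eth1 /dev/data/lvol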
> 
> In this Xen image, heartbeat2 with the CRM and constraints is set up;
> puppet only adjusts the config for me...
> 
> Some explanations:
> 
> - All the storage boxes are standard servers with 4x Gbit NICs and 24 SATA
> disks on an Areca RAID 6 controller (Areca is impressive, since the write
> performance of RAID 5 = RAID 6 = RAID 0). Only a small OS is kept locally;
> the rest of the disk space is exported through vblade.
> - header_a and header_b form a heartbeat v1 cluster with DRBD. DRBD mirrors
> the data for openqrm and heartbeat does HA for openqrm.
> - openqrm itself is the storage header, exporting all the data from the
> storage boxes to the clients.
> - The openqrm boot service deploys a Xen image and puppet configuration to
> these Xen servers.
> - All Xen servers see all vblades and shelves.
> - Xen VMs reside on AoE blades, so snapshotting, lvextend and resize2fs are
> possible online (see the sketch below).
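> (Roughly the kind of online LVM operations this allows; the volume names
> are made up, and the steps to make the domU see the new size are left out:)
> 
>   lvcreate -s -L 5G -n vm01-snap /dev/data/vm01   # consistent snapshot for backup
>   lvextend -L +20G /dev/data/vm01                 # grow the domU's volume
>   resize2fs /dev/xvda1                            # inside the domU: grow its ext3 online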
> 
> Scalability:
> Storage: go buy 3 new servers, put a bunch of hard disks inside, install
> Linux, install vblade and fire them up. On the openqrm-server you only have
> to create a new md and extend the volume group (sketch below).
> Performance: buy a new server and let it PXE-boot, create a new
> appliance and watch your server reboot, start Xen and join the cluster.
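> (Extending the storage might then look roughly like this; the RAID level
> and device names are just examples:)
> 
>   # on the openqrm-server: build a new md over the new AoE devices
>   mdadm --create /dev/md2 --level=10 --raid-devices=4 \
>       /dev/etherd/e2.0 /dev/etherd/e2.1 /dev/etherd/e3.0 /dev/etherd/e3.1
>   pvcreate /dev/md2
>   vgextend data /dev/md2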
> 
> We started the cluster with about 110 GB of storage. At the moment we have
> about 430 GB of data and have to grow to 1.2 PB by summer 2009, which
> is no problem.
> 
> 
> Now go and search for a SAN solution like this and ask for the price ;)
> 
> http://www.storageperformance.org/results/benchmark_results_spc2 shows
> some FC solutions...
> 
> I guess that by summer we will be in this performance region at about
> 30% of the cost and with much more flexibility.
> http://www.storageperformance.org/results/b00035_HP-XP24000_SPC2_full-disclosure.pdf
> 
> Price: Total: $ 1,635,434  
> 
> > 
> > And thanks for the answer!
> 
> ay - cheers!
> 
> I will end this post with some words from Coraid CEO Kemp:
> "... We truly are a SAN solution, but SAN is not in the vocabulary of
> Linux people, because SAN is equated with fiber channel, and fiber
> channel is too expensive. But now, there's the 'poor man's SAN'." [1]
> 
> > 
> > -- Pasi
> 
> Thomas
> 
> Any questions, just ask me ;)
> 
> [1] http://www.linuxdevices.com/news/NS3189760067.html
> 

Pretty nice setup you have there :) 

Thanks for the explanation. It was nice to see the details of a pretty big
setup.

Have you had any problems with it? How about failovers from header_a to
header_b.. do they cause any problems? 

-- Pasi

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
