RE: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)

To: Daniel Stodden <Daniel.Stodden@xxxxxxxxxx>
Subject: RE: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)
From: Dave Scott <Dave.Scott@xxxxxxxxxxxxx>
Date: Mon, 18 Jul 2011 16:25:04 +0100
Cc: "'xen-api@xxxxxxxxxxxxxxxxxxx'" <xen-api@xxxxxxxxxxxxxxxxxxx>

There are now two pages on the Xen wiki:

http://wiki.xensource.com/xenwiki/CrossPoolMigration

and

http://wiki.xensource.com/xenwiki/CrossPoolMigrationV2

The V2 page has a first cut at a list of pros and cons of DRBD vs snapshot/copy.

Cheers,
Dave

> -----Original Message-----
> From: Dave Scott
> Sent: 18 July 2011 11:26
> To: Daniel Stodden
> Cc: xen-api@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-API] cross-pool migrate, with any kind of storage
> (shared or local)
> 
> Hi Daniel,
> 
> Thanks for your thoughts. I think I'll add a wiki page later to
> describe the DRBD-based design idea and we can list some pros and cons
> of each perhaps.
> 
> I'm still not a DRBD expert but I've now read the manual and configured
> it a few times (where 'few' = 'about 3') :)
> 
> Daniel wrote:
> > If only FS integrity matters, you can run a coarser series of updates,
> > for asynchronous mirroring. I suspect DRBD does at least something like
> > that (I'm not a DRBD expert either). I'm not sure if the asynchronous
> > mode I see on the feature list allows for conclusions about DRBD's idea
> > of HA in any way. It may just limit HA to synchronous mode. Does anyone
> > know?
> 
> It seems that DRBD can operate in 3 different synchronization modes
> (the DRBD docs call these protocols C, A and B respectively):
> 
> 1. fully synchronous: writes are ACK'ed only when written to both disks
> 2. asynchronous: writes are ACK'ed when written to the primary disk
>    (data is somewhere in-flight to the secondary)
> 3. semi-synchronous: writes are ACK'ed when written to the primary disk
>    and in the memory (not disk) of the secondary
> 
> Apparently most people run it in fully synchronous mode over a fast LAN.
> Provided we could get DRBD to flush outstanding updates and guarantee
> that the two block devices are identical during the migration downtime
> when the domain is shutdown, I guess we could use any of these methods.
> Although if fully synchronous is the most common option, we may want to
> stick with that?
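> 
> For reference, the mode is selected per-resource in drbd.conf; the
> DRBD docs call the three modes above protocols C, A and B. A minimal
> sketch, with invented hostnames, devices and addresses:
> 
>   resource r0 {
>     protocol C;        # fully synchronous; A = async, B = semi-sync
>     on host1 {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;
>       address   10.0.0.1:7789;
>       meta-disk internal;
>     }
>     on host2 {
>       device    /dev/drbd0;
>       disk      /dev/sdb1;
>       address   10.0.0.2:7789;
>       meta-disk internal;
>     }
>   }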
> 
> > Anyway, it's not exactly a rainy weekend project, so if you want
> > consistent mirroring, there doesn't seem to be anything better than
> > DRBD around the corner.
> 
> It did rain this weekend :) So I've half-written a python module for
> configuring and controlling DRBD:
> 
> https://github.com/djs55/drbd-manager
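> 
> It's mostly a thin layer over the drbdadm CLI. Not the actual module,
> but the shape is something like this sketch:
> 
>   import subprocess
> 
>   def drbdadm(*args):
>       """Run a drbdadm subcommand, raising CalledProcessError on failure."""
>       subprocess.check_call(["drbdadm"] + list(args))
> 
>   def up(resource):
>       """Attach the backing disk and connect to the peer."""
>       drbdadm("up", resource)
> 
>   def promote(resource):
>       """Make this host the primary, i.e. writable."""
>       drbdadm("primary", resource)
> 
>   def demote(resource):
>       """Drop back to secondary before handing over."""
>       drbdadm("secondary", resource)
> 
>   def down(resource):
>       """Disconnect and detach the resource on this host."""
>       drbdadm("down", resource)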
> 
> It'll be interesting to see how this performs in practice. For some
> realistic workloads I'd quite like to measure
> 1. total migration time
> 2. total migration downtime
> 3. ... effect on the guest during migration (somehow)
> 
> For (3) I would expect that continuous replication would slow down
> guest I/O more during the migrate than explicit snapshot/copy (as if
> every I/O performed a "mini snapshot/copy") but it would probably
> improve the downtime (2), since there would be no final disk copy.
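> 
> One crude way to approximate (2) from outside the guest: probe a TCP
> port in a tight loop and take the longest gap between successful
> probes as the downtime. A rough sketch (host and port are
> placeholders):
> 
>   import socket, time
> 
>   def measure_downtime(host, port=22, interval=0.05, duration=120.0):
>       """Return the longest unresponsive gap in seconds observed
>       while probing host:port for the given duration."""
>       longest, last_ok = 0.0, time.time()
>       deadline = time.time() + duration
>       while time.time() < deadline:
>           try:
>               socket.create_connection((host, port), timeout=interval).close()
>               now = time.time()
>               longest = max(longest, now - last_ok)
>               last_ok = now
>           except socket.error:
>               pass
>           time.sleep(interval)
>       return longest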
> 
> What would you recommend for workloads / measurements?
> 
> > In summary, my point is that it's probably better to focus on migration
> > only - it's one flat dirty log index and works in-situ at the block
> > level. Beyond that, I think it's perfectly legal to implement mirroring
> > independently -- the math is very similar, but the differences make for
> > a huge impact on performance, I/O overhead, space to be set aside, and
> > robustness.
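> 
> For my own understanding: the migration-only scheme is a flat dirty
> bitmap drained in passes until it converges, something like this
> sketch (names and granularity invented):
> 
>   class DirtyLog(object):
>       """Block-level dirty log: the datapath marks blocks dirty and
>       the copier repeatedly drains the set until it is small enough
>       to stop the domain and copy the remainder."""
>       def __init__(self, nr_blocks):
>           self.dirty = set(range(nr_blocks))  # first pass copies everything
> 
>       def on_write(self, block):
>           self.dirty.add(block)               # called from the write path
> 
>       def next_pass(self):
>           batch, self.dirty = self.dirty, set()
>           return batch                        # blocks to copy this pass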
> 
> Thanks,
> Dave
> 
> >
> > Cheers,
> > Daniel
> >
> > [PS: comments/corrections welcome, indeed].
> >
> > > 3. use the VM metadata export/import to move the VM metadata
> > >    between pools
> > >
> > > I'd also like to
> > > * make the migration code unit-testable (so I can test the failure
> > >   paths easily)
> > > * make the code more robust to host failures by host heartbeating
> > > * make migrate properly cancellable
> > >
> > > I've started making a prototype -- so far I've written a simple python
> > > wrapper around the iscsi target daemon:
> > >
> > > https://github.com/djs55/iscsi-target-manager
> > >
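> > > The wrapper mostly shells out to tgtadm; not the real code, but a
> > > simplified sketch of the idea:
> > >
> > >   import subprocess
> > >
> > >   def tgtadm(*args):
> > >       """Invoke tgtadm against the iSCSI low-level driver."""
> > >       subprocess.check_call(["tgtadm", "--lld", "iscsi"] + list(args))
> > >
> > >   def create_target(tid, iqn):
> > >       tgtadm("--op", "new", "--mode", "target",
> > >              "--tid", str(tid), "--targetname", iqn)
> > >
> > >   def add_lun(tid, lun, device):
> > >       tgtadm("--op", "new", "--mode", "logicalunit",
> > >              "--tid", str(tid), "--lun", str(lun),
> > >              "--backing-store", device)
> > >
> > >   def allow_all(tid):
> > >       tgtadm("--op", "bind", "--mode", "target",
> > >              "--tid", str(tid), "--initiator-address", "ALL")
> > >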
> > > _______________________________________________
> > > xen-api mailing list
> > > xen-api@xxxxxxxxxxxxxxxxxxx
> > > http://lists.xensource.com/mailman/listinfo/xen-api
> >
