Re: [Xen-users] Distributed xen or cluster?

To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Distributed xen or cluster?
From: "lists@xxxxxxxxxxxx" <lists@xxxxxxxxxxxx>
Date: Wed, 21 Jan 2009 15:18:24 -0600
Delivery-date: Wed, 21 Jan 2009 13:19:47 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <fbe260260901211225n9d31dc7id7b2777f631da25d@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Reply-to: lists@xxxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thanks for the lead Rob!

> Something like this?
>
> http://wiki.xensource.com/xenwiki/Open_Topics_For_Discussion?action=AttachFile
> &do=get&target=Kemari_08.pdf

So, in essence, this is doing a sort of pre-migration, constantly updating the 
information that the new VM server would need in order to accept a migration 
command and fire up its new servers.

And because the machines are constantly syncing this information, there's no 
need for a manual migration step; the data the target needs is already there.
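If I have that right, my mental model is roughly the sketch below (plain Python 
pseudocode of the general continuous-sync idea; every class and method name here 
is invented for illustration and is not Kemari's actual interface):

# Toy model of "continuous pre-migration": the primary streams each state
# change to a standby host so failover needs no bulk state transfer.
import time

class PrimaryVM:
    """Stand-in for a running guest whose state keeps changing."""
    def __init__(self):
        self.state = {"counter": 0}
        self.dirty = {}            # changes not yet sent to the standby

    def run_one_tick(self):
        self.state["counter"] += 1
        self.dirty["counter"] = self.state["counter"]

    def take_delta(self):
        delta, self.dirty = self.dirty, {}
        return delta

class StandbyVM:
    """Stand-in for the backup host that keeps receiving state updates."""
    def __init__(self):
        self.state = {}

    def apply(self, delta):
        self.state.update(delta)

    def resume(self):
        # Failover: no bulk copy needed, the state is already here.
        print("resuming from state:", self.state)

primary, standby = PrimaryVM(), StandbyVM()
for _ in range(5):                          # the guest does some work...
    primary.run_one_tick()
    standby.apply(primary.take_delta())     # ...and each change is streamed out
    time.sleep(0.01)
standby.resume()                            # primary "fails"; standby picks up

The point being that when the primary dies, the standby just resumes from the 
last delta it applied instead of waiting for a full state copy.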

Nifty and seems like something that someone would eventually have realized :).
Is that kind of what it boils down to?

Mike




_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
