xen-users

Re: [Xen-users] Live migration?

To: Chris Fanning <christopher.fanning@xxxxxxxxx>
Subject: Re: [Xen-users] Live migration?
From: "Daniel J. Nielsen" <djn@xxxxxxxxxx>
Date: Tue, 13 Mar 2007 10:40:35 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 13 Mar 2007 02:55:56 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <215ff4410703130111t18c87c3ctd07460bcad954fe1@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdlU6ZL5QV9cNFGEduOcwAUUWMFRg==
Thread-topic: [Xen-users] Live migration?
User-agent: Microsoft-Entourage/11.3.3.061214
Hi Chris,

We still use Xen in production, but due to network I/O performance issues,
I wouldn't recommend our setup if you intend to run more than one or two
virtual machines on each dom0.

In the case described below, we discovered that our custom Debian kernels
were missing the experimental support for hotpluggable CPUs. A recompile
later, everything worked without a hitch.
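
For anyone running into the same thing, a quick sanity check is to grep the
running kernel's config for the option (assuming the config file is installed
under /boot, as Debian kernel packages normally do):

grep CONFIG_HOTPLUG_CPU /boot/config-$(uname -r)

If that prints "# CONFIG_HOTPLUG_CPU is not set", you are in for the same
recompile we were.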

As for the network cards, I'm not sure. We use the ones provided in our HP
ProLiant servers. One of our servers has two of these:

Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
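
That line is just what lspci reports. Assuming your D-Links are in a similar
state, something along these lines should show whether a driver has actually
claimed the cards (tg3 is, as far as I know, what drives our Broadcoms, so
substitute whatever module your D-Links need):

lspci | grep -i ethernet
lsmod | grep tg3

If lsmod comes up empty, a manual modprobe of the right module, plus an entry
in /etc/modules, is probably the next step.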

I hope this clears something up. I'm not subscribed to xen-users anymore (I
just peruse the archives), so please include me in any replies.

/Daniel

On 3/13/07 9:11 AM, "Chris Fanning" <christopher.fanning@xxxxxxxxx> wrote:

> Hello Daniel,
> 
> I am trying to set up the same installation that you mention.
> I have dom0s on an NFS root and domUs on AoE.
> 
> At present I've got everything on 100 Mb/s and it doesn't work very
> well. xend takes about 20 seconds to start up and domUs don't recover
> their network connection after migration. I'd like to try it at 1000 Mb/s.
> 
> Can you please recommend the network cards I should use? I have
> some D-Links, but (for some reason) their modules don't get loaded even
> though lspci does show the cards.
> The thin server boxes need to boot via PXE (of course).
> 
> Thanks.
> Chris.
> 
> On 9/15/06, Daniel Nielsen <djn@xxxxxxxxxx> wrote:
>> Hi.
>> 
>> We are currently migrating our production servers to Xen, version
>> 3.0.2-2, but we are having problems with the live-migration feature.
>> 
>> Our setup is this:
>> 
>> We run Debian stable (sarge), with selected packages from backports.org. Our
>> glibc is patched to be "Xen-friendly". In our test setup, we have two dom0s,
>> both netbooting from a central NFS/tftpboot server, i.e. not storing anything
>> locally. Both dom0s have two Ethernet ports: eth0 is used by the dom0 and
>> eth1 is bridged to Xen.
>> 
>> Our domUs also use an NFS root, also Debian sarge, and they use the same
>> kernel. They have no "ties" to the local machine except for network access;
>> they do not mount any local drives or files as drives. Everything runs
>> exclusively over NFS and in RAM.
>> 
>> When migrating machines (our dom0s are named after fictional planets, and
>> our virtual machines after fictional spaceships):
>> 
>> geonosis:/ root# xm migrate --live serenity lv426
>> it just hangs.
>> 
>> A machine called serenity pops up on lv426:
>> 
>> lv426:/ root# xm list
>> Name                              ID Mem(MiB) VCPUs State  Time(s)
>> Domain-0                           0      128     4 r----- 21106.6
>> serenity                           8     2048     1 --p---     0.0
>> lv426:/ root#
>> 
>> But nothing happens.
>> 
>> If we migrate a lower-memory domU with e.g. 256 MiB, it works without a hitch.
>> If we migrate a domU with e.g. 512 MiB, it sometimes works and other times it
>> doesn't. But for domUs with 2 GiB of RAM, it consistently fails.
>> 
>> In the above example, if we wait several hours, serenity will stop
>> responding, and geonosis will be left with:
>> 
>> geonosis:/ root# xm list
>> Name                              ID Mem(MiB) VCPUs State  Time(s)
>> Domain-0                           0      128     4 r----- 21106.6
>> Zombie-serenity                    8      2048    2 -----d  3707.8
>> geonosis:/ root#
>> 
>> 
>> I have attached the relevant entries from the xend.log files from both
>> geonosis and lv426.
>> 
>> I hope somebody is able to clear up what we are missing.
>> 
>> I noticed in geonosis.log that it wants 2057 MiB. I'm unsure what that
>> means.
>> 
>> 
>> /Daniel
>> Portalen
>> 
>> 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
