Re: [Xen-devel] Xen & I/O in clusters - Single Vs. Dual CPU issue


  • To: Rune Johan Andresen <runejoha@xxxxxxxxxxx>
  • From: Bin Ren <bin.ren@xxxxxxxxx>
  • Date: Thu, 4 Nov 2004 18:41:27 +0000
  • Cc: Xen Virtual Machine Monitor <xen-devel@xxxxxxxxxxxxxxxxxxxxx>, Håvard Bjerke <havard.bjerke@xxxxxxxxxxx>, Rune Andresen <rune.johan.andresen@xxxxxxxxxxx>
  • Delivery-date: Fri, 05 Nov 2004 07:45:21 +0000
  • List-id: List for Xen developers <xen-devel.lists.sourceforge.net>

Among A1, A2, B1 and B2, which ones are domain 0 and which are unprivileged?
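
(If it's easier to check than to recall: running

    # xm list

in the management domain on each physical node lists the privileged
domain as Domain-0 with ID 0; any domain with a non-zero ID is
unprivileged. This assumes the standard Xen 2.0 xm tool.)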

- Bin

On Thu, 4 Nov 2004 18:42:34 +0100, Rune Johan Andresen
<runejoha@xxxxxxxxxxx> wrote:
> Well, now that the issue between the two Xen dom0 domains is solved,
> there is a new case we don't understand:
> 
> With two physical nodes and 4 guest OSes (2 on each physical node) we
> get some strange results with ttcp (b=1000000, l=1000000):
> 
> Let's say we have two guest OSes on physical node A, A1 and A2, and
> two guest OSes on physical node B, B1 and B2.
> 
> Between A1 and B1 I get 110 000 KB/s (which is almost optimal!)
> Between A1 and B2 I get 81 000 KB/s
> Between A2 and B1 I get 94 000 KB/s
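> 
> (Concretely, the runs are along the lines of
> 
>     ttcp -r -s -b 1000000 -l 1000000         # on the receiver, e.g. B1
>     ttcp -t -s -b 1000000 -l 1000000 b1      # on the sender, e.g. A1
> 
> assuming the classic ttcp flags, where -b sets the socket buffer size
> and -l the buffer length; "b1" is a placeholder hostname.)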
> 
> Do you have any idea why we get less performance in the last two
> cases? It doesn't make sense, and it can't be a bottleneck in the
> network either, given case 1(?).
> 
> Cheers,
> Rune
> 
> 
> 
> 
> On Nov 2, 2004, at 5:51 PM, Mark A. Williamson wrote:
> 
> >> If you're using MPI over TCP/IP (which I imagine you are) then it
> >> should
> >> Just Work (TM).  We have tried live migration with MPI applications
> >> but you
> >> shouldn't have any problems moving the VMs around with a cluster.
> >
> > Sorry I meant to say we have *not* tried live migration with MPI
> > applications.
> >
> > Note to self: read before clicking send!
> >
> > Cheers,
> > Mark


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel
