This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] myrinet dma

To: Tim Freeman <tfreeman@xxxxxxxxxxx>
Subject: Re: [Xen-devel] myrinet dma
From: Mark Williamson <Mark.Williamson@xxxxxxxxxxxx>
Date: Fri, 27 Aug 2004 16:14:58 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx, Mark.Williamson@xxxxxxxxxxxx
Delivery-date: Fri, 27 Aug 2004 16:31:59 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: Message from Tim Freeman <tfreeman@xxxxxxxxxxx> of "Thu, 26 Aug 2004 11:09:18 CDT." <20040826110918.09cff179@prana-bindu>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
IMO, this is roughly what would need to be done:

Direct data path:
The OS component would have to be modified so that in dom0 it would perform 
the usual tasks of pinning memory AND talking to the hardware, but in 
unprivileged domains it would pin memory itself and then request that dom0 set 
up the hardware.  This is control path, not data path, so the indirection 
shouldn't hurt performance - guest applications can still talk to the hardware 
directly for data transfers.

It may be possible to use an existing library as-is, I'm not sure.

Writing the code to do this should be quite tractable for someone with the 
appropriate experience.  I'd imagine that user applications would receive 
similar performance to in non-virtualised configurations, with the 
qualification that if you run lots of domains on one CPU, they will obviously 
tend to experience less CPU time and higher latency anyway.

This approach limits you to no more clients than you have channels.

Multiplexed data path:
Multiplexing multiple guests onto a single channel seems a bit more difficult. 
Perhaps it could be done with modifications to allow dom0 to control the 
channel, with other domains requesting data path as well as control path 
operations from it.  This could still give zero copy into guest applications, 
but there might be some performance hit in latency due to the extra level of 
indirection, although suitable pipelining may provide good bandwidth (as for 
the existing net and block drivers).

This would be more work to implement than direct data path.  I guess there's 
also the possibility that your next interface might have lots of channels, 
making such multiplexing less important...

> Do your plans for infiniband allow 100s of guests to each have high speed
> networking?  How much might the performance degrade?

Simply having plenty of channels on the host interface card would be more 
straightforward than sharing them - see the above comment on the direct data 
path.

I don't personally know what is planned regarding infiniband support, though.

> If I'm thinking about this correctly, it sounds like all of these domains'
> traffic could be put onto one Myrinet channel and five special domains
> could truly take advantage of Myrinet?

As for the issue of multiplexing some domains onto an ethernet-type interface 
and having some privileged domains also accessing the card directly, yes this 
sounds plausible in the first scenario described above (control-path 
multiplexing with direct data-path).

Just my $0.02

