
Re: [Xen-devel] Re: Interdomain comms

To: andrew.warfield@xxxxxxxxxxxx
Subject: Re: [Xen-devel] Re: Interdomain comms
From: Mike Wray <mike.wray@xxxxxx>
Date: Tue, 10 May 2005 15:30:59 +0100
Cc: Eric Van Hensbergen <ericvh@xxxxxxxxx>, Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>, "Ronald G. Minnich" <rminnich@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Andrew Warfield wrote:
>> It should be possible to still use the page mapping in the i/o transport.
>> The issue right now is that the i/o interface is very low-level and
>> intimately tangled up with the structs being transported.

> I don't doubt that it is possible.  The point I was making is that the
> current i/o interfaces are low level for a reason, and that
> generalizing this to a higher-level communications primitive is a
> non-trivial thing.  Just considering the disk and net interfaces, the
> current device channels each make particular decisions regarding (a)
> what to copy and what to map, (b) when to send notification to get
> efficient batching through the scheduler, and most recently (c) which
> grant mechanism to use to pass pages securely across domains.
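
To make (a)-(c) concrete, this is roughly what I'd expect a generalized
channel to record per device, so the decisions become per-channel
parameters rather than being hard-coded in each driver. A sketch only;
every name below is invented:

  #include <stdint.h>

  enum xfer_mode  { XFER_COPY, XFER_MAP };      /* (a) copy vs. map        */
  enum grant_mode { GRANT_NONE, GRANT_TABLE };  /* (c) page-passing scheme */

  struct chan_policy {
      enum xfer_mode  payload_xfer;  /* e.g. map bulk pages, copy headers */
      enum grant_mode granting;      /* how pages cross the domain border */
      unsigned int    notify_batch;  /* (b) requests to queue before one
                                        event-channel notification        */
  };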

>> It should be relatively easy to provide these kinds of facilities in a
>> higher-level api.

> Having a higher-level API to make all this easier, and especially to
> reduce the code/complexity required to build new drivers etc is
> something that will be fantastic to have.  I think though that at
> least some of these underlying issues will need to be exposed for it
> to be useful.  I'm not convinced that reimplementing the sockets API
> for interdomain communication is a very good solution...

I wasn't suggesting exactly the sockets api, but something more like
the connect/send and listen/recv logic. Harry's API is quite like that,
with additional higher-level facilities.
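
Something like the following is the flavour I mean. A hypothetical
sketch, not a proposal for exact names; apart from domid_t, nothing
here exists in the current tree:

  #include <stddef.h>
  #include <stdint.h>

  typedef uint16_t domid_t;           /* Xen domain id         */
  typedef struct idc_chan idc_chan_t; /* opaque channel handle */

  /* back-end advertises a service; front-end connects to it */
  int idc_listen (uint32_t service, idc_chan_t **chan);
  int idc_connect(domid_t peer, uint32_t service, idc_chan_t **chan);

  /* messages are queued; flush pushes the batch and notifies the peer */
  int idc_send (idc_chan_t *chan, const void *msg, size_t len);
  int idc_recv (idc_chan_t *chan, void *msg, size_t len);
  int idc_flush(idc_chan_t *chan);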

> The
> buffer_reference struct that Harry mentioned looks quite interesting
> as a start though in terms of describing a variety of underlying
> transports.  Do you have a paper reference on that work, Harry?
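
I don't have a reference for it either, but my reading of the idea is a
tagged descriptor along these lines. Purely a guess at the shape; Harry
can correct me:

  #include <stddef.h>
  #include <stdint.h>

  typedef uint32_t grant_ref_t;

  struct buffer_reference {
      enum { BUF_GRANT, BUF_INLINE } type;
      union {
          struct {                /* local case: peer maps the page */
              grant_ref_t gref;
              uint16_t    offset, len;
          } grant;
          struct {                /* remote case: bytes get copied  */
              const void *data;
              size_t      len;
          } bytes;
      } u;
  };

The same descriptor would then make sense whether the transport maps
pages locally or copies bytes over a wire.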

> With regard to forwarding device channels across a network, I think we
> can expect application-level involvement for shifting device messages
> across a cluster.  If this is down the road, and it's certainly
> something that has been discussed, a device channel is potentially two
> local shared memory device channels between VMs on local hosts, and a
> network connection between the physical hosts.  Beyond the more
> complicated error cases that this obviously involves, we can then make
> this arbitrarily more complex by discussing HA or security
> concerns... for the moment though, I think it would be interesting to
> see how well the existing local host cases can be generalized.  ;)
>> And with the domain control channel there's an implicit assumption
>> that 'there can be only one'. This means, for example, that domain A
>> using a device with a backend in domain B can't connect directly to
>> domain B, but has to be 'introduced' by xend. It'd be better if it
>> could connect directly.

> This is not a flaw with the current implementation -- it's completely
> intentional.  By forcing control through xend we ensure that there is
> a single point for control logic, and for managing state.  Why do you
> feel it would be better to provide arbitrary point-to-point comms in a
> VMM environment that is specifically trying to provide isolation
> between guests?

OK, so it's an intentional flaw ;-).

One reason is that front-end drivers have to connect to their backends.
If they can find out who to connect to and then do it, it simplifies
things, especially when that info is available from a store or registry
service, as proposed for 3.0.

At the moment xend has to exchange messages with the domain to get the
device front-end handle and shared page address, and then exchange
messages with the back-end so it can create the device and map the page.
Telling the front-end which back-end to connect to would be much simpler.
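
With the store in place the whole exchange collapses to something like
this. Sketch only: store_read_domid() and the key path are invented,
and it reuses idc_connect() etc from the sketch above:

  int store_read_domid(const char *key, domid_t *out);  /* hypothetical */

  int frontend_attach(uint32_t service, idc_chan_t **chan)
  {
      domid_t backend;

      /* the store tells the front-end who its back-end is */
      if (store_read_domid("device/vbd/0/backend-id", &backend) != 0)
          return -1;

      /* connect directly; no introduction by xend needed */
      return idc_connect(backend, service, chan);
  }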

Something like what Harry proposes should still be able to use
page mapping for efficient local comms, but without _requiring_
it. This opens the way for alternative transports, such as a network
connection.
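
Concretely, I imagine each channel binding a small set of transport
operations behind the same API. Again invented names, reusing the types
from the sketches above, just to show the shape:

  struct idc_transport_ops {
      int (*send) (idc_chan_t *chan, const struct buffer_reference *buf);
      int (*recv) (idc_chan_t *chan, struct buffer_reference *buf);
      int (*flush)(idc_chan_t *chan);
  };

  /* one instance backed by grant-table page mapping for local peers,
     another backed by a TCP connection for remote hosts */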

>> Rather than going straight for something very high-level, I'd prefer
>> to build up gradually, starting with a more general message transport
>> api that includes analogues to listen/connect/recv/send.


> As I said, I'm unconvinced that trying to mimic the sockets API is the
> right way to go -- I think the communicating parties often want to see
> and work with batches of messages without having to do extra copies or
> have event notification made implicit.

Like I said, I wasn't suggesting _exactly_ the sockets api, more the
spirit of it. There is an analogue of batching for sockets though: flush.
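
i.e. the usual pattern would be: queue a batch, flush once, and the
peer sees a single notification. Using the invented idc_* calls from
the sketch above:

  struct msg { uint32_t op; uint32_t arg; };  /* placeholder payload */

  void send_batch(idc_chan_t *chan, struct msg *msgs, int n)
  {
      int i;
      for (i = 0; i < n; i++)
          idc_send(chan, &msgs[i], sizeof msgs[i]);  /* queued, no notify */
      idc_flush(chan);  /* one notification for the whole batch */
  }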

> I think you are completely
> right about a gradual approach though -- having a generalized
> host-local device channel would be very interesting to see...
> especially if it could be shown to apply to the existing block, net,
> usb, and control channels in a simplifying fashion.


Just a small matter of programming then :-).

Mike
