xen-devel

Re: [Xen-devel] Re: Interdomain comms

To: Eric Van Hensbergen <ericvh@xxxxxxxxx>
Subject: Re: [Xen-devel] Re: Interdomain comms
From: Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Date: Sun, 08 May 2005 18:48:42 +0100
Cc: Mike Wray <mike.wray@xxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, "Ronald G. Minnich" <rminnich@xxxxxxxx>, Eric Van Hensbergen <ericvh@xxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 08 May 2005 17:44:53 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <a4e6962a050508091852e7d303@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <0BAE938A1E68534E928747B9B46A759A6CF3AC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <1115421185.4141.18.camel@localhost> <a4e6962a0505061719165b32e4@xxxxxxxxxxxxxx> <1115472417.4082.46.camel@localhost> <Pine.LNX.4.58.0505071009150.13088@xxxxxxxxxxxxxxx> <1115486227.4082.70.camel@localhost> <a4e6962a050507142932654a5e@xxxxxxxxxxxxxx> <1115503861.4460.2.camel@localhost> <a4e6962a050507175754700dc8@xxxxxxxxxxxxxx> <1115541386.6886.105.camel@localhost> <a4e6962a050508091852e7d303@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Sun, 2005-05-08 at 11:18 -0500, Eric Van Hensbergen wrote:

> > This is probably likely to be true most of the time so an API at this
> > level will be useful but I'd also like to be able to write I/O
> > applications that manage the data in buffers that are never mapped into
> > the application address space.
> >
> 
> Well, this was the context of the example (the FE was registering a
> buffer from its own address space).  The existing Plan 9 API doesn't
> have a good example of how to handle the more abstract buffer handles
> you describe, but I don't think there's anything in the protocol which
> would prevent such a use.  I need to think about this scenario a bit
> more; could you give an example of how you would use this feature?

Read from disk, cache in buffer cache, sendfile to remote client. 
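
(A sketch in C of that path, with every name hypothetical: the
application only routes an opaque buffer reference between the disk,
the buffer cache and the network, and the data is never mapped into
its address space.)

    #include <stddef.h>

    /* Opaque handle to pinned pages.  buf_ref_t, blk_read_ref,
     * cache_lookup, cache_insert and net_send_ref are assumptions,
     * not a real API. */
    typedef struct buf_ref buf_ref_t;

    extern buf_ref_t *blk_read_ref(int dev, size_t sector, size_t len);
    extern buf_ref_t *cache_lookup(size_t sector);
    extern void       cache_insert(size_t sector, buf_ref_t *ref);
    extern int        net_send_ref(int sock, buf_ref_t *ref);

    /* Serve one disk block to a remote client.  The reference is never
     * resolved, so the pages stay unmapped here: the disk controller
     * DMAs into cache pages and the NIC DMAs back out of them. */
    int sendfile_block(int dev, int sock, size_t sector, size_t len)
    {
        buf_ref_t *ref = cache_lookup(sector);

        if (ref == NULL) {
            ref = blk_read_ref(dev, sector, len);
            if (ref == NULL)
                return -1;
            cache_insert(sector, ref);
        }
        return net_send_ref(sock, ref);
    }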

>  
> > Also, I'd like to be able to write applications that have clients which
> > use different types of buffers without having to code for each case in
> > my application.
> > 
> 
> The attempt at portability is admirable, but it just seems to add
> complexity -- if I want to use the reference, I'll have to make
> another function call to resolve the buffer.  I guess I'm being too
> narrow-minded, but I just don't have a clear idea of the utility of
> hidden buffers.  I never know who I am supposed to be hiding
> information from. ;)

Yourself: it's harder for a bug to scribble on the data if it's not
mapped into the address space.  Also, it eliminates the overhead of
page-table manipulations, which aren't needed if you don't want to look
at the data.

> > Also, I can change the memory management without changing all the calls
> > to the API, I only have to change where I get buffers from.
> 
> Again - I agree that this is an important aspect.  Perhaps this sort
> of functionality is best called out separately, with its own
> interfaces to provide and resolve buffer handles -- it seems worth
> breaking out into its own API.  It seems like there would be three
> types of operations on your proposed struct:
>    abstract_ref = get_ref( *real_data, flags );   /* constructor */
>    real_data = resolve_ref( *abstract_ref, flags );
>    forget_ref( abstract_ref );                    /* destructor */
> Lots of details under the hood there (as it should be).  flags could
> help specify things like read-only, COW, etc.  Is such an interface
> sufficient?  If I'm being naive here, just tell me to shut up and I
> won't talk about it until I've had the time to look a little deeper
> into things.
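
(A toy user-space sketch of those three operations, purely for
illustration: the struct layout, flag values and extra length parameter
are all assumptions, and a real implementation would pin pages or
adjust page tables rather than just wrap a pointer.)

    #include <stdlib.h>

    #define REF_RDONLY 0x1   /* assumed flag values */
    #define REF_COW    0x2

    typedef struct {
        void        *data;   /* backing store; opaque to callers */
        size_t       len;
        unsigned int flags;
    } abstract_ref_t;

    /* Constructor: hide real data behind an opaque reference. */
    abstract_ref_t *get_ref(void *real_data, size_t len, unsigned int flags)
    {
        abstract_ref_t *ref = malloc(sizeof(*ref));

        if (ref != NULL) {
            ref->data  = real_data;
            ref->len   = len;
            ref->flags = flags;
        }
        return ref;
    }

    /* Resolve: hand back a usable pointer, honouring the flags. */
    void *resolve_ref(abstract_ref_t *ref, unsigned int flags)
    {
        if ((ref->flags & REF_RDONLY) && !(flags & REF_RDONLY))
            return NULL;   /* no writable view of a read-only ref */
        return ref->data;
    }

    /* Destructor. */
    void forget_ref(abstract_ref_t *ref)
    {
        free(ref);
    }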

You could have something like that, or you could make a
local_buffer_reference for some memory in the virtual address space and
then use the local_buffer_reference_copy function, which would get the
data into your address space as efficiently as it could.  It works out
as pretty much the same thing.  Both cases require allocation of the
virtual address space somehow.  If that is an operation that could fail
(more physical memory than available address space, for example), then
the failure must be expressed in the API; having to allocate the memory
up-front is a relatively clean way of doing this.  This method also
lets you copy between a local_buffer_reference and the stack, warming
the CPU cache just before you look at the data.  Which approach you
choose probably depends on the context; I guess you might want both.
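
(For comparison, a sketch of the copy-based pattern just described:
local_buffer_reference_copy is the function named above, but its
signature and the struct layout here are guesses.)

    #include <stddef.h>

    /* Assumed shape of a local buffer reference. */
    typedef struct {
        void  *base;
        size_t len;
    } local_buffer_reference;

    /* Assumed signature: copy src into dst as efficiently as possible,
     * returning 0 on success. */
    extern int local_buffer_reference_copy(local_buffer_reference *dst,
                                           const local_buffer_reference *src);

    /* The destination is allocated up-front (here, on the stack), so
     * any address-space allocation failure is handled before the copy,
     * and the copy itself warms the cache just before the data is
     * examined. */
    int first_byte(const local_buffer_reference *src)
    {
        char buf[64];
        local_buffer_reference dst = { buf, sizeof(buf) };

        if (local_buffer_reference_copy(&dst, src) != 0)
            return -1;
        return buf[0];
    }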

> 
> > 
> > BTW, this specific abstraction I learnt about from an embedded OS
> > architected by Nik Shalor. He might have got it from somewhere else.
> > 
> 
> Any specific paper references we should be looking at?  Or is obvious
> from a google?

Here's a reference for historical interest, but be aware that I'm
proposing something different.  In particular, these DDRs confuse the
function of local and remote buffer references.

Page numbered 16 of the following doc (page 32 according to xpdf):

http://www-900.ibm.com/cn/support/library/storage/download/Advanced%20SerialRAID%20Plus%20Adapter%20Technical%20Reference.pdf

Harry.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
