
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Fwd: Re: [Xen-devel] [PATCH] skeleton frontend/backend examples and a deadlock]
From: harry <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Date: Fri, 04 Nov 2005 17:06:26 +0000
--- Begin Message ---
To: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] skeleton frontend/backend examples and a deadlock
From: harry <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Date: Thu, 03 Nov 2005 11:30:00 +0000
In-reply-to: <200511030217.27986.mark.williamson@xxxxxxxxxxxx>
References: <1130908964.18258.25.camel@xxxxxxxxxxxxxxxxxxxxx> <1130935066.4719.22.camel@localhost> <200511030217.27986.mark.williamson@xxxxxxxxxxxx>
On Thu, 2005-11-03 at 02:17 +0000, Mark Williamson wrote:
> A few random questions:
> 
> * Does XenIDC have any performance impact?

I expect so :-)  The slight extra complexity might make it go slower;
alternatively, because all the common code is in one place (so when you
optimise it you improve all the drivers), it might make it go faster :-)

The only thing I can think of which is non-optimal about the design of
the API from a performance perspective is that a network-transparent
implementation wouldn't easily be able to couple the transaction
completion to the completion of a bulk data send.  This isn't an issue
until the IDC mechanism has to span nodes in a cluster, though, which is
probably a way off for Xen.  The existing driver code of course has much
bigger problems with network transparency.

The implementation probably needs some performance tweaking to get
batching working correctly and possibly to do speculative interrupt
handling.  Unfortunately I was forced to get a couple of patents on this
stuff a while back and I'm not sure if I'm allowed to put it in.  I'll
look into it when I have the code working.

> * Can it be compatible with the current ring interface, or does it imply 
> incompatibility with the existing scheme? (i.e. is it an "all or nothing" 
> patch?)

The API implementation isn't binary-compatible with the ring code in the
other drivers, or with the current way the store is used to set up the
ring interface, but it's not an all-or-nothing patch because it can
coexist side-by-side with the other drivers.

The endpoint does use shared pages for a ring buffer, but it shares two
pages, one from the FE and one from the BE, each mapped read-only by the
peer.  I did this because it's a simple, symmetric implementation which
was the quickest for me to implement, and I think it is easy to
understand.  It also has the advantage that if the ring gets scribbled
on, you can point the finger at the domain that was likely responsible.
I'm not aware of any security implications.
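
To make that concrete, here is a rough sketch of what such a symmetric
endpoint could look like (the names are illustrative, not the actual
XenIDC structures): each domain writes only to its own page and maps the
peer's page read-only, which is what lets you attribute corruption.

/* Sketch only -- illustrative names, not the real XenIDC types.
 * Each domain owns one page: it writes its outgoing messages and its
 * producer index there, and maps the peer's page read-only.  If an
 * entry is corrupted, the owner of that page is the suspect. */

#include <stdint.h>

#define IDC_RING_BYTES (4096 - 2 * sizeof(uint32_t))

struct idc_ring_page {
    uint32_t prod;              /* written only by the owning domain    */
    uint32_t cons;              /* how far we have read the peer's data */
    uint8_t  data[IDC_RING_BYTES];
};

struct idc_endpoint {
    struct idc_ring_page       *local;  /* our page: read-write          */
    const struct idc_ring_page *remote; /* peer's page: mapped read-only */
};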

The format of the data in the ring is slightly more complicated too,
because the code is generic in order to cope with the varying sizes of
different clients' requests.
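
Roughly what I mean, as a sketch (hypothetical names again): each
message carries a small header giving its type and total length, so the
consumer steps through the ring by the length field rather than by a
fixed slot size.

/* Hypothetical framing for variable-sized messages -- sketch only. */
struct idc_msg_hdr {
    uint16_t type;  /* client-defined request/response type         */
    uint16_t len;   /* total length in bytes, header included, so a */
                    /* consumer can skip over messages it does not  */
                    /* understand and find the next header          */
};
/* The client payload follows the header; the next message begins at
 * the next suitably aligned offset after len bytes. */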

I use the store differently because I wanted to correctly handle
suspend/resume and loadable modules.  I don't think any of the other
drivers do this correctly yet, not even Rusty's skeleton driver.

The API, though, is completely decoupled from the implementation, so you
could change the underlying implementation to go back to a single page
given from the FE to the BE, or anything else you like.  I doubt you'd
be able to make the implementation binary-compatible with the existing
code without adding some special cases for it.

> * Will it be able to leverage page transfers?

I expect so.  I used the local/remote buffer reference abstraction for
the bulk data transfer.  You could define a local buffer reference for
memory that was intended for transfer; this could be converted into a
new kind of remote buffer reference, which would be interpreted
accordingly at the destination.  The implementation is designed to be
extended with an arbitrary number of different types of buffer
reference.
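
As a sketch of how that extension could look (illustrative only; these
are not the real XenIDC definitions), a remote buffer reference could
carry a type tag that tells the destination how to interpret it, with a
page-transfer reference being just one more tag:

/* Sketch of an extensible buffer reference; all names hypothetical. */
enum idc_bref_type {
    IDC_BREF_GRANT    = 1,  /* refers to memory mapped via grants     */
    IDC_BREF_TRANSFER = 2,  /* refers to pages whose ownership moves  */
    /* further reference types can be added without changing framing */
};

struct idc_remote_bref {
    uint32_t type;    /* one of enum idc_bref_type                   */
    uint32_t length;  /* number of bytes the reference describes     */
    uint64_t handle;  /* decoded according to 'type', e.g. a grant   */
                      /* reference or a transfer handle              */
};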

I've almost finished the rbr_provider_pool, which is the FE side of the
bulk data transfer mechanism.  When I send out a patch with this code in
it, it will demonstrate how the local and remote buffer references are
used.

Harry.

--- End Message ---
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel