xen-devel

Re: [Xen-devel] High-Level API

On Wed, 2006-07-19 at 13:54 -0400, John D. Ramsdell wrote:
> > From: Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
> ...
> > But, whatever the low-level API, whether grant-tables or something
> > which has better support for revocation and n-way communication, I
> > think there needs to be a small library to implement a higher level
> > API that is more convenient for driver authors to use directly.
> 
> Harry,
> 
> I'm curious to know what abstractions you would want collected
> together in a high-level API.  Is there a common pattern of usage that
> can be easily packaged as a high-level API?  I tried to think of a
> generic but high-level API for establishing communication between
> domains, which I have enclosed.  Is this the kind of API you're
> talking about?  Rather than dwelling on my proposal, perhaps it would
> be most interesting if you proposed an API at the same level of detail.

I already had a go at this...

Initial discussion:
http://lists.xensource.com/archives/html/xen-devel/2005-05/msg00154.html

Documentation and implementation and example usb driver using the API:
http://lists.xensource.com/archives/html/xen-devel/2005-12/msg00601.html

Sketchy example block driver using the API:
http://lists.xensource.com/archives/html/xen-devel/2006-01/msg00472.html

...there is more if you search in the archives.

Some random thoughts now that a bit of time has passed:

In addition to the high-performance asynchronous comms provided by
xenidc, I think there is a requirement for synchronous comms for the
pciback/front driver.  This could be implemented as something like a
synchronous invocation of the core xenidc code with polling rather than
interrupts for completions.
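
Very roughly, something like this (the xenidc_* names below are made
up for illustration, not the real entry points):

/*
 * Sketch only: a synchronous call built on the asynchronous transport
 * by polling for completions instead of waiting for an interrupt.
 */

struct xenidc_request;                      /* opaque transport request */

extern int xenidc_submit(struct xenidc_request *req);  /* async submit */
extern void xenidc_poll_completions(void);  /* reap finished requests */
extern int xenidc_request_done(struct xenidc_request *req);

/*
 * Submit a request and spin, polling the completion path, until it
 * finishes.  Good enough for pcifront/pciback-style config cycles
 * that must work before interrupt delivery is set up.
 */
int xenidc_call_sync(struct xenidc_request *req)
{
        int rc = xenidc_submit(req);

        if (rc)
                return rc;

        while (!xenidc_request_done(req))
                xenidc_poll_completions(); /* completions by polling */

        return 0;
}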

You'd want to reimplement the API on top of something with better
revocation than grant-tables.  There's a place where the code sets up an
interval timer to poll indefinitely waiting for the remote domain to
free up resources before the channel can be reestablished.
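
The workaround looks something like this (the channel_* helpers are
invented for the sketch):

#include <linux/timer.h>
#include <linux/jiffies.h>

extern int channel_resources_still_held(void); /* remote still maps us? */
extern void channel_reestablish(void);

static struct timer_list reconnect_timer;

static void reconnect_poll(unsigned long unused)
{
        if (channel_resources_still_held()) {
                /* No way to revoke the grants, so just check again
                 * in a second. */
                mod_timer(&reconnect_timer, jiffies + HZ);
                return;
        }
        channel_reestablish();
}

static void start_reconnect_polling(void)
{
        init_timer(&reconnect_timer);
        reconnect_timer.function = reconnect_poll;
        reconnect_timer.data = 0;
        mod_timer(&reconnect_timer, jiffies + HZ);
}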

Use-cases like a shared framebuffer don't really fit into the kind of
channel model presented by the xenidc API.  I think this is a bit of a
special case.  VNC is a good fit of course.

The xenidc implementation is fairly tricky, mainly because of the lossy,
level-sensitive communication through xenbus and xenstore.
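
The pattern that forces on the implementation is roughly this (the
backend path and reconcile_channel() are invented for the sketch):

#include <linux/err.h>
#include <linux/slab.h>
#include <xen/xenbus.h>

static const char *backend_path = "/local/domain/0/backend/example/1/0";

extern void reconcile_channel(const char *remote_state);

static void backend_changed(struct xenbus_watch *watch,
                            const char **vec, unsigned int len)
{
        char *state;

        /* A watch event only means "something may have changed", and
         * events can be coalesced, so ignore the payload and re-read
         * the authoritative current value. */
        state = xenbus_read(XBT_NIL, backend_path, "state", NULL);
        if (IS_ERR(state)) {
                /* Node has gone: that is itself a level to act on. */
                reconcile_channel(NULL);
                return;
        }

        reconcile_channel(state);
        kfree(state);
}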

I think the xenbus API needs to be fixed in a separate effort.

> John
> 
> Name: advertise_endpoint
> 
> Inputs:
>     domid_t buddy      // The domain with which to share information
>     void (*a_handler)(domid_t buddy, void *data) // Handler invoked 
>     // when a notification is received on the read port
>     void *data         // Application specific data given to handlers
> 
> Outputs:
>     void *write_page   // Page written in this domain, and read by the buddy
>     evtchn_port_t read_port  // port used to notify the buddy that data
>     // in the read page has been read.
> 
> Implementation:
>     The write_page and an unbound port are allocated.  A grant ref is
>     generated for the write page, and the ref and the port are
>     published using XenBus.
>     
> Name: connect_to_endpoint
> 
> Inputs:
>     domid_t buddy      // The domain with which to share information
>     void (*a_handler)(domid_t buddy, void *data) // Handler invoked 
>     // when a notification is received on the write port
>     void *data         // Application specific data given to handlers
> 
> Outputs:
>     void *read_page    // Page read in this domain, and written by the buddy
>     evtchn_port_t write_port  // port used to notify that data is ready
>     // for the other domain in the write page
>     grant_handle_t handle    // Handle for mapped read page
> 
> Implementation:
>     A grant ref and port associated with the buddy domain is obtained
>     via XenBus.  The mapped page is returned as the read page, and the
>     result of performing an interdomain bind is the write port.
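
To check I'm reading the spec right, here's roughly how I assume the
two calls would look and be used, with stand-in types and an int
return code added for illustration:

#include <stddef.h>

typedef unsigned short domid_t;
typedef unsigned int evtchn_port_t;
typedef int grant_handle_t;

extern int advertise_endpoint(domid_t buddy,
                              void (*a_handler)(domid_t, void *),
                              void *data,
                              void **write_page,
                              evtchn_port_t *read_port);

extern int connect_to_endpoint(domid_t buddy,
                               void (*a_handler)(domid_t, void *),
                               void *data,
                               void **read_page,
                               evtchn_port_t *write_port,
                               grant_handle_t *handle);

static void buddy_notified(domid_t buddy, void *data)
{
        /* application-specific reaction to a notification */
}

/* Advertising domain: share a page and publish the grant ref and
 * unbound port via XenBus. */
static int backend_setup(domid_t frontend)
{
        void *write_page;
        evtchn_port_t read_port;

        return advertise_endpoint(frontend, buddy_notified, NULL,
                                  &write_page, &read_port);
}

/* Connecting domain: fetch the ref and port via XenBus, map the page
 * and bind the port. */
static int frontend_setup(domid_t backend)
{
        void *read_page;
        evtchn_port_t write_port;
        grant_handle_t handle;

        return connect_to_endpoint(backend, buddy_notified, NULL,
                                   &read_page, &write_port, &handle);
}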

Your API is still a low-level API.  You need to deal with
arbitrary-sized bulk data transfer, concurrency, request-response
coupling, multiple data phases, channel lifecycle, resource management,
abstraction of the memory model and probably some other stuff that
doesn't immediately come to mind.
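
For contrast, the kind of surface I mean looks roughly like this (the
idc_* names are invented here, not the actual xenidc API):

struct idc_channel;                /* lifecycle-managed channel         */
struct idc_buffer;                 /* hides the memory model from users */

/* Channel lifecycle: open/close negotiate resources with the remote
 * domain and cope with it restarting. */
extern struct idc_channel *idc_channel_open(unsigned short remote_domid);
extern void idc_channel_close(struct idc_channel *ch);

/* A coupled request/response, possibly with multiple data phases. */
struct idc_request {
        struct idc_buffer *out;    /* bulk data phase: local -> remote  */
        struct idc_buffer *in;     /* bulk data phase: remote -> local  */
        void (*complete)(struct idc_request *req, int status);
        void *context;             /* caller's per-request state        */
};

/* Many requests may be outstanding at once; the library fragments
 * arbitrary-sized transfers and manages ring slots and grant
 * references internally. */
extern int idc_submit(struct idc_channel *ch, struct idc_request *req);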

It's probably not worth reopening this discussion unless the core Xen
team are interested.

Harry.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
