[Xen-devel] [PATCH 00 of 24] [RFC] libxc: hypercall buffers

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH 00 of 24] [RFC] libxc: hypercall buffers
From: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date: Mon, 06 Sep 2010 14:38:20 +0100
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
libxc currently locks various on-stack data structures using mlock(2)
in order to try to make them safe for passing to hypercalls (which
require the buffer memory to remain mapped).
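
For illustration, the current pattern is roughly the following (a
simplified sketch, not the actual libxc code; the handle type and the
do_xen_hypercall helper are stand-ins):

    #include <sys/mman.h>

    /* Sketch of the existing style: pin an on-stack argument with
     * mlock(2) for the duration of the hypercall, then unlock it. */
    static int example_domctl(int xc_handle, struct xen_domctl *domctl)
    {
        int ret;

        if ( mlock(domctl, sizeof(*domctl)) != 0 )
            return -1;

        ret = do_xen_hypercall(xc_handle, domctl); /* stand-in helper */

        (void)munlock(domctl, sizeof(*domctl));
        return ret;
    }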

There are several issues with this approach:

1) mlock/munlock do not nest, therefore mlocking multiple pieces of
   data on the stack which happen to share a page causes everything to
   be unlocked on the first munlock, not the last (see the sketch
   after this list). This is likely to be OK for the uses within libxc
   taken in isolation, but could impact any caller of libxc which uses
   mlock itself.
2) mlocking only parts of the stack is considered by many to be a
   dubious use of mlock, even if it is, strictly speaking, allowed by
   the relevant specifications.
3) mlock may not provide the semantics required for hypercall-safe
   memory. mlock simply ensures that there can be no major faults
   (page faults requiring I/O to satisfy) but does not necessarily
   rule out minor faults (e.g. due to page migration).
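
A minimal illustration of point (1), assuming the two arrays happen to
land in the same stack page:

    /* Sketch: mlock/munlock do not nest or keep a reference count. */
    char a[64], b[64];        /* likely share a single stack page */

    mlock(a, sizeof(a));      /* locks the page holding a (and b) */
    mlock(b, sizeof(b));      /* locks the same page again; no count kept */

    munlock(a, sizeof(a));    /* unlocks the whole page ... */
    /* ... so b is no longer locked, even though its munlock has not
     * yet been issued. */
    munlock(b, sizeof(b));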

The following introduces an explicit hypercall-safe memory pool API
which includes support for bouncing user-supplied memory buffers into
suitable memory.
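
To give a flavour of the interface, bouncing a caller-supplied buffer
through hypercall-safe memory for a single call looks roughly like the
following (illustrative names and fields; see the individual patches
for the real macros and signatures):

    /* Sketch: copy a user buffer into hypercall-safe memory, issue
     * the hypercall, then copy the result back out and release it. */
    DECLARE_DOMCTL;
    DECLARE_HYPERCALL_BOUNCE(buf, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

    if ( xc_hypercall_bounce_pre(xch, buf) )      /* copy in */
        return -1;

    xc_set_xen_guest_handle(domctl.u.somecmd.buffer, buf);
    ret = do_domctl(xch, &domctl);

    xc_hypercall_bounce_post(xch, buf);           /* copy out and free */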

This series addresses (1) and (2) but does not directly address (3)
other than by encapsulating the code which acquires hypercall safe
memory into one place where it can be more easily fixed.

There is also the slightly separate issue of code which forgets to
lock buffers as necessary, and therefore this series overrides the
Xen guest-handle interfaces to attempt to improve compile-time
checking for correct use of the memory pool. This scheme works for
the pointers contained within hypercall argument structures but
doesn't catch the actual hypercall arguments themselves. I'm open to
suggestions on how to extend it cleanly to catch those cases.
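
The rough idea behind the compile-time check is token pasting; a
hypothetical sketch (not the exact macros from the patches):

    /* Sketch: DECLARE_HYPERCALL_BUFFER(type, name) also declares a
     * shadow descriptor _hcbuf_name.  The handle-setting macro
     * reaches that shadow by token pasting, so passing a plain
     * pointer which was never declared as a hypercall buffer fails
     * to compile. */
    #define xc_set_xen_guest_handle(handle, name) \
        ((handle).p = _hcbuf_##name.hbuf)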

This RFC series only partially converts libxc over to the new
scheme. It is intended that the final series end with a patch which
effectively does s/xc_set_xen_guest_handle/set_xen_guest_handle/g in
order to catch future errors (it should also remove the now redundant
hcall_buf_prep and hcall_buf_release calls and associated
infrastructure).

The RFC has already grown to many more patches than I originally
intended, so I'd like to solicit some comments on the basic premise,
the usability of the interface, etc., before I dig down and
convert/clean up the rest.

I've tried in this initial pass to keep the locking/bouncing at the
same level of the call stack. There seem to be several opportunities
for pushing this up or down to reduce unnecessary bouncing. While it
would be nice to avoid exposing the explicit allocation to users of
libxc (by using bounce buffers at all public interfaces), I do not
think this will be possible in many cases for performance reasons.
Already there are several users of libxc which lock their own
buffers.
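
For such callers the intention is that hypercall-safe memory can be
allocated up front and reused across many calls, avoiding a bounce on
every invocation; very roughly (again an illustrative sketch, names
may differ from the patches):

    /* Sketch: allocate hypercall-safe memory once, reuse it for a
     * series of hypercalls, then free it. */
    DECLARE_HYPERCALL_BUFFER(xen_pfn_t, pfns);

    pfns = xc_hypercall_buffer_alloc(xch, pfns, count * sizeof(*pfns));
    if ( pfns == NULL )
        return -1;

    /* ... fill pfns[] and issue hypercalls referencing it ... */

    xc_hypercall_buffer_free(xch, pfns);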
