
[Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers



libxc currently locks various on-stack data structures using mlock(2)
in order to try and make them safe for passing to hypercalls (which
require the memory to be mapped).

There are several issues with this approach:

1) mlock/munlock do not nest, therefore mlocking multiple pieces of
   data on the stack which happen to share a page causes everything to
   be unlocked on the first munlock, not the last. This is probably OK
   for the uses in libxc taken in isolation but could impact any
   caller of libxc which uses mlock itself.
2) mlocking only parts of the stack is considered by many to be a
   dubious use of mlock, even if it is strictly speaking allowed by
   the relevant specifications.
3) mlock may not provide the semantics required for hypercall-safe
   memory. mlock simply ensures that there can be no major faults
   (page faults requiring I/O to satisfy) but does not necessarily
   rule out minor faults (e.g. due to page migration).

The following introduces an explicit hypercall-safe memory pool API
which includes support for bouncing user-supplied memory buffers into
suitable memory.
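The bouncing pattern can be sketched as follows. The names and types below
are illustrative only, not the interface introduced by this series, and the
"safe" memory is stood in for by plain malloc where the real pool would hand
out hypercall-safe pages.

```c
#include <stdlib.h>
#include <string.h>

/* Direction of the bounce: copy in before the hypercall, out after, or both. */
enum bounce_dir { BOUNCE_IN = 1, BOUNCE_OUT = 2, BOUNCE_BOTH = 3 };

struct bounce {
    void *user;          /* caller-supplied buffer, not hypercall-safe */
    void *safe;          /* hypercall-safe copy (here: plain malloc) */
    size_t len;
    enum bounce_dir dir;
};

/* Allocate safe memory and copy the user's data in before the hypercall. */
static int bounce_pre(struct bounce *b)
{
    b->safe = malloc(b->len);         /* real code: locked/pinned pool page */
    if (!b->safe)
        return -1;
    if (b->dir & BOUNCE_IN)
        memcpy(b->safe, b->user, b->len);
    return 0;
}

/* Copy any results back to the user's buffer and release the safe memory. */
static void bounce_post(struct bounce *b)
{
    if (b->dir & BOUNCE_OUT)
        memcpy(b->user, b->safe, b->len);
    free(b->safe);
    b->safe = NULL;
}
```

A caller would bounce_pre() the buffer, issue the hypercall against b->safe,
then bounce_post() to propagate results and release the pool memory.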

This series addresses (1) and (2) but does not directly address (3)
other than by encapsulating the code which acquires hypercall safe
memory into one place where it can be more easily fixed.

There is also the slightly separate issue of code which forgets to
lock buffers when necessary, and therefore this series overrides the
Xen guest-handle interfaces to improve compile-time checking for
correct use of the memory pool. This scheme works for the pointers
contained within hypercall argument structures but doesn't catch the
actual hypercall arguments themselves. I'm open to suggestions on how
to extend it cleanly to catch those cases.
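The compile-time check can be sketched with token pasting: each C pointer is
tied to a distinct wrapper type which only the pool allocator fills in, so
storing a plain (non-pool) pointer into a guest handle fails to compile. The
macro and type names below are hypothetical, chosen only to illustrate the
technique, not the interface from this series.

```c
#include <stdlib.h>

/* A guest handle stores only pool-backed memory, never a raw pointer. */
typedef struct { void *p; } guest_handle_t;
typedef struct { void *buf; } hypercall_buffer_t;   /* pool bookkeeping */

/* Declaring a buffer creates both the C pointer and its wrapper; the
   ## token pasting ties the two names together. */
#define DECLARE_HYPERCALL_BUFFER(type, name)            \
    type *name = NULL;                                  \
    hypercall_buffer_t name##_hcb = { NULL }

/* Resolves to the wrapper: a compile error if 'name' was never declared
   via DECLARE_HYPERCALL_BUFFER, i.e. if it is not pool-allocated. */
#define HYPERCALL_BUFFER(name) (&name##_hcb)

#define set_guest_handle(h, name) \
    ((h).p = HYPERCALL_BUFFER(name)->buf)

/* Pool allocator fills in both the wrapper and the caller's pointer.
   Plain malloc stands in for hypercall-safe pool memory here. */
static void *pool_alloc(size_t n, hypercall_buffer_t *hcb, void **var)
{
    hcb->buf = malloc(n);
    *var = hcb->buf;
    return *var;
}
```

With this shape, set_guest_handle(h, some_raw_pointer) fails to compile
because some_raw_pointer_hcb does not exist, which is the kind of misuse the
override is meant to catch.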

The bits which touch ia64 are not even compile-tested since I do not
have access to a suitable userspace-capable cross-compiler.

Changes since last time:
  - rebased on top of recent cpupool changes, conflicts in
    xc_cpupool_getinfo and xc_cpupool_freeinfo.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
