To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
From: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Date: Tue, 7 Sep 2010 10:56:50 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 07 Sep 2010 02:57:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C85FB75.9070905@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <bf7fb64762eb7decea9a.1283780310@xxxxxxxxxxxxxxxxxxxxx> <4C85FB75.9070905@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, 2010-09-07 at 09:44 +0100, Jeremy Fitzhardinge wrote:
> How does this end up making the memory suitable for passing to Xen? 
> Where does it get locked down in the non-__sun__ case?  And why just
> __sun__ here?

As described in patch 0/24, the series still uses the same mlock
mechanism as before to actually obtain "suitable" memory. The __sun__
stuff is the same as before too -- this part was ported directly from
the existing bounce implementation in xc_private.c.
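
Roughly speaking, the locking we do today amounts to rounding the
buffer out to page boundaries and mlock()ing that range. A sketch of
the idea (the helper names are illustrative, not the exact ones in
xc_private.c):

/* Sketch of the mlock-based locking used today; names and details
 * are illustrative, not a copy of xc_private.c. */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define XC_PAGE_SHIFT 12
#define XC_PAGE_SIZE  (1UL << XC_PAGE_SHIFT)
#define XC_PAGE_MASK  (~(XC_PAGE_SIZE - 1))

static int lock_buffer(void *addr, size_t len)
{
    /* mlock operates on whole pages, so round the range out to page
     * boundaries before locking it. */
    void *start = (void *)((uintptr_t)addr & XC_PAGE_MASK);
    size_t full = ((uintptr_t)addr - (uintptr_t)start + len +
                   XC_PAGE_SIZE - 1) & XC_PAGE_MASK;
    return mlock(start, full);
}

static int unlock_buffer(void *addr, size_t len)
{
    void *start = (void *)((uintptr_t)addr & XC_PAGE_MASK);
    size_t full = ((uintptr_t)addr - (uintptr_t)start + len +
                   XC_PAGE_SIZE - 1) & XC_PAGE_MASK;
    return munlock(start, full);
}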

This series only:
      * ensures that everywhere which should be using special hypercall
        memory is actually using the correct (or any!) interface to
        obtain it. Not everywhere was, sometimes by omission but more
        often because the current implementation will only bounce one
        buffer at a time and just locks any subsequent nested bounce
        attempts in place. The current implementation also only bounces
        buffers smaller than 1 page and just locks anything else in
        place (the bounce pattern is sketched after this list).
      * ensures that each buffer is only locked once -- some callchains
        were (un)locking the same buffer multiple times going down/up
        the stack (particularly concerning for buffers which are
        reused).
      * removes the use of mlock on portions of the stack, which is
        considered more dubious than using mlock in general.
      * makes it easier to switch to a better mechanism than mlock in
        the future (i.e. phase 2) by consolidating the magic allocations
        into one place.
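
To make the bounce-related points above concrete, the pattern the
series moves towards looks roughly like the sketch below -- obtain
hypercall-suitable memory once per buffer, copy in, issue the
hypercall, copy out and release. The helper names are placeholders
rather than the actual interface the patches add:

/* Illustrative only: a minimal "bounce once" pattern. */
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define PG_SIZE 4096UL

/* Page-aligned, mlock'd memory stands in here for whatever "suitable"
 * memory mechanism phase 2 eventually provides. */
static void *hcall_buf_alloc(size_t len)
{
    void *p = NULL;
    if ( posix_memalign(&p, PG_SIZE, len) )
        return NULL;
    if ( mlock(p, len) )
    {
        free(p);
        return NULL;
    }
    return p;
}

static void hcall_buf_free(void *p, size_t len)
{
    munlock(p, len);
    free(p);
}

/* Stand-in for issuing a real hypercall through privcmd. */
extern int do_some_hypercall(void *arg);

static int bounce_and_call(void *user_buf, size_t len)
{
    void *bounce = hcall_buf_alloc(len);
    int rc;

    if ( bounce == NULL )
        return -1;

    memcpy(bounce, user_buf, len);   /* copy in ("pre") */
    rc = do_some_hypercall(bounce);  /* Xen only ever sees 'bounce' */
    memcpy(user_buf, bounce, len);   /* copy out ("post") */

    hcall_buf_free(bounce, len);
    return rc;
}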

> Is there any way to make memory hypercall-safe with existing syscalls,
> or does/will it end up copying from this memory into the kernel before
> issuing the hypercall?  Or adding some other mechanism for pinning 
> down the pages?

It's not clear what phase 2 actually is (although phase 3 is clearly
profit). I don't think any existing syscalls do what we need. mlock
(avoiding the stack) gets pretty close, and so far the issues with
mlock seem to have been more potential than actual problems in
practice, but I think it pays to be prepared, e.g. for more aggressive
page migration/coalescing in the future.

It's not possible to copy the necessary buffers in the kernel without
adding deep introspection of the relevant hypercalls to the kernel
itself, which I think we want to avoid if possible.

Also some of the buffers can be quite large and/or potentially
performance sensitive, so we would like to retain the ability to
allocate the correct sort of memory in userspace from the get-go and
therefore avoid bouncing at all.

I was thinking we might need to implement some sort of special
anonymous mmap on the privcmd device, or an ioctl, or something along
those lines, but I'm open to better suggestions.
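
Purely to illustrate the sort of thing I mean (none of this exists
today -- the mmap semantics below are invented for the example), usage
from libxc might end up looking something like:

/* Hypothetical sketch only: a privcmd-style device whose mmap hands
 * back anonymous memory that is already safe to pass to hypercalls.
 * Neither the device behaviour nor this usage exists yet. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static void *alloc_hypercall_pages(size_t npages)
{
    int fd = open("/proc/xen/privcmd", O_RDWR);
    void *p;

    if ( fd < 0 )
        return NULL;

    /* Imagined behaviour: the driver backs this mapping with memory
     * it has pinned on our behalf. */
    p = mmap(NULL, npages << 12, PROT_READ | PROT_WRITE,
             MAP_SHARED, fd, 0);
    close(fd);

    return (p == MAP_FAILED) ? NULL : p;
}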

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
