This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [patch] 32/64-bit hypercall interface revisited

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Subject: Re: [Xen-devel] [patch] 32/64-bit hypercall interface revisited
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Wed, 26 Apr 2006 09:46:51 +0100
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Hollis Blanchard <hollisb@xxxxxxxxxx>, xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 26 Apr 2006 01:50:57 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <571ACEFD467F7749BC50E0A98C17CDD8094E7B9C@pdsmsx403>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <571ACEFD467F7749BC50E0A98C17CDD8094E7B9C@pdsmsx403>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

On 26 Apr 2006, at 09:15, Tian, Kevin wrote:

> Could you reveal something about how to kill mlock() completely?
> :-) Current mlock() can ensure the ptes related to user buffer existing
> in page table, and thus xen can copy from/to that buffer directly. By
> removing mlock(), do you mean page fault may be injected to guest

Sorry, I meant that the *current* mlock() strategy needs to go, to be replaced by pre-allocated mlock()ed (or whatever else is needed to prepare a buffer for hypercall usage on a particular architecture) buffers.

This is needed even on x86, because the current strategy of mlock/munlock on non-page-aligned buffers is not really safe: mlock and munlock calls do not nest, so if buffers from two different threads happen to share a page, whichever thread calls munlock() first unlocks that page while the other hypercall may still be in flight. We get away with it because it's rather unlikely that two hypercall requests from two different threads will have arguments overlapping at page granularity, but it's undesirable.

It's a pain to implement, partly because it will change the libxc interface: callers passing an array into a hypercall will need to allocate that array specially, and callers that are returned an array will need to free it in a special way. The alternative is to end up with two sets of interfaces: a legacy copying interface and a new higher-speed one.

Done properly, though, the mechanisms needed for each architecture can be hidden behind the pre-allocation interface.

 -- Keir
