xen-devel

Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers

To: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
From: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Date: Tue, 7 Sep 2010 19:44:46 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 07 Sep 2010 11:45:47 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <19590.29997.199583.59386@xxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <bf7fb64762eb7decea9a.1283780310@xxxxxxxxxxxxxxxxxxxxx> <4C85FB75.9070905@xxxxxxxx> <1283853410.14311.87.camel@xxxxxxxxxxxxxxxxxxxxxx> <19590.29997.199583.59386@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, 2010-09-07 at 18:23 +0100, Ian Jackson wrote: 
> Ian Campbell writes ("Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure 
> for hypercall safe data buffers"):
> > It's not clear what phase 2 actually is (although phase 3 is clearly
> > profit), I don't think any existing syscalls do what we need. mlock
> > (avoiding the stack) gets pretty close and so far the issues with mlock
> > seem to have been more potential than hurting us in practice, but it
> > pays to be prepared e.g. for more aggressive page migration/coalescing
> > in the future, I think.
> 
> Ian and I discussed this extensively on IRC, during which conversation
> I became convinced that mlock() must do what we want.  Having read the
> code in the kernel I'm now not so sure.

After our discussion I had some other conversation (I forget where/with
whom) which made me pretty sure we were wrong as well.

> The ordinary userspace access functions are all written to cope with
> pagefaults and retry the access.  So userspace addresses are not in
> general valid in kernel mode even if you've called functions to try to
> test them.

Correct. The difference between a normal userspace access function and a
hypercall is that it is possible to inject (and handle) a page fault in
the former case, whereas we cannot inject a page fault to a VCPU while
it is processing a hypercall.

(Maybe it is possible in principle to make all hypercalls restartable
such that we can return to the guest in order to inject page faults, but
it's not the case right now and I suspect it would be an enormous amount
of work to make it so.)

>   It's not clear what mlock prevents; does it prevent NUMA
> page migration ?  If not then I think indeed the page could be made
> not present by one VCPU editing the page tables while another VCPU is
> entering the hypercall, so that the 2nd VCPU will get a spurious
> EFAULT.

I think you are right, these kinds of page faults are possible.

It seems that mlock is only specified to prevent major page faults (i.e.
those requiring I/O to service) but doesn't specify anything regarding
minor page faults. It ensures that the data is resident in RAM, but not
necessarily that it is continuously mapped into your virtual address
space, nor that it remains writeable.
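
For concreteness, the pattern at issue is roughly the following
(illustrative names only, not the real libxc or privcmd entry points):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Stand-in for however the request actually reaches the hypervisor
     * (in reality via the privcmd driver); not a real interface. */
    int issue_hypercall(unsigned long op, void *arg);

    int hypercall_with_buffer(unsigned long op, void *buf, size_t len)
    {
        int ret;

        /* mlock ensures the data is resident in RAM... */
        if (mlock(buf, len))
            return -1;

        /* ...but if the page is transiently unmapped here (a minor
         * fault), the hypervisor cannot inject a page fault and the
         * hypercall fails with a spurious EFAULT. */
        ret = issue_hypercall(op, buf);

        munlock(buf, len);
        return ret;
    }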

Minor page faults could be caused by NUMA migration (as you say), CoW
mappings or by the kernel trying to consolidate free memory in order to
satisfy a higher order allocation (Linux has recently gained this exact
functionality, I believe). I'm sure there are a host of other potential
causes too...

It's possible that historically most of these potential minor fault
causes were either not implemented in the kernels we were using for
domain 0 (e.g. consolidation is pretty new) or not likely to hit in
practice (e.g. perhaps libxc's usage patterns make it likely that any
CoW mappings are already dealt with by the time the hypercall happens).
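
(To the extent that is true, it is probably because the buffer has
already been written to by the time we issue the hypercall. One could
make sure of that by touching each page beforehand; a rough sketch,
nothing libxc specific about it:

    #include <stddef.h>
    #include <unistd.h>

    /* Write one byte per page so that any CoW / zero-page mappings are
     * broken before the buffer is handed to a hypercall.  Purely
     * illustrative; assumes the buffer is writeable. */
    static void touch_pages(void *buf, size_t len)
    {
        volatile char *p = buf;
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t i;

        for (i = 0; i < len; i += page)
            p[i] = p[i];
    }

But that only covers the CoW case; it does nothing for migration or
compaction faults which happen after the touch.)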

Going forward I think it's likely that NUMA migration and memory
consolidation and the like will become more widespread.
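
Which is really the argument for bouncing in this series: copy through a
dedicated allocation whose properties we control rather than relying on
mlock of whatever pointer the caller happens to hand us. Very roughly
(made-up names, not the actual interface the patches add):

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical allocator returning memory known to be safe to pass
     * to the hypervisor (e.g. taken from a pre-locked pool); not the
     * real libxc API. */
    void *hcall_buf_alloc(size_t len);
    void hcall_buf_free(void *buf, size_t len);
    int issue_hypercall(unsigned long op, void *arg);

    int hypercall_bounced(unsigned long op, void *user_buf, size_t len)
    {
        int ret;
        void *bounce = hcall_buf_alloc(len);

        if (!bounce)
            return -1;

        memcpy(bounce, user_buf, len);      /* copy in  */
        ret = issue_hypercall(op, bounce);
        memcpy(user_buf, bounce, len);      /* copy out */

        hcall_buf_free(bounce, len);
        return ret;
    }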

> OTOH: there must be other things that work like Xen - what about user
> mode device drivers of various kinds ?  Do X servers not mlock memory
> and expect to be able to tell the video card to DMA to it ?  etc.

DMA would require physical (or, more strictly, DMA) addresses rather than
virtual addresses, so locking the page into a particular virtual address
space doesn't matter all that much from a DMA point of view. I don't
think pure user mode device drivers can do DMA; there is always some
sort of kernel stub required.

In any case the kernel has been moving away from needing privileged X
servers with direct access to hardware in favour of KMS for a while, so
I'm not sure an appeal to any similarity we may have with that case
helps us much.

> I think if linux-kernel think that people haven't assumed that mlock()
> actually pins the page, they're mistaken - and it's likely to be not
> just us.

Unfortunately, I think we're reasonably unique. 

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
