
Re: [Xen-devel] Re: how to handle paged hypercall args?



At 10:33 +0000 on 15 Nov (1289817224), Keir Fraser wrote:
> On 15/11/2010 10:20, "Tim Deegan" <Tim.Deegan@xxxxxxxxxx> wrote:
> 
> >> Yes, and you'd never turn on paging for dom0 itself. That would never work!
> > 
> > :) No, the issue is if dom0 (or whichever dom the pager lives in) is
> > trying an operation on domU's memory that hits a paged-out page
> > (e.g. qemu or similar is mapping it) with its only vpcu - you can't
> > just block or spin.  You need to let dom0 schedule the pager process.
> > 
> >> Changing every user of the guest accessor macros to retry via guest
> >> space is really not tenable. We'd never get all the bugs out.
> > 
> > Right now, I can't see another way of doing it.  Grants can be handled
> > by shadowing the guest grant table and pinning granted frames so the
> > block happens in domU (performance-- but you're already paging, right?)
> > but what about qemu, xenctx, save/restore...?
> 
> We're talking about copy_to/from_guest, and friends, here.

Oh sorry, I had lost the context there. 

Yes, for those the plan was simply to pause and retry, just like all the
other cases where Xen needs to access guest memory.  We hadn't
particularly considered the case of large hypercall arguments that aren't
all read up-front.  How many cases of that are there?  A bit of
reordering in the memory-operation hypercalls could presumably let them
be preempted and restarted in mid-operation next time.  (IIRC the compat
code already does something like this.)
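
Roughly the shape I have in mind, as a sketch only and not real Xen code
(copy_extent_from_guest() and create_continuation() below are made-up
stand-ins for whatever helpers we would actually use): the hypercall
records how far through its extent list it has got, so that when a copy
hits a paged-out frame it can bail out and be reissued later from the
same point.

    /*
     * Sketch only: a long memory-op hypercall that notes its progress and
     * restarts in mid-operation when a copy hits a paged-out guest frame.
     * copy_extent_from_guest() and create_continuation() are stand-ins.
     */
    #include <errno.h>

    #define MEMOP_EXTENT_SHIFT 6  /* low bits: sub-op; high bits: progress */

    /* Stand-in: pretend every frame is resident, so the copy succeeds. */
    static int copy_extent_from_guest(char *buf, unsigned long extent)
    {
        (void)buf; (void)extent;
        return 0;            /* would return -EAGAIN for a paged-out frame */
    }

    /* Stand-in: arrange for the hypercall to be reissued with this word. */
    static long create_continuation(unsigned long encoded_cmd)
    {
        return (long)encoded_cmd;  /* the real thing would replay the call */
    }

    long do_memory_op_sketch(unsigned long cmd, unsigned long nr_extents)
    {
        unsigned long op    = cmd & ((1UL << MEMOP_EXTENT_SHIFT) - 1);
        unsigned long start = cmd >> MEMOP_EXTENT_SHIFT;  /* resume point */
        unsigned long i;
        char buf[4096];

        for ( i = start; i < nr_extents; i++ )
        {
            int rc = copy_extent_from_guest(buf, i);

            if ( rc == -EAGAIN )
                /* Paged out: encode how far we got and retry later. */
                return create_continuation(op | (i << MEMOP_EXTENT_SHIFT));
            if ( rc )
                return rc;

            /* ... act on this extent ... */
        }

        return 0;
    }

The nice property is that nothing has to block inside Xen: the pager in
dom0 gets to run, and when the hypercall is reissued it picks up at the
extent it had reached rather than starting over.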

Tim.

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

