
Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a new mappable resource type...



> -----Original Message-----
> From: George Dunlap
> Sent: 29 August 2017 14:40
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Stefano Stabellini
> <sstabellini@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Jan Beulich
> <jbeulich@xxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v2 REPOST 12/12] x86/hvm/ioreq: add a
> new mappable resource type...
> 
> On Fri, Aug 25, 2017 at 10:46 AM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> wrote:
> >> > +    /*
> >> > +     * Allocated IOREQ server pages are assigned to the emulating
> >> > +     * domain, not the target domain. This is because the emulator is
> >> > +     * likely to be destroyed after the target domain has been torn
> >> > +     * down, and we must use MEMF_no_refcount otherwise page
> >> > +     * allocation could fail if the emulating domain has already
> >> > +     * reached its maximum allocation.
> >> > +     */
> >> > +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> >>
> >> I don't really like the fact that the page is not accounted to any
> >> domain, but I can see the point in doing it like that (as you
> >> argue in the comment).
> >>
> >> IIRC there were talks about tightening the accounting of memory
> >> pages, so that ideally everything would be accounted for in the memory
> >> assigned to the domain.
> >>
> >> Just a random thought, but could the toolstack set aside some
> >> memory pages (i.e. not map them into the domain p2m) that could then
> >> be used for this? (not asking you to do this here)
> >>
> >> And how many pages are we expecting to use for each domain? I assume
> >> the number will be quite low.
> >>
> >
> > Yes, I agree the use of MEMF_no_refcount is not ideal and you do
> > highlight an issue: I don't think there is currently an upper limit on the
> > number of ioreq servers, so an emulating domain could exhaust memory
> > using the new scheme. I'll need to introduce a limit to avoid that.
> 
> I'm not terribly happy with allocating out-of-band pages either.  One
> of the advantages of the way things are done now (with the page
> allocated to the guest VM) is that it is more resilient to unexpected
> events:  If the domain dies before the emulator is done, you have a
> "zombie" domain until the process exits.  But once the process exits
> for any reason -- whether crashing or whatever -- the ref is freed and
> the domain can finish dying.
> 
> What happens in this case if the dm process in dom0 is killed /
> segfaults before it can unmap the page?  Will the page be properly
> freed, or will it just leak?

The page is referenced by the ioreq server in the target domain, so it will be 
freed when the target domain is destroyed.
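
To make that concrete, here is a simplified sketch of the teardown path I
have in mind (helper and field names are illustrative rather than the exact
patch code):

/*
 * Sketch only: runs as part of ioreq server destruction, i.e. during
 * target domain teardown, regardless of what happened to the dm process.
 */
static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;

    if ( !iorp->page )
        return;

    unmap_domain_page_global(iorp->va);
    iorp->va = NULL;

    /*
     * Simplified: in practice the free has to respect any reference an
     * emulator mapping still holds, so the page only goes back to the
     * heap once that reference has also been dropped.
     */
    free_domheap_page(iorp->page);
    iorp->page = NULL;
}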

> 
> I don't immediately see an advantage to doing what you're doing here,
> instead of just calling hvm_alloc_ioreq_gfn().  The only reason you
> give is that the domain is usually destroyed before the emulator
> (meaning a short period of time where you have a 'zombie' domain), but
> I don't see why that's an issue -- it doesn't seem like that's worth
> the can of worms that it opens up.
> 

The advantage is that the page is *never* in the guest P2M, so it cannot be 
mapped by the guest. The use of guest pages for communication between Xen and 
an emulator is a well-known attack surface and IIRC has already been the 
subject of at least one XSA. Until we have better infrastructure to account 
hypervisor memory to guests, I think using alloc_domheap_page() with 
MEMF_no_refcount is the best way.
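
For reference, the allocation path I'm describing looks roughly like this
(a sketch only; exact names may differ from the patch):

/*
 * Sketch of the allocation path; names are illustrative. Note that,
 * unlike hvm_alloc_ioreq_gfn(), nothing here touches the guest P2M.
 */
static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct domain *currd = current->domain;   /* the emulating domain */
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;

    /*
     * Assign the page to the emulator, not the target, and skip the
     * normal accounting so the allocation cannot fail merely because
     * the emulating domain is at its maximum allocation.
     */
    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
    if ( !iorp->page )
        return -ENOMEM;

    iorp->va = __map_domain_page_global(iorp->page);
    if ( !iorp->va )
    {
        free_domheap_page(iorp->page);
        iorp->page = NULL;
        return -ENOMEM;
    }

    clear_page(iorp->va);
    return 0;
}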

  Paul

>  -George