
Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels for sync requests.



> -----Original Message-----
> From: Petre Ovidiu PIRCALABU [mailto:ppircalabu@xxxxxxxxxxxxxxx]
> Sent: 08 January 2019 14:50
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>; Konrad
> Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; George Dunlap
> <George.Dunlap@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian
> Jackson <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien
> Grall <julien.grall@xxxxxxx>; Tamas K Lengyel <tamas@xxxxxxxxxxxxx>; Jan
> Beulich <jbeulich@xxxxxxxx>; Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Subject: Re: [Xen-devel] [RFC PATCH 4/6] vm_event: Use slotted channels
> for sync requests.
> 
> On Thu, 2018-12-20 at 12:05 +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > >
> > > The memory for the asynchronous ring and the synchronous channels
> > > will
> > > be allocated from domheap and mapped to the controlling domain
> > > using the
> > > foreignmemory_map_resource interface. Unlike the current
> > > implementation,
> > > the allocated pages are not part of the target DomU, so they will
> > > not be
> > > reclaimed when the vm_event domain is disabled.
> >
> > Why re-invent the wheel here? The ioreq infrastructure already does
> > pretty much everything you need AFAICT.
> >
> >   Paul
> >
> 
> To my understanding, the current implementation of the ioreq server is
> limited to just 2 allocated pages (ioreq and bufioreq).

The current implementation is, but the direct resource mapping hypercall 
removed any limit from the API. It should be feasible to extend it to as 
many pages as are needed, hence:

#define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n))

...in the public header.
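
For illustration, mapping all of an ioreq server's frames in one go with 
libxenforeignmemory might look like the sketch below (error handling is 
abbreviated, and 'domid', 'id' and 'nr_ioreq' are placeholder values, not 
part of the patch under discussion):

#include <sys/mman.h>
#include <xenforeignmemory.h>
#include <xen/memory.h>        /* XENMEM_resource_ioreq_server* */

/* Map the bufioreq frame plus nr_ioreq ioreq frames of ioreq server
 * 'id' in the target domain as one contiguous region. */
void *map_ioreq_frames(xenforeignmemory_handle *fmem, domid_t domid,
                       unsigned int id, unsigned long nr_ioreq,
                       xenforeignmemory_resource_handle **res)
{
    void *addr = NULL;

    /* Frame 0 is the bufioreq page; frames 1..nr_ioreq are the ioreq
     * pages, per XENMEM_resource_ioreq_server_frame_ioreq(n) == 1 + (n). */
    *res = xenforeignmemory_map_resource(
        fmem, domid, XENMEM_resource_ioreq_server, id,
        XENMEM_resource_ioreq_server_frame_bufioreq, 1 + nr_ioreq,
        &addr, PROT_READ | PROT_WRITE, 0);

    return *res ? addr : NULL;
}

A vm_event resource type could presumably be plumbed into the same 
hypercall, which is the point about not re-inventing the wheel.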

> The main goal of the new vm_event implementation proposal is to be more
> flexible with respect to the number of pages necessary for the
> request/response buffers (the slotted structure which holds one
> request/response per vcpu, or the ring spanning multiple pages in the
> previous proposal).
> Is it feasible to extend the current ioreq server implementation to
> dynamically allocate a specific number of pages?

Yes, absolutely. At the moment the single page for synchronous emulation 
requests limits HVM guests to 128 vcpus. When we want to go past this 
limit, multiple pages will be necessary... which is why the hypercall was 
designed the way it is.
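
(The 128 comes from the shared ioreq page layout: one fixed-size ioreq_t 
slot per vcpu, and sizeof(ioreq_t) is 32 bytes, so a 4K page holds 
4096 / 32 = 128 slots.) With several frames mapped, locating a vcpu's slot 
is straightforward; a purely illustrative helper, not an existing 
interface:

#include <xen/hvm/ioreq.h>     /* ioreq_t */

#define IOREQS_PER_PAGE (4096 / sizeof(ioreq_t))   /* 32-byte slots => 128 */

/* Hypothetical helper: find vcpu N's slot when the per-vcpu slots span
 * several mapped frames. */
static inline ioreq_t *vcpu_ioreq_slot(ioreq_t **frames, unsigned int vcpu_id)
{
    return &frames[vcpu_id / IOREQS_PER_PAGE][vcpu_id % IOREQS_PER_PAGE];
}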

> 
> Also, for the current vm_event implementation, other than using the
> hvm_params to specify the ring page gfn, I couldn't see any reason why
> it should be limited to HVM guests. Is it feasible to assume the
> vm_event mechanism will never be extended to PV guests?
> 

Unless you limit things to HVM (and PVH) guests, I guess you'll run into 
the same page ownership problems that ioreq server ran into (due to a PV 
guest being allowed to map any page assigned to it... including those that 
may be 'resources' it should not be able to see directly). Is there any 
particular reason why you'd definitely want to support pure PV guests?

  Paul

> Many thanks,
> Petre
> 
