
Re: [Xen-devel] compat mode argument translation area



On Tue, 2013-02-05 at 07:34 +0000, Jan Beulich wrote:
> >>> On 04.02.13 at 18:50, Keir Fraser <keir@xxxxxxx> wrote:
> > On 04/02/2013 16:50, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> >> this, originally having been at a fixed location outside of Xen virtual
> >> address ranges, has seen a number of changes over the years, with
> >> the net effect that right now we're requiring an order-1 allocation
> >> from the Xen heap. Obviously it would be much better if this got
> >> populated with order-0 allocations from the domain heap.
> >> 
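
For reference, the difference being discussed is roughly the following
(a sketch only; the function signatures are approximated from the Xen
tree of that time and error unwinding is omitted, so this is not the
actual patch):

    #include <xen/mm.h>
    #include <xen/vmap.h>

    #define XLAT_PAGES 2                    /* the area spans two 4k pages */

    static void *alloc_xlat_area_sketch(void)
    {
        unsigned long mfn[XLAT_PAGES];
        unsigned int i;

        /* Current scheme: one physically contiguous order-1 block from
         * the Xen heap, which can fail once that heap gets fragmented. */
        /* return alloc_xenheap_pages(1, 0); */

        /* Preferred scheme: independent order-0 domain-heap pages, made
         * virtually contiguous afterwards (vmap() here; a fixed per-vCPU
         * mapping would serve the same purpose). */
        for ( i = 0; i < XLAT_PAGES; i++ )
        {
            struct page_info *pg = alloc_domheap_page(NULL, 0);

            if ( !pg )
                return NULL;
            mfn[i] = page_to_mfn(pg);
        }

        return vmap(mfn, XLAT_PAGES);
    }
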
> >> Considering that there's going to be one such area per vCPU (less
> >> those vCPU-s that don't need translation, i.e. 64-bit PV ones), it
> >> seems undesirable to me to use vmap() for this purpose.
> >> 
> >> Instead I wonder whether we shouldn't go back to putting this at
> >> a fixed address (5GB or 8GB) at least for PV guests, thus reducing
> >> the virtual address range pressure (compared to the vmap()
> >> approach as well as for the case that these areas might need
> >> extending). Was there any other reason that you moved them out
> >> of such a fixed area than wanting to use mostly the same code
> >> for PV and HVM (which ought to be possible now that there's a
> >> base pointer stored for each vCPU)?
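
The fixed-area scheme referred to here would look roughly as follows;
every name and constant is purely illustrative (made up for this sketch),
but it shows how each vCPU's area can be derived from its ID alone, so
that no vmap() range is consumed and the per-vCPU base pointer reduces
to a simple address calculation:

    /* Illustrative only -- names and numbers are made up. */
    #define XLAT_VIRT_START   (5UL << 30)          /* e.g. the 5GB slot */
    #define XLAT_VA_SHIFT     (PAGE_SHIFT + 1)     /* two pages per vCPU */
    #define XLAT_START(v)                                               \
        (XLAT_VIRT_START + ((unsigned long)(v)->vcpu_id << XLAT_VA_SHIFT))
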
> > 
> > The original reason was so that we only needed to allocate memory for the
> > xlat_area per physical cpu.
> > 
> > Because of allowing sleeping in a hypercall (via a waitqueue) we can no
> > longer do that anyway, so we are back to allocating an xlat_area for every
> > vcpu. And we might as well map that at a fixed virtual address, I suppose.
> > 
> > Do we care about vmap() pressure though? Is there a downside to making the
> > vmap area as big as we like? I mean even the existing 16GB area is good for
> > a million vcpus or so ;)
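
(Rough arithmetic behind that figure: the translation area is an order-1
allocation, i.e. 8KiB per vCPU, so a 16GiB window fits 16GiB / 8KiB = 2^21,
about two million areas, and still well over a million even with a guard
page after each one.)
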
> 
> My original intention was for vmap() to be used for the planned
> per-vCPU stacks (alongside any ioremap() users of course,
> of which we - fortunately - shouldn't have too many). So my
> concern was really more with regard to other possible users
> showing up; the use case here certainly doesn't represent a
> limiting factor on its own.
> 
> Making the range arbitrarily large of course isn't an option;
> raising it up to, say, about 100G wouldn't be a problem though
> (slightly depending on whether we also need to grow the
> global domain page map area - of course that implementation
> could also be switched to build upon vmap()).
> 
> I guess I'll go with that approach then; likely the 3-level event
> channel implementation (Wei - hint, hint) also ought to use this
> to simplify the code there.
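
The simplification presumably hinted at is along these lines: where the
3-level event channel code needs several guest-registered pages visible at
virtually contiguous addresses, a single vmap() call can replace per-page
fixmap or map_domain_page() juggling. The sketch below is illustrative
only (it is not Wei's patch, the bound is made up, and the helper
signatures are approximated):

    #define EVTCHN_SKETCH_MAX_PAGES 8     /* made-up bound for the sketch */

    static void *map_guest_pages_sketch(struct domain *d,
                                        const unsigned long *gfn,
                                        unsigned int nr)
    {
        unsigned long mfn[EVTCHN_SKETCH_MAX_PAGES];
        unsigned int i;

        if ( nr > EVTCHN_SKETCH_MAX_PAGES )
            return NULL;

        for ( i = 0; i < nr; i++ )
        {
            struct page_info *pg = get_page_from_gfn(d, gfn[i], NULL,
                                                     P2M_ALLOC);

            if ( !pg )
                return NULL;              /* reference dropping omitted */
            mfn[i] = page_to_mfn(pg);
        }

        return vmap(mfn, nr);
    }
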
> 

Aha, thanks for the hint. ;-)


Wei.

> Jan
> 





 

