
Re: [Xen-devel] [PATCH v10 1/6] x86: detect and initialize Cache QoS Monitoring feature



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Thursday, April 03, 2014 4:47 PM
> To: Xu, Dongxiao
> Cc: andrew.cooper3@xxxxxxxxxx; Ian.Campbell@xxxxxxxxxx;
> Ian.Jackson@xxxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx;
> xen-devel@xxxxxxxxxxxxx; dgdegra@xxxxxxxxxxxxx; keir@xxxxxxx
> Subject: RE: [Xen-devel] [PATCH v10 1/6] x86: detect and initialize Cache QoS
> Monitoring feature
> 
> >>> On 03.04.14 at 10:27, <dongxiao.xu@xxxxxxxxx> wrote:
> >> From: xen-devel-bounces@xxxxxxxxxxxxx
> >> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
> >> But you could get away with a split approach: Provide the socket ID
> >> list via ordinary hypercall means (if this isn't already derivable from
> >> the topology sysctl anyway), and only share the per-socket data
> >> page(s).
> >
> > sysctl->u.getcqminfo has a size limitation of 128 bytes.
> > The per-socket MFN array _may_ exceed this limitation...
> >
> > We discussed this issue in previous threads; that is how this 2-level
> > data_mfn/data page sharing mechanism was proposed.
> 
> That's unrelated - you can always introduce a handle pointing to the
> array where the MFNs are to be stored. Whether that's preferable
> over the shared page approach largely depends on the number of
> entries in the page, and the performance needs. Both I think would
> suggest to use the handle-to-array approach, and use shared pages
> only for the actual data.

I am fine with either solution, whether sharing or copying.

The 2-level sharing scheme was previously proposed by Andrew, and I implemented
it in v10.
Andrew, are you okay with switching to the copy solution for the MFN page?
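
For reference, below is a rough sketch of the handle-to-array variant as I
understand Jan's suggestion. The structure layout and field names are only
illustrative, not the actual v10 definition:

/*
 * Illustrative sketch only -- not the real xen_sysctl_getcqminfo layout.
 * The caller allocates an MFN array in its own address space and passes
 * it via a guest handle, so the 128-byte limit of the sysctl union no
 * longer matters; the shared pages would then carry only the actual
 * per-socket monitoring data.
 */
struct xen_sysctl_getcqminfo {
    uint32_t nr_sockets;   /* IN: entries the buffer can hold; OUT: entries written */
    uint32_t nr_rmids;     /* OUT: number of RMIDs supported */
    XEN_GUEST_HANDLE_64(uint64) socket_mfn_list; /* IN: caller-allocated MFN array */
};

With something like this, the hypervisor would just copy the MFN list out via
copy_to_guest() instead of publishing it through a dedicated shared MFN page.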

Thanks,
Dongxiao

> 
> Jan




 

