
Re: [Xen-devel] Xen Platform QoS design discussion



> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
> Sent: Monday, May 19, 2014 8:42 PM
> To: George Dunlap
> Cc: Andrew Cooper; Xu, Dongxiao; Ian Campbell; xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Xen Platform QoS design discussion
> 
> >>> On 19.05.14 at 14:13, <george.dunlap@xxxxxxxxxxxxx> wrote:
> > On 05/19/2014 12:45 PM, Jan Beulich wrote:
> >>>>> On 19.05.14 at 13:28, <George.Dunlap@xxxxxxxxxxxxx> wrote:
> >>> But in reality, all we need the daemon for is a place to store the
> >>> information to query.  The idea we came up with was to allocate memory
> >>> *inside the hypervisor* to store the information.  The idea is that
> >>> we'd have a sysctl to prompt Xen to *collect* the data into some
> >>> memory buffers inside of Xen, and then a domctl that would allow you
> >>> query the data on a per-domain basis.
> >>>
> >>> That should be a good balance -- it's not quite as good as having a
> >>> separate daemon, but it's a pretty good compromise.
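
(If I follow George's idea here, the interface might look roughly like
the sketch below; the structure and field names are purely illustrative.)

/* Hypothetical sysctl: ask Xen to sample the QoS MSRs on every socket
 * into hypervisor-allocated buffers. */
struct xen_sysctl_qos_collect {
    uint64_aligned_t timestamp;   /* OUT: time of this snapshot */
};

/* Hypothetical domctl: copy one domain's cached data back to the
 * caller (the domain ID comes from the enclosing xen_domctl). */
struct xen_domctl_qos_query {
    uint32_t type;                /* IN: resource, e.g. L3 occupancy */
    uint64_aligned_t data;        /* OUT: value from the last collect */
};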
> >> Which all leaves aside the suggested alternative of making available
> >> a couple of simple operations allowing an eventual daemon to do the
> >> MSR accesses without the hypervisor being concerned about where
> >> to store the data and how to make it accessible to the consumer.
> >
> >  From a libxl perspective, if we provide "libxl_qos_refresh()" (or
> > "libxl_qos_freshness_set()") and "libxl_qos_domain_query()" (or
> > something like it), it doesn't matter whether it's backed by memory
> > stored in Xen via hypercall or by a daemon.
> >
> > What I was actually envisioning was an option to either query them by a
> > domctl hypercall, or by having a daemon map the pages and read them
> > directly.  That way we have the daemon available for those who want it
> > (say, maybe xapi, or a future libxl daemon / stat collector), but we can
> > get a basic level implemented right now without a terrible amount of
> > architectural work.
> 
> But that's all centred on the daemon concept (if we consider
> storing the data in hypervisor memory to also be a kind of
> daemon). Whereas the simple helpers I'm suggesting wouldn't
> necessarily require a daemon to be written at all - a query
> operation for a domain would then simply be broken down at the
> tools level into a number of MSR writes/reads.
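
(If I understand this correctly, a CQM query for one domain would then
decompose at the tools level into something like the sketch below. The
wrmsr_on_cpu()/rdmsr_on_cpu() helpers are made up - they would be thin
wrappers around the proposed hypercall - while the MSR numbers are the
CQM ones from the Intel SDM.)

#define MSR_IA32_QM_EVTSEL   0xc8d
#define MSR_IA32_QM_CTR      0xc8e
#define QM_EVT_L3_OCCUPANCY  0x1

static int query_l3_occupancy(unsigned int cpu, unsigned int rmid,
                              uint64_t *occupancy)
{
    /* Select the event and the domain's RMID (bits 41:32) ... */
    wrmsr_on_cpu(cpu, MSR_IA32_QM_EVTSEL,
                 ((uint64_t)rmid << 32) | QM_EVT_L3_OCCUPANCY);
    /* ... then read the counter back. */
    *occupancy = rdmsr_on_cpu(cpu, MSR_IA32_QM_CTR);
    return (*occupancy & (1ULL << 63)) ? -1 : 0;  /* bit 63 = error */
}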
> 
> >>> There are a couple of options regarding collecting the data.  One is
> >>> to simply require the caller to do a "poll" sysctl every time they
> >>> want to refresh the data.  Another possibility would be to have a
> >>> sysctl "freshness" knob: you could say, "Please make sure the data is
> >>> no more than 1000ms old"; Xen could then automatically do a refresh
> >>> when necessary.
> >>>
> >>> The advantage of the "poll" method is that you could get a consistent
> >>> snapshot across all domains; but you'd have to add in code to do the
> >>> refresh.  (An xl command querying an individual domain would
> >>> undoubtedly end up calling the poll on each execution, for instance.)
> >>>
> >>> An advantage of the "freshness" knob, on the other hand, is that you
> >>> automatically get coalescing without having to do anything special
> >>> with the interface.
> >> With the clear disadvantage of potentially doing work the results of
> >> which are never going to be looked at by anyone.
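
(For the "freshness" knob, I suppose the hypervisor-side check would be
something like the sketch below; the names are illustrative, with
max_age being whatever the sysctl last set.)

/* Refresh the cached QoS data only if it has grown too old. */
if ( NOW() - qos_last_refresh > max_age )
{
    refresh_qos_data();          /* re-read the MSRs on each socket */
    qos_last_refresh = NOW();
}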
> >
> > Jan, when you make a criticism it needs to be clear what alternate you
> > are suggesting.
> 
> With only two options having been given, it seemed clear that by
> pointing out an obvious downside of one I meant the other to be
> preferable. Of course you're right in saying (further down) that the
> risk of collecting data that no-one is interested in is always there;
> it's just that when someone says "poll" I'd assume (s)he is interested
> in the data, as opposed to collecting it periodically.
> 
> > AFAICT, regarding "collection" of the data, we have exactly three options:
> > A. Implement a "collect for all domains" option (with an additional
> > "query data for a single domain" mechanism; either by daemon or hypercall).
> > B. Implement a "collect information for a single domain at a time" option
> > C. Implement both options.
> >
> > "Doing work that is never looked at by anyone" will always be a
> > potential problem if we choose A, whether we use a daemon, or use the
> > polling method, or use the automatic "freshness" knob.  The only way to
> > avoid that is to do B or C.
> >
> > We've already said that we expect the common case to be a
> > toolstack querying all domains anyway.  If we think that's true,
> > "make the common case fast and the uncommon case correct" would
> > argue against B.
> >
> > So are you suggesting B (disputing the expected use case)?  Or are you
> > suggesting C?  Or are you just finding fault without thinking things
> > through?
> 
> I'm certainly questioning whether the supposed use case is indeed
> the common one, and I do that no matter which model someone
> claims is going to be the "one". I simply see neither model backed
> by any sufficiently hard data.
> 
> And without seeing the need for any advanced access mechanism,
> I'm continuing to try to promote D - implement simple, policy-free
> (platform or sysctl) hypercalls providing MSR access to the tool stack
> (along the lines of the msr.ko Linux kernel driver).

Do you mean a hypercall implementation like the following, where the
Dom0 toolstack directly queries the real physical CPU MSRs?

struct xen_sysctl_accessmsr {
    uint32_t cpu;                /* IN: physical CPU to run on */
    uint32_t msr;                /* IN: MSR index to read */
    uint64_aligned_t value;      /* OUT: value read from the MSR */
};

/* Callback, executed on the target CPU. */
static void read_msr(void *info)
{
    struct xen_sysctl_accessmsr *a = info;

    rdmsrl(a->msr, a->value);
}

do_sysctl () {
...
case XEN_SYSCTL_accessmsr:
    /* Store the MSR value in op->u.accessmsr.value. */
    on_selected_cpus(cpumask_of(op->u.accessmsr.cpu), read_msr,
                     &op->u.accessmsr, 1);
    break;
}


Thanks,
Dongxiao

> 
> Jan