
Re: [Xen-devel] [PATCH v2 for Xen 4.6 3/4] libxl: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler



On Tue, 2015-06-09 at 18:18 +0200, Dario Faggioli wrote:
> On Mon, 2015-06-08 at 15:55 -0500, Chong Li wrote:
> > On Mon, Jun 8, 2015 at 10:56 AM, Dario Faggioli
> 
> > > So, Thoughts? What do you think the best way forward could be?
> > 
> > I like option 2 more. But I think we may also need a 'vcpuid' field in
> > libxl_sched_params.
> > 
> For sparse array support, yes. At which point, I would flip the names as
> well, i.e., something like this:
> 
> libxl_vcpu_sched_params = Struct("vcpu_sched_params",[
>     ("vcpuid",       integer, { xxx some init val xxx}),
>     ("weight",       integer, {'init_val': 'LIBXL_PARAM_WEIGHT_DEFAULT'}),
>     ("cap",          integer, {'init_val': 'LIBXL_PARAM_CAP_DEFAULT'}),
>     ("period",       integer, {'init_val': 'LIBXL_PARAM_PERIOD_DEFAULT'}),
>     ("slice",        integer, {'init_val': 'LIBXL_PARAM_SLICE_DEFAULT'}),
>     ("latency",      integer, {'init_val': 'LIBXL_PARAM_LATENCY_DEFAULT'}),
>     ("extratime",    integer, {'init_val': 'LIBXL_PARAM_EXTRATIME_DEFAULT'}),
>     ("budget",       integer, {'init_val': 'LIBXL_PARAM_BUDGET_DEFAULT'}),

Is a full new set of defaults really necessary? I don't think it would
be semantically all that strange to say that an unspecified per-vcpu
value will default to the domain default, at which point having
LIBXL_DOMAIN_SCHED_PARAM_FOO_DEFAULT mentioned doesn't seem all that
strange.

>     ])
> 
> libxl_sched_params = Struct("sched_params",[
>     ("sched",        libxl_scheduler),
>     ("vcpus",        Array(libxl_vcpu_sched_params, "num_vcpus")),
>     ])
> 
> With the possibility of naming the latter 'libxl_vcpus_sched_params',
> which is more descriptive, but is perhaps too similar to
> libxl_vcpu_sched_params.

I think I'd go the other way and name the thing with all the values in
it "libxl_sched_params" and the thing with the per-vcpu array in it
"libxl_vcpu_sched_params".
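A sketch of what that flip might look like in the IDL — same fields as in the quoted proposal, only the two names swapped (field list abbreviated, init values left as in the original):

```
libxl_sched_params = Struct("sched_params",[
    ("vcpuid",       integer),  # init val TBD, as in the quoted proposal
    ("weight",       integer, {'init_val': 'LIBXL_PARAM_WEIGHT_DEFAULT'}),
    # ... cap, period, slice, latency, extratime, budget as before ...
    ])

libxl_vcpu_sched_params = Struct("vcpu_sched_params",[
    ("sched",        libxl_scheduler),
    ("vcpus",        Array(libxl_sched_params, "num_vcpus")),
    ])
```

That way the generic name stays on the one-tuple struct, and the "vcpu" name goes on the thing that actually carries the per-vcpu array.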

> Ian, George, what do you think?
> 
> While we're here, another thing we would appreciate some feedback on is
> what should happen to libxl_domain_sched_params_get(). This occurred to
> me while reviewing patch 4 of this series. Actually, I think we've
> discussed this before, but can't find the reference now.
> 
> Anyway, my view is that, for a scheduler that uses per-vcpu parameters,
> libxl_domain_sched_params_set() should set the same parameters for all
> the vcpus.
> When it comes to _get(), however, I'm not sure. To match the _set()
> case, we'd need to return the parameters of all the vcpus, but we can't,
> because the function takes a libxl_domain_sched_params argument, which
> just holds 1 tuple.

Wouldn't domain_get/set be manipulating the domain's default values for
things, i.e. what a vcpu gets if nothing more specific is specified? Or
is it too late to say that?

For set overall I don't really think it matters too much if it sets
everything (i.e. all vcpus and the defaults), so long as that is a
documented effect of the API -- anyone who adds per-vcpu support would
then be aware of this and can adjust their code accordingly.

For get I think saying it returns the defaults used for vcpus which
don't have something more explicit set is perfectly fine and doesn't
pose an upgrade wrinkle, since only those who are aware of the vcpu
settings would care about the distinction.

> Should we just WARN and ask, when on that specific scheduler, to use the
> per-vcpu variant being introduced in this patch
> (libxl_vcpu_sched_params_get())?
> 
> This does not look ideal, but without changing the prototype of
> libxl_domain_sched_params_get(), I don't see what else sensible we could
> do... :-/
> 
> Should we change it, and do the LIBXL_API_VERSION "trick"?
> 
> So, again, thoughts?
> 
> Regards,
> Dario
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
