
Re: [Xen-devel] [PATCH v1 1/4] xen: add real time scheduler rt



Hi Jan,


2014-08-27 2:26 GMT-04:00 Jan Beulich <JBeulich@xxxxxxxx>:
>>> On 27.08.14 at 04:07, <xumengpanda@xxxxxxxxx> wrote:
>> > +            /* get vcpus' params */
>> > +            XEN_GUEST_HANDLE_64(xen_domctl_sched_rt_params_t) vcpu;
>>
>> Why does this need to be a handle? Do you permit setting these
>> to different values for different vCPU-s? Considering that other
>> schedulers don't do this, why does yours need to?
>>
>
> Yes, we need a handle here to get each vcpu's parameters of a domain.
>
> Let me explain why we need to set and get the parameters of "each" vcpu:
> 1) A VCPU is the basic scheduling and accounting unit in Global
> Earliest Deadline First (gEDF) scheduling. We account the budget
> consumption for each vcpu instead of each domain, while the credit and
> credit2 schedulers account the credit consumption for each domain.
> 2) Based on Global Earliest Deadline First (gEDF) scheduling theory,
> each vcpu's parameters are used to decide the scheduling sequence of
> these vcpus. Two vcpus with the same utilization but different periods
> and budgets can be scheduled differently. For example, a vcpu with budget
> 10ms and period 20ms is less responsive than a vcpu with budget 2ms and
> period 8ms, although both have a utilization of 0.5.
>
> Therefore, a domain's real-time performance is based on the parameters of
> each VCPU of this domain.
> Hence, users need to be able to set and get each vcpu's parameters of a
> domain.
>
> This gEDF scheduler is different from the credit and credit2 schedulers.
> The existing credit and credit2 schedulers account the credit for each
> domain instead of each vcpu; that's why they set parameters per domain
> instead of per vcpu.

Parameter setting and accounting aren't tied together, and both
credit schedulers account on a per-vCPU basis afaict. Hence this
doesn't really answer the question.

Let me explain in another, shorter and clearer way.

Because each vcpu's parameters affect the scheduling sequence, and thus the real-time performance of a domain, users may want to know the parameters of each vcpu of each domain so that they can get an intuition of how the vcpus will be scheduled. (Do you agree? :-))
Users may also need to set each vcpu's parameters of a domain to achieve the desired real-time performance for that domain. After they set a vcpu's parameters, they need a way to check the new parameters of that vcpu. Right?

Because of the above two scenarios, users need to know each vcpu's parameters of a domain. So we need the handle to pass each vcpu's parameters from the hypervisor to userspace to show to users.
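
To make the "scheduling sequence" point concrete, here is a small
self-contained sketch (my own illustration, not code from the patch) of
how gEDF would pick between the two vcpus from the example above, both
with utilization 0.5:

#include <stdio.h>
#include <stdint.h>

/* Illustration only -- not code from the patch. Under gEDF, each vcpu
 * receives "budget" us of CPU time every "period" us, and the scheduler
 * always runs the runnable vcpu whose current deadline (release time +
 * period) is earliest. */
typedef struct {
    const char *name;
    uint64_t period;    /* us */
    uint64_t budget;    /* us */
    uint64_t deadline;  /* absolute deadline, us */
} vcpu_params;

int main(void)
{
    /* Both vcpus are released at time 0, so each one's first deadline
     * is simply its period. Same utilization (0.5), different timing. */
    vcpu_params a = { "A", 20000, 10000, 20000 };  /* 10ms budget / 20ms period */
    vcpu_params b = { "B",  8000,  2000,  8000 };  /*  2ms budget /  8ms period */

    vcpu_params *first = (a.deadline <= b.deadline) ? &a : &b;
    printf("gEDF runs vcpu %s first: deadline %llu us vs %llu us\n",
           first->name,
           (unsigned long long)first->deadline,
           (unsigned long long)((first == &a) ? b.deadline : a.deadline));
    return 0;
}

Vcpu B runs first purely because of its shorter period, which is exactly
why per-vcpu (rather than per-domain) parameters matter for responsiveness.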

One thing to note: this handle is only used to get each vcpu's parameters of a domain. We don't need it to set a vcpu's parameters (setting goes through the vcpu_index/period/budget fields quoted below).
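
For concreteness, here is a rough sketch of the hypervisor side of the
"get" path. The struct layout follows the fields quoted from the patch,
but the function name and the rt_vcpu() accessor are made up for
illustration; copy_to_guest_offset() and for_each_vcpu() are Xen's
existing helpers for writing through a guest handle and iterating a
domain's vcpus.

/* Sketch only -- not the actual patch code. The toolstack passes a guest
 * handle to an array with room for nr_vcpus entries; the hypervisor
 * fills in one period/budget pair per vcpu. */
typedef struct xen_domctl_sched_rt_params {
    uint64_t period;   /* us */
    uint64_t budget;   /* us */
} xen_domctl_sched_rt_params_t;

static int rt_dom_get_params(
    struct domain *d,
    XEN_GUEST_HANDLE_64(xen_domctl_sched_rt_params_t) vcpus,
    uint16_t nr_vcpus)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
    {
        xen_domctl_sched_rt_params_t local = {
            .period = rt_vcpu(v)->period,  /* rt_vcpu(): hypothetical accessor */
            .budget = rt_vcpu(v)->budget,
        };

        if ( v->vcpu_id >= nr_vcpus )
            return -EINVAL;  /* toolstack's buffer is too small */
        if ( copy_to_guest_offset(vcpus, v->vcpu_id, &local, 1) )
            return -EFAULT;
    }
    return 0;
}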



> As I recall, we discussed this question on the mailing list
> after the first RFC patch of this rt scheduler was released. We agreed that
> the real-time scheduler should support setting and getting each vcpu's
> parameters. :-)

If so, can you point me to the specific mails rather than have me go
dig for them?

Sure! My bad. 

We had a long discussion of the design of getting each vcpu's parameters; it's here: http://www.gossamer-threads.com/lists/xen/devel/339146

Another thread, about the interface for an improved SEDF, also discusses getting/setting each vcpu's parameters for a real-time scheduler. This rt scheduler is supposed to replace the existing SEDF scheduler.


I've extracted the part relevant to this question. Quoting Dario:

"I don't
think the renaming+SEDF deprecation should happen until proper SMP
support is implemented, and probably also not until support for per-VCPU
scheduling parameters (quite important for an advanced real-time
scheduling solution) is there."

"The problems SEDF has are:
1. it has really really really poor SMP support
2. it does not allow to specify scheduling parameters on a per-VCPU
basis, but only on a domain basis. This is fine for general purpose
schedulers, but can be quite important in real-time workloads "
 
Please let me know if you have further questions. Maybe Dario can also give more insight on this later. :-)


>> > +            uint16_t nr_vcpus;
>> > +            /* set one vcpu's params */
>> > +            uint16_t vcpu_index;
>> > +            uint16_t padding[2];
>> > +            uint64_t period;
>> > +            uint64_t budget;
>>
>> Are values overflowing 32 bits here really useful/meaningful?
>>
>
> We allow the period and budget to be at most 31536000000000 (one
> year in microseconds) in libxl.c. 31536000000000 is larger than 2^32 =
> 4294967296, so we have to use a 64-bit type here for period and budget.
>
> In addition, this is consistent with the period and budget type s_time_t
> in the hypervisor. In the hypervisor (sched_rt.c), we represent the
> period and budget as s_time_t, which is a signed 64-bit type. So we use
> uint64_t for period and budget here to avoid type conversions.

Neither of these answers the question: is this really a useful value
range?

I see the issue. Is 31536000000000 a good upper bound for period and budget?
Actually, I'm not sure; it totally depends on users' requirements.
4294967296us is about 1.19 hours, and I'm not sure whether 1.19 hours is long enough for real-time applications.
If it is, I can definitely change the type from uint64_t to uint32_t.
Do you have any suggestions for how to pick a proper upper bound for period and budget?
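
For reference, the two bounds work out like this (a quick self-contained
check of the numbers above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* One year in microseconds: the current libxl upper bound. */
    uint64_t one_year_us = 365ULL * 24 * 3600 * 1000000;  /* 31536000000000 */
    /* Largest value a 32-bit field could hold, in microseconds. */
    uint64_t u32_max_us = UINT32_MAX;                      /* 4294967295 */

    printf("one year     = %llu us\n", (unsigned long long)one_year_us);
    printf("uint32 limit = %llu us = %.2f hours\n",
           (unsigned long long)u32_max_us, u32_max_us / 3.6e9);
    return 0;
}

So a 32-bit field caps period and budget at about 1.19 hours; the question
is whether any real-time workload would ever need a period longer than that.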

Thank you very much!

Best,

Meng


--
-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania