
Re: [Xen-devel] [PATCH v4 for Xen 4.6 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler



>>> On 09.08.15 at 17:45, <lichong659@xxxxxxxxx> wrote:
> On Mon, Jul 13, 2015 at 3:37 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 11.07.15 at 06:52, <lichong659@xxxxxxxxx> wrote:
>>> @@ -1162,8 +1176,82 @@ rt_dom_cntl(
>>>          }
>>>          spin_unlock_irqrestore(&prv->lock, flags);
>>>          break;
>>> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>>> +        spin_lock_irqsave(&prv->lock, flags);
>>> +        for ( index = 0; index < op->u.v.nr_vcpus; index++ )
>>> +        {
>>> +            if ( copy_from_guest_offset(&local_sched,
>>> +                          op->u.v.vcpus, index, 1) )
>>> +            {
>>> +                rc = -EFAULT;
>>> +                break;
>>> +            }
>>> +            if ( local_sched.vcpuid >= d->max_vcpus ||
>>> +                          d->vcpu[local_sched.vcpuid] == NULL )
>>> +            {
>>> +                rc = -EINVAL;
>>> +                break;
>>> +            }
>>> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>>> +
>>> +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
>>> +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
>>> +
>>> +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
>>> +                    &local_sched, 1) )
>>> +            {
>>> +                rc = -EFAULT;
>>> +                break;
>>> +            }
>>> +            if( hypercall_preempt_check() )
>>> +            {
>>> +                rc = -ERESTART;
>>> +                break;
>>> +            }
>>
>> I still don't see how this is supposed to work.
> 
> I return -ERESTART here, and the upper layer function (do_domctl) will
> handle this error code by calling hypercall_create_continuation.

I have no idea where you found an upper layer (i.e. the
XEN_DOMCTL_scheduler_op case of do_domctl()) that takes care of
this.
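
For comparison, the way such preemptible loops are usually handled is to
record the loop's progress somewhere the restarted hypercall can find it,
and to actually create the continuation on the domctl path. A rough
sketch only (the "index" progress field is hypothetical and not part of
your patch; it would have to be added to the public interface):

    /* In rt_dom_cntl(): remember how far the loop got before bailing
     * out, so a restarted call can resume rather than start over. */
    if ( hypercall_preempt_check() )
    {
        op->u.v.index = index;        /* hypothetical progress field */
        rc = -ERESTART;
        break;
    }

    /* In the XEN_DOMCTL_scheduler_op case of do_domctl(): copy the
     * updated op back to the guest so the recorded progress survives,
     * then create the continuation so the hypercall gets re-issued. */
    ret = sched_adjust(d, &op->u.scheduler_op);
    if ( ret == -ERESTART )
    {
        if ( __copy_to_guest(u_domctl, op, 1) )
            ret = -EFAULT;
        else
            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
                                                "h", u_domctl);
    }

Without something along these lines, returning -ERESTART from the
scheduler hook achieves nothing.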

>>> +} xen_domctl_schedparam_vcpu_t;
>>> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_schedparam_vcpu_t);
>>> +
>>>  /* Set or get info? */
>>>  #define XEN_DOMCTL_SCHEDOP_putinfo 0
>>>  #define XEN_DOMCTL_SCHEDOP_getinfo 1
>>> +#define XEN_DOMCTL_SCHEDOP_putvcpuinfo 2
>>> +#define XEN_DOMCTL_SCHEDOP_getvcpuinfo 3
>>>  struct xen_domctl_scheduler_op {
>>>      uint32_t sched_id;  /* XEN_SCHEDULER_* */
>>>      uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
>>>      union {
>>> -        struct xen_domctl_sched_sedf {
>>> -            uint64_aligned_t period;
>>> -            uint64_aligned_t slice;
>>> -            uint64_aligned_t latency;
>>> -            uint32_t extratime;
>>> -            uint32_t weight;
>>> -        } sedf;
>>> -        struct xen_domctl_sched_credit {
>>> -            uint16_t weight;
>>> -            uint16_t cap;
>>> -        } credit;
>>> -        struct xen_domctl_sched_credit2 {
>>> -            uint16_t weight;
>>> -        } credit2;
>>> -        struct xen_domctl_sched_rtds {
>>> -            uint32_t period;
>>> -            uint32_t budget;
>>> -        } rtds;
>>> +        xen_domctl_sched_sedf_t sedf;
>>> +        xen_domctl_sched_credit_t credit;
>>> +        xen_domctl_sched_credit2_t credit2;
>>> +        xen_domctl_sched_rtds_t rtds;
>>> +        struct {
>>> +            XEN_GUEST_HANDLE_64(xen_domctl_schedparam_vcpu_t) vcpus;
>>> +            uint16_t nr_vcpus;
>>> +        } v;
>>
>> And there's still no explicit padding here at all (nor am I convinced
>> that uint16_t is really a good choice for nr_vcpus - uint32_t would
>> seem more natural without causing any problems or structure
>> growth).
> 
> I think the size of union u is equal to the size of
> xen_domctl_sched_sedf_t, which is 64*4 bits (if "vcpus" in struct v is
> just a pointer).

Which doesn't in any way address the complaint about missing explicit
padding - I'm not asking you to pad to the size of the union, but to the
size of the unnamed structure you add.
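
To illustrate, with the padding made explicit and nr_vcpus widened, the
addition would look along these lines (just a sketch; the name of the
padding member is of course up to you):

        struct {
            XEN_GUEST_HANDLE_64(xen_domctl_schedparam_vcpu_t) vcpus;
            uint32_t nr_vcpus;
            uint32_t padding;  /* explicit padding to the 64-bit boundary */
        } v;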

Jan

> The "nr_vcpus" is indeed better to be uint32_t. I'll change it in the
> next version.
> 
> Chong


