
Re: [Xen-devel] [PATCH v2 for Xen 4.6 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler



On Tue, May 26, 2015 at 4:08 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 26.05.15 at 02:05, <lichong659@xxxxxxxxx> wrote:
>> Add two hypercalls (XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo) to
>> get/set a domain's per-VCPU parameters. Hypercalls are handled by a
>> newly added hook (.adjust_vcpu) in the scheduler interface.
>>
>> Add a new data structure (struct xen_domctl_scheduler_vcpu_op) for
>> transferring data between tool and hypervisor.
>>
>> Signed-off-by: Chong Li <chong.li@xxxxxxxxx>
>> Signed-off-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
>> Signed-off-by: Sisu Xi <xisisu@xxxxxxxxx>
>> ---
>
> Missing brief description of changes in v2 here.

Based on Dario's suggestion on PATCH v1, we think it would be better
to make the per-VCPU get/set functionality more general, rather than
serving only the RTDS scheduler. So, the changes in v2 are:

1) Add a new hypercall, XEN_DOMCTL_scheduler_vcpu_op, which any
scheduler can use for per-VCPU get/set. A new data structure (struct
xen_domctl_scheduler_vcpu_op) carries the per-VCPU params between the
toolstack and the hypervisor when the hypercall is invoked.

2) The new hypercall is handled by sched_adjust_vcpu() (in
schedule.c), which calls a newly added hook (.adjust_vcpu) in the
scheduler interface (struct scheduler).

3) In the RTDS scheduler, .adjust_vcpu points to a new function
defined in sched_rt.c, rt_vcpu_cntl(), which gets/sets the per-VCPU
params. A rough sketch of the new interface follows.
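
To make this concrete, a sketch only (the field names follow the
description above and the quoted code; xen_domctl_sched_rtds_t is a
placeholder name for the per-VCPU params type, and the exact layout in
the patch may differ):

    struct xen_domctl_scheduler_vcpu_op {
        uint32_t cmd;        /* XEN_DOMCTL_SCHEDOP_{get,put}vcpuinfo */
        uint32_t sched_id;   /* e.g. XEN_SCHEDULER_RTDS */
        union {
            struct {
                /* vcpus/nr_vcpus as used in the quoted code below */
                XEN_GUEST_HANDLE_64(xen_domctl_sched_rtds_t) vcpus;
                uint32_t nr_vcpus;
            } rtds;
        } u;
    };

    /* New hook in struct scheduler: */
    int (*adjust_vcpu)(const struct scheduler *ops, struct domain *d,
                       struct xen_domctl_scheduler_vcpu_op *op);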


>> +
>> +    switch ( op->cmd )
>> +    {
>> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> +        spin_lock_irqsave(&prv->lock, flags);
>> +        list_for_each( iter, &sdom->vcpu )
>> +        {
>> +            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
>> +            vcpuid = svc->vcpu->vcpu_id;
>> +
>> +            local_sched.budget = svc->budget / MICROSECS(1);
>> +            local_sched.period = svc->period / MICROSECS(1);
>> +            if ( copy_to_guest_offset(op->u.rtds.vcpus, vcpuid,
>
> What makes you think that the caller has provided enough slots?

This code works in a similar way to the one for XEN_SYSCTL_numainfo,
so if there are not enough slots in guest space, an error code is
returned.
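
E.g., a sketch of the check, assuming op->u.rtds.nr_vcpus carries the
number of slots the caller allocated:

    if ( op->u.rtds.nr_vcpus < d->max_vcpus )
    {
        /* Tell the caller how many slots are needed, so it can
         * retry with a bigger buffer. */
        op->u.rtds.nr_vcpus = d->max_vcpus;
        return -ENOBUFS;
    }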
>
>> +                    &local_sched, 1) )
>
> You're leaking hypervisor stack contents here, but by reading
> the structure from guest memory first to honor its vcpuid field
> (this making "get" match "put") you'd avoid this anyway.
>

The structure in guest memory starts out empty; it holds the per-VCPU
params only after the XEN_DOMCTL_SCHEDOP_getvcpuinfo hypercall has
been served.
Does "leaking hypervisor stack" mean that I need to set
"local_sched.vcpuid = vcpuid" before copying local_sched to guest
memory?
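
In other words, is the concern that padding and unset fields (such as
vcpuid here) of the on-stack structure still hold stale data when it
is copied out? If so, something like this sketch should avoid it:

    memset(&local_sched, 0, sizeof(local_sched));
    local_sched.vcpuid = vcpuid;
    local_sched.budget = svc->budget / MICROSECS(1);
    local_sched.period = svc->period / MICROSECS(1);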

>
>> +            {
>> +                spin_unlock_irqrestore(&prv->lock, flags);
>> +                return  -EFAULT;
>
> Setting rc to -EFAULT and break-ing would seem better for these
> error paths (avoiding the repeated spin_unlock_irqrestore()-s).
>
>> +            }
>> +            hypercall_preempt_check();
>
> ???

We've talked about this in PATCH v1
(http://lists.xenproject.org/archives/html/xen-devel/2015-05/msg01152.html):
when a domain has too many VCPUs, we need to make sure the spin lock
is not held for too long.
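
A bare call does ignore the result, though; for v3 we could act on it,
roughly like this (a sketch; the exact restart mechanism for this
domctl is still to be worked out):

    if ( hypercall_preempt_check() )
    {
        spin_unlock_irqrestore(&prv->lock, flags);
        /* Arrange for the hypercall to be restarted from here,
         * e.g. via hypercall_create_continuation(). */
        return -ERESTART;
    }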
>
>> +        }
>> +        spin_unlock_irqrestore(&prv->lock, flags);
>> +        break;
>> +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
>> +        spin_lock_irqsave(&prv->lock, flags);
>> +        for( i = 0; i < op->u.rtds.nr_vcpus; i++ )
>> +        {
>> +            if ( copy_from_guest_offset(&local_sched,
>> +                    op->u.rtds.vcpus, i, 1) )
>> +            {
>> +                spin_unlock_irqrestore(&prv->lock, flags);
>> +                return -EFAULT;
>> +            }
>> +            if ( local_sched.period <= 0 || local_sched.budget <= 0 )
>> +            {
>> +                spin_unlock_irqrestore(&prv->lock, flags);
>> +                return -EINVAL;
>> +            }
>> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>
> You mustn't assume local_sched.vcpuid to represent a valid array
> index.

Do you mean that I should locate the VCPU by comparing
local_sched.vcpuid against the ID of each element in the vcpu array?
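
Or would a bounds check before indexing be enough? E.g. (a sketch):

    if ( local_sched.vcpuid >= d->max_vcpus ||
         d->vcpu[local_sched.vcpuid] == NULL )
    {
        spin_unlock_irqrestore(&prv->lock, flags);
        return -EINVAL;
    }
    svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);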

>
>
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -1104,6 +1104,30 @@ long sched_adjust(struct domain *d, struct 
>> xen_domctl_scheduler_op *op)
>>      return ret;
>>  }
>>
>> +/* Adjust scheduling parameter for the vcpus of a given domain. */
>> +long sched_adjust_vcpu(
>> +    struct domain *d,
>> +    struct xen_domctl_scheduler_vcpu_op *op)
>> +{
>> +    long ret;
>
> I see no reason for this and the function return type to be "long".

The reason is stated at the beginning.

>
> Jan
>

Best,
Chong

-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St.louis



 

