
Re: [Xen-devel] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online CPU



On 05/30/2017 11:17 AM, Anoob Soman wrote:
> On 16/05/17 20:02, Boris Ostrovsky wrote:
>
> Hi Boris,
>
> Sorry for the delay, I was out traveling.
>
>>>           rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
>>> -        if (rc == 0)
>>> +        if (rc == 0) {
>>>               rc = bind_interdomain.local_port;
>>> +            selected_cpu = cpumask_next(selected_cpu, cpu_online_mask);
>>> +            if (selected_cpu >= nr_cpu_ids)
>>> +                selected_cpu = cpumask_first(cpu_online_mask);
>>> +            xen_rebind_evtchn_to_cpu(rc, selected_cpu);
>> Can you do proper assignment *instead of* binding to CPU0 as opposed to
>> rebinding the event channel later? Otherwise you are making an extra
>> hypercall.
>
> If I understood the code correctly, EVTCHNOP_bind_interdomain doesn't
> support passing in a VCPU number, so I think we would require two
> hypercalls: one for binding the interdomain event channel
> (EVTCHNOP_bind_interdomain) and another for binding it to a VCPU
> (EVTCHNOP_bind_vcpu). We could create an EVTCHNOP_bind_interdomain_V2
> sub-op, which takes in a VCPU id, if we want to avoid making multiple
> hypercalls.

This is not worth an API change, so I guess we are going to have to use
separate calls, as you originally proposed.
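
For reference, a rough sketch of the two-call sequence against the
existing interface (the struct layouts and sub-ops are the standard ones
from xen/interface/event_channel.h; the helper name, its arguments and
the error handling are only illustrative, not the patch under review):

#include <linux/errno.h>
#include <xen/interface/xen.h>
#include <xen/interface/event_channel.h>
#include <xen/xen-ops.h>		/* xen_vcpu_nr() */
#include <asm/xen/hypercall.h>

/*
 * Illustrative helper, not existing code: bind an interdomain event
 * channel and immediately move it to the chosen CPU.  The port comes
 * up bound to VCPU0 because EVTCHNOP_bind_interdomain has no vcpu
 * field, hence the second hypercall.
 */
static int bind_interdomain_to_cpu(domid_t remote_dom,
				   evtchn_port_t remote_port,
				   unsigned int cpu,
				   evtchn_port_t *local_port)
{
	struct evtchn_bind_interdomain bind = {
		.remote_dom  = remote_dom,
		.remote_port = remote_port,
	};
	struct evtchn_bind_vcpu bind_vcpu;
	int rc;

	/* Hypercall #1: create the channel; it starts out on VCPU0. */
	rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain, &bind);
	if (rc)
		return rc;

	/* Hypercall #2: move the new port to the requested VCPU. */
	bind_vcpu.port = bind.local_port;
	bind_vcpu.vcpu = xen_vcpu_nr(cpu);
	rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu);
	if (rc)
		return rc;

	*local_port = bind.local_port;
	return 0;
}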


>
>> You also probably want to look at current IRQ affinity mask instead of
>> cpu_online_mask.
>>
>
> Do we need to look at the IRQ affinity mask if we are going to bind
> the event channel to smp_processor_id()? If we definitely need to use
> the IRQ affinity mask, then binding to smp_processor_id() might not be
> the correct approach.

What if, for whatever reason, the current processor is not in the
affinity mask of the IRQ?
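
To illustrate what I mean by looking at the affinity mask (a sketch
only; the helper name is made up, but irq_get_affinity_mask() and the
cpumask helpers are the standard kernel ones):

#include <linux/cpumask.h>
#include <linux/irq.h>
#include <linux/smp.h>

/*
 * Illustrative helper, not existing code: pick a CPU for an event
 * channel's IRQ.  Prefer the current CPU when the IRQ's affinity mask
 * allows it; otherwise take the first online CPU from that mask, and
 * fall back to any online CPU if the intersection is empty.
 */
static unsigned int evtchn_pick_cpu(unsigned int irq)
{
	const struct cpumask *aff = irq_get_affinity_mask(irq);
	unsigned int cpu = get_cpu();	/* disable preemption while choosing */
	unsigned int target = cpu;

	if (!cpumask_test_cpu(cpu, aff)) {
		target = cpumask_first_and(aff, cpu_online_mask);
		if (target >= nr_cpu_ids)
			target = cpumask_first(cpu_online_mask);
	}

	put_cpu();
	return target;
}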

-boris

