
Re: [Xen-devel] [PATCH] xen/events/fifo: Consume unprocessed events when a CPU dies



On 30/06/15 13:26, Boris Ostrovsky wrote:
> On 06/30/2015 05:51 AM, Ross Lagerwall wrote:
>> On 06/29/2015 02:32 PM, Boris Ostrovsky wrote:
>>> On 06/29/2015 06:19 AM, Ross Lagerwall wrote:
>>>> On 06/19/2015 05:06 PM, David Vrabel wrote:
>>>>> On 19/06/15 17:02, Boris Ostrovsky wrote:
>>>>>> On 06/19/2015 11:15 AM, Ross Lagerwall wrote:
>>>>>>> When a CPU is offlined, there may be unprocessed events on a port
>>>>>>> for
>>>>>>> that CPU.  If the port is subsequently reused on a different CPU, it
>>>>>>> could be in an unexpected state with the link bit set, resulting in
>>>>>>> interrupts being missed. Fix this by consuming any unprocessed
>>>>>>> events
>>>>>>> for a particular CPU when that CPU dies.
>>>>>>>
>>>>>>> Signed-off-by: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>
>>>>>>> ---
>>>>>>>    drivers/xen/events/events_fifo.c | 23 ++++++++++++++++++-----
>>>>>>>    1 file changed, 18 insertions(+), 5 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
>>>>>>> index 417415d..1dd0ba12 100644
>>>>>>> --- a/drivers/xen/events/events_fifo.c
>>>>>>> +++ b/drivers/xen/events/events_fifo.c
>>>>>>> @@ -281,7 +281,8 @@ static void handle_irq_for_port(unsigned port)
>>>>>>>      static void consume_one_event(unsigned cpu,
>>>>>>>                      struct evtchn_fifo_control_block *control_block,
>>>>>>> -                  unsigned priority, unsigned long *ready)
>>>>>>> +                  unsigned priority, unsigned long *ready,
>>>>>>> +                  bool drop)
>>>>>>>    {
>>>>>>>        struct evtchn_fifo_queue *q = &per_cpu(cpu_queue, cpu);
>>>>>>>        uint32_t head;
>>>>>>> @@ -313,13 +314,17 @@ static void consume_one_event(unsigned cpu,
>>>>>>>        if (head == 0)
>>>>>>>            clear_bit(priority, ready);
>>>>>>>
>>>>>>> -    if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port))
>>>>>>> -        handle_irq_for_port(port);
>>>>>>> +    if (evtchn_fifo_is_pending(port) && !evtchn_fifo_is_masked(port)) {
>>>>>>> +        if (unlikely(drop))
>>>>>>> +            pr_warn("Dropping pending event for port %u\n", port);
>>>>>>
>>>>>> Maybe pr_info (or pr_notice)?
>>>>>
>>>>> We want a warning here because we think this shouldn't happen -- if it
>>>>> does we actually need to retrigger the event on its new CPU.
>>>>>
>>>>>> Also, why not do this (testing for unprocessed events) in
>>>>>> xen_evtchn_close()?
>>>>>
>>>>> We can't do anything about them when closing because they may be in
>>>>> the
>>>>> middle of a queue.
>>>
>>> (Sorry, I missed this)
>>>
>>> Why can't (actually, why doesn't) the cpu that is being offlined drain
>>> its queue?
>>>
>>
>> Where would this be done? I thought CPU notifiers were the correct way
>> to hook into CPU offlining without sticking FIFO event channel code
>> into the core Xen code.
> 
> In xen_evtchn_close(). We should be getting there (roughly) as cpu_die()
> -> xen_cpu_die() -> xen_smp_intr_free() -> unbind_from_irqhandler(). In
> fact, this path is taken right before cpu_down() sends CPU_DEAD
> notifications.
> 
> I think cleaning up in xen_evtchn_close() is better because it is
> possible to close an event channel for reasons other than the CPU going
> away, in which case we may also need to deal with unprocessed events.

Having looked at this further and attempted it, draining events in
close is difficult because:

a) we can't wait for LINKED to clear when closing, since we hold the
desc spinlock; and deferring the close to a tasklet or workqueue
doesn't work because:

b) rebinding a PIRQ will fail if the close is deferred and not yet
completed, and there is no way to ensure the close happens promptly
without changes to the core irq code.

c) Xen will be fixed to not reuse ports that are still LINKED.

I'm going to apply Ross's original patch and Cc stable.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

