
Re: [Xen-devel] [PATCH v3 01/15] x86/IRQ: deal with move-in-progress state in fixup_irqs()


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Thu, 4 Jul 2019 09:32:05 +0000
  • Cc: Wei Liu <wei.liu2@xxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 04 Jul 2019 09:34:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03.07.2019 17:39, Andrew Cooper wrote:
> On 17/05/2019 11:44, Jan Beulich wrote:
>> The flag being set may prevent affinity changes, as these often imply
>> assignment of a new vector. When there's no possible destination left
>> for the IRQ, the clearing of the flag needs to happen right from
>> fixup_irqs().
>>
>> Additionally _assign_irq_vector() needs to avoid setting the flag when
>> there's no online CPU left in what gets put into ->arch.old_cpu_mask.
>> The old vector can be released right away in this case.
> 
> This suggests that it is a bugfix, but it isn't clear what happens when
> things go wrong.

The vector cleanup wouldn't ever trigger, as the IRQ wouldn't get
raised anymore to any of its prior target CPUs. Hence the immediate
cleanup that gets done in that case. I thought the 2nd sentence
would make this clear. If it doesn't, do you have a suggestion on
how to improve the text?

>> --- a/xen/arch/x86/irq.c
>> +++ b/xen/arch/x86/irq.c
>> @@ -2418,15 +2462,18 @@ void fixup_irqs(const cpumask_t *mask, b
>>           if ( desc->handler->enable )
>>               desc->handler->enable(desc);
>>   
>> +        cpumask_copy(&affinity, desc->affinity);
>> +
>>           spin_unlock(&desc->lock);
>>   
>>           if ( !verbose )
>>               continue;
>>   
>> -        if ( break_affinity && set_affinity )
>> -            printk("Broke affinity for irq %i\n", irq);
>> -        else if ( !set_affinity )
>> -            printk("Cannot set affinity for irq %i\n", irq);
>> +        if ( !set_affinity )
>> +            printk("Cannot set affinity for IRQ%u\n", irq);
>> +        else if ( break_affinity )
>> +            printk("Broke affinity for IRQ%u, new: %*pb\n",
>> +                   irq, nr_cpu_ids, &affinity);
> 
> While I certainly prefer this version, I should point out that you
> refused to accept my patches like this, and for consistency with the
> rest of the codebase, you should be using cpumask_bits().

Oh, indeed. I guess I converted a debugging-only printk() into
this one without noticing the necessary tidying, especially since
elsewhere in the series I'm already doing so.

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
