
Re: [Xen-devel] [PATCH 2/9] x86/IRQ: deal with move cleanup count state in fixup_irqs()



>>> On 03.05.19 at 17:21, <roger.pau@xxxxxxxxxx> wrote:
> On Mon, Apr 29, 2019 at 05:23:20AM -0600, Jan Beulich wrote:
>> The cleanup IPI may get sent immediately before a CPU gets removed from
>> the online map. In such a case the IPI would get handled on the CPU
>> being offlined no earlier than in the interrupts disabled window after
>> fixup_irqs()' main loop. This is too late, however, because a possible
>> affinity change may incur the need for vector assignment, which will
>> fail when the IRQ's move cleanup count is still non-zero.
>> 
>> To fix this
>> - record the set of CPUs the cleanup IPI actually gets sent to alongside
>>   setting their count,
>> - adjust the count in fixup_irqs(), accounting for all CPUs that the
>>   cleanup IPI was sent to, but that are no longer online,
>> - bail early from the cleanup IPI handler when the CPU is no longer
>>   online, to prevent double accounting.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks.

> Just as a note, this whole interrupt migration business seems
> extremely complex, and I wonder whether Xen really needs it, or what
> exactly its performance gain is compared to simpler solutions.

What simpler solutions do you have in mind? Having IRQ affinities
track those of their assigned vCPUs was added largely to avoid
high-rate interrupts always arriving on a CPU other than the one
where the actual handling will take place. Arguably this may go
too far for low-rate interrupts, but adding a respective heuristic
would only complicate the handling further.

> I understand this is just fixes, but IMO it's making the logic even more
> complex.
> 
> Maybe it would be simpler to have the interrupts hard-bound to pCPUs
> and instead have a soft-affinity on the guest vCPUs that are assigned
> as the destination?

How would the soft affinity be calculated for a vCPU that has
multiple IRQs (with at most partially overlapping affinities) to be
serviced by it?

>> ---
>> TBD: The proper recording of the IPI destinations actually makes the
>>      move_cleanup_count field redundant. Do we want to drop it, at the
>>      price of a few more CPU-mask operations?
> 
> AFAICT this is not a hot path, so I would remove the
> move_cleanup_count field and just weigh the CPU bitmap when needed.

Added for v2 (pending successful testing).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

