
Re: [Xen-devel] [PATCH v3 4/5] x86/smp: use a dedicated scratch cpumask in send_IPI_mask



On Wed, Feb 26, 2020 at 11:45:40AM +0100, Jan Beulich wrote:
> On 26.02.2020 11:18, Roger Pau Monné wrote:
> > On Wed, Feb 26, 2020 at 11:07:44AM +0100, Jan Beulich wrote:
> >> On 24.02.2020 11:46, Roger Pau Monne wrote:
> >>> Using scratch_cpumask in send_IPI_mask is not safe because it can be
> >>> called from interrupt context, and hence Xen would have to make sure
> >>> all the users of the scratch cpumask disable interrupts while using
> >>> it.
> >>>
> >>> Instead introduce a new cpumask to be used by send_IPI_mask, and
> >>> disable interrupts while using it.
> >>
> >> The alternative of also adding ...
> >>
> >>> --- a/xen/arch/x86/smp.c
> >>> +++ b/xen/arch/x86/smp.c
> >>> @@ -59,6 +59,7 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >>>      apic_write(APIC_ICR, cfg);
> >>>  }
> >>>  
> >>> +DECLARE_PER_CPU(cpumask_var_t, send_ipi_cpumask);
> >>>  /*
> >>>   * send_IPI_mask(cpumask, vector): sends @vector IPI to CPUs in @cpumask,
> >>>   * excluding the local CPU. @cpumask may be empty.
> >>> @@ -67,7 +68,21 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >>>  void send_IPI_mask(const cpumask_t *mask, int vector)
> >>>  {
> >>>      bool cpus_locked = false;
> >>> -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> >>> +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> >>> +    unsigned long flags;
> >>> +
> >>> +    if ( in_mc() || in_nmi() )
> >>
> >> ... in_irq() here was considered, and discarded because of? With
> >> this you'd not need the second CPU mask and you'd also be able
> >> to avoid disabling and re-enabling IRQs.
> > 
> > I aimed to use the shorthand as much as possible, but I would also be
> > fine with not using it in irq context. I assume there aren't that many
> > flushes in irq context, and then the IPIs sent are probably not
> > broadcast ones.
> 
> Well, here it's not just flushes, is it?

No, this covers all IPIs. My remark was that flush IPIs are more
likely to target all CPUs than other IPIs, together with global
rendezvous, but I assume those aren't that frequent.

> Knowing some (typical)
> IRQ context uses could allow us to take a better decision. After
> Sander's report, did you actually identify what path(s) the
> earlier change broke?

No, I assume it's something related to passthrough, but I haven't been
able to trigger it myself.

Going for the easier solution right now seems like the most sensible
approach; we can always add a dedicated mask if necessary.
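
Just to make that concrete, the below is roughly the shape of the
easier solution as I understand it - only a sketch, not a real patch:
the existing gating conditions on the shorthand path are trimmed down
for brevity, and it relies on in_irq() together with the in_mc() and
in_nmi() helpers already used by the hunk above:

void send_IPI_mask(const cpumask_t *mask, int vector)
{
    bool cpus_locked = false;
    cpumask_t *scratch = this_cpu(scratch_cpumask);

    /*
     * Don't use the scratch cpumask (and hence the shorthand) when in
     * IRQ, #MC or NMI context, so send_IPI_mask() never touches the
     * mask from a nested context and neither a dedicated mask nor
     * disabling IRQs is needed.
     */
    if ( !in_irq() && !in_mc() && !in_nmi() &&
         /* NB: get_cpu_maps lock requires enabled interrupts. */
         local_irq_is_enabled() &&
         (cpus_locked = get_cpu_maps()) )
        cpumask_or(scratch, mask, cpumask_of(smp_processor_id()));
    else
        cpumask_clear(scratch);

    if ( cpumask_equal(scratch, &cpu_online_map) )
        send_IPI_shortcut(APIC_DEST_ALLBUT, vector, APIC_DEST_PHYSICAL);
    else
        alternative_vcall(genapic.send_IPI_mask, mask, vector);

    if ( cpus_locked )
        put_cpu_maps();
}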

Thanks, Roger.
