
Re: [Xen-devel] [PATCH v3 4/5] x86/smp: use a dedicated scratch cpumask in send_IPI_mask



On Wed, Feb 26, 2020 at 11:07:44AM +0100, Jan Beulich wrote:
> On 24.02.2020 11:46, Roger Pau Monne wrote:
> > Using scratch_cpumask in send_IPI_mask is not safe because it can be
> > called from interrupt context, and hence Xen would have to make sure
> > all the users of the scratch cpumask disable interrupts while using
> > it.
> > 
> > Instead introduce a new cpumask to be used by send_IPI_mask, and
> > disable interrupts while using it.
> 
> The alternative of also adding ...
> 
> > --- a/xen/arch/x86/smp.c
> > +++ b/xen/arch/x86/smp.c
> > @@ -59,6 +59,7 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >      apic_write(APIC_ICR, cfg);
> >  }
> >  
> > +DECLARE_PER_CPU(cpumask_var_t, send_ipi_cpumask);
> >  /*
> >   * send_IPI_mask(cpumask, vector): sends @vector IPI to CPUs in @cpumask,
> >   * excluding the local CPU. @cpumask may be empty.
> > @@ -67,7 +68,21 @@ static void send_IPI_shortcut(unsigned int shortcut, int vector,
> >  void send_IPI_mask(const cpumask_t *mask, int vector)
> >  {
> >      bool cpus_locked = false;
> > -    cpumask_t *scratch = this_cpu(scratch_cpumask);
> > +    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
> > +    unsigned long flags;
> > +
> > +    if ( in_mc() || in_nmi() )
> 
> ... in_irq() here was considered, and discarded because of? With
> this you'd not need the second CPU mask and you'd also be able
> to avoid disabling and re-enabling IRQs.

I aimed to use the shorthand as much as possible, but I would also be
fine with not using it in irq context. I assume there aren't that many
flushes in irq context, and the IPIs sent from there are probably not
broadcast ones anyway.
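
To illustrate, Jan's alternative would boil down to an early exit along
these lines (just a sketch against this patch, not a tested change;
in_irq() is the existing predicate, the other names are taken from the
hunks above):

    if ( in_irq() || in_mc() || in_nmi() )
    {
        /*
         * In interrupt or exception context avoid the shorthand (and
         * hence the scratch mask) altogether and use the old IPI
         * sending routine, so neither a dedicated per-CPU mask nor
         * disabling and re-enabling IRQs would be needed.
         */
        alternative_vcall(genapic.send_IPI_mask, mask, vector);
        return;
    }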

> 
> In order to not disturb the partial sentence, a small request on
> the previous hunk as well: Could you add a blank line after the
> new definition, please?
> 
> > +    {
> > +        /*
> > +         * When in #NMI or #MC context fallback to the old (and simpler) IPI
> 
> Note that conventional notation indeed is #MC but just NMI (applies
> here, in the description, and also elsewhere in the series).
> 
> > @@ -81,7 +96,15 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
> >           local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
> >           (park_offline_cpus ||
> >            cpumask_equal(&cpu_online_map, &cpu_present_map)) )
> > +    {
> > +        /*
> > +         * send_IPI_mask can be called from interrupt context, and hence we
> > +         * need to disable interrupts in order to protect the per-cpu
> > +         * send_ipi_cpumask while being used.
> > +         */
> > +        local_irq_save(flags);
> >          cpumask_or(scratch, mask, cpumask_of(smp_processor_id()));
> > +    }
> >      else
> >      {
> >          if ( cpus_locked )
> > @@ -89,6 +112,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
> >              put_cpu_maps();
> >              cpus_locked = false;
> >          }
> > +        local_irq_save(flags);
> >          cpumask_clear(scratch);
> >      }
> >  
> > @@ -97,6 +121,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
> >      else
> >          alternative_vcall(genapic.send_IPI_mask, mask, vector);
> >  
> > +    local_irq_restore(flags);
> 
> Wouldn't it be better to re-enable interrupts in the "else" branch
> visible in context ahead of the call?

I think I will go with your suggestion and not use the shorthand in
irq context, and hence we won't need to disable interrupts at all.
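
Relative to this v3 that would mean dropping the dedicated mask again
and only widening the fallback check, roughly (a sketch of the
direction, not the actual next version):

    -    cpumask_t *scratch = this_cpu(send_ipi_cpumask);
    +    cpumask_t *scratch = this_cpu(scratch_cpumask);
    -    unsigned long flags;

    -    if ( in_mc() || in_nmi() )
    +    if ( in_irq() || in_mc() || in_nmi() )

with the per-CPU mask addition and the local_irq_save() /
local_irq_restore() pairs removed, since interrupt context would then
never reach the scratch mask usage.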

Thanks, Roger.
