
Re: [Xen-devel] [PATCH v2 0/6] x86: fixes/improvements for scratch cpumask



On Tue, Feb 18, 2020 at 08:40:58AM +0100, Jürgen Groß wrote:
> On 17.02.20 19:43, Roger Pau Monne wrote:
> > Hello,
> > 
> > Commit:
> > 
> > 5500d265a2a8fa63d60c08beb549de8ec82ff7a5
> > x86/smp: use APIC ALLBUT destination shorthand when possible
> > 
> > Introduced a bogus usage of the scratch cpumask: it was used in a
> > function that could be called from interrupt context, and hence using
> > the scratch cpumask there is not safe. Patch #5 fixes that usage, and
> > also prevents the use of any per-CPU variables when send_IPI_mask is
> > called from #MC or #NMI context. Previous patches are preparatory
> > changes.
> > 
> > Patch #6 adds some debug infrastructure to make sure the scratch cpumask
> > is used in the right context, and hence should prevent further misuses.
> 
> I wonder whether it wouldn't be better to have a common percpu scratch
> cpumask handling instead of introducing local ones all over the
> hypervisor.

But the scratch cpumask already accomplishes this: it's a single
per-CPU mask allocated when the CPU is brought up.
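
To make that concrete, here is a minimal standalone C model of the
current arrangement (not actual Xen code; all names and sizes below
are made up for illustration): one mask per CPU, allocated once at CPU
bringup and handed out only to callers that don't run in interrupt
context.

    /*
     * Standalone model of the current scheme, not actual Xen code: a
     * single scratch mask per CPU, allocated when the CPU is brought
     * online.  All identifiers and sizes here are illustrative.
     */
    #include <stdlib.h>

    #define NR_CPUS    256                      /* illustrative limit */
    #define MASK_LONGS ((NR_CPUS + 63) / 64)

    typedef struct { unsigned long bits[MASK_LONGS]; } cpumask_t;

    static cpumask_t *scratch_cpumask[NR_CPUS]; /* models per-CPU data */

    /* Hook run while bringing up 'cpu': allocate its scratch mask once. */
    static int scratch_cpumask_alloc(unsigned int cpu)
    {
        scratch_cpumask[cpu] = calloc(1, sizeof(cpumask_t));
        return scratch_cpumask[cpu] ? 0 : -1;
    }

    /* Accessor; callers must not use this from interrupt context. */
    static cpumask_t *this_scratch_cpumask(unsigned int cpu)
    {
        return scratch_cpumask[cpu];
    }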

> So basically an array of percpu cpumasks allocated when bringing up a
> cpu (this spares memory, as the masks wouldn't need to cover NR_CPUS
> cpus), a percpu counter of the next free index, and get_ and put_
> functions acting in a LIFO manner.

Sizing this array would be complicated IMO, and it's possible that a
failure to get a mask in certain places could lead to a panic.

> This would help remove all the still existing on-stack cpumasks, and
> any illegal nesting would be avoided. The only remaining question
> would be the size of the array.
> 
> Thoughts?

I'm mostly worried about the size of such a stack, since we would then
allow nested calls to the get_ cpumask helper. I'm also not sure
whether we should then prevent usage in interrupt context, in order to
limit the nesting as much as possible.

I think this series is a move in the right direction, and we can add
the per-CPU stack afterwards (as the get_/put_ helpers will already be
there).
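
For concreteness, below is a minimal standalone sketch of what such a
per-CPU mask stack with LIFO get_/put_ helpers could look like (not
actual Xen code; NR_SCRATCH_MASKS and all other names are made up, and
picking that constant is exactly the sizing question raised above).

    /*
     * Standalone model of the proposed scheme, not actual Xen code: a
     * small per-CPU array of masks plus a counter of the next free
     * slot, with get_/put_ helpers that must nest in LIFO order.
     */
    #include <assert.h>
    #include <stddef.h>

    #define NR_CPUS          256
    #define MASK_LONGS       ((NR_CPUS + 63) / 64)
    #define NR_SCRATCH_MASKS 4                  /* hypothetical nesting limit */

    typedef struct { unsigned long bits[MASK_LONGS]; } cpumask_t;

    struct scratch_stack {
        cpumask_t    masks[NR_SCRATCH_MASKS];
        unsigned int next;                      /* next free slot */
    };

    static struct scratch_stack scratch[NR_CPUS]; /* models per-CPU data */

    /* Take the next free mask on this CPU, or NULL if nesting is too deep. */
    static cpumask_t *get_scratch_cpumask(unsigned int cpu)
    {
        struct scratch_stack *s = &scratch[cpu];

        return s->next < NR_SCRATCH_MASKS ? &s->masks[s->next++] : NULL;
    }

    /* Release the most recently taken mask; calls must pair up LIFO. */
    static void put_scratch_cpumask(unsigned int cpu, cpumask_t *m)
    {
        struct scratch_stack *s = &scratch[cpu];

        assert(s->next > 0 && m == &s->masks[s->next - 1]);
        s->next--;
    }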

Thanks, Roger.
