Re: [Xen-devel] [PATCH v4 2/2] x86: add accessors for scratch cpu mask



On 28.02.2020 10:33, Roger Pau Monne wrote:
> Current usage of the per-CPU scratch cpumask is dangerous since
> there's no way to figure out if the mask is already being used except
> for manual code inspection of all the callers and possible call paths.
> 
> This is unsafe and not reliable, so introduce a minimal get/put
> infrastructure to prevent nested usage of the scratch mask and usage
> in interrupt context.
> 
> Move the definition of scratch_cpumask to smp.c in order to place the
> declaration and the accessors as close as possible.

You've changed one instance of "declaration", but not the other.
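
(For context: a minimal sketch of the get/put accessor pair under
discussion, assuming the wrappers map onto the debug-only
scratch_cpumask(bool) helper quoted further down; the non-debug
variant is an assumption, not part of the quoted hunks.)

    #ifndef NDEBUG
    cpumask_t *scratch_cpumask(bool use);
    #define get_scratch_cpumask()   scratch_cpumask(true)
    /* Clear the "in use" marker and poison the local pointer. */
    #define put_scratch_cpumask(m)  do {    \
        scratch_cpumask(false);             \
        (m) = NULL;                         \
    } while ( false )
    #else
    #define get_scratch_cpumask()   this_cpu(scratch_cpumask)
    #define put_scratch_cpumask(m)  ((void)(m))
    #endif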

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -196,7 +196,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
>  {
>      unsigned int cpu, old_vector, irq = desc->irq;
>      unsigned int vector = desc->arch.vector;
> -    cpumask_t *tmp_mask = this_cpu(scratch_cpumask);
> +    cpumask_t *tmp_mask = get_scratch_cpumask();
>  
>      BUG_ON(!valid_irq_vector(vector));
>  
> @@ -208,6 +208,7 @@ static void _clear_irq_vector(struct irq_desc *desc)
>          ASSERT(per_cpu(vector_irq, cpu)[vector] == irq);
>          per_cpu(vector_irq, cpu)[vector] = ~irq;
>      }
> +    put_scratch_cpumask(tmp_mask);
>  
>      desc->arch.vector = IRQ_VECTOR_UNASSIGNED;
>      cpumask_clear(desc->arch.cpu_mask);
> @@ -227,8 +228,9 @@ static void _clear_irq_vector(struct irq_desc *desc)
>  
>      /* If we were in motion, also clear desc->arch.old_vector */
>      old_vector = desc->arch.old_vector;
> -    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
>  
> +    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
> +    tmp_mask = get_scratch_cpumask();

Did you test this? It looks overwhelmingly likely that the two
lines need to be the other way around.
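
I.e. presumably:

    tmp_mask = get_scratch_cpumask();
    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);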

> @@ -4384,6 +4389,17 @@ static int __do_update_va_mapping(
>          break;
>      }
>  
> +    switch ( flags & ~UVMF_FLUSHTYPE_MASK )
> +    {
> +    case UVMF_LOCAL:
> +    case UVMF_ALL:
> +        break;
> +
> +    default:
> +        put_scratch_cpumask(mask);
> +    }
> +
> +
>      return rc;

No two successive blank lines please.
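
I.e. the tail of the hunk would become:

    default:
        put_scratch_cpumask(mask);
    }

    return rc;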

> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -25,6 +25,31 @@
>  #include <irq_vectors.h>
>  #include <mach_apic.h>
>  
> +DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
> +
> +#ifndef NDEBUG
> +cpumask_t *scratch_cpumask(bool use)
> +{
> +    static DEFINE_PER_CPU(void *, scratch_cpumask_use);

I'd make this "const void *", btw.
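
I.e.:

    static DEFINE_PER_CPU(const void *, scratch_cpumask_use);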

> +    /*
> +     * Due to reentrancy scratch cpumask cannot be used in IRQ, #MC or #NMI
> +     * context.
> +     */
> +    BUG_ON(in_irq() || in_mce_handler() || in_nmi_handler());
> +
> +    if ( use && unlikely(this_cpu(scratch_cpumask_use)) )
> +    {
> +        printk("scratch CPU mask already in use by %ps (%p)\n",
> +               this_cpu(scratch_cpumask_use), this_cpu(scratch_cpumask_use));

Why the raw %p as well? We don't do so elsewhere, I think. Yes,
it's debugging code only, but I wonder anyway.
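
I.e. presumably just:

    printk("scratch CPU mask already in use by %ps\n",
           this_cpu(scratch_cpumask_use));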

Jan
