
Re: [Xen-devel] [PATCH] x86: correct vCPU dirty CPU handling



On 26/04/18 10:41, Jan Beulich wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1202,11 +1202,23 @@ void put_page_from_l1e(l1_pgentry_t l1e,
>               unlikely(((page->u.inuse.type_info & PGT_count_mask) != 0)) &&
>               (l1e_owner == pg_owner) )
>          {
> +            cpumask_t *mask = this_cpu(scratch_cpumask);
> +
> +            cpumask_clear(mask);
> +
>              for_each_vcpu ( pg_owner, v )
>              {
> -                if ( pv_destroy_ldt(v) )
> -                    flush_tlb_mask(cpumask_of(v->dirty_cpu));
> +                unsigned int cpu;
> +
> +                if ( !pv_destroy_ldt(v) )
> +                    continue;
> +                cpu = read_atomic(&v->dirty_cpu);
> +                if ( is_vcpu_dirty_cpu(cpu) )
> +                    __cpumask_set_cpu(cpu, mask);
>              }
> +
> +            if ( !cpumask_empty(mask) )
> +                flush_tlb_mask(mask);

Thinking about this, what is wrong with:

bool flush = false;

for_each_vcpu ( pg_owner, v )
    if ( pv_destroy_ldt(v) )
        flush = true;

if ( flush )
    flush_tlb_mask(pg_owner->dirty_cpumask);

This is far less complicated cpumask handling.  As the loop may be long, it
also avoids flushing pcpus which have subsequently switched away from
pg_owner context, and it avoids playing with v->dirty_cpu entirely.
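
Spelled out in the context of the quoted hunk, that would be something like
the below (untested sketch; it assumes pg_owner->dirty_cpumask is the usual
per-domain dirty mask and that nothing else in the hunk needs changing):

        if ( /* ... same condition as in the quoted hunk ... */ )
        {
            bool flush = false;

            for_each_vcpu ( pg_owner, v )
                if ( pv_destroy_ldt(v) )
                    flush = true;

            /*
             * Flush whichever pcpus are dirty for pg_owner at this point,
             * rather than the per-vCPU snapshots taken during the loop.
             */
            if ( flush )
                flush_tlb_mask(pg_owner->dirty_cpumask);
        }

The scratch mask then isn't needed at all.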

~Andrew
