
Re: [Xen-devel] [PATCH v2] x86/hvm/viridian: flush remote tlbs by hypercall



>>> On 19.11.15 at 13:33, <paul.durrant@xxxxxxxxxx> wrote:
> +    case HvFlushVirtualAddressSpace:
> +    case HvFlushVirtualAddressList:
> +    {
> +        cpumask_var_t pcpu_mask;

cpumask_t * (or else ... [continued at the next comment below])

> +        struct vcpu *v;
> +
> +        struct {

Stray blank line.

> +            uint64_t address_space;
> +            uint64_t flags;
> +            uint64_t vcpu_mask;
> +        } input_params;
> +
> +        /*
> +         * See Microsoft Hypervisor Top Level Spec. sections 12.4.2
> +         * and 12.4.3.
> +         */
> +        perfc_incr(mshv_flush);
> +
> +        /* These hypercalls should never use the fast-call convention. */
> +        status = HV_STATUS_INVALID_PARAMETER;
> +        if ( input.fast )
> +            break;
> +
> +        /* Get input parameters. */
> +        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
> +                                      sizeof(input_params)) != HVMCOPY_okay )
> +            break;
> +
> +        /*
> +         * It is not clear from the spec. if we are supposed to
> +         * include current virtual CPU in the set or not in this case,
> +         * so err on the safe side.
> +         */
> +        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
> +            input_params.vcpu_mask = ~0ul;
> +
> +        pcpu_mask = curr->arch.hvm_vcpu.viridian.flush_cpumask;

... you pointlessly copy the contents here when NR_CPUS is low.
cpumask_var_t is defined such that you can use it as the rvalue
of an assignment to a cpumask_t * variable.

> +        cpumask_clear(pcpu_mask);
> +
> +        /*
> +         * For each specified virtual CPU flush all ASIDs to invalidate
> +         * TLB entries the next time it is scheduled and then, if it
> +         * is currently running, add its physical CPU to a mask of
> +         * those which need to be interrupted to force a flush. (Since
> +         * ASIDs have already been flushed they actually only need
> +         * forcing out of non-root mode, but this is as good a way as
> +         * any).
> +         */
> +        for_each_vcpu ( currd, v )
> +        {
> +            if ( !(input_params.vcpu_mask & (1ul << v->vcpu_id)) )
> +                continue;
> +
> +            hvm_asid_flush_vcpu(v);
> +            if ( v->is_running )
> +                cpumask_set_cpu(v->processor, pcpu_mask);
> +        }
> +
> +        if ( !cpumask_empty(pcpu_mask) )
> +            flush_tlb_mask(pcpu_mask);

Repeating my question, which you ignored on v1: Is this correct
when scheduling happens between calculating pcpu_mask and actually
using it?
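
To spell out the window in question (a hypothetical interleaving,
not code from the patch):

    /* the loop observes v running on, say, pCPU 3 */
    if ( v->is_running )
        cpumask_set_cpu(v->processor, pcpu_mask); /* records pCPU 3 */

    /*
     * The scheduler then migrates v to pCPU 5, where it re-enters
     * the guest before the flush below is issued.
     */
    flush_tlb_mask(pcpu_mask); /* IPIs pCPU 3, but not pCPU 5 */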

> +        status = HV_STATUS_SUCCESS;
> +        break;
> +    }
>      default:

Perhaps time to insert blank lines between case blocks?

Jan

