
Re: [Xen-devel] [PATCH] x86/hvm/viridian: fix the TLB flush hypercall



>>> On 16.03.16 at 14:00, <paul.durrant@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2576,12 +2576,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
>      if ( rc != 0 )
>          goto fail6;
>  
> -    if ( is_viridian_domain(d) )
> -    {
> -        rc = viridian_vcpu_init(v);
> -        if ( rc != 0 )
> -            goto fail7;
> -    }
> +    rc = viridian_vcpu_init(v);
> +    if ( rc != 0 )
> +        goto fail7;

Well, it's only a CPU mask (i.e. right now at most 512 bytes), but
anyway - why can't this be done conditionally here, as well as when
HVM_PARAM_VIRIDIAN gets set to a non-zero value? I know you are
of the opinion that the Viridian flag could (should) always be set
for HVM guests, but you also know that I disagree (and Don's
VMware patches, should they ever go in, would support me, since
iirc VMware and Viridian are exclusive of one another).
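
To be concrete, something along these lines is what I have in mind
(just a sketch - the helper name is purely illustrative, and error
unwinding of partially done allocations is omitted):

/*
 * Illustrative only: called from the HVM_PARAM_VIRIDIAN handling when
 * the value goes from zero to non-zero, so that vCPUs created before
 * the flag gets set also get their per-vCPU state allocated.
 * hvm_vcpu_initialise() would keep its is_viridian_domain() guard for
 * vCPUs created afterwards.
 */
static int viridian_domain_enable(struct domain *d)
{
    struct vcpu *v;
    int rc = 0;

    for_each_vcpu ( d, v )
        if ( (rc = viridian_vcpu_init(v)) != 0 )
            break;

    return rc;
}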

That said, I now wonder anyway why this is a per-vCPU mask
instead of a per-pCPU one: There's no need for every vCPU in
the system to have its own afaics.
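
I.e. rather than allocating anything at all, something like this
(name purely illustrative):

/*
 * One scratch mask per pCPU.  The hypercall handler is the only user
 * and runs to completion on the pCPU it was invoked on, so no two
 * users of the same mask can be active at the same time, and neither
 * the allocation in viridian_vcpu_init() nor any freeing on vCPU
 * destruction would be needed.
 */
static DEFINE_PER_CPU(cpumask_t, viridian_flush_cpumask);

and then in viridian_hypercall():

        cpumask_t *pcpu_mask = &this_cpu(viridian_flush_cpumask);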

> @@ -658,6 +658,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>          if ( !cpumask_empty(pcpu_mask) )
>              flush_tlb_mask(pcpu_mask);
>  
> +        output.rep_complete = input.rep_count;

So why would this not be necessary for the other hypercalls?
And what does a repeat count mean here in the first place?
Surely there isn't any expectation to do more than one flush?
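
For reference, this is how I read the rep machinery (matching, afaict,
the unions viridian.c already uses, which mirror the Hyper-V spec):

/*
 * "Rep" hypercalls carry a repetition count and start index in the
 * input value; the output value reports how many repetitions were
 * completed, and the guest is expected to re-issue the call (with an
 * updated start index) until rep_complete reaches rep_count.
 */
union hypercall_input {
    uint64_t raw;
    struct {
        uint16_t call_code;
        uint16_t flags;
        unsigned rep_count:12;
        unsigned rsvd1:4;
        unsigned rep_start:12;
        unsigned rsvd2:4;
    };
};

union hypercall_output {
    uint64_t raw;
    struct {
        uint16_t result;
        uint16_t rsvd1;
        unsigned rep_complete:12;
        unsigned rsvd2:20;
    };
};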

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

