
Re: [Xen-devel] [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall



> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: 19 November 2015 16:07
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Ian Jackson; Stefano Stabellini; Ian Campbell; Wei Liu; Keir (Xen.org); Jan Beulich
> Subject: Re: [PATCH v3] x86/hvm/viridian: flush remote tlbs by hypercall
> 
> On 19/11/15 13:19, Paul Durrant wrote:
> > @@ -561,10 +584,81 @@ int viridian_hypercall(struct cpu_user_regs *regs)
> >      switch ( input.call_code )
> >      {
> >      case HvNotifyLongSpinWait:
> > +        /*
> > +         * See Microsoft Hypervisor Top Level Spec. section 18.5.1.
> > +         */
> >          perfc_incr(mshv_call_long_wait);
> >          do_sched_op(SCHEDOP_yield, guest_handle_from_ptr(NULL, void));
> >          status = HV_STATUS_SUCCESS;
> >          break;
> > +
> > +    case HvFlushVirtualAddressSpace:
> > +    case HvFlushVirtualAddressList:
> > +    {
> > +        cpumask_t *pcpu_mask;
> > +        struct vcpu *v;
> > +        struct {
> > +            uint64_t address_space;
> > +            uint64_t flags;
> > +            uint64_t vcpu_mask;
> > +        } input_params;
> > +
> > +        /*
> > +         * See Microsoft Hypervisor Top Level Spec. sections 12.4.2
> > +         * and 12.4.3.
> > +         */
> > +        perfc_incr(mshv_flush);
> > +
> > +        /* These hypercalls should never use the fast-call convention. */
> > +        status = HV_STATUS_INVALID_PARAMETER;
> > +        if ( input.fast )
> > +            break;
> > +
> > +        /* Get input parameters. */
> > +        if ( hvm_copy_from_guest_phys(&input_params, input_params_gpa,
> > +                                      sizeof(input_params)) != HVMCOPY_okay )
> > +            break;
> > +
> > +        /*
> > +         * It is not clear from the spec. if we are supposed to
> > +         * include current virtual CPU in the set or not in this case,
> > +         * so err on the safe side.
> > +         */
> > +        if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
> > +            input_params.vcpu_mask = ~0ul;
> > +
> > +        pcpu_mask = curr->arch.hvm_vcpu.viridian.flush_cpumask;
> > +        cpumask_clear(pcpu_mask);
> > +
> > +        /*
> > +         * For each specified virtual CPU flush all ASIDs to invalidate
> > +         * TLB entries the next time it is scheduled and then, if it
> > +         * is currently running, add its physical CPU to a mask of
> > +         * those which need to be interrupted to force a flush.
> > +         */
> > +        for_each_vcpu ( currd, v )
> > +        {
> > +            if ( !(input_params.vcpu_mask & (1ul << v->vcpu_id)) )
> > +                continue;
> 
> You need to cap this loop at a vcpu_id of 63, or the above conditional
> will become undefined.

The guest should not issue the hypercall if it has more than 64 vCPUs, so to 
some extent I don't care what happens, as long as it is not harmful to the 
hypervisor in general, and I don't think it is in this case.

> 
> It might also be wise to fail the vcpu_initialise() for a
> viridian-enabled domain having more than 64 vcpus.
> 
> > +
> > +            hvm_asid_flush_vcpu(v);
> > +            if ( v->is_running )
> > +                cpumask_set_cpu(v->processor, pcpu_mask);
> 
> __cpumask_set_cpu().  No need for atomic operations here.
> 

Ok.
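
For reference, the follow-up would presumably be just the non-atomic variant, 
since pcpu_mask is this vCPU's private scratch mask and no other CPU touches 
it at this point (sketch):

            hvm_asid_flush_vcpu(v);
            if ( v->is_running )
                __cpumask_set_cpu(v->processor, pcpu_mask);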

> > +        }
> > +
> > +        /*
> > +         * Since ASIDs have now been flushed it just remains to
> > +         * force any CPUs currently running target vCPUs out of non-
> > +         * root mode. It's possible that re-scheduling has taken place
> > +         * so we may unnecessarily IPI some CPUs.
> > +         */
> > +        if ( !cpumask_empty(pcpu_mask) )
> > +            flush_tlb_mask(pcpu_mask);
> 
> Wouldn't it be easier to simply and input_params.vcpu_mask with
> d->vcpu_dirty_mask ?
> 

No, that may yield a mask that is much too big. All we need here is a mask of 
where the target vCPUs are running *now*, not every pCPU they have run on.
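
Put differently (sketch only; the dirty-mask field name below is illustrative 
rather than necessarily the exact one in the tree):

            /*
             * Dirty-mask variant: accumulates every pCPU a target vCPU's
             * state may still be live on, so it can IPI far more CPUs
             * than necessary.
             */
            cpumask_or(pcpu_mask, pcpu_mask, v->vcpu_dirty_cpumask);

            /*
             * Patch as posted: only the pCPU a target vCPU occupies at
             * this instant, which is all that needs forcing out of
             * non-root mode.
             */
            if ( v->is_running )
                cpumask_set_cpu(v->processor, pcpu_mask);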

> > +
> > +        status = HV_STATUS_SUCCESS;
> > +        break;
> > +    }
> > +
> >      default:
> >          status = HV_STATUS_INVALID_HYPERCALL_CODE;
> >          break;
> > diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
> > index c4319d7..2eec85e 100644
> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -22,6 +22,7 @@ union viridian_apic_assist
> >  struct viridian_vcpu
> >  {
> >      union viridian_apic_assist apic_assist;
> > +    cpumask_var_t flush_cpumask;
> >  };
> >
> >  union viridian_guest_os_id
> > @@ -117,6 +118,9 @@ viridian_hypercall(struct cpu_user_regs *regs);
> >  void viridian_time_ref_count_freeze(struct domain *d);
> >  void viridian_time_ref_count_thaw(struct domain *d);
> >
> > +int viridian_vcpu_init(struct vcpu *v);
> > +void viridian_vcpu_deinit(struct vcpu *v);
> > +
> >  #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
> >
> >  /*
> > diff --git a/xen/include/asm-x86/perfc_defn.h b/xen/include/asm-x86/perfc_defn.h
> > index 9ef092e..aac9331 100644
> > --- a/xen/include/asm-x86/perfc_defn.h
> > +++ b/xen/include/asm-x86/perfc_defn.h
> > @@ -115,6 +115,7 @@ PERFCOUNTER(mshv_call_sw_addr_space,    "MS Hv Switch Address Space")
> >  PERFCOUNTER(mshv_call_flush_tlb_list,   "MS Hv Flush TLB list")
> >  PERFCOUNTER(mshv_call_flush_tlb_all,    "MS Hv Flush TLB all")
> >  PERFCOUNTER(mshv_call_long_wait,        "MS Hv Notify long wait")
> > +PERFCOUNTER(mshv_call_flush,            "MS Hv Flush TLB")
> >  PERFCOUNTER(mshv_rdmsr_osid,            "MS Hv rdmsr Guest OS ID")
> >  PERFCOUNTER(mshv_rdmsr_hc_page,         "MS Hv rdmsr hypercall page")
> >  PERFCOUNTER(mshv_rdmsr_vp_index,        "MS Hv rdmsr vp index")
> > diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> > index 356dfd3..5e54a84 100644
> > --- a/xen/include/public/hvm/params.h
> > +++ b/xen/include/public/hvm/params.h
> > @@ -1,3 +1,4 @@
> > +
> 
> Spurious change.

Yes, it is.

  Paul

> 
> ~Andrew
