
Re: [Xen-devel] [PATCH 3/4] xen/arm: set vpidr on the pcpu where the vcpu will run


  • To: Julien Grall <julien.grall@xxxxxxx>
  • From: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Date: Fri, 16 Feb 2018 12:31:54 -0800 (PST)
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxx
  • Delivery-date: Fri, 16 Feb 2018 20:32:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, 16 Feb 2018, Julien Grall wrote:
> Hi Stefano,
> 
> On 15/02/18 23:16, Stefano Stabellini wrote:
> > On big.LITTLE systems not all cores have the same midr. Instead of
> > initializing the vpidr to the boot cpu midr, set it to the value of the
> > midr of the pcpu where the vcpu will run.
> > 
> > This way, assuming that the vcpu has been created with the right pcpu
> > affinity, the guest will be able to read the right vpidr value, matching
> > that of the physical cpu.
> > 
> > Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> > ---
> >   xen/arch/arm/domain.c | 19 ++++++++++++++++---
> >   1 file changed, 16 insertions(+), 3 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 532e824..280125f 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -315,6 +315,22 @@ static void schedule_tail(struct vcpu *prev)
> >   static void continue_new_vcpu(struct vcpu *prev)
> >   {
> >       current->arch.actlr = READ_SYSREG32(ACTLR_EL1);
> > +    /*
> > +     * Default the virtual ID to match the physical.
> > +     *
> > +     * In the case of big.LITTLE systems, a guest should be created with
> > +     * cpu affinity set so that all vcpus run on the same kind of pcpus.
> > +     * Warn if it is not the case.
> 
> continue_new_vcpu is only called once at domain creation, so it looks
> pointless to check this here; the check would probably belong in
> ctxt_switch_to instead.
> 
> But I don't want to see such a check at every context switch. That is
> expensive, and we should not impact all platforms for the benefit of an
> unsafe configuration.
> 
> If you really want to do the check, then it should only be done when the
> vCPU is migrating. That would greatly reduce the performance impact.

I don't want a check at every context switch either. I added it here
because continue_new_vcpu is only called once per vcpu at domain
creation -- it is a one-time check. vcpus are supposed to be pinned (or
have their cpu affinity set) anyway, so I chose not to add the check to
vcpu_migrate as well. In any case, I am also happy to remove the check
completely, as we have already warned the user enough.
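
That said, if we did go down the route you suggest, I imagine the
migration-time check would be a small helper along these lines (a rough,
untested sketch only; the exact call site in the vcpu migration path and
the cpu_data[] lookup for the target pcpu are assumptions on my part):

    /*
     * Sketch only: warn when a vcpu is about to move to a pcpu whose
     * MIDR differs from the one recorded in d->arch.vpidr. The call
     * site (the vcpu migration path) and the cpu_data[] indexing are
     * assumptions, not part of this patch.
     */
    static void check_vcpu_midr(const struct vcpu *v, unsigned int new_cpu)
    {
        uint32_t midr = cpu_data[new_cpu].midr.bits;

        if ( v->domain->arch.vpidr && v->domain->arch.vpidr != midr )
            printk(XENLOG_G_WARNING
                   "d%dv%d: moving to a pcpu with a different MIDR (%x vs %x)\n",
                   v->domain->domain_id, v->vcpu_id, midr,
                   v->domain->arch.vpidr);
    }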


> > +     */
> > +    if ( !current->domain->arch.vpidr )
> > +        current->domain->arch.vpidr = current_cpu_data.midr.bits;
> > +    else if ( current->domain->arch.vpidr != current_cpu_data.midr.bits )
> > +    {
> > +        gdprintk(XENLOG_WARNING,
> > +                 "WARNING: possible corruptions! d%uv%u is running on a
> > pcpu with different midr (%x) from the others (%x)\n",
> 
> NIT: I would prefer if you used uppercase for pcpu and midr. That is easier
> to read.
> 
> Also, gdprintk already prints the domain and vCPU, so it is not necessary to
> include them again in the message.

I'll make the changes.
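
Something like this, roughly (assuming we keep the warning at all):

    gdprintk(XENLOG_WARNING,
             "WARNING: possible corruption! Running on a PCPU with a different MIDR (%x) from the others (%x)\n",
             current_cpu_data.midr.bits, current->domain->arch.vpidr);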


> 
> > +                 current->domain->domain_id, current->vcpu_id,
> > +                 current_cpu_data.midr.bits, current->domain->arch.vpidr);
> > +    }
> >         schedule_tail(prev);
> >   @@ -596,9 +612,6 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
> >       if ( (d->shared_info = alloc_xenheap_pages(0, 0)) == NULL )
> >           goto fail;
> >   -    /* Default the virtual ID to match the physical */
> > -    d->arch.vpidr = boot_cpu_data.midr.bits;
> > -
> >       clear_page(d->shared_info);
> >       share_xen_page_with_guest(
> >           virt_to_page(d->shared_info), d, XENSHARE_writable);
> > 
> 
> Cheers,
> 
> -- 
> Julien Grall
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

