
Re: [Xen-devel] [PATCH 3/4] xen/arm: set vpidr on the pcpu where the vcpu will run

Hi Stefano,

On 16/02/2018 21:34, Stefano Stabellini wrote:
On Fri, 16 Feb 2018, Julien Grall wrote:
Hi Stefano,

On 16/02/2018 20:59, Stefano Stabellini wrote:
On Fri, 16 Feb 2018, Julien Grall wrote:
On 16/02/2018 20:31, Stefano Stabellini wrote:
On Fri, 16 Feb 2018, Julien Grall wrote:
Hi Stefano,

On 15/02/18 23:16, Stefano Stabellini wrote:
On big.LITTLE systems not all cores have the same midr. Instead of
initializing the vpidr to the boot cpu midr, set it to the value of
midr of the pcpu where the vcpu will run.

This way, assuming that the vcpu has been created with the right
affinity, the guest will be able to read the right vpidr value,
the one of the physical cpu.

Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
     xen/arch/arm/domain.c | 19 ++++++++++++++++---
     1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 532e824..280125f 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -315,6 +315,22 @@ static void schedule_tail(struct vcpu *prev)
     static void continue_new_vcpu(struct vcpu *prev)
         current->arch.actlr = READ_SYSREG32(ACTLR_EL1);
+    /*
+     * Default the virtual ID to match the physical.
+     *
+     * In case of big.LITTLE systems, a guest should be created with
+     * cpu affinity set so that all vcpus run on the same kind of
+     * pcpus. Warn if it is not the case.
+     */
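
For reference, here is a minimal sketch of the idea the (truncated) diff above
appears to implement: mirror the MIDR of the physical cpu the vcpu runs on into
the virtual ID register. This is an illustration, not the patch verbatim; the
helper name is mine, and it assumes the READ_SYSREG32/WRITE_SYSREG32 macros and
the MIDR_EL1/VPIDR_EL2 names used elsewhere in xen/arch/arm (on arm32 the
virtual ID register is called VPIDR instead).

    /*
     * Sketch only: make the guest-visible MIDR (VPIDR_EL2) match the MIDR
     * of the pcpu this vcpu is about to run on.
     */
    static void sync_vpidr_to_local_midr(void)
    {
        uint32_t midr = READ_SYSREG32(MIDR_EL1);   /* physical MIDR */

        WRITE_SYSREG32(midr, VPIDR_EL2);           /* value the guest reads */
    }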

continue_new_vcpu is only called once at domain creation. So it is pointless
to check that here; the check would probably have to be in ctxt_switch_to.

But I don't want to see such a check at every context switch. This is a hot
path and we should not impact all the platforms for the benefit of an
unsupported use case.

If you really want to do that, then it should only be done when the vCPU is
migrating. That will reduce the performance impact a lot.

I don't want a check at every context switch either. I added it here
because continue_new_vcpu is only called once per vcpu at domain
creation -- it is a one-time check. vcpus are supposed to be pinned (or
have their cpu affinity specified) anyway, so I thought I wouldn't add the
check to vcpu_migrate as well. In any case, I am also happy to remove the
check completely, as we have already warned the user enough.
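
To make the shape of the check concrete, a hypothetical one-time version of it
could look roughly like the sketch below. The helper name and the exact
comparison are my guesses, since the relevant hunk is cut off above; it uses
the per-domain arch.vpidr field mentioned later in this thread.

    /*
     * Hypothetical sketch of a once-per-vcpu sanity check, not the code
     * from the (truncated) patch: warn if the vcpu first runs on a pcpu
     * whose MIDR differs from the domain's recorded vpidr.
     */
    static void warn_on_midr_mismatch(struct vcpu *v)
    {
        uint32_t midr = READ_SYSREG32(MIDR_EL1);

        if ( v->domain->arch.vpidr != midr )
            printk(XENLOG_WARNING
                   "d%dv%d: running on pcpu MIDR %#x, domain vpidr %#x\n",
                   v->domain->domain_id, v->vcpu_id, midr,
                   v->domain->arch.vpidr);
    }
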
If you agree that continue_new_vcpu is only called once per vCPU, then I am
not sure I understand the purpose of the check. What are you trying to tell
the user with that?

The intention was to warn the user if she made a mistake with vcpu
pinning and/or cpu affinity.

Oh, that is current->domain->arch.vpidr and not per vCPU. Sorry for that.

Now your comments make sense! Yeah, I thought so too. I'll make the change.

vpidr should be per vCPU. It is very dangerous to recommend the user to pin
all vCPUs of a domain to either only big or only LITTLE cores. This is even
worse than what we have today.
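
Concretely, the per-vCPU suggestion amounts to something like the following
sketch (illustrative only; the struct, field placement and names are mine, not
what Xen does today): keep the value per vCPU and load it on the context-switch
path, so each vCPU can legitimately sit on a different core type.

    /* Sketch: a per-vCPU virtual MIDR instead of the per-domain field. */
    struct vcpu_idregs {
        uint32_t vpidr;   /* MIDR of the pcpu this vcpu is placed on */
    };

    /* Would run on the context-switch-in path for vcpu-private state. */
    static void load_vpidr(const struct vcpu_idregs *idregs)
    {
        WRITE_SYSREG32(idregs->vpidr, VPIDR_EL2);
    }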

Let's continue this discussion in the other thread which is more
appropriately about documentation.

Even with this series and vcpu pinning, I assumed that only scenarios
with vcpus assigned to pcpus of the same kind are allowed (see my other
reply). Thus, I added this check to test once at boot that all vcpus
in a domain have the same actlr.

That's plain wrong. You really can't assume that same actlr means same type of
CPUs. Imagine they are RES0 on both.
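
To spell the point out: ACTLR is implementation defined and may read as the
same (possibly all-zero) value on two different core types, whereas the
implementer and part-number fields of MIDR actually identify the core. A small
illustration follows; the macro and function names are mine, and the field
layout (implementer in bits [31:24], part number in bits [15:4]) is from the
ARM ARM.

    /* Illustration only: compare core types by MIDR fields, not by ACTLR. */
    #define MIDR_IMPLEMENTER(midr)  (((midr) >> 24) & 0xff)
    #define MIDR_PARTNUM(midr)      (((midr) >>  4) & 0xfff)

    static bool same_core_type(uint32_t midr_a, uint32_t midr_b)
    {
        /* e.g. Cortex-A53 has part number 0xd03, Cortex-A72 has 0xd08 */
        return MIDR_IMPLEMENTER(midr_a) == MIDR_IMPLEMENTER(midr_b) &&
               MIDR_PARTNUM(midr_a) == MIDR_PARTNUM(midr_b);
    }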

It would be a false negative, and wouldn't trigger the warning. A false
positive would be worse: different pcpus with the same midr. Is that possible?

You mean having different values in ACTLR? I am not entirely sure. We do trap
guest accesses to ACTLR_EL1, so the values stay at whatever the firmware or
the hypervisor initialized them to (we set the SMP bit on Cortex-A15 cores).

In any case, at this point I am convinced that it is best to remove the check.

A warning would be more meaningful when migrating, to double-check that the user configured the pinning correctly.


Julien Grall
