

Re: [Xen-devel] unnecessary VCPU migration happens again

To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Subject: Re: [Xen-devel] unnecessary VCPU migration happens again
From: Emmanuel Ackaouy <ack@xxxxxxxxxxxxx>
Date: Wed, 6 Dec 2006 14:01:35 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 06 Dec 2006 06:01:58 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <51CFAB8CB6883745AE7B93B3E084EBE207DD86@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: "Xu, Anthony" <anthony.xu@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
References: <51CFAB8CB6883745AE7B93B3E084EBE207DD86@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.1i
Hi Anthony.

Could you send xentrace output for scheduling operations
in your setup?

Perhaps we're being a little too aggressive spreading
work across sockets. We do this on vcpu_wake right now.
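
Roughly, the heuristic works like this: when picking a CPU for a
waking VCPU, prefer the idle LP whose grouping has the most idling
neighbours, so that work spreads across distinct cores and sockets
first. Here is a minimal standalone sketch of that idea (not the
actual Xen implementation; the topology, the bit masks, and
count_idle_neighbours() are all invented for illustration):

    #include <stdio.h>

    #define NR_LPS 4

    /* Hypothetical topology: two cores, two LPs (threads) per core. */
    static const unsigned int core_mask[NR_LPS] = {
        0x3, 0x3,   /* LPs 0 and 1 share core A */
        0xc, 0xc    /* LPs 2 and 3 share core B */
    };

    static unsigned int idle_mask = 0xd; /* LPs 0, 2, 3 idle; LP 1 busy */

    /* Count idle LPs among lp's core siblings (including lp itself). */
    static int count_idle_neighbours(int lp)
    {
        unsigned int idlers = core_mask[lp] & idle_mask;
        int n = 0;

        while ( idlers != 0 )
        {
            n += idlers & 1;
            idlers >>= 1;
        }
        return n;
    }

    int main(void)
    {
        int lp, best = -1, best_score = -1;

        for ( lp = 0; lp < NR_LPS; lp++ )
        {
            if ( !((idle_mask >> lp) & 1) )
                continue; /* only idle LPs are candidates */

            int score = count_idle_neighbours(lp);

            if ( score > best_score )
            {
                best = lp;
                best_score = score;
            }
        }

        /* Core B is fully idle while core A is half busy: pick LP 2. */
        printf("pick LP %d (idle neighbours: %d)\n", best, best_score);
        return 0;
    }

In the real scheduler the equivalent comparison is made by
csched_idler_compare(), which appears in the code segment quoted
below.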

I'm not sure I understand why HVM VCPUs would block
and wake more often than PV VCPUs though. Can you explain?

If you could gather some scheduler traces and send the
results, that would give us a good idea of what's going
on and why. The multi-core support is new and not
widely tested, so it's possible that it is being
overly aggressive or perhaps even buggy.


On Fri, Dec 01, 2006 at 06:11:32PM +0800, Xu, Anthony wrote:
> Emmanuel,
> I found that unnecessary VCPU migration happens again.
> My environment is IPF (Itanium): two sockets, two cores per socket,
> 1 thread per core, so there are 4 cores (4 LPs) in total.
> There are 3 domains, all UP, so there are 3 VCPUs in total:
> one is domain0, and the other two are VTI domains.
> I found there are lots of migrations.
> This is caused by the code segment below in function csched_cpu_pick.
> When I comment this code segment out, there is no migration in the
> above environment.
> Here is a short analysis of this code.
> The code handles multi-core and multi-thread, which is very good:
> if two VCPUs run on LPs which belong to the same core, performance
> is bad, so if there are free LPs, we should let these two VCPUs run
> on different cores.
> This code may work well with para-domains, because a para-domain
> seldom blocks; it may block only when the guest executes a "halt"
> instruction. This means that if an idle VCPU is running on an LP,
> there is no blocked VCPU waiting to resume on that LP.
> In this environment, I think the code below should work well.
> But in an HVM environment, an HVM VCPU blocks on IO operations.
> That is to say, if an idle VCPU is running on an LP, an HVM VCPU
> may just be blocked on that LP and will run on it again when it is
> woken up. In this environment, the code below causes unnecessary
> migrations: the LP looks idle, so a newly woken VCPU gets placed
> there, and the blocked VCPU has to migrate when it wakes up.
> I think this defeats the goal of this code segment.
> On IPF, migration is time-consuming, so this causes some
> performance degradation.
> I have a proposal, though it may not be good:
> we can change the meaning of an idle LP. An idle LP would mean that
> an idle VCPU is running on this LP and there is no VCPU blocked on
> this LP (i.e. no VCPU that will run on this LP when it is woken up).
> (A sketch of this idea follows the quoted code below.)
> --Anthony
>         /*
>          * In multi-core and multi-threaded CPUs, not all idle execution
>          * vehicles are equal!
>          *
>          * We give preference to the idle execution vehicle with the most
>          * idling neighbours in its grouping. This distributes work across
>          * distinct cores first and guarantees we don't do something stupid
>          * like run two VCPUs on co-hyperthreads while there are idle cores
>          * or sockets.
>          */
>         while ( !cpus_empty(cpus) )
>         {
>             nxt = first_cpu(cpus);
>             if ( csched_idler_compare(cpu, nxt) < 0 )
>             {
>                 cpu = nxt;
>                 cpu_clear(nxt, cpus);
>             }
>             else if ( cpu_isset(cpu, cpu_core_map[nxt]) )
>             {
>                 cpus_andnot(cpus, cpus, cpu_sibling_map[nxt]);
>             }
>             else
>             {
>                 cpus_andnot(cpus, cpus, cpu_core_map[nxt]);
>             }
>             ASSERT( !cpu_isset(nxt, cpus) );
>         }
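
As a concrete illustration of the proposal above, the redefinition
might look something like the following hypothetical sketch. None of
this is actual Xen code: the lp_state structure, its fields, and
lp_truly_idle() are invented names.

    /*
     * Hypothetical sketch of the proposed "idle LP" definition. An LP
     * counts as idle for placement purposes only if it is running the
     * idle VCPU AND no VCPU is blocked on it, i.e. no VCPU would
     * resume on this LP when woken.
     */
    struct lp_state {
        int running_idle_vcpu;  /* 1 if the idle VCPU runs here */
        int nr_blocked_vcpus;   /* VCPUs blocked (e.g. on IO emulation)
                                 * that will wake up on this LP */
    };

    /* An LP is a migration target only if it is "truly" idle. */
    static int lp_truly_idle(const struct lp_state *lp)
    {
        return lp->running_idle_vcpu && lp->nr_blocked_vcpus == 0;
    }

Under such a definition, an LP hosting a VCPU that is merely blocked
on IO emulation would stop attracting newly woken VCPUs, avoiding the
wake-time ping-pong described above; the trade-off is that a VCPU
blocked for a long time would keep its LP out of the idle pool.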
