
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



On Tue, 2015-07-28 at 17:11 +0200, Juergen Gross wrote:
> On 07/28/2015 06:29 AM, Juergen Gross wrote:

> > I'll make some performance tests on a big machine (4 sockets, 60 cores,
> > 120 threads) regarding topology information:
> >
> > - bare metal
> > - "random" topology (like today)
> > - "simple" topology (all vcpus regarded as equal)
> > - "real" topology with all vcpus pinned
> >
> > This should show:
> >
> > - how intrusive would the topology patch(es) be?
> > - what is the performance impact of a "wrong" scheduling data base
> 
> On the above box I used a pvops kernel 4.2-rc4 plus a rather small patch
> (see attachment). I did 5 kernel builds in each environment:
> 
> make clean
> time make -j 120
> 
Right. If you have time, could you try '-j60' and '-j30' (maybe even
'-j45' and '-j15', if you've got _a_lot_ of time! :-))?
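Something along these lines would do, I guess. Just a sketch: the
kernel tree path is made up, and the -j values / number of runs are of
course up to you (first run of each to be thrown away, as you did):

  cd /path/to/linux-4.2-rc4
  for j in 120 60 45 30 15; do
      for run in 1 2 3 4 5; do
          make clean
          # keep the build output out of the way, collect only the timings
          { time make -j $j > build.log 2>&1 ; } 2>> result-j$j.txt
      done
  done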

I'm asking this because, with hyperthreading involved, I've sometimes
seen things behave worse when *not* (over)saturating the CPU
capacity.

The explanation is that, if every vcpu is busy, meaning that every
thread is busy, it does not make much difference where you schedule the
busy vcpus.

OTOH, if only 1/2 of the threads are busy, a properly setup system will
effectively spread the load in such a way that each vcpu has a full core
available; a messed up one will, when trying to do the same, end up
scheduling stuff on siblings, even if there are idle cores available.

In this case, things are a bit more tricky. In fact, I've observed the
above while working on the Xen scheduler. Here, it is the guest (dom0)
scheduler that we are looking at, and, e.g., if the load is small
enough, Xen's scheduler will fix things up, at least to a certain
extent.
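
BTW, a quick way of seeing whether (and how much) that actually
happens is to look at where the busy vcpus land while the benchmark
runs, e.g., with something like this (domain name and refresh interval
are just examples):

  watch -n 2 'xl vcpu-list Domain-0'

The CPU column shows which pcpu each vcpu is running on, so it's easy
to spot two vcpus sharing a core while full cores sit idle.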

It's worth a try anyway, I guess, if you have time, of course.

> The first result of the 5 runs was always omitted as it would have to
> build up buffer caches etc. The Xen cases were all done in dom0, pinning
> of vcpus in the last scenario was done via dom0_vcpus_pin boot parameter
> of the hypervisor.
> 
> Here are the results (everything in seconds):
> 
>                      elapsed   user   system
> bare metal:            100    5770      805
> "random" topology:     283    6740    20700
> "simple" topology:     290    6740    22200
> "real" topology:       185    7800     8040
> 
> As expected bare metal is the best. Next is "real" topology with pinned
> vcpus (expected again - but system time already factor of 10 up!).
>
I also think that (massively) overloading biases things in favour of
pinning. In fact, pinning incurs less overhead, as there are no
scheduling decisions involved, and no migrations of vcpus among pcpus.
With the system oversubscribed to 200%, even in the non-pinning case
there shouldn't be many migrations, but there will certainly be some,
and they turn out to be pure overhead! In fact, they bring zero
benefit, as none of them can possibly put the system in a more
advantageous state, performance wise: we're fully loaded and we
want to stay fully loaded!
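
BTW, dom0_vcpus_pin is certainly the easiest way of getting the 1:1
pinning; just for reference, the same can be done at runtime with xl,
more or less like this (assuming dom0 has one vcpu per pcpu, i.e., 120
of them on your box):

  for i in $(seq 0 119); do xl vcpu-pin Domain-0 $i $i; done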

> What I didn't expect is: "random" is better than "simple" topology. 
>
Weird indeed!

> I
> could test some other topologies (e.g. everything on one socket, or even
> on one core), but I'm not sure this makes sense. I didn't check the
> exact topology result of the "random" case, maybe I'll do that tomorrow
> with another measurement.
> 
So, my test box looks like this (this is the cpu_topology part of the
'xl info -n' output):
cpu_topology           :
cpu:    core    socket     node
  0:       0        1        0
  1:       0        1        0
  2:       1        1        0
  3:       1        1        0
  4:       9        1        0
  5:       9        1        0
  6:      10        1        0
  7:      10        1        0
  8:       0        0        1
  9:       0        0        1
 10:       1        0        1
 11:       1        0        1
 12:       9        0        1
 13:       9        0        1
 14:      10        0        1
 15:      10        0        1

In Dom0, here's what I see _without_ any pinning:

root@Zhaman:~# for i in `seq 0 15`; do cat /sys/devices/system/cpu/cpu$i/topology/thread_siblings_list; done
0-1
0-1
2-3
2-3
4-5
4-5
6-7
6-7
8-9
8-9
10-11
10-11
12-13
12-13
14-15
14-15

root@Zhaman:~# cat /proc/cpuinfo |grep "physical id"
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0

root@Zhaman:~# cat /proc/cpuinfo |grep "core id"
core id         : 0
core id         : 0
core id         : 1
core id         : 1
core id         : 9
core id         : 9
core id         : 10
core id         : 10
core id         : 0
core id         : 0
core id         : 1
core id         : 1
core id         : 9
core id         : 9
core id         : 10
core id         : 10

root@Zhaman:~# cat /proc/cpuinfo |grep "cpu cores"
cpu cores       : 4
<same for all cpus>

root@Zhaman:~# cat /proc/cpuinfo |grep "siblings" 
siblings        : 8
<same for all cpus>

So, basically, as far as Dom0 on my test box is concerned, "random"
actually matches the host topology.
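
BTW, if you want to check the exact topology that the "random" case
ended up with on your box, dumping the ids straight from sysfs gives
you something directly comparable to the cpu_topology table above.
Just a sketch (adjust the cpu range to your 120 cpus):

  for i in $(seq 0 15); do
      c=$(cat /sys/devices/system/cpu/cpu$i/topology/core_id)
      s=$(cat /sys/devices/system/cpu/cpu$i/topology/physical_package_id)
      echo "cpu$i: core $c, socket $s"
  done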

Sure, without pinning, this looks equally wrong, as Xen's scheduler may
well execute, say, vcpu 0 and vcpu 4, which are not siblings, on the
same core. But then again, if the load is small, it just won't happen
(e.g., if there are only those two busy vcpus, Xen will send them to
non-sibling cores), while if it's too high, it won't matter... :-/

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



 

