
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



On Tue, 2015-07-28 at 18:17 +0200, Dario Faggioli wrote:

> So, my test box looks like this:
> cpu_topology           :
> cpu:    core    socket     node
>   0:       0        1        0
>   1:       0        1        0
>   2:       1        1        0
>   3:       1        1        0
>   4:       9        1        0
>   5:       9        1        0
>   6:      10        1        0
>   7:      10        1        0
>   8:       0        0        1
>   9:       0        0        1
>  10:       1        0        1
>  11:       1        0        1
>  12:       9        0        1
>  13:       9        0        1
>  14:      10        0        1
>  15:      10        0        1
> 
> In Dom0, here's what I see _without_ any pinning:
> 
And now I've tried with dom0_vcpus_pin:

root@Zhaman:~# xl vcpu-list 
Name                                ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
Domain-0                             0     0    0   -b-       6.8  0 / all
Domain-0                             0     1    1   -b-       1.6  1 / all
Domain-0                             0     2    2   -b-       2.3  2 / all
Domain-0                             0     3    3   -b-       1.5  3 / all
Domain-0                             0     4    4   -b-       3.2  4 / all
Domain-0                             0     5    5   -b-       1.5  5 / all
Domain-0                             0     6    6   -b-       2.0  6 / all
Domain-0                             0     7    7   -b-       2.2  7 / all
Domain-0                             0     8    8   -b-       1.6  8 / all
Domain-0                             0     9    9   -b-       1.6  9 / all
Domain-0                             0    10   10   r--       2.1  10 / all
Domain-0                             0    11   11   -b-       1.5  11 / all
Domain-0                             0    12   12   -b-       2.4  12 / all
Domain-0                             0    13   13   -b-       1.1  13 / all
Domain-0                             0    14   14   -b-       2.4  14 / all
Domain-0                             0    15   15   -b-       2.4  15 / all
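
For the record, the pinning that dom0_vcpus_pin establishes at boot can
also be reproduced at runtime with xl vcpu-pin; a minimal sketch, assuming
the 16-vCPU Dom0 shown above:

  # Pin Dom0 vCPU i to pCPU i, mimicking what dom0_vcpus_pin does at boot
  for i in $(seq 0 15); do
      xl vcpu-pin Domain-0 $i $i
  done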

> root@Zhaman:~# for i in `seq 0 15`; do cat /sys/devices/system/cpu/cpu$i/topology/thread_siblings_list; done
> 0-1
> 0-1
> 2-3
> 2-3
> 4-5
> 4-5
> 6-7
> 6-7
> 8-9
> 8-9
> 10-11
> 10-11
> 12-13
> 12-13
> 14-15
> 14-15
> 
Same result.

> root@Zhaman:~# cat /proc/cpuinfo |grep "physical id"
> physical id   : 1
> physical id   : 1
> physical id   : 1
> physical id   : 1
> physical id   : 1
> physical id   : 1
> physical id   : 1
> physical id   : 1
> physical id   : 0
> physical id   : 0
> physical id   : 0
> physical id   : 0
> physical id   : 0
> physical id   : 0
> physical id   : 0
> physical id   : 0
> 
Same result.

> root@Zhaman:~# cat /proc/cpuinfo |grep "core id"
> core id               : 0
> core id               : 0
> core id               : 1
> core id               : 1
> core id               : 9
> core id               : 9
> core id               : 10
> core id               : 10
> core id               : 0
> core id               : 0
> core id               : 1
> core id               : 1
> core id               : 9
> core id               : 9
> core id               : 10
> core id               : 10
> 
Same result.

> root@Zhaman:~# cat /proc/cpuinfo |grep "cpu cores"
> cpu cores     : 4
> <same for all cpus>
> 
Same result.

> root@Zhaman:~# cat /proc/cpuinfo |grep "siblings" 
> siblings      : 8
> <same for all cpus>
> 
And same result here as well.
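
As an aside, the same information checked piecewise above is all exposed
in one place under sysfs; a convenience sketch, using the standard Linux
topology attributes:

  # Per CPU: thread siblings, core id and package id, as seen by the guest
  for i in $(seq 0 15); do
      t=/sys/devices/system/cpu/cpu$i/topology
      echo "cpu$i: siblings=$(cat $t/thread_siblings_list)" \
           "core_id=$(cat $t/core_id)" \
           "pkg_id=$(cat $t/physical_package_id)"
  done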

So, for Dom0, pinning does not make much difference as far as the guessed
topology is concerned, which actually makes sense, considering how pCPUs
are brought up in Xen, and assuming that Linux does something similar
(no, I'm not familiar with that code).

Pinning should, however, make a difference in whether, and how much, this
topology (even though it matches the host one) misleads the guest
scheduler. Actually, I still think it does; it's just harder to identify
than we expected, which may be seen as a good thing (it does not look like
a big deal, after all :-D) or a bad thing (it may start biting us at any
time, without us noticing promptly :-/).
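
One way to check how much the scheduler is (or isn't) being misled would
be to look at the sched domains Linux builds from that topology; a sketch,
assuming a kernel built with CONFIG_SCHED_DEBUG:

  # Show name and flags of each scheduling domain level (SMT/MC/NUMA)
  # that the guest scheduler built for cpu0
  for d in /proc/sys/kernel/sched_domain/cpu0/domain*; do
      echo "$d: name=$(cat $d/name) flags=$(cat $d/flags)"
  done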

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



 

