
Re: [Xen-devel] [PATCH 0/2] Credit2: fix per-socket runqueue setup



On lun, 2014-09-01 at 14:59 +0100, George Dunlap wrote:
> On 08/25/2014 09:31 AM, Jan Beulich wrote:
> >>>> On 22.08.14 at 19:15, <dario.faggioli@xxxxxxxxxx> wrote:
> >> root@tg03:~# xl dmesg |grep -i runqueue
> >> (XEN) Adding cpu 0 to runqueue 1
> >> (XEN)  First cpu on runqueue, activating
> >> (XEN) Adding cpu 1 to runqueue 1
> >> (XEN) Adding cpu 2 to runqueue 1
> >> (XEN) Adding cpu 3 to runqueue 1
> >> (XEN) Adding cpu 4 to runqueue 1
> >> (XEN) Adding cpu 5 to runqueue 1
> >> (XEN) Adding cpu 6 to runqueue 1
> >> (XEN) Adding cpu 7 to runqueue 1
> >> (XEN) Adding cpu 8 to runqueue 1
> >> (XEN) Adding cpu 9 to runqueue 1
> >> (XEN) Adding cpu 10 to runqueue 1
> >> (XEN) Adding cpu 11 to runqueue 1
> >> (XEN) Adding cpu 12 to runqueue 0
> >> (XEN)  First cpu on runqueue, activating
> >> (XEN) Adding cpu 13 to runqueue 0
> >> (XEN) Adding cpu 14 to runqueue 0
> >> (XEN) Adding cpu 15 to runqueue 0
> >> (XEN) Adding cpu 16 to runqueue 0
> >> (XEN) Adding cpu 17 to runqueue 0
> >> (XEN) Adding cpu 18 to runqueue 0
> >> (XEN) Adding cpu 19 to runqueue 0
> >> (XEN) Adding cpu 20 to runqueue 0
> >> (XEN) Adding cpu 21 to runqueue 0
> >> (XEN) Adding cpu 22 to runqueue 0
> >> (XEN) Adding cpu 23 to runqueue 0
> >>
> >> Which makes a lot more sense. :-)
> > But it looks suspicious that the low numbered CPUs get assigned to
> > runqueue 1. Is there an explanation for this, or are surprises to be
> > expected on larger than dual-socket systems?
> 
Not sure what kind of surprises you're thinking of, but I have a big box
handy. I'll test the new version of the series on it, and report what
happens.

> Well the explanation is most likely from the cpu_topology info from the 
> cover letter (0/2): On his machine, cpus 0-11 are on socket 1, and cpus 
> 12-23 are on socket 0.  
>
Exactly, here it is again, coming from `xl info -n'.

cpu_topology           :
cpu:    core    socket     node
  0:       0        1        0
  1:       0        1        0
  2:       1        1        0
  3:       1        1        0
  4:       2        1        0
  5:       2        1        0
  6:       8        1        0
  7:       8        1        0
  8:       9        1        0
  9:       9        1        0
 10:      10        1        0
 11:      10        1        0
 12:       0        0        1
 13:       0        0        1
 14:       1        0        1
 15:       1        0        1
 16:       2        0        1
 17:       2        0        1
 18:       8        0        1
 19:       8        0        1
 20:       9        0        1
 21:       9        0        1
 22:      10        0        1
 23:      10        0        1
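(Not part of the original thread, just for illustration: the runqueue
assignment above can be reproduced by grouping the quoted cpu_topology
table by its socket column, which is what Credit2's per-socket runqueue
setup effectively does.)

```python
from collections import defaultdict

# The cpu_topology table from `xl info -n`, quoted verbatim above.
TOPOLOGY = """\
  0:       0        1        0
  1:       0        1        0
  2:       1        1        0
  3:       1        1        0
  4:       2        1        0
  5:       2        1        0
  6:       8        1        0
  7:       8        1        0
  8:       9        1        0
  9:       9        1        0
 10:      10        1        0
 11:      10        1        0
 12:       0        0        1
 13:       0        0        1
 14:       1        0        1
 15:       1        0        1
 16:       2        0        1
 17:       2        0        1
 18:       8        0        1
 19:       8        0        1
 20:       9        0        1
 21:       9        0        1
 22:      10        0        1
 23:      10        0        1
"""

def sockets_from_topology(text):
    """Group CPU numbers by the socket column of the topology table."""
    groups = defaultdict(list)
    for line in text.strip().splitlines():
        # Columns are: cpu:, core, socket, node
        cpu, _core, socket, _node = line.replace(":", " ").split()
        groups[int(socket)].append(int(cpu))
    return dict(groups)

groups = sockets_from_topology(TOPOLOGY)
# CPUs 0-11 are on socket 1 (runqueue 1), CPUs 12-23 on socket 0
# (runqueue 0), matching the "Adding cpu N to runqueue M" log above.
print(groups)
```

So the low-numbered CPUs landing on runqueue 1 is just a direct
consequence of the socket numbering the hypervisor sees.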

> Why that's the topology reported (I presume in 
> ACPI?) I'm not sure.
> 
Me neither. BTW, on baremetal, here's what I see:
root@tg03:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
node 0 size: 18432 MB
node 0 free: 17927 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
node 1 size: 18419 MB
node 1 free: 17926 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10 

Also:
root@tg03:~# for i in `seq 0 23`; do echo "CPU$i is on socket `cat /sys/bus/cpu/devices/cpu$i/topology/physical_package_id`"; done
CPU0 is on socket 1
CPU1 is on socket 0
CPU2 is on socket 1
CPU3 is on socket 0
CPU4 is on socket 1
CPU5 is on socket 0
CPU6 is on socket 1
CPU7 is on socket 0
CPU8 is on socket 1
CPU9 is on socket 0
CPU10 is on socket 1
CPU11 is on socket 0
CPU12 is on socket 1
CPU13 is on socket 0
CPU14 is on socket 1
CPU15 is on socket 0
CPU16 is on socket 1
CPU17 is on socket 0
CPU18 is on socket 1
CPU19 is on socket 0
CPU20 is on socket 1
CPU21 is on socket 0
CPU22 is on socket 1
CPU23 is on socket 0

I've noticed this before, but, TBH, I never dug into the cause of the
discrepancy between Xen and Linux.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
