
Re: [Xen-devel] Xen on ARM IRQ latency and scheduler overhead



On Fri, 17 Feb 2017, Stefano Stabellini wrote:
> On Fri, 17 Feb 2017, Julien Grall wrote:
> > Hi,
> > 
> > On 02/17/2017 11:02 AM, Dario Faggioli wrote:
> > > Just very quickly...
> > > 
> > > On Thu, 2017-02-16 at 15:07 -0800, Stefano Stabellini wrote:
> > > > (XEN) Active queues: 1
> > > > (XEN)   default-weight     = 256
> > > > (XEN) Runqueue 0:
> > > > (XEN)   ncpus              = 4
> > > > (XEN)   cpus               = 0-3
> > > > (XEN)   max_weight         = 256
> > > > (XEN)   instload           = 1
> > > > (XEN)   aveload            = 3208 (~1%)
> > > > (XEN)   idlers: 00000000,00000000,00000000,0000000a
> > > > (XEN)   tickled: 00000000,00000000,00000000,00000000
> > > > (XEN)   fully idle cores: 00000000,00000000,00000000,0000000a
> > > > (XEN) Domain info:
> > > > (XEN)   Domain: 0 w 256 v 4
> > > > (XEN)     1: [0.0] flags=2 cpu=0 credit=10500000 [w=256] load=3170 (~1%)
> > > > (XEN)     2: [0.1] flags=0 cpu=1 credit=10500000 [w=256] load=131072 (~50%)
> > > > (XEN)     3: [0.2] flags=0 cpu=2 credit=10500000 [w=256] load=131072 (~50%)
> > > > (XEN)     4: [0.3] flags=0 cpu=3 credit=10500000 [w=256] load=131072 (~50%)
> > > > 
> > > Status of vcpus 2, 3 and 4 is a bit weird. I'll think about it.
> > > 
> > > > (XEN)   Domain: 1 w 256 v 1
> > > > (XEN)     5: [1.0] flags=2 cpu=2 credit=9713074 [w=256] load=56 (~0%)
> > > > (XEN) Runqueue info:
> > > > (XEN) runqueue 0:
> > > > (XEN) CPUs info:
> > > > (XEN) CPU[00] runq=0, sibling=00000000,00000000,00000000,00000001,
> > > > core=00000000,00000000,00000000,00000001
> > > > 
> > > This tells me that nr_cpu_ids is very big (I think it says it is 128,
> > > i.e., the ARM default), which means cpumask_t-s are huge.
> > > 
> > > What does `xl info' say? On my (x86) test box, it's like this:
> > > 
> > >  ...
> > >  nr_cpus                : 16
> > >  max_cpu_id             : 63
> > >  ...
> > > 
> > > (and I have NR_CPUS=256, i.e., the x86 default).
> 
> Indeed, I have 127.
> 
> 
> > > Bigger cpumasks also mean slower cpumask operations, and this matters
> > > quite a bit in Credit2, because we use cpumasks a lot (and it matters in
> > > Credit1 too, where we use cpumasks a little less than in Credit2, but
> > > still quite a bit).
> > > 
> > > Isn't there a way, on ARM, to figure out at runtime that you're not
> > > going to have 128 CPUs on the platform?
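
To make the cost point concrete, below is a minimal stand-alone sketch of
why this matters. It is not Xen's cpumask code; the names only loosely
mirror xen/include/xen/cpumask.h, and real cpumask users do much more than
a single AND. The point is simply that every operation walks all the words
covering nr_cpu_ids, so a stale nr_cpu_ids of 128 costs more than one
matching the 4 CPUs actually present:

    /* Illustration only: cpumask ops scale with nr_cpu_ids. */
    #include <stdio.h>

    #define NR_CPUS          128   /* ARM build-time default */
    #define BITS_PER_LONG    (8 * sizeof(unsigned long))
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    typedef struct { unsigned long bits[BITS_TO_LONGS(NR_CPUS)]; } cpumask_t;

    static unsigned int nr_cpu_ids = NR_CPUS;   /* what max_cpu_id=127 implies */

    /* Every mask operation loops over the words covering nr_cpu_ids. */
    static void cpumask_and(cpumask_t *d, const cpumask_t *a, const cpumask_t *b)
    {
        for ( unsigned int i = 0; i < BITS_TO_LONGS(nr_cpu_ids); i++ )
            d->bits[i] = a->bits[i] & b->bits[i];
    }

    int main(void)
    {
        cpumask_t a = { { 0 } }, b = { { 0 } }, d;

        cpumask_and(&d, &a, &b);
        printf("nr_cpu_ids=%u -> %zu words per op\n",
               nr_cpu_ids, (size_t)BITS_TO_LONGS(nr_cpu_ids));

        nr_cpu_ids = 4;                         /* a 4-CPU board */
        cpumask_and(&d, &a, &b);
        printf("nr_cpu_ids=%u -> %zu words per op\n",
               nr_cpu_ids, (size_t)BITS_TO_LONGS(nr_cpu_ids));
        return 0;
    }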
> > 
> > It is just that we never set nr_cpu_ids on ARM :/. There was a patch on
> > the ML a while ago [1], but it never got applied.
> > 
> > Stefano, I think the patch is still valid. Could you apply it?
> > [1] https://patchwork.kernel.org/patch/8177261/
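
The idea is simply to derive nr_cpu_ids from the CPUs actually enumerated
at boot, instead of leaving it at the NR_CPUS build-time default. A rough
sketch of where that would go on ARM, purely for illustration (the real
change is the patch at [1], which may differ in placement and details):

    /* xen/arch/arm/smpboot.c, sketch only: once the DT/ACPI CPU nodes
     * have been parsed into cpu_possible_map, shrink nr_cpu_ids so it
     * matches the CPUs that can actually exist on this platform. */
    void __init smp_init_cpus(void)
    {
        /* ... existing enumeration of CPUs into cpu_possible_map ... */

        nr_cpu_ids = cpumask_last(&cpu_possible_map) + 1;
    }

With that, a 4-CPU board should report max_cpu_id=3 in `xl info' instead
of 127.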
> 
> I pushed the patch.
> 
> 
> > It would probably be worth doing the benchmark again with this patch
> > applied.
> 
> Unfortunately the numbers haven't changed much:
> 
>                                           AVG     MIN     MAX     WARM MAX
> NODEBUG vwfi=sleep credit2 fix cpumasks   8020    7670    10320   8390
> NODEBUG vwfi=sleep credit1 fix cpumasks   6400    6330    9650    6720
> 
> In addition to the mysterious difference between credit1 and credit2, we
> also have the difference between vwfi=idle and vwfi=sleep to deal with:
> 
> NODEBUG vwfi=idle credit2 fix cpumasks        4000    2370    4500    3350
> NODEBUG vwfi=idle credit1 fix cpumasks        3220    2180    4500    4320

Actually those are still the old numbers, sorry! I didn't update the xen
binary properly. These are the new numbers:

                                  AVG     MIN     MAX     WARM MAX
vwfi=sleep credit2 fix cpumasks   5910    5800    8520    6180
vwfi=sleep credit1 fix cpumasks   4900    4810    6910    4980
vwfi=idle  credit2 fix cpumasks   2800    1560    4550    4200
vwfi=idle  credit1 fix cpumasks   2800    1610    3420    1770

The difference between credit2 and credit1 is smaller now. In fact, it's
zero when vwfi=idle. However, with vwfi=sleep, the larger MAX value is
a bit worrying.
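
For reference, here is a sketch of how the AVG/MIN/MAX/WARM MAX columns
could be computed from raw per-interrupt latency samples (assuming
"WARM MAX" means the maximum once the first few cold iterations are
discarded); this is not the actual benchmark code used for the numbers
above:

    /* Sketch only: derive AVG/MIN/MAX/WARM MAX from raw latency samples.
     * NR_WARMUP is a hypothetical warm-up cut-off. */
    #include <inttypes.h>
    #include <stdio.h>

    #define NR_SAMPLES 1000
    #define NR_WARMUP  10

    void summarize(const uint64_t lat[NR_SAMPLES])
    {
        uint64_t sum = 0, min = UINT64_MAX, max = 0, warm_max = 0;

        for ( int i = 0; i < NR_SAMPLES; i++ )
        {
            sum += lat[i];
            if ( lat[i] < min )
                min = lat[i];
            if ( lat[i] > max )
                max = lat[i];
            if ( i >= NR_WARMUP && lat[i] > warm_max )
                warm_max = lat[i];
        }

        printf("AVG %"PRIu64" MIN %"PRIu64" MAX %"PRIu64" WARM MAX %"PRIu64"\n",
               sum / NR_SAMPLES, min, max, warm_max);
    }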

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

