
Re: [Xen-devel] compute performance problem



I turned on the round-robin scheduler (rrobin):
    rack116-xen:~# xm dmesg | grep -i sched
    (XEN) Using scheduler: Round-Robin Scheduler (rrobin)
and got the same range of execution times as with BVT:
Run Time  =    183.780
Run Time  =    157.980
Run Time  =     65.770
Run Time  =     65.530
Run Time  =     86.000
Run Time  =     65.530
Run Time  =     79.270
Run Time  =     88.150
Run Time  =     69.600
Run Time  =     64.900
Run Time  =    246.310
Run Time  =    252.230
Run Time  =     64.880



" Does the domU have the same amount of memory as the native Linux? Is 

Yes.  I reran on native linux with 512MB and the job ran in 64s every time.

" the native Linux running on a single cpu, just like the domU?

Yes.  The Dell 1650 has one CPU installed and no HT (it is a PIII).
I've seen the effect on dual P4s as well.

" domU definitely quiescent apart from the mpi job?

There are some background daemons such as gmond and rwhod, but that is the
same on all setups.

" directly observed the app taking 250 seconds

Good question.  I wondered the same thing, so I made the script
ssh to the NTP server to print the date between each run.  And ...
yes, the elapsed times match the wall clock from the NTP server.
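For reference, the cross-check looks roughly like this (just a sketch; NTPHOST and JOB are placeholders, not the actual hostname or command from my setup):

```shell
#!/bin/sh
# Bracket each benchmark run with a timestamp taken from an independent
# host, so the reported elapsed time cannot be skewed by the domU's own
# clock.  NTPHOST and JOB are placeholders for the real server and job.
NTPHOST=${NTPHOST:-time.example.org}
JOB=${JOB:-./run_mpi_job}

remote_now() {
    # Seconds since the epoch as seen by the reference host;
    # NTPHOST=local falls back to the local clock for a dry run.
    if [ "$NTPHOST" = "local" ]; then
        date +%s
    else
        ssh "$NTPHOST" date +%s
    fi
}

run_once() {
    t0=$(remote_now)
    $JOB
    t1=$(remote_now)
    echo "wallclock elapsed: $((t1 - t0))s"
}

# usage: NTPHOST=<ntp server> JOB=<benchmark> run_once   (once per run)
```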

" If the app is cpu-bound, there are no other apps running in the domain, 
" and no other domains contending for that cpu, then it is hard to 
" imagine where the slowdown coudl come from.

Agreed.  If the native linux execution time weren't so consistent, I'd blame
the app.  I sent mail upstream to the app authors to see if they have a
suggestion.  It is part of CardioWave, a simulation of the electrical pulses
that flow through the heart (http://cardiowave.duke.edu).

I tried some tight loops and got consistent durations for time scales from
fractions of a second to 2000 seconds.  The loops are like this:
    time for ((i=0;i<100000;++i)); do : ;done
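A slightly more systematic version of that test, timing the same loop several times so outliers like the 250s runs would stand out immediately (a sketch; run and iteration counts are arbitrary):

```shell
#!/bin/sh
# Time the same CPU-bound shell loop repeatedly and print each duration,
# making run-to-run variance visible at a glance.
time_runs() {
    # $1: number of timed runs, $2: loop iterations per run
    n=0
    while [ "$n" -lt "$1" ]; do
        t0=$(date +%s)
        j=0
        while [ "$j" -lt "$2" ]; do j=$((j+1)); done
        t1=$(date +%s)
        echo "run $n: $((t1 - t0))s"
        n=$((n+1))
    done
}

time_runs 5 100000
```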


Here are /proc stats during the app compute phase:
xenU vmstat:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0      0   4584    456 248044    0    0     8     0  108    15 100  0  0  0
 1  0      0   4584    456 248044    0    0     0     0  106    11 100  0  0  0
 1  0      0   4584    456 248044    0    0     0     0  106     9 100  0  0  0
 1  0      0   4584    456 248044    0    0     0    24  110    17 100  0  0  0 

xen0 vmstat:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 0  0   9108   1648   2324  10532    0    0     0    24   59    28  0  0 100  0
 0  0   9108   1648   2324  10532    0    0     0     0   38    14  0  0 100  0
 0  0   9108   1640   2332  10532    0    0     0    88   62    37  0  1 94  5


xenU interrupts per second:
irq128:         0 Dynamic-irq  misdire  irq131:         0 Dynamic-irq  blkif   
irq129:         0 Dynamic-irq  ctrl-if  irq132:         7 Dynamic-irq  eth0    
irq130:       100 Dynamic-irq  timer   



xen0 interrupts per second:
irq  1:         0 Phys-irq  i8042       irq128:         0 Dynamic-irq  misdire 
irq  6:         0                       irq129:         0 Dynamic-irq  ctrl-if 
irq 12:         0                       irq130:        38 Dynamic-irq  timer   
irq 14:         0 Phys-irq  ide0        irq131:         0 Dynamic-irq  console 
irq 17:         6 Phys-irq  eth0        irq132:         0 Dynamic-irq  net-be- 
irq 18:         6 Phys-irq  aic7xxx     irq133:         0 Dynamic-irq  blkif-b 
irq 19:         0 Phys-irq  aic7xxx     irq134:         0 Dynamic-irq  vif2.0  
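The per-second figures above come from diffing /proc/interrupts samples taken a known interval apart; roughly like this (a sketch assuming the single-count-column layout of a uniprocessor box, as on this machine):

```shell
#!/bin/sh
# Compute per-IRQ interrupt rates from two /proc/interrupts snapshots
# taken a known number of seconds apart.  Assumes one count column
# (uniprocessor); SMP boxes have one column per CPU.
irq_rates() {
    # $1: first snapshot, $2: second snapshot, $3: seconds between them
    awk -v dt="$3" '
        NR == FNR { base[$1] = $2; next }   # first file: remember counts
        ($1 in base) { printf "%s %.1f/s\n", $1, ($2 - base[$1]) / dt }
    ' "$1" "$2"
}

# usage:
#   cat /proc/interrupts > /tmp/i1; sleep 10; cat /proc/interrupts > /tmp/i2
#   irq_rates /tmp/i1 /tmp/i2 10
```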


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

