
Re: [Xen-devel] HYBRID: PV in HVM container



> JFYI.. as expected, running in ring 0 and not bouncing syscalls thru
> xen, syscalls do very well. fork/execs are slow, probably because VPIDs
> are turned off right now. I'm trying to figure VPIDs out, and hopefully
> that will help. BTW, don't compare to anything else; both kernels
> below are unoptimized debug kernels.
> 
> LMbench:
> Processor, Processes - times in microseconds - smaller is better
> ----------------------------------------------------------------
> Host                 OS  Mhz null null      open selct sig  sig  fork exec sh
>                              call  I/O stat clos TCP   inst hndl proc proc proc
> --------- ------------- ---- ---- ---- ---- ---- ----- ---- ---- ---- ---- ----
> STOCK     Linux 2.6.39+ 2771 0.68 0.91 2.13 4.45 4.251 0.82 3.87 433. 1134 3145
> HYBRID    Linux 2.6.39m 2745 0.13 0.22 0.88 2.04 3.287 0.28 1.11 526. 1393 3923
> 
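For reference, a minimal sketch of what turning VPIDs on looks like on
the VMX side (SECONDARY_EXEC_ENABLE_VPID, SECONDARY_EXEC_CONTROL,
VIRTUAL_PROCESSOR_ID and __vmwrite() are the existing names from
vmcs.h; the helper and how the vpid value gets allocated are just for
illustration):

    /* Tag the guest's TLB entries with an ASID so they survive VM
     * entry/exit instead of being flushed on every transition. */
    static void hybrid_enable_vpid(uint16_t vpid, uint32_t secondary_ctls)
    {
        ASSERT(vpid != 0);              /* VPID 0 is reserved for the VMM */

        secondary_ctls |= SECONDARY_EXEC_ENABLE_VPID;
        __vmwrite(SECONDARY_EXEC_CONTROL, secondary_ctls);
        __vmwrite(VIRTUAL_PROCESSOR_ID, vpid);
    }

The other half is invalidating those tagged entries (INVVPID) at the
right points when guest mappings change.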

JFYI again, I seem to have caught up with pure PV on almost all of the
benchmarks with some optimizations:

Processor, Processes - times in microseconds - smaller is better
----------------------------------------------------------------
Host                 OS  Mhz null null      open selct sig  sig  fork exec sh
                             call  I/O stat clos TCP   inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ----- ---- ---- ---- ---- ----
STOCK:    Linux 2.6.39+ 2771 0.68 0.91 2.13 4.45 4.251 0.82 3.87 433. 1134 3145
N4        Linux 2.6.39m 2745 0.13 0.21 0.86 2.03 3.279 0.28 1.18 479. 1275 3502
N5        Linux 2.6.39m 2752 0.13 0.21 0.91 2.07 3.284 0.28 1.14 439. 1168 3155

Context switching - times in microseconds - smaller is better
-------------------------------------------------------------
Host                 OS 2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
                        ctxsw  ctxsw  ctxsw ctxsw  ctxsw   ctxsw   ctxsw
--------- ------------- ----- ------ ------ ------ ------ ------- -------
STOCK:    Linux 2.6.39+ 5.800 6.2400 6.8700 6.6700 8.4600 7.13000 8.63000
N4        Linux 2.6.39m 6.420 6.9300 8.0100 7.2600 8.7600 7.97000 9.25000
N5        Linux 2.6.39m 6.650 7.0000 7.8400 7.3900 8.8000 7.90000 9.06000

*Local* Communication latencies in microseconds - smaller is better
-------------------------------------------------------------------
Host                 OS 2p/0K  Pipe AF     UDP  RPC/   TCP  RPC/ TCP
                        ctxsw       UNIX         UDP         TCP conn
--------- ------------- ----- ----- ---- ----- ----- ----- ----- ----
STOCK:    Linux 2.6.39+ 5.800  18.9 22.3  28.7  32.8  34.9  44.6 89.8
N4        Linux 2.6.39m 6.420  17.1 18.1  26.9  28.7  34.2  40.1 76.3
N5        Linux 2.6.39m 6.650  18.1 17.7  24.4  33.4  33.9  40.7 76.7

File & VM system latencies in microseconds - smaller is better
--------------------------------------------------------------
Host                 OS   0K File      10K File      Mmap    Prot    Page
                        Create Delete Create Delete  Latency Fault   Fault
--------- ------------- ------ ------ ------ ------  ------- -----   -----
STOCK:    Linux 2.6.39+                               3264.0 0.828 3.00000
N4        Linux 2.6.39m                               3990.0 1.351 4.00000
N5        Linux 2.6.39m                               3362.0 0.235 4.00000


where the only difference between N4 and N5 is that in N5 I've enabled
vmexits only for page faults on write protection, i.e., error code 0x3.
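In VMCS terms that boils down to the #PF error-code mask/match pair
(sketch only; the helper is made up, but EXCEPTION_BITMAP,
PAGE_FAULT_ERROR_CODE_MASK/MATCH, TRAP_page_fault and the PFEC_* bits
are the existing names):

    /* With #PF intercepted, the CPU only takes a vmexit when
     * (error code & MASK) == MATCH.  Using present|write (0x3) for both
     * limits exits to write-protection faults; every other page fault is
     * delivered straight to the guest kernel running in ring 0. */
    static void hybrid_limit_pf_vmexits(uint32_t exception_bitmap)
    {
        exception_bitmap |= 1U << TRAP_page_fault;
        __vmwrite(EXCEPTION_BITMAP, exception_bitmap);

        __vmwrite(PAGE_FAULT_ERROR_CODE_MASK,
                  PFEC_page_present | PFEC_write_access);
        __vmwrite(PAGE_FAULT_ERROR_CODE_MATCH,
                  PFEC_page_present | PFEC_write_access);
    }

which is what accounts for the Page Fault column dropping between N4
and N5 above.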

I'm trying to figure out how the vtlb implementation relates to SDM
28.3.5. Glancing at the code, it seems the vtlb in xen is mostly for
shadows, which I am not worrying about for now (I've totally ignored
migration for now). Any thoughts, anybody?

Also, at present I am not using vtsc; is it worth looking into? Some of
the tsc stuff makes my head spin just like the shadow code does :)...
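In case it helps frame the question, the trade-off as I understand it
(sketch only; CPU_BASED_RDTSC_EXITING, CPU_BASED_VM_EXEC_CONTROL and
TSC_OFFSET are the existing VMX names, the helper and the policy choice
are made up):

    /* Two ways to handle guest RDTSC: let it run natively with a fixed
     * offset applied by hardware, or trap every RDTSC and emulate it
     * (vtsc-style), which is the potentially expensive option here.
     * The "use TSC offsetting" execution control also has to be set for
     * the offset to take effect. */
    static void hybrid_set_tsc_mode(uint32_t exec_control, int emulate_tsc,
                                    uint64_t tsc_offset)
    {
        if ( emulate_tsc )
            exec_control |= CPU_BASED_RDTSC_EXITING;   /* vmexit on RDTSC */
        else
        {
            exec_control &= ~CPU_BASED_RDTSC_EXITING;  /* native RDTSC...      */
            __vmwrite(TSC_OFFSET, tsc_offset);         /* ...plus fixed offset */
        }

        __vmwrite(CPU_BASED_VM_EXEC_CONTROL, exec_control);
    }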

thanks,
Mukesh




 

