[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-devel] [PATCH] turn off writable page tables


  • To: "Andrew Theurer" <habanero@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
  • Date: Tue, 25 Jul 2006 23:41:46 +0100
  • Delivery-date: Tue, 25 Jul 2006 15:42:11 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcawN+DBqUlUt41GTgCJAL/hGwVE3wARI4mg
  • Thread-topic: [Xen-devel] [PATCH] turn off writable page tables

> on Xeon MP processor, uniprocessor dom0 kernel, pae=y:
> 
> benchmark                c/s 10729 force_emulate
> ------------------------ --------- -------------
> lmbench fork+exit:       469.5833  470.3913   usec, lower is better
> lmbench fork+execve:     1241.0000 1225.7778  usec, lower is better
> lmbench fork+/sbin/bash: 12190.000 12119.000  usec, lower is better

It's kinda weird that these scores are so close -- I guess it's just
coincidence that we're getting something like an average of 10-20
PTEs updated per pagetable page, so the cost of doing multiple emulations
almost exactly balances the cost of unhooking/rehooking.

I would like to make sure we fully understand what's going on, though.

I'd like to make sure there's no 'dumb stuff' happening: that the
writable-pagetable path isn't being taken erroneously where we don't
expect it (which would cripple the scores), and that it's actually
functioning as intended, i.e. that we get one fault to unhook a
pagetable page and then one fault causing a rehook once the fork moves
on to the next page.
If you write a little test program that dirties a large chunk of memory
just before the fork, we should see writable pagetables winning easily.

It would also be good to use some of the trace buffer stuff to find out
exactly what the sequence of faults and flushes is.

I have no problem with enabling force emulation; I'd just like to fully
understand the tradeoff. I suspect the answer is that typically only a
handful of PTEs are dirty, and hence there are relatively few updates to
the parent process's page tables. It's worth understanding this, as it
also has implications for shadow pagetables.


Thanks,
Ian

> dbench 3.03              186.354   191.278    MB/sec
> reaim_aim9               1890.01   2055.97    jobs/min
> reaim_compute            2538.75   2522.90    jobs/min
> reaim_dbase              3852.14   3739.38    jobs/min
> reaim_fserver            4437.93   4389.71    jobs/min
> reaim_shared             2365.85   2362.97    jobs/min
> SPEC SDET                4315.91   4312.02    scripts/hr
> 
> These are all within the noise level (some slightly better, some
> slightly worse for emulate).  There really isn't much of difference
> here.  I'd like to propose turning on the emulate path all the time in
> xen.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

