
RE: [Xen-devel] [PATCH] turn off writable page tables



> And it does make a difference in this case.  I now have a test program
> which dirties a number of virtually contiguous pages then forks (it
> also resets xen perf counters before fork and collects perf counters
> right after fork), then records the elapsed time for the fork.  The
> difference is quite amazing in this case.  For both writable and
> emulate, I ran with a range of dirty pages, from 1280 to 128000.  The
> elapsed times for fork are quite linear from a small number to a large
> number of dirty pages.  Below are the min and max:
> 
>          1280 pages    128000 pages
> wtpt:     813 usec      37552 usec
> emulate: 3279 usec     283879 usec

Good, at least that suggests that the code works for the usage it was
intended for. 
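For reference, a user-space microbenchmark along those lines could be as
simple as the sketch below. It is purely illustrative (not your actual
test program), and it leaves out the xen perf counter reset/collection;
the page count and names are made up.

/* Dirty NPAGES anonymous pages, then time a fork() from the parent. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/wait.h>

#define NPAGES 1280                  /* vary from 1280 to 128000 */

int main(void)
{
    long psize = sysconf(_SC_PAGESIZE);
    char *buf = mmap(NULL, NPAGES * psize, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct timeval t0, t1;
    long i, usec;
    pid_t pid;

    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch one byte per page so every PTE is present and dirty. */
    for (i = 0; i < NPAGES; i++)
        buf[i * psize] = 1;

    gettimeofday(&t0, NULL);
    pid = fork();
    if (pid == 0)
        _exit(0);                    /* child exits immediately */
    gettimeofday(&t1, NULL);

    waitpid(pid, NULL, 0);
    usec = (long)(t1.tv_sec - t0.tv_sec) * 1000000L
           + (t1.tv_usec - t0.tv_usec);
    printf("fork of %d dirty pages: %ld usec\n", NPAGES, usec);
    return 0;
}

Timing fork() in the parent right after it returns should roughly
capture the cost of write-protecting and copying the dirty PTEs during
copy_page_range(), which is the path being compared here.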

> So, in a -perfect-world- this works great.  Problem is most workloads
> don't appear to have a vast percentage of entries that need to be
> updated.  I'll go ahead and expand this test to find out what the
> threshold is to break even.  I'll also see if we can implement a
> batched call in fork to update the parent - I hope this will show just
> as good performance even when most entries need modification and even
> better performance over wtpt with a low number of entries modified.

With license to make more invasive changes to core Linux mm it certainly
should be possible to optimize this specific case with a batched update
fairly easily. You could even go further and implement a 'make all PTEs
in a pagetable RO' hypercall, possibly including a copy to the child. This
could potentially work better than the current 'late pin'; at least the
validation would be incremental rather than in one big hit at the end.
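To make the batched-update idea concrete, a rough sketch of the pattern
is below. It assumes kernel context with Xen's existing multi-entry
HYPERVISOR_mmu_update interface (struct mmu_update, DOMID_SELF) available;
the helper names, the batch size, and the way it would hook into the real
copy_page_range() plumbing are made up for illustration.

/*
 * Conceptual sketch only -- not the actual Linux fork path.  Instead of
 * write-protecting parent PTEs one at a time during fork, queue the
 * updates and flush them with a single multi-entry hypercall.
 */

#define WRPROT_BATCH 64

static struct mmu_update wrprot_batch[WRPROT_BATCH];
static unsigned int wrprot_count;

static void wrprot_flush(void)
{
    unsigned int done;

    if (wrprot_count == 0)
        return;
    /* One hypercall validates and applies the whole batch. */
    if (HYPERVISOR_mmu_update(wrprot_batch, wrprot_count, &done,
                              DOMID_SELF) < 0)
        BUG();
    wrprot_count = 0;
}

/*
 * Called for each parent PTE during fork instead of writing the PTE
 * directly.  'ptr' is the machine address of the PTE (low bits zero,
 * i.e. a normal PT update); 'val' is its current value.
 */
static void wrprot_queue(uint64_t ptr, uint64_t val)
{
    wrprot_batch[wrprot_count].ptr = ptr;
    wrprot_batch[wrprot_count].val = val & ~(uint64_t)_PAGE_RW;
    if (++wrprot_count == WRPROT_BATCH)
        wrprot_flush();
}

The point is that the hypervisor validates a whole batch per hypercall
rather than taking a fault or an emulation per PTE; the hypothetical
'make all PTEs in a pagetable RO' hypercall would go one step further by
moving the loop into the hypervisor itself.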

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

