
Re: [Xen-devel] windows tmem

> On Wed, May 29, 2013 at 12:19:25AM +0000, James Harper wrote:
> > >
> > > I am not familiar with the Windows APIs, but it sounds like you
> > > want to use the tmem ephemeral disk cache as a secondary cache
> > > (which is BTW what Linux does too).
> > >
> > > That is OK the only thing you need to keep in mind that the
> > > hypervisor might flush said cache out if it decides to do it
> > > (say a new guest is launched and it needs the memory that
> > > said cache is using).
> > >
> > > So tmem_get might report that it no longer has the page.
> >
> > Yes I've read the brief :)
> >
> > I actually wanted to implement the equivalent of 'frontswap' originally by
> > trapping writes to the pagefile. A bit of digging and testing suggests it
> > may not be possible to determine when a page written to the pagefile is
> > discarded, meaning that tmem use would just grow until full and then stop
> > being useful unless I eject pages on an LRU basis or something. So ephemeral
> > tmem as a best-effort write-through cache might be the best and easiest
> > starting point.
> >
> <nods>
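To make the get-side contract concrete: because the pool is ephemeral, the read path has to treat a miss as the normal case and fall back to disk. A minimal sketch, using made-up `tmem_put`/`tmem_get` wrappers (the names and the one-slot "hypervisor" model are purely illustrative, not the real hypercall interface):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-ins for the real tmem operations: an ephemeral
 * pool may drop any page at any time, so a get can fail even after a
 * successful put. Modeled here with a tiny one-slot cache. */
static uint8_t slot[PAGE_SIZE];
static long slot_index = -1;          /* -1 => empty/evicted */

static void tmem_put(long index, const uint8_t *page)
{
    slot_index = index;
    memcpy(slot, page, PAGE_SIZE);
}

static int tmem_get(long index, uint8_t *page)
{
    if (slot_index != index)
        return 0;                     /* hypervisor discarded it */
    memcpy(page, slot, PAGE_SIZE);
    return 1;
}

static void tmem_evict_all(void) { slot_index = -1; }

/* Read path: try the ephemeral cache first; a miss is not an error,
 * it just means the data must come from disk as usual. */
static int read_page(long index, uint8_t *page,
                     void (*disk_read)(long, uint8_t *))
{
    if (tmem_get(index, page))
        return 1;                     /* cache hit */
    disk_read(index, page);
    return 0;
}
```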

Unfortunately it gets worse... I'm testing on Windows 2003 at the moment, and 
it seems to always write out data in 64k chunks, aligned to a 4k boundary. It 
then reads back one or more of those pages, and may later re-use the same part 
of the swapfile for something else. All reads appear to be 4k in size, but 
there may be some grouping of those requests at a lower layer.

So I would end up caching up to 16x the actual data, with no way of knowing 
which of those 16 pages are actually being swapped out and which are just 
being written to disk optimistically without actually being paged out.

I'll do a bit of analysis of the MDL being written, as that may give me some 
more information, but it's not looking as good as I'd hoped.
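One thing the MDL does expose cheaply is the transfer's offset and length, which is enough to see how many pages each write spans. A portable sketch with a toy struct standing in for the fields the WDK accessors (`MmGetMdlByteOffset`, `MmGetMdlByteCount`) would return; the real MDL layout is in the WDK, not here:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Toy stand-in for the two MDL fields relevant to this calculation. */
struct fake_mdl {
    uint32_t byte_offset;   /* offset into the first page */
    uint32_t byte_count;    /* length of the transfer */
};

/* Number of pages a transfer spans -- the same round-up the WDK's
 * ADDRESS_AND_SIZE_TO_SPAN_PAGES() macro performs. */
static uint32_t mdl_span_pages(const struct fake_mdl *mdl)
{
    return (mdl->byte_offset + mdl->byte_count + PAGE_SIZE - 1)
           / PAGE_SIZE;
}
```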


Xen-devel mailing list


