
Re: [Xen-devel] windows tmem



> > Fresh install of 2008r2 with 512MB of memory and tmem active, with
> > updates
> > installing for the last 30 minutes:
> >
> > put_success_count = 1286906
> > put_fail_count    = 0
> > get_success_count = 511937
> > get_fail_count    = 286789
> >
> > a 'get fail' is a 'miss'.
> >
> 
> Hmm. On the face of it a much higher miss rate than 2K3, but the workload is
> different so it's hard to tell how comparable the numbers are. I wonder
> whether use of ephemeral tmem is an issue because of the get-implies-flush
> characteristic. I guess you'd always expect a put between gets for a pagefile
> but it might be interesting to see what miss rate you get with persistent
> tmem.
> 

After running for a while longer:

put_success_count = 15732240
put_fail_count    = 0
get_success_count = 10330032
get_fail_count    = 4460352

which is a similar hit rate of get_success vs get_fail (~70%, versus ~64% in the 
first sample), but a much better ratio of get_success to put_success. If 
ephemeral pages are discarded on read, this tells me that around 66% of the pages 
I put into tmem were read back in, versus around 40% in my first sample.
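
Spelling out the arithmetic from the counters above:

  get hit rate     = get_success / (get_success + get_fail)
                   = 10330032 / (10330032 + 4460352)  ~= 70%
  put-to-get ratio = get_success / put_success
                   = 10330032 / 15732240              ~= 66%

and ~64% / ~40% respectively for the first sample.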

For persistent tmem to work I'd need to know when Windows will not need the 
memory again, which is information I don't have access to, or alternatively 
maintain my own LRU structure. What I really need to know is when Windows 
discards a page from memory; all I know so far is when it writes a page out to 
disk, which only tells me that at some future time it might discard that page 
from memory.
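
To make the "maintain my own LRU structure" idea concrete, here's a minimal 
user-space sketch (not driver code, and it doesn't call tmem at all): a 
doubly-linked list where each put or re-reference moves the key to the front, 
and the coldest key is handed to an eviction callback once capacity is 
exceeded. The capacity, the key type and the report_evict callback are all 
placeholders; in the driver the callback would presumably issue a tmem flush 
for that page, and the lookup would need a hash table rather than the linear 
scan used here.

/*
 * Minimal LRU sketch. Placeholder assumptions: LRU_CAPACITY, the
 * uint64_t key (e.g. a pagefile offset) and the evict callback are
 * all illustrative, not taken from any real driver.
 */
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define LRU_CAPACITY 4   /* tiny, just for demonstration */

struct lru_entry {
    uint64_t key;                   /* e.g. pagefile offset / page index */
    struct lru_entry *prev, *next;
};

struct lru {
    struct lru_entry *head, *tail;  /* head = most recently used */
    size_t count;
    void (*evict)(uint64_t key);    /* called when an entry falls off the end */
};

static void lru_unlink(struct lru *l, struct lru_entry *e)
{
    if (e->prev) e->prev->next = e->next; else l->head = e->next;
    if (e->next) e->next->prev = e->prev; else l->tail = e->prev;
    e->prev = e->next = NULL;
}

static void lru_push_front(struct lru *l, struct lru_entry *e)
{
    e->prev = NULL;
    e->next = l->head;
    if (l->head) l->head->prev = e; else l->tail = e;
    l->head = e;
}

/* Record a put (or a re-reference); evict the coldest key if over capacity. */
static void lru_touch(struct lru *l, uint64_t key)
{
    struct lru_entry *e;

    /* Linear scan stands in for a proper hash lookup. */
    for (e = l->head; e; e = e->next) {
        if (e->key == key) {
            lru_unlink(l, e);
            lru_push_front(l, e);
            return;
        }
    }

    if (l->count >= LRU_CAPACITY) {
        struct lru_entry *victim = l->tail;
        lru_unlink(l, victim);
        l->count--;
        if (l->evict)
            l->evict(victim->key);  /* e.g. flush this page from the pool */
        free(victim);
    }

    e = malloc(sizeof(*e));
    if (!e)
        return;
    e->key = key;
    lru_push_front(l, e);
    l->count++;
}

static void report_evict(uint64_t key)
{
    printf("would flush page key %llu from the pool\n", (unsigned long long)key);
}

int main(void)
{
    struct lru l = { .evict = report_evict };
    uint64_t keys[] = { 1, 2, 3, 1, 4, 5, 6 };  /* keys 2 and 3 get evicted */
    size_t i;

    for (i = 0; i < sizeof(keys) / sizeof(keys[0]); i++)
        lru_touch(&l, keys[i]);
    return 0;
}

The point of the sketch is only the bookkeeping: touching on every put keeps 
recently written pages at the front, and whatever falls off the tail is what 
I'd proactively tell tmem it can forget.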

I'm only testing this one VM on a physical machine, so Xen isn't trying to do 
any balancing of tmem pools against other VMs. Assigning 384MB (I said 512MB 
before but I was mistaken) to a Windows 2008R2 server isn't even close to a 
realistic scenario, and with a bunch of VMs all competing for ephemeral tmem, 
pages might mostly be discarded before they need to be retrieved.

James

