
[Xen-devel] RE: tmem - really default to on?



> >This has likely been avoided by luck when lots of memory is
> >flushed from tmem and returned to the Xen heap and consolidated.
> >
> >Are you suggesting that the domain structure could/should have
> >two sizes, dynamically chosen by machine size?  Or something
> >else?
> 
> No, it should just be split into parts, each of which fits in a page
> regardless of architecture.  But that's not something I would consider
> realistic for 4.0.

OK.  Agreed this is too big a change for 4.0 but I'm thinking
about post-4.0.
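
(To sketch what that split might look like -- the names and fields
below are purely illustrative, not the real struct domain layout --
the idea is that each piece is allocated on its own and must fit in
a single page, so creating a domain never depends on finding
physically contiguous multi-page memory:)

/*
 * Illustrative sketch only -- not the actual Xen structures.
 * Each piece is required (by assertion here) to fit in one page,
 * so every allocation is order 0.
 */
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct domain_core {        /* generic state: domid, refcounts, ... */
    unsigned int domid;
};

struct domain_arch {        /* architecture-specific state          */
    unsigned long arch_stuff;
};

struct domain {             /* thin top-level object of pointers    */
    struct domain_core *core;
    struct domain_arch *arch;
};

static void *alloc_one_page(size_t sz)
{
    assert(sz <= PAGE_SIZE);    /* each part must fit a single page */
    return calloc(1, sz);       /* stand-in for a page allocator    */
}

static struct domain *domain_alloc_split(void)
{
    struct domain *d = alloc_one_page(sizeof(*d));
    if (!d)
        return NULL;
    d->core = alloc_one_page(sizeof(*d->core));
    d->arch = alloc_one_page(sizeof(*d->arch));
    if (!d->core || !d->arch) {
        free(d->core);
        free(d->arch);
        free(d);
        return NULL;
    }
    return d;
}

int main(void)
{
    struct domain *d = domain_alloc_split();
    return d ? 0 : 1;
}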

The order=2 shadow page allocation should also probably be
considered a "bug" to fix post-4.0: I think even ballooning
will eventually fragment memory, so in theory 75% of physical
memory could be free and yet domain creation (or PV migration)
would still fail.
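
To put a number on that fragmentation concern, here's a toy
userspace model (not Xen's allocator): an order=2 request needs
four naturally aligned, contiguous free pages, so a pool can be
75% free and still have no satisfiable order=2 block:

/*
 * Toy model of fragmentation vs. order-2 allocation; not Xen code.
 * Mark one page in every aligned group of four as "in use": 75% of
 * pages stay free, yet no naturally aligned 4-page block is fully
 * free, so an order-2 request cannot be satisfied.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 1024

static bool page_in_use[NR_PAGES];

/* Is any order-2 block (4 aligned contiguous pages) entirely free? */
static bool order2_block_available(void)
{
    for (int base = 0; base < NR_PAGES; base += 4) {
        if (!page_in_use[base] && !page_in_use[base + 1] &&
            !page_in_use[base + 2] && !page_in_use[base + 3])
            return true;
    }
    return false;
}

int main(void)
{
    int free_pages = 0;

    /* One busy page per aligned 4-page block. */
    for (int i = 0; i < NR_PAGES; i += 4)
        page_in_use[i] = true;

    for (int i = 0; i < NR_PAGES; i++)
        if (!page_in_use[i])
            free_pages++;

    printf("free pages: %d of %d (%.0f%%)\n", free_pages, NR_PAGES,
           100.0 * free_pages / NR_PAGES);
    printf("order-2 allocation possible: %s\n",
           order2_block_available() ? "yes" : "no");
    return 0;
}

Running this prints 768 of 1024 pages free (75%) and reports that
no order-2 allocation is possible.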

Since (I think) this affects other Xen 4.0 dynamic memory
utilization solutions, I'll post a separate basenote to
discuss that.
 
> >In any case, I'd still suggest turning tmem off in your dom0
> >is the best short-term solution.
> 
> I'm still not following you here: For one, I can't recall a way to turn
> off tmem on a per-domain basis.  Then I can't see why only our Dom0
> should be affected.  And finally I can't see how the same couldn't
> happen when only DomU-s use tmem.

I'm suggesting disabling CONFIG_TMEM in the default dom0 kernel
build (for all dom0s for now).  Then only environments that
consciously run a domU with a tmem-enabled kernel could be
affected.  The failure can only occur if at least one domU or
dom0 enables tmem, and even then it should only show up under
certain workloads, though I suppose sufficient fragmentation may
eventually occur under any workload.
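
To be clear about what disabling CONFIG_TMEM buys us: it's just the
usual Kconfig-guard pattern, so the tmem hooks compile away and dom0
never issues tmem hypercalls.  A minimal sketch of that pattern (the
hook name below is invented for illustration, not the real frontend
code):

/*
 * Sketch of the Kconfig-guard pattern only; the function name is
 * invented and does not correspond to the actual tmem frontend.
 */
#ifdef CONFIG_TMEM
/* Built only when the kernel config enables tmem. */
static void example_tmem_put_page(unsigned long pfn)
{
    /* ... would issue a tmem hypercall here ... */
    (void)pfn;
}
#else
/* With CONFIG_TMEM disabled the hook is a no-op, so this kernel
 * never touches tmem and cannot contribute to the failure above. */
static inline void example_tmem_put_page(unsigned long pfn)
{
    (void)pfn;
}
#endif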

Dan



 

