
Re: [Xen-devel] [PATCH] RFC: initial libxl support for xenpaging



On Thu, 2012-02-23 at 10:42 +0000, Ian Campbell wrote:
> What if you switch back without making sure you are in such a state? I
> think switching between the two is where the potential for unexpected
> behaviour is most likely.

Yeah, correctly predicting what would happen requires understanding what
mem-set does under the hood.

> I like that you have to explicitly ask for the safety wheels to come off
> and explicitly put them back on again. It avoids the corner cases I
> alluded to above (at least I hope so).

Yes, I think your suggestion sounds more like driving a car with a
proper hood, and less like driving a go-kart with the engine
exposed. :-)

> Without wishing to put words in Andres' mouth I expect that he intended
> "footprint" to cover other technical means than paging too --
> specifically I expect he was thinking of page sharing. (I suppose it
> also covers PoD to some extent too, although that is something of a
> special case)
> 
> While I don't expect there will be a knob to control number of shared
> pages (either you can share some pages or not, the settings would be
> more about how aggressively you search for sharable pages) it might be
> useful to consider the interaction between paging and sharing, I expect
> that most sharing configurations would want to have paging on at the
> same time (for safety). It seems valid to me to want to say "make the
> guest use this amount of actual RAM" and to achieve that by sharing what
> you can and then paging the rest.

Yes, it's worth thinking about, as long as it doesn't stall the paging
UI for too long. :-)

The thing is, you can't actually control how much sharing happens.  That
depends largely on whether the guests create and maintain pages which
are share-able, and whether the sharing detection algorithm can find
such pages.  Even if two guests are sharing 95% of their pages, at any
point one of the guests may simply go wild and change them all.  So it
seems to me that shared pages need to be treated like sunny days in the
UK: Enjoy them while they're there, but don't count on them. :-)

Given that, I think that each VM should have a "guaranteed minimum
memory footprint": the amount of actual host RAM it will have if all
of its shared pages suddenly become unshared.  Beyond that, there
should be a policy for how to use the "windfall" or "bonus" pages
generated by sharing.

One sensible default policy would be "givers gain": every guest that
creates a page which happens to be shared by another VM gets a share
of the pages freed up by the sharing.  Another policy might be
"communism", where the freed-up pages are divided among all VMs,
regardless of whose pages made the benefit possible.  (In fact, if
shared pages come from zero pages, they should probably be given to
VMs with no zero pages, regardless of the policy.)
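
To make the two policies concrete, here's a minimal C sketch of how
the windfall might be divided up under each one.  All of the names
here are invented for illustration; none of this is actual libxl code:

/* Hypothetical sketch of the two windfall policies above.  None of
 * these names exist in libxl; they are purely illustrative. */
#include <stdio.h>

struct vm {
    const char *name;
    unsigned long donated;   /* shareable pages this VM contributed */
    unsigned long windfall;  /* extra pages granted by the policy */
};

/* "Givers gain": each VM's cut of the freed pages is proportional
 * to the shareable pages it contributed. */
static void givers_gain(struct vm *vms, int n, unsigned long freed)
{
    unsigned long total = 0;
    int i;

    for (i = 0; i < n; i++)
        total += vms[i].donated;
    for (i = 0; i < n; i++)
        vms[i].windfall = total ? freed * vms[i].donated / total : 0;
}

/* "Communism": freed pages are split evenly, no matter who donated. */
static void communism(struct vm *vms, int n, unsigned long freed)
{
    int i;

    for (i = 0; i < n; i++)
        vms[i].windfall = freed / n;
}

int main(void)
{
    struct vm vms[] = { { "vm1", 9000, 0 }, { "vm2", 1000, 0 } };
    unsigned long freed = 5000;  /* pages recovered by sharing */
    int i;

    givers_gain(vms, 2, freed);
    for (i = 0; i < 2; i++)
        printf("givers gain: %s +%lu pages\n", vms[i].name, vms[i].windfall);

    communism(vms, 2, freed);
    for (i = 0; i < 2; i++)
        printf("communism:   %s +%lu pages\n", vms[i].name, vms[i].windfall);

    return 0;
}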

However, I'd say the main public "knobs" should consist of just two
things:
* xl mem-set memory-target.  This is the minimum amount of physical RAM
a guest can get; we make sure that the sum of these for all VMs does not
exceed the host capacity (see the sketch after this list).
* xl sharing-policy [policy].  This tells the sharing system how to use
the "windfall" pages gathered from page sharing.  

Then internally, the sharing system should combine the "minimum
footprint" with the number of extra pages and the policy to set the
amount of memory actually used (via balloon driver or paging).
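
Putting those pieces together, the internal calculation might look
something like the sketch below.  Again the names are invented, and
set_memory_target() is just a stub standing in for whatever the
balloon driver / paging plumbing actually does:

/* Hypothetical: derive each VM's actual allotment from its
 * guaranteed minimum plus its policy-assigned windfall. */
#include <stdio.h>

struct vm_alloc {
    unsigned long min_kb;       /* guaranteed minimum footprint */
    unsigned long windfall_kb;  /* share of pages freed by sharing */
};

/* Stand-in for the real balloon-driver or paging interface. */
static void set_memory_target(int domid, unsigned long kb)
{
    printf("dom%d: target %lu kB\n", domid, kb);
}

static void apply_targets(struct vm_alloc *vms, int n)
{
    int i;

    for (i = 0; i < n; i++)
        /* If sharing collapses, windfall_kb drops to zero and the
         * VM still ends up with its guaranteed minimum. */
        set_memory_target(i, vms[i].min_kb + vms[i].windfall_kb);
}

int main(void)
{
    struct vm_alloc vms[] = { { 524288, 131072 }, { 1048576, 0 } };

    apply_targets(vms, 2);
    return 0;
}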

You could imagine a manual mode, where the administrator shares out the
extra pages manually to VMs that he thinks need them; but because those
extra pages may go away at any time, that needs to be a separate knob
(and preferably one which most admins don't ever touch).

Andres, what do you think?

 -George

