
Re: [Xen-devel] domain creation vs querying free memory (xend and xl)



> From: Andres Lagar-Cavilla [mailto:andreslc@xxxxxxxxxxxxxx]
> Subject: Re: [Xen-devel] domain creation vs querying free memory (xend and xl)

Hi Andres --

Regarding the reply I just sent to George...

I think you must be on a third planet, revolving somewhere between
George's and mine.  I say that because I agree completely with some
of your statements and disagree with the conclusions you draw from
them! :-)

> Domains can be cajoled into obedience via the max_pages tweak -- which I 
> profoundly dislike. If
> anything we should change the hypervisor to have a "current_allowance" or 
> similar field with a more
> obvious meaning. The abuse of max_pages makes me cringe. Not to say I 
> disagree with its usefulness.

Me cringes too.  Though I can see how, from George's view, it makes
perfect sense: since the toolstack always controls exactly how
much memory is assigned to a domain, and since it can cache the
"original max", the current allowance and the hypervisor's view of
max_pages must always be the same.
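
For concreteness, here's a rough sketch of what the max_pages "clamp"
looks like from the toolstack side, using libxc's xc_domain_setmaxmem().
The slack value and the error handling are purely illustrative
assumptions, not anything proposed in this thread:

/* Rough sketch: clamp a domain's allocation ceiling by (ab)using
 * max_pages, as a "balloon-to-fit" toolstack does today.
 * xc_domain_setmaxmem() takes the limit in KiB; SLACK_KB is an
 * illustrative assumption, not a recommendation. */
#include <xenctrl.h>
#include <stdio.h>

#define SLACK_KB 1024  /* assumed per-domain slack, purely illustrative */

static int clamp_domain(xc_interface *xch, uint32_t domid,
                        uint64_t target_kb)
{
    /* The hypervisor refuses further allocations above this ceiling;
     * nothing here expresses the *intent* (a "current_allowance"),
     * which is exactly the abuse being complained about. */
    if (xc_domain_setmaxmem(xch, domid, target_kb + SLACK_KB) < 0) {
        perror("xc_domain_setmaxmem");
        return -1;
    }
    return 0;
}

int main(void)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if (!xch)
        return 1;
    int rc = clamp_domain(xch, 1 /* example domid */,
                          512 * 1024 /* 512 MiB target */);
    xc_interface_close(xch);
    return rc ? 1 : 0;
}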

Only if the hypervisor or the domain or the domain's administrator
can tweak current memory usage without the knowledge of the
toolstack (which is closer to my planet) does an issue arise.
And, to me, that's the foundation of this whole thread.

> Once you guarantee no "ex machina" entities fudging the view of the memory 
> the toolstack has, then all
> known methods can be bounded in terms of their capacity to allocate memory 
> unsupervised.
> Note that this implies as well, I don't see the need for a pool of "unshare" 
> pages. It's all in the
> heap. The toolstack ensures there is something set apart.

By "ex machina" do you mean "without the toolstack's knowledge"?

Then how does page-unsharing work?  Does every page-unshare done by
the hypervisor require serial notification/permission of the toolstack?
Or is this "batched", in which case a pool is necessary, isn't it?
(Not sure what you mean by "no need for a pool" and then "toolstack
ensures there is something set apart"... what's the difference?)

My point is: whether there is no pool, or a pool that sometimes
runs dry, are you really going to put the toolstack in the hypervisor's
path whenever it needs to allocate a new page for CoW to fulfill
an unshare?
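
To make the question concrete, here's a toy, self-contained sketch of
the pool-backed unshare path as I picture it.  Every name here
(unshare_pool, pool_alloc_page, notify_toolstack_refill) is invented
for illustration; this is not the actual mem_sharing code:

/* Toy model of an unshare (CoW) path backed by a pre-reserved pool.
 * All names and structures are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>

struct unshare_pool {
    unsigned long free_pages;   /* pages set apart by the toolstack */
};

/* Fast path: consume a pre-reserved page; no toolstack on the fault path. */
static bool pool_alloc_page(struct unshare_pool *p)
{
    if (p->free_pages == 0)
        return false;
    p->free_pages--;
    return true;
}

/* Slow path: the hypervisor has to ask the toolstack for more memory and
 * stall the faulting vCPU until the pool is refilled -- the latency
 * concern raised above. */
static void notify_toolstack_refill(struct unshare_pool *p)
{
    printf("pool dry: waiting for toolstack to grant more pages\n");
    p->free_pages += 16;        /* pretend the toolstack granted 16 pages */
}

static void handle_unshare_fault(struct unshare_pool *p, unsigned long gfn)
{
    while (!pool_alloc_page(p))
        notify_toolstack_refill(p);
    printf("CoW page allocated for gfn %#lx\n", gfn);
}

int main(void)
{
    struct unshare_pool pool = { .free_pages = 1 };
    handle_unshare_fault(&pool, 0x1000);  /* served from the pool */
    handle_unshare_fault(&pool, 0x2000);  /* pool dry: toolstack in the path */
    return 0;
}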

> Something that I struggle with here is the notion that we need to extend the 
> hypervisor for any aspect
> of the discussion we've had so far. I just don't see that. The toolstack has 
> (or should definitely
> have) a non-racy view of the memory of the host. Reservations are therefore 
> notions the toolstack
> manages.

In a perfect world where the toolstack has an oracle for the
precise time-varying memory requirements for all guests, I
would agree.
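
For the record, here's the kind of toolstack-side accounting I read
into "reservations are notions the toolstack manages" -- a toy sketch,
all names invented, which is only correct under the "no ex machina
allocations" assumption:

/* Toy toolstack-side accountant for host memory (all names invented).
 * If the hypervisor, a guest balloon driver, or an admin can consume
 * memory without going through here, free_pages silently goes stale. */
#include <stdbool.h>
#include <stdio.h>

struct host_memory {
    unsigned long free_pages;       /* toolstack's view of the heap */
    unsigned long reserved_pages;   /* set aside for in-flight builds */
};

static bool reserve(struct host_memory *h, unsigned long pages)
{
    if (h->free_pages < pages)
        return false;               /* refuse the domain build up front */
    h->free_pages -= pages;
    h->reserved_pages += pages;
    return true;
}

static void commit(struct host_memory *h, unsigned long pages)
{
    /* Pages actually handed to the new domain; reservation is consumed. */
    h->reserved_pages -= pages;
}

int main(void)
{
    struct host_memory host = { .free_pages = 1024 * 1024,
                                .reserved_pages = 0 };
    if (reserve(&host, 131072)) {   /* reserve 512 MiB of 4 KiB pages */
        commit(&host, 131072);
        printf("domain built; %lu pages still free\n", host.free_pages);
    }
    return 0;
}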

In that world, there's no need for a CPU scheduler either...
the toolstack can decide exactly when to assign each VCPU for
each VM onto each PCPU, and when to stop and reassign.
And then every PCPU would be maximally utilized, right?

My point: Why would you resource-manage CPUs differently from
memory?  The demand of real-world workloads varies dramatically
for both... don't you want both to be managed dynamically,
whenever possible?

If yes (dynamic is good), in order for the toolstack's view of
memory to be non-racy, doesn't every hypervisor page allocation
need to be serialized with the toolstack granting notification/permission?

> I further think the pod cache could be converted to this model. Why have 
> specific per-domain lists of
> cached pages in the hypervisor? Get them back from the heap! Obviously places 
> a decoupled requirement
> of certain toolstack features. But allows to throw away a lot of complex code.

IIUC in George's (Xapi) model (or using Tim's phrase, "balloon-to-fit")
the heap is "always" empty because the toolstack has assigned all memory.
So I'm still confused... where does "page unshare" get memory from
and how does it notify and/or get permission from the toolstack?
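
To spell out the contrast as I understand it, here's a toy sketch of
"per-domain PoD cache lists" versus "get them back from the heap".
All names are invented and the fault handling is grossly simplified;
either way, something must guarantee the memory is actually there when
a PoD fault arrives, which is the open question:

/* Toy contrast of the two PoD designs discussed above. */
#include <stdbool.h>
#include <stdio.h>

/* Design A: per-domain cache list kept inside the hypervisor. */
struct pod_cache { unsigned long pages; };

static bool pod_cache_pop(struct pod_cache *c)
{
    if (c->pages == 0)
        return false;
    c->pages--;
    return true;
}

/* Design B: no per-domain list; every PoD fault goes to the common heap,
 * trusting the toolstack to have kept enough of it unallocated. */
static unsigned long heap_free = 256;

static bool heap_alloc_page(void)
{
    if (heap_free == 0)
        return false;   /* "balloon-to-fit" left the heap empty */
    heap_free--;
    return true;
}

int main(void)
{
    struct pod_cache cache = { .pages = 4 };
    printf("design A fault served: %d\n", pod_cache_pop(&cache));
    printf("design B fault served: %d\n", heap_alloc_page());
    return 0;
}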

> My two cents for the new iteration

I'll see your two cents, and raise you a penny! ;-)

Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel