I also find the current PoD implementation a little strange; it is different from what I had imagined.
In my mind, as tinnycloud mentioned, the PoD cache should be a pool as large as the idle memory in the VMM, shared by all guests. If a guest's usage stays below its predefined PoD cache size, the unused part could be lent to other guests. Conversely, if a guest's balloon driver is delayed in starting, the VMM should populate more memory to satisfy it (assuming the VMM has enough memory).
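Roughly, I imagine something like this (just a sketch of the idea; the pod_pool structure and pod_pool_get() helper are hypothetical, only alloc_domheap_page() is a real Xen allocator):

    /* Hypothetical shared PoD pool, sized by the VMM's idle memory
     * and shared by all guests. */
    struct pod_pool {
        unsigned long free_pages;  /* idle pages available to any guest */
    };

    /* On a PoD fault: take from the shared pool rather than from a
     * fixed per-guest cache, so an idle guest's unused share can
     * back a busy guest's fault. */
    static struct page_info *pod_pool_get(struct pod_pool *pool)
    {
        if ( pool->free_pages == 0 )
            return NULL;                      /* no idle memory left anywhere */
        pool->free_pages--;
        return alloc_domheap_page(NULL, 0);   /* real Xen heap allocator */
    }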
In the current PoD, the balloon driver has to be started as soon as possible; otherwise the guest will crash once the PoD cache is exhausted (assuming the emergency sweep does not help). That is dangerous, although in most cases it works well.
George, I wonder why you implemented it this way? It seems better to use a resilient PoD cache instead of a fixed one. Your wonderful work is appreciated; I just want to understand your reasoning.
On 29 November 2010 at 7:19 PM, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
On 29/11/10 10:55, tinnycloud wrote:
> So that is, if we run out of PoD cache before the balloon works, Xen will
> crash the domain (goto out_of_memory),
That's right; PoD is only meant to allow a guest to run from boot until
the balloon driver can load. It's to allow a guest to "boot ballooned."
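For example, a guest booting ballooned is typically configured with memory lower than maxmem (an xm-style config sketch; check your toolstack's exact syntax):

    # Guest sees a 1024MiB physical map, but Xen populates only 512MiB
    # of real RAM; the rest starts as PoD entries until the balloon
    # driver brings the guest's usage down to the 512MiB target.
    memory = 512
    maxmem = 1024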
> and in this situation, domain U swap (domU can't use swap memory) is not
> available, right?
I don't believe swap and PoD are integrated at the moment, no.
> And when the balloon actually works, the PoD cache will finally decrease to
> 0, and no longer be used any more, right?
Conceptually, yes. What actually happens is that ballooning will reduce
it so that pod_entries==cache_size. Entries will stay PoD until the guest touches them. It's likely that eventually the guest will touch all the pages, at which point the PoD cache will be 0.
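In code terms, the first touch of a PoD entry goes roughly like this (a simplified sketch, not the real Xen code; pod_cache_get() is a hypothetical helper, and the actual logic lives in p2m_pod_demand_populate()):

    /* On first guest touch of a PoD entry, back it with a page
     * taken from the per-domain PoD cache. */
    if ( p2md->pod.count == 0 )
        goto out_of_memory;       /* cache empty: the domain is crashed */
    page = pod_cache_get(p2md);   /* take one page from the PoD cache */
    /* ... map 'page' into the p2m, replacing the PoD entry ... */
    p2md->pod.entry_count--;      /* one fewer outstanding PoD entry */
    p2md->pod.count--;            /* one fewer page in the cache */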
> could we use this method to implement a tmem-like memory overcommit?
PoD does require guest knowledge -- it requires the balloon driver to be loaded soon after boot so that the guest will limit its memory usage.
It also doesn't allow overcommit. Memory in the PoD cache is already allocated to the VM, and can't be used for something else.
You can't do overcommit without either:
* The guest knowing that it might not get the memory back, and being OK
 with that (tmem), or
* Swapping, which doesn't require PoD at all.
If you're thinking about scanning for zero pages and automatically reclaiming them, for instance, you have to be able to deal with a
situation where the guest decides to use a page you've reclaimed but you've already given your last free page to someone else, and there are no more zero pages anywhere on the system. That would mean either just
pausing the VM indefinitely, or choosing another guest page to swap out.
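In pseudocode, that hard case looks something like this (purely illustrative; none of these names exist in Xen):

    /* Hypothetical fault handler for a page that was zero-scanned
     * and reclaimed behind the guest's back. */
    page = alloc_free_page();
    if ( page == NULL )
    {
        page = scan_for_zero_page();   /* try to reclaim elsewhere */
        if ( page == NULL )
        {
            /* No free memory and no zero pages left anywhere: the
             * only options are to pause the VM indefinitely or to
             * swap out some other guest page. */
            pause_vm_or_swap_something_out();
        }
    }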
-George
>
> *From:* Chu Rui [mailto:ruichu@xxxxxxxxx]
> *To:* tinnycloud
> *Cc:* xen-devel@xxxxxxxxxxxxxxxxxxx; George.Dunlap@xxxxxxxxxxxxx;
> dan.magenheimer@xxxxxxxxxx
> *Subject:* Re: [Xen-devel] Xen balloon driver discuss
>
> I am also interested in tinnycloud's problem.
>
> It looks like the PoD cache has been used up, like this:
>
>     if ( p2md->pod.count == 0 )
>         goto out_of_memory;
>
> George, would you please take a look at this problem and, if possible,
> tell us a little more about what the PoD cache means? Is it a memory
> pool for PoD allocation?
>