> PoD is a mechanism with exactly one purpose: to allow a VM to
> "boot ballooned" -- that is, to run on less than the amount of memory
> it thinks it has until the balloon driver loads. After that, its job
> is done. So you're right, it is designed to work for the system
> initialization stage.
>
> Regarding disk caching: I disagree about the guest IO cache. I'd say
> if one cache is to go, it should be the dom0 cache. There are lots of
> reasons for this:
> * It's more fair: if you did all caching in dom0, then VM A might be
> able to use almost the entire cache, leaving VM B without. If each
> guest does its own caching, then it's using its own resources and not
> impacting someone else.
> * I think the guest OS has a better idea which blocks need to be cached
> and which don't. It's much better to let that decision happen
> locally, than to try to guess it from dom0, where we don't know
> anything about processes, disk layout, &c.
> * As Dan said, for write caching there's a consistency issue; better
> to let the guest decide when it's safe not to write a page.
> * If dom0 memory isn't being used for something else, it doesn't hurt
> to have duplicate copies of things in memory. But ideally guest disk
> caching shouldn't take away from anything else on the system.
>
> My $0.02. :-)
>
> -George
>
2010/11/16 Chu Rui <ruichu@xxxxxxxxx>:
>> Thank you for your kind reply, George.
>>
>> I am interested in PoD memory. As I understand it, PoD mainly works during
>> the system initialization stage: before the balloon driver begins to work,
>> it can limit the memory consumption of the guests. However, after a while
>> the guest OS will commit more memory, and at that point PoD cannot reclaim
>> anything, even when the committed pages are IO cache, whereas the balloon
>> keeps working the whole time.
>>
>> Could you tell me whether my understanding is correct?
>>
>> Actually, in my opinion, the guest IO cache is mostly useless, since Dom0
>> will cache the IO operations anyway. Such a double cache wastes memory.
>> Is there a good solution for that, like Transcendent Memory, that works
>> with HVM?
>>
>> On 16 November 2010 at 20:56, George Dunlap <dunlapg@xxxxxxxxx> wrote:
>>>
>>> 2010/11/16 牛立新 <topperxin@xxxxxxx>:
>>> > Oh, that's strange; the old version didn't have this limitation.
>>>
>>> No; unfortunately a great deal of functionality present in "classic
>>> xen" has been lost in the process of getting the core dom0 support
>>> into the pvops kernel. I think the plan is that, once we have the
>>> necessary changes to non-Xen code pushed upstream, we can start
>>> working on getting feature parity with classic Xen.
>>>
>>> >
>>> >
>>> > At 2010-11-16 19:35:50, "Stefano Stabellini"
>>> > <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>>> >
>>> >>On Tue, 16 Nov 2010, Chu Rui wrote:
>>> >>> Hi,
>>> >>> I have noticed that, in the code of linux/drivers/xen/balloon.c, there
>>> >>> exists the snippet as this:
>>> >>>
>>> >>> static int __init balloon_init(void)
>>> >>> {
>>> >>>         unsigned long pfn;
>>> >>>         struct page *page;
>>> >>>
>>> >>>         if (!xen_pv_domain())
>>> >>>                 return -ENODEV;
>>> >>>         .....
>>> >>> }
>>> >>>
>>> >>> Does it mean the driver will not work in HVM? If so, where is the
>>> >>> HVM-enabled code for that?
>>> >>
>>> >>not yet, even though I have a patch ready to enable it:
>>> >>
>>> >>git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git
>>> >> 2.6.36-rc7-pvhvm-v1
>>> >
>>> >
>>> > _______________________________________________
>>> > Xen-devel mailing list
>>> > Xen-devel@xxxxxxxxxxxxxxxxxxx
>>> > http://lists.xensource.com/xen-devel
>>> >
>>> >
>>
>>
>>
>