RE: Re: [Xen-devel] Balloon driver for Linux/HVM
As George pointed out in a separate branch of this email thread,
disabling a guest's caching is probably a bad idea in general.
The goal of tmem is to explore whether physical memory utilization can be improved
when the guest is aware that it is running as a guest and when the guest kernel
can be modified (slightly) for that case. This implies that Windows would
have to be modified to use tmem, though it has been suggested that a Windows
kernel expert might be able to somehow interpose binary code to do a similar
thing. Since I know nothing about Windows, someone else will have to
investigate that possibility.
From: Chu Rui
Sent: Tuesday, November 16, 2010 7:28 PM
To: Dan Magenheimer; xen-devel@xxxxxxxxxxxxxxxxxxx; George Dunlap
Subject: Re: Re: [Xen-devel] Balloon driver for Linux/HVM
It is a pity that tmem cannot be used for a Windows guest. But
can we disable the guest Windows caching? If so, the guest OS is no longer
a memory hog (as mentioned in your talk), and maybe we can manage its
memory consumption on demand, as we do for a ring-3 application.
BTW, as far as I know, Windows XP does NOT zero all of
the memory at the startup stage. Actually, even memory allocated by a ring-3
application is not committed until it is actually accessed. So PoD
may work well in that case.
On 17 November 2010 at 1:10 AM, Dan Magenheimer wrote:
FYI, Transcendent Memory does work with
HVM, with a recent Xen and the proper Linux guest-side patches (including
Stefano's PV-on-HVM patchset). There is extra
overhead in an HVM guest for each tmem call due to vmenter/vmexit, and I have not
measured performance, but this overhead should not be too large on newer
processors. Also, of course, Transcendent Memory will not work with
Windows guests (or any guests that do not have tmem patches), while PoD is
primarily intended to work with Windows (because, IIRC, Windows zeroes all of
its memory at boot time).
I agree that guest IO caching is mostly
useless for CLEAN pages if the dom0 page cache is large enough for all guests
(or if tmem is working). For dirty pages, using dom0 caching risks data
integrity problems (e.g. the guest believes a transaction to disk is complete
but the data is in a dom0 cache that has not been flushed to disk).
Thank you for your kind reply, George.
I am interested in PoD memory. In my view, PoD mainly works during the
system initialization stage. Before the balloon driver begins to work, it can
limit the memory consumption of the guests. However, after a while the
guest OS will commit more memory, and PoD cannot reclaim any of it at that point,
even when the committed pages are IO cache, while the balloon driver keeps
working all the time.
Would you please tell me whether my understanding is correct?
Actually, in my opinion, the guest IO cache is
mostly useless, since Dom0 will cache the IO operations anyway. Such
double caching wastes memory. Is there any good solution for that, like
Transcendent Memory, that works with HVM?
On 16 November 2010 at 8:56 PM, George Dunlap <dunlapg@xxxxxxxxx> wrote:
It's strange; the old version did not have this limitation.
Unfortunately, a great deal of functionality present in "classic
xen" has been lost in the process of getting the core dom0 support
into the pvops kernel. I think the plan is, once we have the
necessary changes to non-Xen code pushed upstream, we can start
working on getting feature parity with classic Xen.
> At 2010-11-16 19:35:50, "Stefano Stabellini" <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>>On Tue, 16 Nov 2010, Chu Rui wrote:
>>> I have noticed that, in the code of linux/drivers/xen/balloon.c, there exists a snippet like this:
>>>
>>>     static int __init balloon_init(void)
>>>     {
>>>         unsigned long pfn;
>>>         struct page *page;
>>>
>>>         if (!xen_pv_domain())
>>>             return -ENODEV;
>>>         ...
>>>
>>> Does it mean the driver will not work in HVM? If so, where is the HVM-enabled code for that?
>>not yet, even though I have a patch ready to enable it:
Xen-devel mailing list