
Re: [Xen-devel] 5x dom0 memory increase from Xen/Linux 3.4/2.6.18 to 4.1/3.0.0

On 20/06/2011 13:45, Konrad Rzeszutek Wilk wrote:
> On Fri, Jun 17, 2011 at 04:31:11PM +0100, Anthony Wright wrote:
>> Lowering swiotlb helped, and got me down to 200M for dom0. What is the 
>> effect of reducing this value?
> Less amount of bounce buffer. But you don't need the bounce buffer for
> PCI devices b/c you don't have more than 4GB of physical memory in the
> machine.
Do I only need bounce buffers if I have > 4GB of physical memory? In
that case, should I still allocate the full 64M, or does the
requirement scale with the amount of memory?
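For reference, a rough sketch of how the bounce buffer scales with the
swiotlb= boot parameter (this assumes the parameter counts 2 KiB slabs,
matching the kernel's IO_TLB_SHIFT, with 32768 slabs being the usual
64 MiB default; the slab size is an assumption worth double-checking
against your kernel version):

```python
# Estimate swiotlb bounce-buffer memory from the slab count passed
# via the swiotlb= kernel parameter. SLAB_BYTES is an assumption
# (2 KiB per slab, per IO_TLB_SHIFT in typical kernels).
SLAB_BYTES = 2048

def swiotlb_mib(slabs):
    """Approximate bounce-buffer size in MiB for a given slab count."""
    return slabs * SLAB_BYTES // (1 << 20)

print(swiotlb_mib(32768))  # kernel default -> 64 MiB
print(swiotlb_mib(8192))   # swiotlb=8192   -> 16 MiB
```

So booting dom0 with e.g. swiotlb=8192 would, under these assumptions,
cut the reservation from 64M to 16M.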
>> I set CONFIG_XEN_MAX_DOMAIN_MEMORY down to 8, but that didn't seem to have 
>> any effect on dom0's memory requirement. What is this value? Does it only 
>> apply to a domU's memory usage?
> It makes some internal datastructures (P2M) smaller. They are set up for 
> 128GB or so machines initially.
It sounds like this value applies to DomUs. Does this config variable
set the 128GB maximum per DomU or across all DomUs? I.e. if I have 16
DomUs and CONFIG_XEN_MAX_DOMAIN_MEMORY is 16, do they each get a
maximum of 16GB, or do they get 1GB each?
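As a rough back-of-the-envelope for why this knob matters, here is a
sketch of the P2M array's worst-case footprint, assuming one 8-byte
pfn-to-mfn entry per 4 KiB page (this is a simplification of the
kernel's multi-level, lazily populated P2M, so treat the numbers as an
upper bound, not what actually gets reserved at boot):

```python
# Worst-case size of a flat P2M (pseudo-physical to machine) array
# sized for CONFIG_XEN_MAX_DOMAIN_MEMORY gigabytes. Assumptions:
# 4 KiB pages and 8-byte entries on a 64-bit kernel.
PAGE_BYTES = 4096
ENTRY_BYTES = 8

def p2m_mib(max_gb):
    """Upper-bound P2M size in MiB for a given memory cap in GiB."""
    pages = max_gb * (1 << 30) // PAGE_BYTES
    return pages * ENTRY_BYTES // (1 << 20)

print(p2m_mib(128))  # default 128 GiB cap -> 256 MiB worst case
print(p2m_mib(8))    # MAX_DOMAIN_MEMORY=8 ->  16 MiB worst case
```

Since the real P2M is populated on demand, lowering the config may not
change the boot-time reservation much, which could explain why dropping
it to 8 had no visible effect.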
>> I tried the memblock=debug options, and while I got lots of output, I could 
>> see very little on the subject of memory usage.
> The numbers are what amount of memory is reserved. You can find out
> which area is eating the most by computing the difference.
Maybe I'm misreading the output, but I couldn't see any numbers that
look like memory being assigned. I've attached the dmesg output. Do I
need to enable a CONFIG variable to get the output I need, or am I
missing something?
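One way to total up the reservations, once the debug lines are visible:
a sketch that sums the address spans from memblock reserve messages.
The sample line format below is an assumption modeled on
"memblock_reserve: [0xSTART-0xEND] ..." style output; adjust the regex
to match whatever the attached dmesg actually prints.

```python
# Sum reserved memory from memblock=debug dmesg lines. SAMPLE is a
# hypothetical excerpt (the exact message format varies by kernel
# version); in practice, read the real dmesg text instead.
import re

SAMPLE = """\
memblock_reserve: [0x0000000000091000-0x0000000000093fff] setup_arch
memblock_reserve: [0x0000000001000000-0x00000000017fffff] setup_arch
"""

span = re.compile(r'\[0x([0-9a-f]+)-0x([0-9a-f]+)\]')
total = sum(int(end, 16) - int(start, 16) + 1
            for start, end in span.findall(SAMPLE))
print(f"{total / (1 << 20):.2f} MiB reserved")  # -> "8.01 MiB reserved"
```

Sorting the individual spans by size (rather than just summing) would
point at whichever reservation is eating the most, per the suggestion
above.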



Attachment: dmesg - memory debug
Description: Text document

Xen-devel mailing list
