
Re: [Xen-devel] [PATCH v2 2/2] x86/dom0: improve paging memory usage calculations



On Wed, Dec 12, 2018 at 03:57:41AM -0700, Jan Beulich wrote:
> >>> On 12.12.18 at 11:16, <roger.pau@xxxxxxxxxx> wrote:
> > There are also further issues that I wanted to discuss in a separate
> > thread, what about foreign mappings? Dom0 will likely map a non
> > trivial amount of grants and foreign mappings, which will also require
> > p2m/IOMMU page table entries.
> 
> Hmm, good point. Then again this is a runtime requirement,
> whereas here we want to get the boot time estimate right. At
> runtime lack of memory for P2M tables will simply result in
> -ENOMEM.

But Xen's runtime memory is also tied to the boot-time estimates if no
dom0_mem parameter is specified on the command line. I would expect
Dom0 to have to balloon down memory when it attempts to map BARs, even
at runtime.

> > Should we maybe size Dom0 p2m/iommu internal paging structures to be
> > able to map up to max_page at least?
> 
> Well, max_page is a gross over-estimate of RAM (especially with
> dom0_mem= in effect) and doesn't help at all with MMIO or the
> foreign/grant maps you mention.
> 
> I wonder whether for Dom0 we don't need to change the entire
> approach of how we set it up in PVH mode: Instead of a single
> paging_set_allocation(), why don't we call the function
> repeatedly whenever we run out of space, shrinking what we
> actually give to Dom0 accordingly (and incrementally).

This could work given a suitable dom0_mem value is specified on the
command line. Without the Dom0 amount of memory being assigned by the
admin, Xen still needs to estimate how much memory is needed for its
internal structures (p2m, IOMMU page tables) and we are back to the
same scenario.

> For the
> PCI BAR mappings this would require doing so when the Dom0
> kernel is already running, but I think that's acceptable. PCI
> device add would fail with -ENOMEM when the allocation pools
> can't be suitably grown.

The default Xen free slack memory is 128MB, which I'm afraid would be
consumed quite easily.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

