
Re: [Xen-devel] [PATCH v2 2/2] x86/dom0: improve paging memory usage calculations



On Tue, Dec 11, 2018 at 08:19:34AM -0700, Jan Beulich wrote:
> >>> On 05.12.18 at 15:55, <roger.pau@xxxxxxxxxx> wrote:
> > +unsigned long __init dom0_hap_pages(const struct domain *d,
> > +                                    unsigned long nr_pages)
> > +{
> > +    /*
> > +     * Attempt to account for at least some of the MMIO regions by adding the
> > +     * size of the holes in the memory map to the amount of pages to map. Note
> > +     * this will obviously not account for MMIO regions that are past the last
> > +     * RAM range in the memory map.
> > +     */
> > +    nr_pages += max_page - total_pages;
> > +    /*
> > +     * Approximate the memory required for the HAP/IOMMU page tables by
> > +     * pessimistically assuming each page will consume an 8-byte page table
> > +     * entry.
> > +     */
> > +    return DIV_ROUND_UP(nr_pages * 8, PAGE_SIZE << PAGE_ORDER_4K);
> 
> With enough memory handed to Dom0 the memory needed for
> L2 and higher page tables will matter as well.

The above calculation assumes all chunks will be mapped as 4KB
entries, but that's very unlikely in practice, so there's some slack
left for the higher-level page tables. If that doesn't seem enough, I
can add some extra headroom here, maybe +5% or +10%?
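
For reference, here's a back-of-the-envelope sketch of what explicitly
accounting for the higher levels would look like, assuming worst-case
4K mappings throughout (the helper below is purely illustrative, it's
not part of the patch):

static unsigned long estimate_pt_pages(unsigned long nr_pages)
{
    unsigned long total = 0, level_pages = nr_pages;
    unsigned int level;

    /* 4 paging levels, each 4 KiB table holds 512 8-byte entries. */
    for ( level = 0; level < 4; level++ )
    {
        /* Tables needed at this level to cover the level below it. */
        level_pages = (level_pages + 511) / 512;
        total += level_pages;
    }

    return total;
}

For a 4GiB dom0 (1M 4K pages) that's 2048 pages of leaf tables plus
only 6 pages for the L2 and higher tables, i.e. well below 1% extra,
so even a flat +5% would be generous.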

> I'm anyway having difficulty seeing why HAP and shadow would
> have to use different calculations, the more that shadow relies
> on the same P2M code that HAP uses in the AMD/SVM case.

For one, shadow needs to take the number of vCPUs into account, while
HAP doesn't.

> Plus, as iirc was said by someone else already, I don't think we
> can (continue to) neglect the MMIO space needs for MMCFG
> and PCI devices, especially with devices having multi-Gb BARs.

Well, there's the comment above noting that this approach only takes
the holes in the memory map into account as regions to be mapped. This
can be improved later on, but I think the important point here is to
know where these numbers come from in order to tweak them in the
future.
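
To make it explicit where the holes term comes from, here's a toy
illustration of the max_page - total_pages arithmetic (the struct and
function names below are made up, Xen computes both counters while
parsing the memory map at boot):

struct ram_range {
    unsigned long start_pfn, end_pfn;   /* [start_pfn, end_pfn) */
};

static unsigned long hole_pages(const struct ram_range *ranges,
                                unsigned int nr_ranges)
{
    unsigned long max_pfn = 0, total = 0;
    unsigned int i;

    for ( i = 0; i < nr_ranges; i++ )
    {
        total += ranges[i].end_pfn - ranges[i].start_pfn;
        if ( ranges[i].end_pfn > max_pfn )
            max_pfn = ranges[i].end_pfn;
    }

    /* Same as max_page - total_pages: the holes below the last range. */
    return max_pfn - total;
}

Anything sitting above the last RAM range (MMCFG, high BARs) is by
construction not part of that difference, which is exactly the
limitation the comment spells out.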

> > +        else
> > +            avail -= dom0_shadow_pages(d, nr_pages) +
> > +                     dom0_hap_pages(d, nr_pages);
> >      }
> 
> Doesn't dom0_shadow_pages() (mean to) already include the
> amount needed for the P2M?

The libxl code mentions "plus 1 page per MiB of RAM for the P2M map",
so I guess the shadow calculation does take the memory used by the
IOMMU page tables into account?
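
For completeness, the libxl heuristic that comment belongs to boils
down to something like the sketch below (a paraphrase from memory,
returning pages; the name and exact form are mine):

static unsigned long shadow_pages_estimate(unsigned long nr_pages,
                                           unsigned int nr_vcpus)
{
    /* 1 MiB of RAM == 256 4K pages. */
    unsigned long ram_mib = nr_pages / 256;

    return 256 * nr_vcpus   /* 256 pages (1 MiB) per vCPU */
           + ram_mib        /* 1 page per MiB of RAM for the P2M map */
           + ram_mib;       /* 1 page per MiB to shadow resident processes */
}

That's also where the vCPU dependency mentioned above comes from.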

Thanks, Roger.


 

