
Re: [Xen-devel] [PATCH v2 2/2] x86/dom0: improve paging memory usage calculations



>>> On 11.12.18 at 16:36, <roger.pau@xxxxxxxxxx> wrote:
> On Tue, Dec 11, 2018 at 08:19:34AM -0700, Jan Beulich wrote:
>> >>> On 05.12.18 at 15:55, <roger.pau@xxxxxxxxxx> wrote:
>> > +unsigned long __init dom0_hap_pages(const struct domain *d,
>> > +                                    unsigned long nr_pages)
>> > +{
>> > +    /*
>> > +     * Attempt to account for at least some of the MMIO regions by adding the
>> > +     * size of the holes in the memory map to the amount of pages to map. Note
>> > +     * this will obviously not account for MMIO regions that are past the last
>> > +     * RAM range in the memory map.
>> > +     */
>> > +    nr_pages += max_page - total_pages;
>> > +    /*
>> > +     * Approximate the memory required for the HAP/IOMMU page tables by
>> > +     * pessimistically assuming each page will consume an 8-byte page table
>> > +     * entry.
>> > +     */
>> > +    return DIV_ROUND_UP(nr_pages * 8, PAGE_SIZE << PAGE_ORDER_4K);
>> 
>> With enough memory handed to Dom0 the memory needed for
>> L2 and higher page tables will matter as well.
> 
> The above calculation assumes all chunks will be mapped as 4KB
> entries, but this is very unlikely, so there's some room for higher
> page tables.

Right, but there's no dependency on 2M and/or 1G pages being
available, nor does the comment give any hint towards that
implication.

> If that doesn't seem enough I can add some extra space
> here, maybe a +5% or +10%?

A percentage won't do, imo. From the memory map it should
be clear how many L2, L3, and L4 tables are going to be needed.
We do such a calculation in the PV case as well, after all.
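
Roughly, and purely as an illustration (struct mem_range and the helper
below are made up for this sketch, not actual Xen code), the per-level
count could be derived by walking the memory map and counting how many
table-sized slots each range touches:

#include <stdint.h>
#include <stdio.h>

struct mem_range { uint64_t start, end; };  /* [start, end), byte addresses */

/*
 * Number of page tables needed at a given level to map all ranges, where
 * "shift" is the number of address bits one table covers:
 * L1 = 21 (2M), L2 = 30 (1G), L3 = 39 (512G), L4 = 48 (256T).
 */
static uint64_t tables_for_level(const struct mem_range *map, unsigned int nr,
                                 unsigned int shift)
{
    uint64_t tables = 0, prev = UINT64_MAX;

    for ( unsigned int i = 0; i < nr; i++ )
    {
        uint64_t first = map[i].start >> shift;
        uint64_t last = (map[i].end - 1) >> shift;

        /* Don't double-count a table shared with the previous range. */
        tables += last - first + 1 - (first == prev);
        prev = last;
    }

    return tables;
}

int main(void)
{
    /* Example map: 3G of low RAM, a PCI hole, then 4G of RAM above 4G. */
    const struct mem_range map[] = {
        { 0x000000000ULL, 0x0c0000000ULL },
        { 0x100000000ULL, 0x200000000ULL },
    };
    const unsigned int nr = sizeof(map) / sizeof(map[0]);
    static const unsigned int shifts[] = { 21, 30, 39, 48 };

    for ( unsigned int l = 0; l < 4; l++ )
        printf("L%u tables: %llu\n", l + 1,
               (unsigned long long)tables_for_level(map, nr, shifts[l]));

    return 0;
}

For that example map this would come out at 3584 L1, 7 L2, 1 L3, and 1 L4
tables, i.e. roughly 14M of table memory, without needing any percentage
based fudge factor.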

>> I'm anyway having difficulty seeing why HAP and shadow would
>> have to use different calculations, the more that shadow relies
>> on the same P2M code that HAP uses in the AMD/SVM case.
> 
> For one, shadow needs to take the number of vCPUs into account while
> HAP doesn't.

Yes, and as said - adding that shadow-specific amount on top of
the generic calculation would seem better to me.
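
Schematically (everything below is invented for illustration; the helper
names and the per-vCPU figure are placeholders, not what Xen actually
allocates), I'd picture it along these lines:

#include <stdbool.h>

/* Invented stand-in for the relevant domain state. */
struct dom_cfg {
    bool hap;                 /* HAP vs. shadow paging */
    unsigned int max_vcpus;
};

/* Generic P2M/IOMMU table estimate in pages, shared by HAP and shadow. */
static unsigned long p2m_table_pages(unsigned long nr_pages)
{
    return (nr_pages * 8 + 4095) / 4096;
}

static unsigned long dom0_paging_pages(const struct dom_cfg *d,
                                       unsigned long nr_pages)
{
    unsigned long pages = p2m_table_pages(nr_pages);

    /* Shadow-specific amount on top: per-vCPU allocations (placeholder). */
    if ( !d->hap )
        pages += d->max_vcpus * 256;

    return pages;
}

int main(void)
{
    struct dom_cfg d = { .hap = false, .max_vcpus = 4 };

    /* 4G dom0: 1M 4k pages -> 2048 pages of tables + 1024 shadow pages. */
    return dom0_paging_pages(&d, 1UL << 20) == 3072 ? 0 : 1;
}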

>> Plus, as iirc was said by someone else already, I don't think we
>> can (continue to) neglect the MMIO space needs for MMCFG
>> and PCI devices, especially with devices having multi-GB BARs.
> 
> Well, there's the comment above that notes this approach only takes
> into account the holes in the memory map as regions to be mapped. This
> can be improved later on, but I think the important point here is to
> know where these numbers come from in order to tweak them in the future.

You've given this same argument to Wei before. I agree the
calculation adjustments are an improvement even without
taking that other aspect into consideration, but I'm not happy
to see an important portion left out. What if the sum of all
BARs exceeds the amount of RAM? What if enough BARs are
so undesirably placed that every one of them needs a full
separate chain of L4, L3, L2, and L1 entries?
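Just to put a (purely illustrative) number on it: in that worst case each
such mapping drags in its own L3, L2, and L1 tables, i.e. about 12k of
table memory per BAR irrespective of the BAR's size, on top of the 8 bytes
per page the current formula accounts for. A few hundred badly placed BARs
would already mean megabytes that the estimate doesn't cover.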

>> > +        else
>> > +            avail -= dom0_shadow_pages(d, nr_pages) +
>> > +                     dom0_hap_pages(d, nr_pages);
>> >      }
>> 
>> Doesn't dom0_shadow_pages() (mean to) already include the
>> amount needed for the P2M?
> 
> libxl code mentions: "plus 1 page per MiB of RAM for the P2M map," so
> I guess the shadow calculation takes into account the memory used by
> the IOMMU page tables?

I think that comment refers to the P2M needs, not the IOMMU ones.
Iirc in shadow mode the IOMMU uses separate page tables, though I
don't recall why that is when the P2M is really only used by software
in that case.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

