
Re: [Xen-devel] [PATCH v2 1/2] x86/dom0: rename paging function



>>> On 13.12.18 at 15:20, <roger.pau@xxxxxxxxxx> wrote:
> On Thu, Dec 13, 2018 at 03:17:05AM -0700, Jan Beulich wrote:
>> >>> On 13.12.18 at 10:14, <roger.pau@xxxxxxxxxx> wrote:
>> > On Thu, Dec 13, 2018 at 12:45:07AM -0700, Jan Beulich wrote:
>> >> >>> On 12.12.18 at 18:05, <roger.pau@xxxxxxxxxx> wrote:
>> >> > On Wed, Dec 12, 2018 at 09:15:09AM -0700, Jan Beulich wrote:
>> >> >> The MMIO side of things of course still remains unclear.
>> >> > 
>> >> > Right, for MMIO and for the handling of grant and foreign mappings
>> >> > it's not clear how we want to proceed.
>> >> > 
>> >> > Maybe account for all host RAM (total_pages) plus MMIO BARs?
>> >> 
>> >> Well, I thought we've already settled on it being impossible to
>> >> account for all MMIO BARs at this point.
>> > 
>> > Well, I could iterate over all the registered PCI devices and size
>> > the BARs (without VF BARs, at least initially). This is quite
>> > cumbersome; my other option would be to use max_page and hope that
>> > there are enough holes to make up for the BAR MMIO regions.
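
For reference, below is a minimal sketch of the usual write-all-ones
probe such BAR sizing would involve. The cfg_read32()/cfg_write32()
accessors and their signatures are assumptions made up for the example
(not Xen's actual config space interface), and 64-bit, I/O and VF BARs
are ignored for brevity:

#include <stdint.h>

/* Assumed config space accessors, for this sketch only. */
uint32_t cfg_read32(unsigned int sbdf, unsigned int reg);
void cfg_write32(unsigned int sbdf, unsigned int reg, uint32_t val);

/*
 * Size a single 32-bit memory BAR: write all ones, read back which
 * address bits are writable, restore the original value, and derive
 * the size from the two's complement of the writable mask.
 */
static uint64_t size_mem_bar(unsigned int sbdf, unsigned int bar_reg)
{
    uint32_t orig = cfg_read32(sbdf, bar_reg);
    uint32_t mask;

    if ( orig & 1 )          /* I/O BAR: not handled in this sketch */
        return 0;

    cfg_write32(sbdf, bar_reg, ~0u);
    mask = cfg_read32(sbdf, bar_reg) & ~0xfu;  /* drop the flag bits */
    cfg_write32(sbdf, bar_reg, orig);

    return (uint64_t)(uint32_t)(~mask + 1);    /* 0 if unimplemented */
}

In practice memory decode would also need to be disabled around the
probe, and 64-bit BARs consume two registers, which is part of what
makes walking every device cumbersome.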
>> 
>> Well, maybe we could live with this for now. I certainly would
>> prefer to have a 3rd opinion though, as I continue to feel uneasy
>> with this rather imprecise estimation (i.e. I'd much prefer a more
>> dynamic / on-demand approach).
> 
> I agree it's not a perfect solution, but I think what's currently done
> is even worse, and we have already had bug reports of users seeing Xen
> panic at PVH Dom0 build time if no dom0_mem parameter is specified.
> 
> Would you be OK with using max_page then?
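
To put a rough number on what using max_page implies (a back-of-the-
envelope sketch, not the actual dom0 builder code): with 4-level paging
and 512 entries per table, covering nr_pages frames takes about
nr_pages/512 L1 tables, nr_pages/512^2 L2 tables and so on, i.e. a
little under nr_pages/511 pages of paging structures overall:

#include <stdint.h>

/*
 * Rough upper bound on the number of 4k pages needed for 4-level
 * paging structures covering nr_pages frames (512 entries per table).
 * Illustrative only; not the actual PVH dom0 estimation code.
 */
static uint64_t paging_pages_estimate(uint64_t nr_pages)
{
    uint64_t total = 0;
    unsigned int level;

    for ( level = 1; level <= 4; level++ )
    {
        nr_pages = (nr_pages + 511) / 512; /* tables needed at this level */
        total += nr_pages;
    }

    return total;
}

For example, a max_page corresponding to 64GiB (16M frames) gives
roughly 32k such pages, i.e. about 128MiB of paging structures. Basing
the estimate on max_page rather than total_pages folds the memory holes
into that figure, which is the slack that is hoped to cover the BAR
mappings.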

I'm not going to say yes or no here without having seen a (qualified)
3rd opinion.

Jan


