
Re: [Xen-devel] [PATCH v1 04/10] vfio/type1: Prepare is_invalid_reserved_pfn() for PG_reserved changes



On Thu, Nov 7, 2019 at 2:07 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 07.11.19 19:22, David Hildenbrand wrote:
> >
> >
> >> On 07.11.2019 at 16:40, Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
> >>
> >> On Thu, Oct 24, 2019 at 5:12 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
> >>>
> >>> Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
> >>> change that.
> >>>
> >>> KVM has this weird use case that you can map anything from /dev/mem
> >>> into the guest. pfn_valid() is not a reliable check whether the memmap
> >>> was initialized and can be touched. pfn_to_online_page() makes sure
> >>> that we have an initialized memmap (and don't have ZONE_DEVICE memory).
> >>>
> >>> Rewrite is_invalid_reserved_pfn() similar to kvm_is_reserved_pfn() to
> >>> make sure the function produces the same result once we stop setting
> >>> ZONE_DEVICE pages PG_reserved.
> >>>
> >>> Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
> >>> Cc: Cornelia Huck <cohuck@xxxxxxxxxx>
> >>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> >>> ---
> >>> drivers/vfio/vfio_iommu_type1.c | 10 ++++++++--
> >>> 1 file changed, 8 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >>> index 2ada8e6cdb88..f8ce8c408ba8 100644
> >>> --- a/drivers/vfio/vfio_iommu_type1.c
> >>> +++ b/drivers/vfio/vfio_iommu_type1.c
> >>> @@ -299,9 +299,15 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
> >>>   */
> >>> static bool is_invalid_reserved_pfn(unsigned long pfn)
> >>> {
> >>> -       if (pfn_valid(pfn))
> >>> -               return PageReserved(pfn_to_page(pfn));
> >>> +       struct page *page = pfn_to_online_page(pfn);
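[A sketch of the complete rewritten helper, following the changelog's
description above rather than the verbatim patch body; the comments
are the sketch's own, not the patch's:

static bool is_invalid_reserved_pfn(unsigned long pfn)
{
	struct page *page = pfn_to_online_page(pfn);

	/*
	 * No initialized, online memmap (ZONE_DEVICE pfns, random
	 * /dev/mem pfns, ...): treat as reserved, matching the
	 * kvm_is_reserved_pfn() approach the changelog references.
	 */
	if (!page)
		return true;

	/* Online pages: PG_reserved covers e.g. the zero page and MMIO. */
	return PageReserved(page);
}
]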
> >>
> >> Ugh, I just realized this is not a safe conversion until
> >> pfn_to_online_page() is moved over to subsection granularity. As it
> >> stands it will return a page for any ZONE_DEVICE pfns that share a
> >> section with boot memory.
> >
> > That should not happen right now; I commented back when you introduced
> > subsection support that I don't want ZONE_DEVICE mixed with online
> > pages in a section. Having memory block devices that partially span
> > ZONE_DEVICE would be ... really weird. With something like pfn_active()
> > - as discussed - we could at least make this check work, but I am not
> > sure we really want to go down that path. In the worst case, some MB of
> > RAM are lost ... I guess this needs more thought.
> >
>
> I just realized the "boot memory" part. Is that a real thing? IOW, can
> we have ZONE_DEVICE falling into a memory block (with holes)? I somewhat
> have doubts that this would work ...

One of the real-world failure cases that started the subsection effort
is that Persistent Memory collides with System RAM on a 64MB boundary
on shipping platforms: System RAM ends on a 64MB boundary and, due to
a lack of memory controller resources, PMEM is mapped contiguously
starting at that boundary. There are more details in the subsection
cover letter / changelogs [1] [2]. It's not sufficient to just lose
some memory; that was the broken implementation that led to the
subsection work, because the lost memory may change from one boot to
the next and software can't reliably inject a padding that conforms to
the x86 128MB section constraint.
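
To make the collision concrete: with x86_64's 128MB sections, a RAM
range ending 64MB into a section and PMEM starting right there fall in
the same section. A small standalone sketch, where the 4GB + 64MB
boundary is an assumed example address and only the shift values are
the real x86_64 ones:

#include <stdio.h>

#define SECTION_SIZE_BITS	27	/* x86_64: 128MB sections */
#define PAGE_SHIFT		12
#define PFN_SECTION_SHIFT	(SECTION_SIZE_BITS - PAGE_SHIFT)

int main(void)
{
	/* Assumed layout: System RAM ends at 4GB + 64MB, PMEM starts there. */
	unsigned long ram_end    = 0x104000000UL;
	unsigned long last_ram   = (ram_end - 4096) >> PAGE_SHIFT;
	unsigned long first_pmem = ram_end >> PAGE_SHIFT;

	/*
	 * Both print section 32: the section is online because of its RAM
	 * half, so a section-granularity online check covers the PMEM pfns
	 * in it as well.
	 */
	printf("last RAM pfn  -> section %lu\n", last_ram >> PFN_SECTION_SHIFT);
	printf("first PMEM pfn -> section %lu\n", first_pmem >> PFN_SECTION_SHIFT);
	return 0;
}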

Suffice to say, I think we need your pfn_active() to get a
subsection-granularity pfn_to_online_page() before PageReserved() can
be removed.
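
For reference, pfn_to_online_page() is currently section-granular;
paraphrasing the macro in include/linux/memory_hotplug.h as a
simplified sketch (not the exact definition):

static struct page *pfn_to_online_page_sketch(unsigned long pfn)
{
	unsigned long nr = pfn_to_section_nr(pfn);

	/* "online" is tracked per 128MB section, not per pfn */
	if (nr < NR_MEM_SECTIONS && online_section_nr(nr) &&
	    pfn_valid_within(pfn))
		return pfn_to_page(pfn);
	return NULL;
}

so any ZONE_DEVICE pfn that shares a section with online boot memory
comes back with a non-NULL page.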

[1]: https://lore.kernel.org/linux-mm/156092349300.979959.17603710711957735135.stgit@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
[2]: https://lore.kernel.org/linux-mm/156092354368.979959.6232443923440952359.stgit@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
