
Re: [Xen-devel] [PATCH v4 11/18] x86/mem_sharing: ASSERT that p2m_set_entry succeeds



On Fri, Jan 17, 2020 at 2:23 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 16.01.2020 17:12, Tamas K Lengyel wrote:
> > On Thu, Jan 16, 2020 at 9:07 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
> >>
> >> On 08.01.2020 18:14, Tamas K Lengyel wrote:
> >>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxx>
> >>> ---
> >>>  xen/arch/x86/mm/mem_sharing.c | 42 +++++++++++++++++------------------
> >>>  1 file changed, 21 insertions(+), 21 deletions(-)
> >>>
> >>> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> >>> index 93e7605900..3f36cd6bbc 100644
> >>> --- a/xen/arch/x86/mm/mem_sharing.c
> >>> +++ b/xen/arch/x86/mm/mem_sharing.c
> >>> @@ -1117,11 +1117,19 @@ int add_to_physmap(struct domain *sd, unsigned 
> >>> long sgfn, shr_handle_t sh,
> >>>          goto err_unlock;
> >>>      }
> >>>
> >>> +    /*
> >>> +     * Must succeed, we just read the entry and hold the p2m lock
> >>> +     * via get_two_gfns.
> >>> +     */
> >>>      ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
> >>>                          p2m_ram_shared, a);
> >>> +    ASSERT(!ret);
> >>
> >> And there's no risk of -ENOMEM because of needing to split a
> >> larger order page? At the very least the reasoning in the
> >> comment would need further extending.
> >
> > No because we are plugging a hole in the domain. There is no larger
> > page mapped in here that would need to be broken up.
>
> p2m_is_hole() also covers p2m_mmio_dm and p2m_invalid. The former
> (should really be the latter) is what you'll get back for e.g. a
> GFN beyond max_mapped_pfn. Aiui such a "set" may then require
> table population, which may still yield -ENOMEM (at least EPT
> looks to return -ENOENT in this case instead; I guess I'll make
> a patch).

Yes, that is actually what's expected to happen in the fork case, since
the fork starts out with no entries in its EPT at all, so there will be
allocations for its page-table entries. But for forks that's not a
concern, because during the fork hypercall we set up the same HAP
allocation the parent VM has (roughly the step sketched below). So the
fork is guaranteed to have the same amount of memory available for its
page tables as its parent.
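
A minimal sketch of that fork-time step, just to illustrate the idea;
the helper names below are made up for illustration and are not the
actual hypercall code:

    /*
     * Illustrative only: size the fork's paging (HAP) pool to match its
     * parent's, so that later p2m page-table population in the fork can
     * never need more page-table memory than the parent itself had.
     * parent_paging_pool_pages() and set_paging_pool_size() are
     * hypothetical names, not real Xen functions.
     */
    static int fork_match_parent_paging_pool(struct domain *parent,
                                             struct domain *fork)
    {
        unsigned long pages = parent_paging_pool_pages(parent);

        return set_paging_pool_size(fork, pages);
    }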

Now, as for using add_to_physmap on a non-forked VM to plug a hole like
that: yes, there is the possibility that the VM runs out of space for
its page tables, so the error path should be kept rather than asserted
away (roughly the pre-patch shape sketched below). So I guess we should
skip this patch.
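
For reference, a sketch of what the non-fork case then still needs,
i.e. roughly the pre-patch shape of the hunk quoted above (the labels
and variables are those visible in the add_to_physmap() context):

    /*
     * Not guaranteed to succeed for a plain (non-forked) VM: if cgfn
     * lies beyond max_mapped_pfn, intermediate page-table levels may
     * have to be allocated, so p2m_set_entry() can fail (e.g. -ENOMEM,
     * or -ENOENT on EPT as noted above) and must not be ASSERTed.
     */
    ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
                        p2m_ram_shared, a);
    if ( ret )
        goto err_unlock;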

Thanks,
Tamas

