
Re: [PATCH v2 for-4.14] x86/vmx: use P2M_ALLOC in vmx_load_pdptrs instead of P2M_UNSHARE



On Thu, Jun 18, 2020 at 7:26 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 18.06.2020 15:00, Tamas K Lengyel wrote:
> > On Thu, Jun 18, 2020 at 6:52 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >>
> >> On Thu, Jun 18, 2020 at 02:46:24PM +0200, Jan Beulich wrote:
> >>> On 18.06.2020 14:39, Tamas K Lengyel wrote:
> >>>> On Thu, Jun 18, 2020 at 12:31 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
> >>>>>
> >>>>> On 17.06.2020 18:19, Tamas K Lengyel wrote:
> >>>>>> While forking VMs running a small RTOS system (Zephyr), a Xen crash
> >>>>>> has been observed due to a mm-lock order violation while copying the
> >>>>>> HVM CPU context from the parent. This issue has been identified to
> >>>>>> be due to hap_update_paging_modes first getting a lock on the gfn
> >>>>>> using get_gfn. This call also creates a shared entry in the fork's
> >>>>>> memory map for the cr3 gfn. The function later calls hap_update_cr3
> >>>>>> while holding the paging_lock, which results in the lock-order
> >>>>>> violation in vmx_load_pdptrs when it tries to unshare the above
> >>>>>> entry by grabbing the page with the P2M_UNSHARE flag set.
> >>>>>>
> >>>>>> Since vmx_load_pdptrs only reads from the page, its usage of
> >>>>>> P2M_UNSHARE was unnecessary to start with. Using P2M_ALLOC is the
> >>>>>> appropriate flag to ensure the p2m is properly populated and to
> >>>>>> avoid the lock-order violation we observed.
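
A minimal sketch of the kind of lookup change being described (not the
verbatim patch; error handling and the VMCS update are elided) may help
illustrate why P2M_ALLOC is enough for a read-only consumer of the cr3 page:

    /* Sketch only: a PDPTE load needs the gfn's p2m entry populated, but it
     * never writes the page, so it does not need the entry unshared.
     * P2M_ALLOC populates the entry (plugging the hole on a fork) without
     * breaking sharing, which is what avoids the lock-order violation.
     */
    static void load_pdptrs_sketch(struct vcpu *v)
    {
        uint64_t cr3 = v->arch.hvm.guest_cr[3];
        p2m_type_t p2mt;
        struct page_info *page;

        page = get_page_from_gfn(v->domain, cr3 >> PAGE_SHIFT, &p2mt,
                                 P2M_ALLOC /* previously P2M_UNSHARE */);
        if ( !page )
            return;                     /* error handling elided */

        /* ... map the page, read the four PDPTEs into the VMCS, unmap ... */

        put_page(page);
    }
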
> >>>>>
> >>>>> Using P2M_ALLOC is not going to address the original problem though
> >>>>> afaict: You may hit the mem_sharing_fork_page() path that way, and
> >>>>> via nominate_page() => __grab_shared_page() => mem_sharing_page_lock()
> >>>>> you'd run into a lock order violation again.
> >>>>
> >>>> Note that the nominate_page you see in that path is for the parent VM.
> >>>> The paging lock is not taken for the parent VM, so nominate_page
> >>>> succeeds without any issues whenever fork_page is called. There is no
> >>>> nominate_page call for the client domain, as there is nothing to
> >>>> nominate when plugging a hole.
> >>>
> >>> But that's still a lock order issue then, isn't it? Just one that
> >>> the machinery can't detect / assert upon.
> >>
> >> Yes, mm lock ordering doesn't differentiate between domains, and the
> >> current lock order on the pCPU is based on the last lock taken
> >> (regardless of the domain it belongs to).
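
As a rough illustration of that per-pCPU check (hypothetical helper names,
not the actual mm-locks.h code): the comparison is only against whatever
lock level was taken last on this CPU, with no notion of an owning domain.

    /* Hypothetical sketch: every mm lock class has a static level, and
     * taking a lock whose level does not exceed the level of the last lock
     * taken on this pCPU counts as an ordering violation, regardless of
     * which domain the two locks belong to.
     */
    static DEFINE_PER_CPU(int, mm_lock_level_sketch);

    static void check_lock_level_sketch(int level)
    {
        if ( level <= this_cpu(mm_lock_level_sketch) )
            panic("mm lock order violation: %d taken after %d\n",
                  level, this_cpu(mm_lock_level_sketch));
        this_cpu(mm_lock_level_sketch) = level;
    }
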
> >
> > I see, makes sense. In that case the issue is avoided purely because
> > the get_gfn call happens before the paging_lock is taken. That would
> > have to be the way to go on other paths leading to vmx_load_pdptrs as
> > well, but since all other paths reach it without the paging lock being
> > held, there aren't any more adjustments necessary right now that I can
> > see.
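
In other words, the pattern that keeps such a path safe is to resolve (and
thereby populate) the cr3 gfn before the paging lock is taken, so the later
PDPTE load under the lock finds the entry already present. A rough sketch of
that ordering, with placeholder names standing in for the real call chain:

    /* Sketch only, placeholder names.  Step 1 runs with no paging lock held
     * and populates the fork's p2m entry for the cr3 gfn; step 2 then takes
     * the paging lock and eventually reaches vmx_load_pdptrs, which by now
     * only has to read an already-present page.
     */
    void update_paging_modes_sketch(struct vcpu *v)
    {
        struct domain *d = v->domain;
        unsigned long gfn = v->arch.hvm.guest_cr[3] >> PAGE_SHIFT;
        p2m_type_t p2mt;

        get_gfn(d, gfn, &p2mt);           /* 1: populate and lock the gfn */

        paging_lock(d);
        hap_update_cr3_sketch(v);         /* 2: reaches vmx_load_pdptrs */
        paging_unlock(d);

        put_gfn(d, gfn);
    }
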
>
> If this is indeed the case, then I guess all that's needed is a further
> extended / refined commit message in v3.

Alright.

Thanks,
Tamas



 

