
RE: [PATCH for-4.15 2/4] xen/iommu: x86: Free the IOMMU page-tables with the pgtables.lock held



> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 15 January 2021 11:07
> To: Julien Grall <julien@xxxxxxx>; Paul Durrant <paul@xxxxxxx>
> Cc: hongyxia@xxxxxxxxxxxx; Julien Grall <jgrall@xxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Subject: Re: [PATCH for-4.15 2/4] xen/iommu: x86: Free the IOMMU page-tables
> with the pgtables.lock held
> 
> On 14.01.2021 20:19, Julien Grall wrote:
> >
> >
> > On 23/12/2020 14:01, Julien Grall wrote:
> >> Hi Jan,
> >>
> >> On 23/12/2020 13:48, Jan Beulich wrote:
> >>> On 22.12.2020 16:43, Julien Grall wrote:
> >>>> From: Julien Grall <jgrall@xxxxxxxxxx>
> >>>>
> >>>> The pgtables.lock is protecting access to the page list pgtables.list.
> >>>> However, iommu_free_pgtables() will not hold it. I guess it was assumed
> >>>> that page-tables cannot be allocated while the domain is dying.
> >>>>
> >>>> Unfortunately, there is no guarantee that iommu_map() will not be
> >>>> called while a domain is dying (it looks to be possible from
> >>>> XEN_DOMCTL_memory_mapping).
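
For context, the fix named in the patch title amounts to taking pgtables.lock
around the list walk in iommu_free_pgtables(), so a racing allocation from
iommu_map() cannot observe or corrupt a half-emptied pgtables.list. A minimal
sketch, assuming the dom_iommu(), page_list_remove_head() and
free_domheap_page() helpers and the arch.pgtables fields of the Xen tree at
the time, not the exact committed patch:

    int iommu_free_pgtables(struct domain *d)
    {
        struct domain_iommu *hd = dom_iommu(d);
        struct page_info *pg;

        /* Hold the lock for the whole walk so concurrent list updates
         * (e.g. from a late iommu_map()) cannot race with the freeing. */
        spin_lock(&hd->arch.pgtables.lock);

        while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
            free_domheap_page(pg);

        spin_unlock(&hd->arch.pgtables.lock);

        return 0;
    }
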
> >>>
> >>> I'd rather disallow any new allocations for a dying domain, not
> >>> the least because ...
> >>
> >> Patch #4 will disallow such allocation. However...
> >
> > It turns out that I can't disallow it because there is at least one call
> > of iommu_map() while an HVM domain is being destroyed:
> >
> >      (XEN)    [<ffff82d04026e399>] R iommu_map+0xf2/0x171
> >      (XEN)    [<ffff82d04026e43e>] F iommu_legacy_map+0x26/0x63
> >      (XEN)    [<ffff82d040302a42>] F arch/x86/mm/p2m-ept.c#ept_set_entry+0x6b2/0x730
> >      (XEN)    [<ffff82d0402f761d>] F p2m_set_entry+0x91/0x128
> >      (XEN)    [<ffff82d0402f8643>] F guest_physmap_add_entry+0x3d7/0x4e0
> >      (XEN)    [<ffff82d0402f87cb>] F guest_physmap_add_page+0x7f/0xfe
> >      (XEN)    [<ffff82d0402ba0a2>] F arch/x86/hvm/ioreq.c#hvm_add_ioreq_gfn+0x6f/0x8d
> >      (XEN)    [<ffff82d0402ba0f9>] F arch/x86/hvm/ioreq.c#hvm_ioreq_server_disable+0x39/0x4f
> >      (XEN)    [<ffff82d0402bb50e>] F hvm_destroy_all_ioreq_servers+0x6f/0x9b
> >      (XEN)    [<ffff82d0402afc15>] F hvm_domain_relinquish_resources+0x3e/0x92
> >      (XEN)    [<ffff82d04031f278>] F domain_relinquish_resources+0x355/0x372
> >      (XEN)    [<ffff82d040205dd4>] F domain_kill+0xc7/0x150
> >      (XEN)    [<ffff82d04023baf0>] F do_domctl+0xba7/0x1a09
> >      (XEN)    [<ffff82d040312c8f>] F pv_hypercall+0x2f0/0x55f
> >      (XEN)    [<ffff82d040391432>] F lstar_enter+0x112/0x120
> >
> > This one resulted in a domain crash rather than a clean destruction.
> 
> A domain crash despite the domain already getting cleaned up is
> something we may at least want to consider doing better: There
> already is a !d->is_shutting_down conditional printk() there.
> What would people think about avoiding the domain_crash() call
> in similar ways? (It could even be considered to make
> domain_crash() itself behave like this, but such a step may make
> it necessary to first audit all use sites.)
> 
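
To make the suggestion concrete: a hedged sketch of the kind of guard being
discussed, with whether it is open-coded at the call site or folded into
domain_crash() itself being exactly the open question:

    /* Sketch only: skip the crash for a domain that is already being torn
     * down, in line with the existing !d->is_shutting_down conditional
     * printk() mentioned above. */
    if ( !d->is_shutting_down )
        domain_crash(d);
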
> > I would still like to disallow page-table allocation, so I am thinking of
> > ignoring any request in iommu_map() if the domain is dying.
> 
> Ignoring requests there seems fragile to me. Paul - what are your
> thoughts about bailing early from hvm_add_ioreq_gfn() when the
> domain is dying?
> 

I think that's ok. Really, the only reason for putting the pages back in the
P2M is to allow migration to work, so if the domain is dying then we don't
really care, do we?

  Paul
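
A rough sketch of the early bail-out agreed above, assuming the
xen/arch/x86/hvm/ioreq.c naming of the time (hvm_add_ioreq_gfn(), s->target,
d->is_dying); the actual committed change may look different:

    /* At the top of hvm_add_ioreq_gfn(), before the page is put back into
     * the P2M: re-inserting the page only matters for migration, and a
     * dying domain will never migrate, so simply report success. */
    if ( s->target->is_dying )
        return 0;
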

> Jan




 

