
Re: [Xen-devel] [PATCH v14 4/9] iommu: don't domain_crash() inside iommu_map/unmap_page()

> -----Original Message-----
> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxx]
> Sent: 04 October 2018 17:30
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>; Stefano Stabellini
> <sstabellini@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian
> Jackson <Ian.Jackson@xxxxxxxxxx>; Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Jun Nakajima
> <jun.nakajima@xxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>
> Subject: Re: [PATCH v14 4/9] iommu: don't domain_crash() inside
> iommu_map/unmap_page()
> On 10/04/2018 11:45 AM, Paul Durrant wrote:
> > This patch removes the implicit domain_crash() from iommu_map(),
> > unmap_page() and iommu_iotlb_flush() and turns them into straightforward
> > wrappers that check the existence of the relevant iommu_op and call
> > through to it. This makes them usable by PV IOMMU code to be delivered
> in
> > future patches.
> Hmm, apparently I gave this an R-b before, but now I'm not totally happy
> with it.  The point of putting the domain_crash() inside those functions
> was because forgetting to crash the domain, particularly in the event of
> an unmap or a flush, was very likely to be a security issue.
> Would it be possible to either add a `mayfail` parameter, or a new
> function (iommu_map_mayfail() or something), that the PV IOMMU code
> could use instead?
> It looks like git is comfortable putting this patch at the end; all the
> other patches look like they probably have enough acks to go in while we
> discuss this one.

I still think an implicit domain_crash() doesn't really belong in something
that looks like a straightforward wrapper around a per-implementation jump
table. How about iommu_map/unmap_may_crash() instead, to make the semantics
explicit?
