
Re: [Xen-devel] [PATCH 2/2] iommu: elide flushing for higher order map/unmap operations



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 03 December 2018 15:29
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Brian Woods <brian.woods@xxxxxxx>; Suravee Suthikulpanit
> <suravee.suthikulpanit@xxxxxxx>; Julien Grall <julien.grall@xxxxxxx>;
> Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Roger Pau Monne
> <roger.pau@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Kevin Tian
> <kevin.tian@xxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-
> devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: RE: [Xen-devel] [PATCH 2/2] iommu: elide flushing for higher
> order map/unmap operations
> 
> >>> On 03.12.18 at 16:18, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On
> >> Behalf Of Jan Beulich
> >> Sent: 03 December 2018 15:03
> >>
> >> >>> On 30.11.18 at 11:45, <paul.durrant@xxxxxxxxxx> wrote:
> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> > @@ -631,11 +631,14 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
> >> >      return rc;
> >> >  }
> >> >
> >> > -static int __must_check iommu_flush_iotlb_pages(struct domain *d,
> >> > -                                                dfn_t dfn,
> >> > -                                                unsigned int page_count)
> >> > +static int __must_check iommu_flush_iotlb_pages(
> >> > +    struct domain *d, dfn_t dfn, unsigned int page_count,
> >> > +    enum iommu_flush_type flush_type)
> >>
> >> Is the re-flowing really needed?
> >>
> >
> > Yes. The enum is long and won't fit within 80 chars otherwise.
> 
> How about calling the parameter by a shorter name, e.g. ft?
>

Well, I'm going to be passing flags around now, so I may as well use an
unsigned int.
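
Something along these lines is what I have in mind (just a sketch; the
IOMMU_FLUSHF_* names are placeholders for whatever the re-worked series
ends up calling its flag bits):

/* Illustrative flag bits - placeholder names, not necessarily final. */
#define IOMMU_FLUSHF_added    (1u << 0) /* new PTEs were written */
#define IOMMU_FLUSHF_modified (1u << 1) /* existing PTEs were changed */

static int __must_check iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
                                                unsigned int page_count,
                                                unsigned int flush_flags)
{
    /*
     * Assumption: the existing iommu_flush_iotlb() helper's third
     * argument is its "old PTE was present" indicator, which maps
     * directly onto the 'modified' flag.
     */
    return iommu_flush_iotlb(d, dfn, flush_flags & IOMMU_FLUSHF_modified,
                             page_count);
}

A plain flag mask keeps the parameter list within 80 columns without
re-flowing, and callers can accumulate 'added'/'modified' across a whole
higher-order operation and then issue a single flush at the end.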
 
> >> > @@ -674,9 +677,6 @@ static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr)
> >> >      spin_unlock(&hd->arch.mapping_lock);
> >> >      iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> >> >
> >> > -    if ( !this_cpu(iommu_dont_flush_iotlb) )
> >> > -        rc = iommu_flush_iotlb_pages(domain, daddr_to_dfn(addr), 1);
> >>
> >> This code not getting replaced by another addition right in this
> >> source file, and this function's only caller being
> >> intel_iommu_unmap_page() makes me wonder why you don't
> >> have the unmap functions similarly hand back a flush indicator.
> >
> > Well, the assumption is that unmap is always modifying an existing
> > entry. Is that assumption wrong?
> 
> I could certainly see an unmap happening for an already
> unmapped area.

Ok, I'll generalise it then.
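
Concretely, something like this (again just a sketch, re-using the
placeholder flush_flags/IOMMU_FLUSHF_modified names from above and
eliding the unchanged PTE-lookup code):

static int __must_check dma_pte_clear_one(struct domain *domain, u64 addr,
                                          unsigned int *flush_flags)
{
    struct domain_iommu *hd = dom_iommu(domain);
    struct dma_pte *page, *pte;

    spin_lock(&hd->arch.mapping_lock);

    /* ... existing code locating the leaf PTE for addr ... */

    if ( !dma_pte_present(*pte) )
    {
        /* Nothing was mapped here, so there is nothing to flush. */
        spin_unlock(&hd->arch.mapping_lock);
        unmap_vtd_domain_page(page);
        return 0;
    }

    dma_clear_pte(*pte);
    *flush_flags |= IOMMU_FLUSHF_modified;

    spin_unlock(&hd->arch.mapping_lock);
    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
    unmap_vtd_domain_page(page);

    return 0;
}

intel_iommu_unmap_page() (and its callers) would then OR the flags
together and do a single flush at the end, the same way the map side
already hands back a flush indicator.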

  Paul

> 
> Jan
> 



 

