
Re: [Xen-devel] [PATCH v8 7/7] vtd: add lookup_page method to iommu_ops



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 13 September 2018 14:45
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: George Dunlap <George.Dunlap@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> Kevin Tian <kevin.tian@xxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: RE: [PATCH v8 7/7] vtd: add lookup_page method to iommu_ops
> 
> >>> On 13.09.18 at 15:34, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 13 September 2018 14:29
> >>
> >> >>> On 13.09.18 at 12:31, <paul.durrant@xxxxxxxxxx> wrote:
> >> > This patch adds a new method to the VT-d IOMMU implementation to find
> >> > the MFN currently mapped by the specified DFN along with a wrapper
> >> > function in generic IOMMU code to call the implementation if it exists.
> >>
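For reference, a minimal sketch of the generic wrapper being described; the
hook's place in iommu_ops, the iommu_enabled/platform_ops checks and the
-EOPNOTSUPP fallback are assumptions for illustration here, not the patch's
exact code:

/*
 * Illustrative only: hand the lookup off to the per-implementation
 * lookup_page hook, if the active IOMMU driver provides one.
 */
int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                      unsigned int *flags)
{
    const struct domain_iommu *hd = dom_iommu(d);

    if ( !iommu_enabled || !hd->platform_ops ||
         !hd->platform_ops->lookup_page )
        return -EOPNOTSUPP;

    return hd->platform_ops->lookup_page(d, dfn, mfn, flags);
}
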
> >> Would you mind clarifying what the use of this is when ...
> >>
> >> > +static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
> >> > +                                   unsigned int *flags)
> >> > +{
> >> > +    struct domain_iommu *hd = dom_iommu(d);
> >> > +    struct dma_pte *page, val;
> >> > +    u64 pg_maddr;
> >> > +
> >> > +    /*
> >> > +     * If VT-d shares EPT page table or if the domain is the hardware
> >> > +     * domain and iommu_passthrough is set then pass back the dfn.
> >> > +     */
> >> > +    if ( iommu_use_hap_pt(d) ||
> >> > +         (iommu_passthrough && is_hardware_domain(d)) )
> >> > +    {
> >> > +        *mfn = _mfn(dfn_x(dfn));
> >> > +        return 0;
> >> > +    }
> >> > +
> >> > +    spin_lock(&hd->arch.mapping_lock);
> >> > +
> >> > +    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
> >> > +    if ( !pg_maddr )
> >> > +    {
> >> > +        spin_unlock(&hd->arch.mapping_lock);
> >> > +        return -ENOMEM;
> >> > +    }
> >> > +
> >> > +    page = map_vtd_domain_page(pg_maddr);
> >> > +    val = page[dfn_x(dfn) & LEVEL_MASK];
> >> > +
> >> > +    unmap_vtd_domain_page(page);
> >> > +    spin_unlock(&hd->arch.mapping_lock);
> >> > +
> >> > +    if ( !dma_pte_present(val) )
> >> > +        return -ENOENT;
> >> > +
> >> > +    *mfn = maddr_to_mfn(dma_pte_addr(val));
> >> > +    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
> >> > +    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
> >> > +
> >> > +    return 0;
> >> > +}
> >>
> >> ... the locking used here suggests that the result is stale by the
> >> time the caller gets to look at it? If this relies on locking in the
> >> callers, then I think this should be spelled out in comments next
> >> to the function definitions and/or declarations.
> >
> > Why would this be any different to locking map and unmap against each
> > other? Clearly that needs to be done for sensible results. I can add a
> > comment but I'd take it to be self evident that manipulation of mappings
> > needs to be locked against querying them.
> 
> But you realize that the implied question here was: If caller-side locking
> is required, what's the point of the locking the function does?

Well, you could say the same about map and unmap. It's implementation-specific
locking, so I assume the original implementor of map and unmap had a reason to
make it implementation-specific rather than generic. I could, of course,
introduce generic locking as well.
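
Purely to illustrate the "generic locking" option mentioned above: the lock
shown at the generic level below is hypothetical (in the code as posted the
lock is the VT-d implementation's hd->arch.mapping_lock), but lifting it into
the wrapper would look roughly like this:

/*
 * Sketch only: serialise lookups against map/unmap in the generic layer.
 * The mapping_lock member of struct domain_iommu shown here does not exist
 * today; it stands in for whatever generic lock would be introduced.
 */
int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                      unsigned int *flags)
{
    struct domain_iommu *hd = dom_iommu(d);
    int rc;

    if ( !iommu_enabled || !hd->platform_ops ||
         !hd->platform_ops->lookup_page )
        return -EOPNOTSUPP;

    spin_lock(&hd->mapping_lock);
    rc = hd->platform_ops->lookup_page(d, dfn, mfn, flags);
    spin_unlock(&hd->mapping_lock);

    return rc;
}

Of course this only buys anything if map and unmap take the same lock, which
is exactly the trade-off being discussed here.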

> 
> Furthermore, racy maps and unmaps would solely result in IOMMU
> faults if things go wrong, which impacts only the guest (it'll likely get
> killed). Handing back stale data to callers inside the hypervisor,
> who then base further decisions upon that stale data, may have
> a more severe impact.
> 

Ok. I'll spell it out in the header if you think it is non-obvious.
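
For what it's worth, the sort of note that might sit next to the declaration
could read something like the following; the wording is purely illustrative
rather than the comment that actually went in:

/*
 * Illustrative comment only: look up the MFN (and access flags) currently
 * mapped at the given DFN. The implementation's internal lock merely
 * guarantees a self-consistent read of the page table entry; it does not
 * stop a concurrent map or unmap from invalidating the result as soon as
 * it is returned. Callers acting on the returned MFN must therefore
 * serialise against modifications of the mapping themselves.
 */
int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                      unsigned int *flags);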

  Paul

> Jan
> 

