
Re: [Xen-devel] [PATCH v8 2/3] VT-d: Wrap a _sync version for all VT-d flush interfaces



On March 24, 2016 11:06pm, <dario.faggioli@xxxxxxxxxx> wrote:
> On Thu, 2016-03-24 at 14:56 +0100, Dario Faggioli wrote:
> > On Thu, 2016-03-24 at 13:57 +0800, Quan Xu wrote:
> > >
> > > @@ -134,8 +133,8 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
> > >              /* invalidate all translations:
> > > sbit=1,bit_63=0,bit[62:12]=1 */
> > >              sbit = 1;
> > >              addr = (~0UL << PAGE_SHIFT_4K) & 0x7FFFFFFFFFFFFFFF;
> > > -            rc = qinval_device_iotlb(iommu, pdev->ats_queue_depth,
> > > -                                     sid, sbit, addr);
> > > +            ret = qinval_device_iotlb_sync(iommu, pdev->ats_queue_depth,
> > > +                                           sid, sbit, addr);
> > >              break;
> > >          case DMA_TLB_PSI_FLUSH:
> > >              if ( !device_in_domain(iommu, pdev, did) )
> > > @@ -154,16 +153,13 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
> > >                  addr |= (((u64)1 << (size_order - 1)) - 1) <<
> > > PAGE_SHIFT_4K;
> > >              }
> > >
> > > -            rc = qinval_device_iotlb(iommu, pdev->ats_queue_depth,
> > > -                                     sid, sbit, addr);
> > > +            ret = qinval_device_iotlb_sync(iommu, pdev->ats_queue_depth,
> > > +                                           sid, sbit, addr);
> > >              break;
> > >          default:
> > >              dprintk(XENLOG_WARNING VTDPREFIX, "invalid vt-d flush type\n");
> > >              return -EOPNOTSUPP;
> > >          }
> > > -
> > > -        if ( !ret )
> > > -            ret = rc;
> > >      }
> > >
> > >      return ret;
> > >
> > Am I misreading something or we are introducing synchronous handling,
> > which was not there before?
> >
> > If yes, is it ok to do this in this patch?
> >
> > And if yes again, I think that it at least should be noted in the
> > changelog, as it would mean that the patch is not only introducing
> > some wrappers.
> >
> Ok, I think I see what's happening here. Before this patch,
> invalidate_sync() was being called inside qinval_device_iotlb(), so we were
> synchronous already, and we need to continue to be like that, by calling the
> _sync() variants.
> 
Yes. It is not easy to understand, and I find it difficult to describe in the
changelog.
Let me elaborate briefly on the evolution:
1. In original code, without my patch, it is:
        ...
        if ( flush_dev_iotlb )
            ret = dev_invalidate_iotlb(iommu, did, addr, size_order, type);
        rc = invalidate_sync(iommu);
        ...


dev_invalidate_iotlb() scans the ats_devices list and calls
qinval_device_iotlb() to flush each Device-TLB via a 'Device-TLB Invalidate
Descriptor'.
invalidate_sync() synchronizes with hardware for the invalidation request
descriptors submitted before the wait descriptor, via an 'Invalidation Wait
Descriptor'.
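These two steps (queue the invalidate descriptor, then wait on an Invalidation
Wait Descriptor) are what the _sync wrappers combine. Here is a minimal,
self-contained C sketch of that wrapper shape; the function bodies are stubs
invented for illustration, not Xen's actual qinval code:

```c
#include <stdint.h>

/* Illustrative stand-in for Xen's struct iommu; not the real type. */
struct iommu { int id; };

static int qinval_calls, sync_calls;

/* Queue a 'Device-TLB Invalidate Descriptor' (stubbed out here). */
static int qinval_device_iotlb(struct iommu *iommu, uint32_t max_invs_pend,
                               uint16_t sid, int sbit, uint64_t addr)
{
    qinval_calls++;
    return 0;
}

/* Queue an 'Invalidation Wait Descriptor' and wait for completion (stubbed). */
static int invalidate_sync(struct iommu *iommu)
{
    sync_calls++;
    return 0;
}

/* The _sync wrapper: submit the flush, then synchronize with hardware. */
static int qinval_device_iotlb_sync(struct iommu *iommu, uint32_t max_invs_pend,
                                    uint16_t sid, int sbit, uint64_t addr)
{
    int rc = qinval_device_iotlb(iommu, max_invs_pend, sid, sbit, addr);

    return rc ? rc : invalidate_sync(iommu);
}
```

Callers then issue a single qinval_device_iotlb_sync() call instead of a
flush followed by a separate invalidate_sync().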

If we assign multiple ATS devices to a domain and invalidate_sync() times out,
we cannot tell which device timed out. So:

2. In my previous patch, I moved an invalidate_sync() variant (as we need to
pass down the device's SBDF to hide the ATS device) into
dev_invalidate_iotlb(), flushing the ATS devices one by one. If a flush timed
out, I could identify the bogus device and hide it.
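That per-device approach can be sketched as follows. This is a hypothetical,
simplified model: struct ats_dev, flush_dev_sync() and the timeout flag are
invented for illustration; only the "sync per device, so a timeout names the
offender" idea follows the thread:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of one ATS device; not Xen's real structure. */
struct ats_dev {
    uint16_t sid;        /* source-id (BDF) of the device */
    int flush_times_out; /* stub: pretend this device's flush times out */
};

static uint16_t bogus_sid; /* which device's flush timed out, if any */

/* Flush one device's Device-TLB and wait for completion (stubbed). */
static int flush_dev_sync(const struct ats_dev *dev)
{
    return dev->flush_times_out ? -ETIMEDOUT : 0;
}

/* Flush each ATS device synchronously, so a timeout can be pinned
 * to a specific device instead of being anonymous. */
static int dev_invalidate_iotlb(struct ats_dev *devs, size_t n)
{
    int ret = 0;

    for ( size_t i = 0; i < n; i++ )
    {
        int rc = flush_dev_sync(&devs[i]);

        if ( rc == -ETIMEDOUT )
            bogus_sid = devs[i].sid; /* real code would hide the device */

        if ( !ret )
            ret = rc;
    }

    return ret;
}
```

The trade-off, discussed next, is that syncing inside dev_invalidate_iotlb()
differs from the other IOMMU invalidation paths, which sync afterwards.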

3. As Kevin pointed out, that put the invalidation sync inside
dev_invalidate_iotlb(), while for all other IOMMU invalidations the sync is
issued afterwards; that was the consistency issue.
   That is also why I provide the _sync versions in v8. (Could you help me
enhance the description? :) )

> Yes, if this is what happens, this also looks ok to me.
> Sorry for the noise.
> 
Dario, feel free to express yourself. As you know, I'd appreciate your (and the
others') comments. :)

Quan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

