
Re: [Xen-devel] [PATCH v2 07/11] IOMMU/MMU: propagate IOMMU Device-TLB flush error up to iommu_iotlb_flush{, _all} (top level ones).



On April 26, 2016 8:49 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >>> On 26.04.16 at 14:23, <quan.xu@xxxxxxxxx> wrote:
> > On April 25, 2016 6:19 PM,  Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >> >>> On 18.04.16 at 16:00, <quan.xu@xxxxxxxxx> wrote:
> >> > --- a/xen/common/memory.c
> >> > +++ b/xen/common/memory.c
> >> > @@ -678,8 +678,9 @@ static int xenmem_add_to_physmap(struct
> domain
> >> *d,
> >> >      if ( need_iommu(d) )
> >> >      {
> >> >          this_cpu(iommu_dont_flush_iotlb) = 0;
> >> > -        iommu_iotlb_flush(d, xatp->idx - done, done);
> >> > -        iommu_iotlb_flush(d, xatp->gpfn - done, done);
> >> > +        rc = iommu_iotlb_flush(d, xatp->idx - done, done);
> >> > +        if ( !rc )
> >> > +            rc = iommu_iotlb_flush(d, xatp->gpfn - done, done);
> >>
> >> And again - best effort flushing in all cases. I.e. you shouldn't
> >> bypass the second one if the first one failed, but instead properly
> >> accumulate the return value, and also again not clobber any negative value
> rc may already hold.
> >
> >
> > Thanks for correcting me. I did like accumulating the return value.
> > :( I will follow your suggestion in next v3.
> >
> >> What's worse (and makes me wonder whether this got tested) - you also
> >> clobber the positive return value this function may produce, even in
> >> the case the flushes succeed (and hence the function as a whole was
> successful).
> >>
> > I have tested it with previous patches (i.e.  1st patch), launching a
> > VM with PT ATS device to invoke this call tree.
> > btw, I fix the positive issue in 1st patch.
> > I know this is not strict, as I assume this patch set will be
> > committed at the same time.
> 
> Hmm, the "positive" here has nothing to do with the "positive" in patch 1.
> Please just have a look at xenmem_add_to_physmap() as a whole.
> 

Thanks for reminding me. The 'positive' here is 'rc = start + done'.
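
For reference, here is roughly how that positive value arises (a sketch from
memory, not a verbatim copy of xen/common/memory.c): the gmfn_range loop bails
out with 'start + done' when it gets preempted, so the hypercall can be
continued later.

    while ( xatp->size > done )
    {
        rc = xenmem_add_to_physmap_one(d, xatp->space, DOMID_INVALID,
                                       xatp->idx, xatp->gpfn);
        if ( rc < 0 )
            break;

        xatp->idx++;
        xatp->gpfn++;

        /* Preempted: report progress so the hypercall can be restarted. */
        if ( xatp->size > ++done && hypercall_preempt_check() )
        {
            rc = start + done;
            break;
        }
    }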

Thinking about it some more, I find I need two new local variables to hold the
flush return values in this function (correct me if I am wrong!).
If the first iommu_iotlb_flush() fails, I still issue the second one, but its
return value should not overwrite the first error -- rc should accumulate the
first error seen, without clobbering any negative value it may already hold.

The following is my modification:

--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -640,6 +640,10 @@ static int xenmem_add_to_physmap(struct domain *d,
     unsigned int done = 0;
     long rc = 0;

+#ifdef CONFIG_HAS_PASSTHROUGH
+    int ret, rv;
+#endif
+
     if ( xatp->space != XENMAPSPACE_gmfn_range )
         return xenmem_add_to_physmap_one(d, xatp->space, DOMID_INVALID,
                                          xatp->idx, xatp->gpfn);
@@ -678,8 +682,15 @@ static int xenmem_add_to_physmap(struct domain *d,
     if ( need_iommu(d) )
     {
         this_cpu(iommu_dont_flush_iotlb) = 0;
-        iommu_iotlb_flush(d, xatp->idx - done, done);
-        iommu_iotlb_flush(d, xatp->gpfn - done, done);
+        ret = iommu_iotlb_flush(d, xatp->idx - done, done);
+
+        if ( rc >= 0 && ret < 0 )
+            rc = ret;
+
+        rv = iommu_iotlb_flush(d, xatp->gpfn - done, done);
+
+        if ( rc >= 0 && ret == 0 && rv < 0 )
+            rc = rv;
     }
 #endif
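
With the above, both flushes are always attempted. rc is only overwritten when
a flush actually fails, so a fully successful pass still returns the positive
'start + done' continuation value; when a flush does fail, an earlier negative
rc is never clobbered and the first flush error seen is the one reported.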


Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

