
Re: [PATCH v5 6/8] common/grant_table: batch flush I/O TLB



On 10.09.2020 16:17, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@xxxxxxxx>
>> Sent: 10 September 2020 14:49
>>
>> On 07.09.2020 09:40, Paul Durrant wrote:
>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -241,7 +241,13 @@ struct gnttab_unmap_common {
>>>      grant_ref_t ref;
>>>  };
>>>
>>> -/* Number of unmap operations that are done between each tlb flush */
>>> +/* Number of map operations that are done between each pre-emption check */
>>> +#define GNTTAB_MAP_BATCH_SIZE 32
>>> +
>>> +/*
>>> + * Number of unmap operations that are done between each tlb flush and
>>> + * pre-emption check.
>>> + */
>>>  #define GNTTAB_UNMAP_BATCH_SIZE 32
>>
>> Interesting - I don't think I've ever seen preemption spelled like
>> this (with a hyphen). In the interest of grep-ability, would you
>> mind changing to the more "conventional" spelling? Albeit I now
>> notice we have two such spellings in the tree already ...
>>
> 
> Sure, I'll change it.
> 
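For orientation: these constants set the granularity of the new scheme.
Rather than flushing (and checking for preemption) per operation, work is
done in fixed-size batches, with one IOMMU TLB flush per batch and a
preemption check between batches. A rough, self-contained C model of that
shape (all names below are invented for illustration; this is not the
patch's actual code):

    #include <stdbool.h>
    #include <stddef.h>

    #define BATCH_SIZE 32

    /* Stand-ins for the real hypervisor primitives. */
    static bool preempt_check(void) { return false; }
    static int do_one_op(size_t idx, unsigned int *flush_flags)
    { (void)idx; *flush_flags |= 1; return 0; }
    static int flush_iotlb(unsigned int flush_flags)
    { (void)flush_flags; return 0; }

    /* Returns the number of operations completed, for continuation. */
    size_t process_ops(size_t count)
    {
        size_t done = 0;

        while ( count )
        {
            size_t i, c = count < BATCH_SIZE ? count : BATCH_SIZE;
            unsigned int flush_flags = 0;

            for ( i = 0; i < c; i++ )
                if ( do_one_op(done + i, &flush_flags) )
                    break;

            /* One flush covers every mapping touched in this batch. */
            flush_iotlb(flush_flags);

            done += i;
            count -= i;

            /* Stop on error, or when preempted between batches. */
            if ( i < c || (count && preempt_check()) )
                break;
        }

        return done;
    }
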
>>> @@ -979,7 +985,7 @@ static unsigned int mapkind(
>>>
>>>  static void
>>>  map_grant_ref(
>>> -    struct gnttab_map_grant_ref *op)
>>> +    struct gnttab_map_grant_ref *op, bool do_flush, unsigned int *flush_flags)
>>
>> Why two parameters? Simply pass NULL for singletons instead? Although,
>> you'd need another local variable then, which maybe isn't overly much
>> nicer.
>>
> 
> No, I think the extra arg is clearer.
> 
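The two shapes being weighed are roughly the following (hypothetical,
simplified declarations only, not the committed interface):

    #include <stdbool.h>

    struct gnttab_map_grant_ref;        /* opaque for this sketch */

    /* As posted in v5: an explicit flag tells the callee whether it
     * should perform the flush itself (the singleton case). */
    void map_grant_ref(struct gnttab_map_grant_ref *op,
                       bool do_flush, unsigned int *flush_flags);

    /* The alternative raised above: a NULL accumulator would mean
     * "no batch in progress, flush locally". */
    void map_grant_ref_alt(struct gnttab_map_grant_ref *op,
                           unsigned int *flush_flags /* may be NULL */);
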
>>> @@ -1319,29 +1324,60 @@ static long
>>>  gnttab_map_grant_ref(
>>>      XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
>>>  {
>>> -    int i;
>>> -    struct gnttab_map_grant_ref op;
>>> +    struct domain *currd = current->domain;
>>
>> Is this a worthwhile variable to have in this case? It's used
>> exactly once, granted in the loop body, but still not the inner
>> one.
>>
> 
> I thought it was nicer for consistency with the unmap function (where currd is 
> used more than once) but I'll drop it.
> 
>>> +    unsigned int done = 0;
>>> +    int rc = 0;
>>>
>>> -    for ( i = 0; i < count; i++ )
>>> +    while ( count )
>>>      {
>>> -        if ( i && hypercall_preempt_check() )
>>> -            return i;
>>> +        unsigned int i, c = min_t(unsigned int, count, GNTTAB_MAP_BATCH_SIZE);
>>> +        unsigned int flush_flags = 0;
>>>
>>> -        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
>>> -            return -EFAULT;
>>> +        for ( i = 0; i < c; i++ )
>>> +        {
>>> +            struct gnttab_map_grant_ref op;
>>>
>>> -        map_grant_ref(&op);
>>> +            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
>>> +            {
>>> +                rc = -EFAULT;
>>> +                break;
>>> +            }
>>>
>>> -        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
>>> -            return -EFAULT;
>>> +            map_grant_ref(&op, c == 1, &flush_flags);
>>> +
>>> +            if ( unlikely(__copy_to_guest(uop, &op, 1)) )
>>> +            {
>>> +                rc = -EFAULT;
>>> +                break;
>>> +            }
>>> +
>>> +            guest_handle_add_offset(uop, 1);
>>> +        }
>>> +
>>> +        if ( c > 1 )
>>> +        {
>>> +            int err = iommu_iotlb_flush_all(currd, flush_flags);
>>
>> There's still some double flushing involved in the error case here.
>> Trying to eliminate this (by having map_grant_ref() write zero
>> into *flush_flags) may not be overly important, but I thought I'd
>> still mention it.
>>
> 
> This only kicks in if there's more than one operation, and is it safe to 
> squash the flush_flags if some ops succeed and others fail? If all ops fail 
> then flush_flags should still be zero because the call to iommu_map() is the 
> last failure point in map_grant_ref() anyway.

Well, yes, not a common thing to run into. With the small changes
further up
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Jan
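
For reference, the double-flush avoidance discussed above (having
map_grant_ref() zero the caller's accumulated flags once its own error path
has flushed) could look roughly like the following. This is an illustrative
stand-in only; the names are invented and the real error handling in
map_grant_ref() is considerably more involved:

    #include <stdbool.h>

    static int flush_iotlb(unsigned int flags) { (void)flags; return 0; }

    int do_op(bool fail, unsigned int *flush_flags)
    {
        *flush_flags |= 1;              /* this op will need a flush */

        if ( fail )
        {
            /* The error path has to clean up (and flush) on its own ... */
            flush_iotlb(*flush_flags);
            /* ... so leave nothing for the caller's batch-level flush. */
            *flush_flags = 0;
            return -1;
        }

        return 0;
    }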
