
Ping²: [PATCH] xen-netback: correct success/error reporting for the SKB-with-fraglist case


  • To: paul@xxxxxxx, Wei Liu <wl@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 16 Sep 2021 17:45:56 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "netdev@xxxxxxxxxxxxxxx" <netdev@xxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 16 Sep 2021 15:46:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15.07.2021 10:58, Jan Beulich wrote:
> On 20.05.2021 13:46, Jan Beulich wrote:
>> On 25.02.2021 17:23, Paul Durrant wrote:
>>> On 25/02/2021 14:00, Jan Beulich wrote:
>>>> On 25.02.2021 13:11, Paul Durrant wrote:
>>>>> On 25/02/2021 07:33, Jan Beulich wrote:
>>>>>> On 24.02.2021 17:39, Paul Durrant wrote:
>>>>>>> On 23/02/2021 16:29, Jan Beulich wrote:
>>>>>>>> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
>>>>>>>> special considerations for the head of the SKB no longer apply. Don't
>>>>>>>> mistakenly report ERROR to the frontend for the first entry in the list,
>>>>>>>> even if - as far as I can tell - this shouldn't matter much, as the
>>>>>>>> overall transmit will need to be considered failed anyway.
>>>>>>>>
>>>>>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>>>>>>>
>>>>>>>> --- a/drivers/net/xen-netback/netback.c
>>>>>>>> +++ b/drivers/net/xen-netback/netback.c
>>>>>>>> @@ -499,7 +499,7 @@ check_frags:
>>>>>>>>                                 * the header's copy failed, and they are
>>>>>>>>                                 * sharing a slot, send an error
>>>>>>>>                                 */
>>>>>>>> -                              if (i == 0 && sharedslot)
>>>>>>>> +                              if (i == 0 && !first_shinfo && sharedslot)
>>>>>>>>                                        xenvif_idx_release(queue, pending_idx,
>>>>>>>>                                                           XEN_NETIF_RSP_ERROR);
>>>>>>>>                                else
>>>>>>>>
>>>>>>>
>>>>>>> I think this will DTRT, but to my mind it would make more sense to clear
>>>>>>> 'sharedslot' before the 'goto check_frags' at the bottom of the function.
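
[For context: a minimal, paraphrased sketch of what that alternative might look
like at the re-entry point near the bottom of xenvif_tx_check_gop(). This is
not the verbatim kernel code, and it assumes 'sharedslot' would lose its
"const" qualifier:]

    /* Sketch only (simplified): re-enter the frag-checking loop for the
     * frag_list skb, clearing sharedslot so the i == 0 special case no
     * longer applies on the second pass.
     */
    if (skb_has_frag_list(skb) && !first_shinfo) {
            first_shinfo = skb_shinfo(skb);
            shinfo = skb_shinfo(skb_shinfo(skb)->frag_list);
            nr_frags = shinfo->nr_frags;
            sharedslot = false;   /* no frag_list entry shares a slot with the header */

            goto check_frags;
    }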
>>>>>>
>>>>>> That was my initial idea as well, but
>>>>>> - I think the variable is "const" for a reason.
>>>>>> - There is another use of it which would then instead need further
>>>>>>     amending (and which I believe is at least part of the reason for
>>>>>>     the variable to be "const").
>>>>>>
>>>>>
>>>>> Oh, yes. But now that I look again, don't you want:
>>>>>
>>>>> if (i == 0 && first_shinfo && sharedslot)
>>>>>
>>>>> ? (i.e. no '!')
>>>>>
>>>>> The comment states that the error should be indicated when the first
>>>>> frag contains the header in the case that the map succeeded but the
>>>>> prior copy from the same ref failed. This can only possibly be the case
>>>>> if this is the 'first_shinfo'.
>>>>
>>>> I don't think so, no - there's a difference between "first frag"
>>>> (at which point first_shinfo is NULL) and first frag list entry
>>>> (at which point first_shinfo is non-NULL).
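
[To spell out the two passes, a simplified sketch of the error branch with the
patch applied - paraphrasing the hunk quoted above, not quoting the kernel code
verbatim:]

    /* 1st pass (main skb):      first_shinfo == NULL; i == 0 is the skb's
     *                           first frag, which may share a slot with the
     *                           header, so RSP_ERROR may be warranted.
     * 2nd pass (frag_list skb): first_shinfo != NULL; i == 0 is merely the
     *                           first frag_list entry and never shares a
     *                           slot with the header, so only RSP_OKAY.
     */
    if (i == 0 && !first_shinfo && sharedslot)
            xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
    else
            xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);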
>>>
>>> Yes, I realise I got it backwards. It's a confusing name, but the comment
>>> above its declaration does explain it.
>>>
>>>>
>>>>> (which is why I still think it is safe to unconst 'sharedslot' and
>>>>> clear it).
>>>>
>>>> And "no" here as well - this piece of code
>>>>
>>>>            /* First error: if the header haven't shared a slot with the
>>>>             * first frag, release it as well.
>>>>             */
>>>>            if (!sharedslot)
>>>>                    xenvif_idx_release(queue,
>>>>                                       XENVIF_TX_CB(skb)->pending_idx,
>>>>                                       XEN_NETIF_RSP_OKAY);
>>>>
>>>> specifically requires sharedslot to have the value that was
>>>> assigned to it at the start of the function (this property
>>>> doesn't go away when switching from fragments to frag list).
>>>> Note also how it uses XENVIF_TX_CB(skb)->pending_idx, i.e. the
>>>> value the local variable pending_idx was set from at the start
>>>> of the function.
>>>>
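
[For reference, a rough sketch - approximate, not a verbatim quote - of how
'sharedslot' is initialised at the top of the function, which is what the
header release above relies on:]

    /* Approximate initialisation: true iff the header's grant copy and the
     * main skb's first frag were submitted against the same pending slot.
     */
    const bool sharedslot = nr_frags &&
                            frag_get_pending_idx(&shinfo->frags[0]) ==
                            XENVIF_TX_CB(skb)->pending_idx;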
>>>
>>> True, we do have to deal with freeing up the header if the first map 
>>> error comes on the frag list.
>>>
>>> Reviewed-by: Paul Durrant <paul@xxxxxxx>
>>
>> Since I've not seen this go into 5.13-rc, may I ask what the disposition
>> of this is?
> 
> I can't seem to spot this in 5.14-rc either. I have to admit I'm
> increasingly puzzled ...

Another two months (and another release) later and still nothing. Am
I doing something wrong? Am I wrongly assuming that maintainers would
push such changes up the chain?

Jan
