
Re: [Xen-devel] [PATCH net-next] xen-netfront: convert to GRO API and advertise this feature



On Sep 22, 2013, at 11:22 PM, annie li <annie.li@xxxxxxxxxx> wrote:

> 
> On 2013-9-23 13:02, Jason Wang wrote:
>> On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
>>> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
>>> 
>>>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>>>> Anirban was seeing netfront receive MTU-sized packets, which degraded
>>>>>> throughput. The following patch makes netfront use the GRO API, which
>>>>>> improves throughput for that case.
>>>>>> 
>>>>>> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
>>>>>> Signed-off-by: Anirban Chakraborty <abchak@xxxxxxxxxxx>
>>>>>> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
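
For context, a conversion to the GRO API typically amounts to handing each
completed receive skb to napi_gro_receive() in the NAPI poll handler instead
of netif_receive_skb(). A minimal sketch in a generic NAPI driver follows; it
is not the actual xen-netfront diff, and mydrv_poll()/mydrv_get_rx_skb() are
hypothetical placeholders, while napi_gro_receive() and napi_complete() are
the real kernel interfaces involved:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical helper: dequeue the next fully built receive skb. */
static struct sk_buff *mydrv_get_rx_skb(struct napi_struct *napi);

static int mydrv_poll(struct napi_struct *napi, int budget)
{
        struct sk_buff *skb;
        int work_done = 0;

        while (work_done < budget && (skb = mydrv_get_rx_skb(napi)) != NULL) {
                /*
                 * Before the conversion this would be netif_receive_skb(skb).
                 * napi_gro_receive() lets the GRO engine merge MTU-sized
                 * segments of the same flow back into larger packets before
                 * they reach the rest of the stack.
                 */
                napi_gro_receive(napi, skb);
                work_done++;
        }

        if (work_done < budget)
                napi_complete(napi);    /* also flushes packets GRO is holding */

        return work_done;
}
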
>>>>> Maybe a dumb question: doesn't Xen depend on the driver of the host card to
>>>>> do GRO and pass it to netfront? What is the case where netfront can receive
>>>> That would be the ideal situation: netback pushes large packets to
>>>> netfront and netfront sees large packets.
>>>> 
>>>>> an MTU-sized packet, for a card that does not support GRO in the host? Doing
>>>> However, Anirban saw a case where the backend interface receives large
>>>> packets but netfront sees MTU-sized packets, so my thought is that some
>>>> configuration leads to this issue. As we cannot tell users what to
>>>> enable and what not to enable, I would like to solve this within our
>>>> driver.
>>>> 
>>>>> GRO twice may introduce extra overhead.
>>>>> 
>>>> AIUI, if the packet the frontend sees is already large then the GRO path
>>>> is quite short and will not introduce a heavy penalty, while on the
>>>> other hand if the packet is segmented, doing GRO improves throughput.
>>>> 
>>> Thanks, Wei, for explaining and submitting the patch. I would like to add
>>> the following to what you have already mentioned.
>>> In my configuration, I was seeing netback push large packets to the
>>> guest (CentOS 6.4) while netfront received only MTU-sized packets. With
>>> this patch applied, I do see large packets received on the guest
>>> interface. As a result, there was a substantial throughput improvement
>>> on the guest side (2.8 Gbps to 3.8 Gbps). Also, note that GRO was
>>> already enabled on the host NIC driver.
>>> 
>>> -Anirban
>> In this case, even if you still want to do GRO, it's better to find the
>> root cause of why the GSO packets were segmented.
> 
> Totally agree, we need to find out why large packets are segmented only
> in the different-host case.

It appears (from looking at the netback code) that although GSO is turned on at
the netback, the guest receives large packets only:
1. if it is a local packet (VM to VM on the same host), in which case netfront
does LRO, or
2. by turning on GRO explicitly (with this patch).

-Anirban
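
For the second case, the "advertise this feature" part of the patch title
presumably boils down to setting NETIF_F_GRO on the frontend netdev before it
is registered, so that the napi_gro_receive() path is active and the flag is
visible to the administrator. A sketch under that assumption; NETIF_F_GRO is
the real feature flag, but xennet_advertise_gro() is an illustrative name, not
a function from the driver:

#include <linux/netdevice.h>

/*
 * Sketch only: mark GRO as enabled on the frontend device before
 * register_netdev().  The function name is hypothetical.
 */
static void xennet_advertise_gro(struct net_device *netdev)
{
        netdev->features |= NETIF_F_GRO;
}

Once advertised, the feature shows up in "ethtool -k <vif>" and can be toggled
with "ethtool -K <vif> gro on|off", which can also help when checking whether a
guest is hitting case 1 or case 2 above.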

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

