
Re: [Xen-devel] [PATCH] xen-netfront: pull on receive skb may need to happen earlier



On Wed, Jul 10, 2013 at 01:50:44PM +0100, Ian Campbell wrote:
> On Wed, 2013-07-10 at 11:46 +0100, Jan Beulich wrote:
> > >>> On 10.07.13 at 12:04, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> > > Jan, looking at the commit log, the overrun issue in
> > > xennet_get_responses was not introduced by __pskb_pull_tail. The call to
> > > xennet_fill_frags has always been in the same place.
> > 
> > I'm convinced it was: prior to that commit, if the first response slot
> > contained up to RX_COPY_THRESHOLD bytes, it got entirely
> > consumed into the linear portion of the SKB, leaving the number of
> > fragments available for filling at MAX_SKB_FRAGS. Said commit
> > dropped the early copying, leaving the fragment count at 1
> > unconditionally, and now accumulates all of the response slots into
> > fragments, pulling only after all of them have been filled in. It
> > neglected to account for the fact that, because the count now always
> > starts at 1, this can lead to MAX_SKB_FRAGS + 1 frags getting filled,
> > corrupting memory.
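
To put numbers on that, here is a rough standalone sketch of the frag
accounting described above. This is not the real xen-netfront code: the
helper, the nr_slots value and the MAX_SKB_FRAGS value below are made up
purely to illustrate where the two schemes start counting.

/*
 * Illustration only, not the driver code. Shows how many fragment slots
 * a packet spread over nr_slots response slots ends up using, depending
 * on whether the first slot is copied into the linear area (old scheme)
 * or kept as frag 0 (new scheme).
 */
#include <stdio.h>

#define MAX_SKB_FRAGS 17        /* stand-in value for illustration */

static int frags_used(int nr_slots, int copy_first_into_linear)
{
        int nr_frags = copy_first_into_linear ? 0 : 1;

        /* every remaining response slot becomes one fragment */
        nr_frags += nr_slots - 1;
        return nr_frags;
}

int main(void)
{
        int nr_slots = MAX_SKB_FRAGS + 1;  /* a fully fragmented packet */

        printf("old scheme: %d frags, limit %d\n",
               frags_used(nr_slots, 1), MAX_SKB_FRAGS);
        printf("new scheme: %d frags, limit %d (one past the frag array)\n",
               frags_used(nr_slots, 0), MAX_SKB_FRAGS);
        return 0;
}

With nr_slots = MAX_SKB_FRAGS + 1 the old scheme tops out exactly at the
limit, while the new scheme fills one fragment too many, which is the
corruption described above.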
> 
> That argument makes sense to me.
> 
> Is it possible to hit a scenario where we need to pull more than
> RX_COPY_THRESHOLD in order to fit all of the data into MAX_SKB_FRAGS fragments?
> 
> > Ian - I have to admit that I'm slightly irritated that you have so far
> > not participated at all in sorting out the fix for this bug, which a
> > change of yours introduced.
> 
> Sorry, I've been travelling and not following closely enough to realise
> this was related to something I'd done.
> 
> Does this relate somehow to the patch Annie has sent out recently too?
> 

No. That's not related.


Wei.


> Ian.
