[Xen-devel] Re: SKB paged fragment lifecycle on receive

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] Re: SKB paged fragment lifecycle on receive
From: Eric Dumazet <eric.dumazet@xxxxxxxxx>
Date: Fri, 24 Jun 2011 19:56:23 +0200
Cc: netdev@xxxxxxxxxxxxxxx, Rusty Russell <rusty@xxxxxxxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Delivery-date: Fri, 24 Jun 2011 10:57:27 -0700
In-reply-to: <4E04C961.9010302@xxxxxxxx>
References: <1308930202.32717.144.camel@xxxxxxxxxxxxxxxxxxxxxx> <4E04C961.9010302@xxxxxxxx>
On Friday, 24 June 2011 at 10:29 -0700, Jeremy Fitzhardinge wrote:
> On 06/24/2011 08:43 AM, Ian Campbell wrote:
> > We've previously looked into solutions using the skb destructor callback
> > but that falls over if the skb is cloned since you also need to know
> > when the clone is destroyed. Jeremy Fitzhardinge and I subsequently
> > looked at the possibility of a no-clone skb flag (i.e. always forcing a
> > copy instead of a clone) but IIRC honouring it universally turned into a
> > very twisty maze with a number of nasty corner cases etc. It also seemed
> > that the proportion of SKBs which get cloned at least once could be
> > quite high, which would presumably make the performance impact of
> > using the flag unacceptable. Another issue with using the
> > skb destructor is that functions such as __pskb_pull_tail will eat (and
> > free) pages from the start of the frag array such that by the time the
> > skb destructor is called they are no longer there.
> >
> > AIUI Rusty Russell had previously looked into a per-page destructor in
> > the shinfo but found that it couldn't be made to work (I don't remember
> > why, or if I even knew at the time). Could that be an approach worth
> > reinvestigating?
> >
> > I can't really think of any other solution which doesn't involve some
> > sort of driver callback at the time a page is free()d.
> 
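
For reference, a minimal sketch of the clone problem described above, assuming
the driver had set skb->destructor on the original skb. The demo function name
is made up and this is purely illustrative, not code from netfront:

#include <linux/gfp.h>
#include <linux/skbuff.h>

/* Illustrative sketch: why a per-skb destructor misses cloned skbs. */
static void clone_problem_demo(struct sk_buff *skb)
{
	struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

	if (!clone)
		return;

	/* skb and clone now share skb_shinfo() and hence the paged
	 * frags, but skb_clone() leaves clone->destructor NULL, so only
	 * the original carries the driver's callback.  (__pskb_pull_tail
	 * can likewise consume and free frag pages long before either
	 * skb is destroyed.) */

	kfree_skb(skb);		/* the original's destructor fires here ... */
	kfree_skb(clone);	/* ... but the frag pages only go away here  */
}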

This reminds me of the packet mmap (tx path) games we play with pages.

net/packet/af_packet.c : tpacket_destruct_skb() pokes
TP_STATUS_AVAILABLE back to userspace to tell it the slot can be reused...
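
(Roughly, from memory, that destructor looks like the stripped-down
paraphrase below; the exact code in net/packet/af_packet.c may differ:)

static void tpacket_destruct_skb(struct sk_buff *skb)
{
	struct packet_sock *po = pkt_sk(skb->sk);
	void *ph;

	if (likely(po->tx_ring.pg_vec)) {
		/* the TX ring frame header was stashed here when the
		 * skb was built from the mmap'ed buffer */
		ph = skb_shinfo(skb)->destructor_arg;
		/* hand the slot back to userspace */
		__packet_set_status(po, ph, TP_STATUS_AVAILABLE);
	}
	sock_wfree(skb);
}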

> One simple approach would be to make sure that we retain a page
> reference on any granted pages so that the network stack putting its
> pages will never result in them being released back to the kernel.  We can
> also install an skb destructor.  If it sees a page being released with a
> refcount of 1, then we know it's our own reference and can free the page
> immediately.  If the refcount is > 1 then we can add it to a queue of
> pending pages, which can be periodically polled to free pages whose
> other references have been dropped.
> 
> However, the question is how large will this queue get?  If it remains
> small then this scheme could be entirely practical.  But if almost every
> page ends up having transient stray references, it could become very
> awkward.
> 
> So it comes down to "how useful is an skb destructor callback as a
> heuristic for page free"?
> 

Dangerous, I would say. You could have a page from skb1 transferred to another
skb2, and skb1's destructor called way before the page is released.

The TCP stack could do that in tcp_collapse() [ it currently doesn't play
with pages ]
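
For what it's worth, below is a rough sketch of the retain-a-reference scheme
proposed above. All netfront_* names are invented for illustration,
netfront_end_grant() is a hypothetical stand-in for whatever actually revokes
the grant, and, per the caveat just mentioned, a frag page handed off to
another skb would make this callback fire too early:

#include <linux/mm.h>
#include <linux/skbuff.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct pending_page {
	struct list_head list;
	struct page *page;
};

static LIST_HEAD(pending_pages);
static DEFINE_SPINLOCK(pending_lock);

/* Hypothetical placeholder: a real driver would revoke the grant here. */
static void netfront_end_grant(struct page *page)
{
}

/* Assumes the driver took an extra get_page() on every granted page
 * before handing the skb to the stack, and set skb->destructor to this
 * function. */
static void netfront_skb_destructor(struct sk_buff *skb)
{
	unsigned long flags;
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		struct page *page = skb_shinfo(skb)->frags[i].page;

		/* NB: whether 1 is the right threshold depends on whether
		 * the skb's own frag reference has already been dropped
		 * by the time this callback runs. */
		if (page_count(page) == 1) {
			/* Only our extra reference is left. */
			netfront_end_grant(page);
			put_page(page);
		} else {
			/* Stray references remain; park the page so a
			 * periodic poller can retry later. */
			struct pending_page *pp =
				kmalloc(sizeof(*pp), GFP_ATOMIC);

			if (!pp)
				continue;	/* a poller would need a fallback */
			pp->page = page;
			spin_lock_irqsave(&pending_lock, flags);
			list_add_tail(&pp->list, &pending_pages);
			spin_unlock_irqrestore(&pending_lock, flags);
		}
	}
}

How large pending_pages gets in practice is then exactly the open question
raised above.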



