
Re: [Xen-devel] [PATCH 5/6] xen-netback: coalesce slots before copying



> > Because the check was originally >= MAX_SKB_FRAGS and James Harper told
> > me that "Windows stops counting on 20".
> >
> 
> For the Citrix PV drivers I lifted the #define of MAX_SKB_FRAGS from the
> dom0 kernel (i.e. 18). If a packet coming from the stack has more than that
> number of fragments then it's copied and coalesced. The value advertised
> for TSO size is chosen such that a maximally sized TSO will always fit in 18
> fragments after coalescing but (since this is Windows) the drivers don't trust
> the stack to stick to that limit and will drop a packet if it won't fit.
> 
> It seems reasonable that, since the backend is copying anyway, it should
> handle any fragment list coming from the frontend that it can. This would
> allow the copy-and-coalesce code to be removed from the frontend (and the
> double-copy avoided). If there is a maximum backend packet size though
> then I think this needs to be advertised to the frontend. The backend should
> clearly bin packets coming from the frontend that exceed that limit but
> advertising that limit in xenstore allows the frontend to choose the right TSO
> maximum size to advertise to its stack, rather than having to make it based
> on some historical value that actually has little meaning (in the absence of
> grant mapping).
> 
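To make the two points above concrete, first the sizing arithmetic: a
minimal sketch, assuming 4KiB pages, the 65535-byte IPv4 length ceiling
and an 18-slot limit (illustrative constants, not values taken from
either driver):

/* Sketch of the sizing arithmetic behind "a maximally sized TSO will
 * always fit in 18 fragments after coalescing". */
#include <stdio.h>

#define PAGE_SIZE     4096u
#define SLOT_LIMIT    18u       /* MAX_SKB_FRAGS in the dom0 kernel */
#define IP_MAX_PACKET 65535u    /* IPv4 total-length ceiling */

int main(void)
{
    unsigned int capacity = SLOT_LIMIT * PAGE_SIZE;         /* 73728 */
    unsigned int tso_max  = capacity < IP_MAX_PACKET
                          ? capacity : IP_MAX_PACKET;       /* 65535 */
    unsigned int slots    = (tso_max + PAGE_SIZE - 1) / PAGE_SIZE;

    printf("advertise TSO max %u; packs into %u of %u slots\n",
           tso_max, slots, SLOT_LIMIT);     /* 65535, 16 of 18 */
    return 0;
}

The advertisement itself could then look something like the sketch
below, reusing the existing xenbus helpers. The key name
"max-slots-per-packet" is made up purely for illustration; nothing in
the protocol defines it:

/* Sketch only: backend publishes a per-packet slot limit next to its
 * other feature-* keys; frontend reads it and falls back to the
 * historical value if the key is absent. */
#include <xen/xenbus.h>

static int advertise_slot_limit(struct xenbus_device *dev,
                                unsigned int limit)
{
    return xenbus_printf(XBT_NIL, dev->nodename,
                         "max-slots-per-packet", "%u", limit);
}

static unsigned int read_slot_limit(struct xenbus_device *dev)
{
    unsigned int limit;

    if (xenbus_scanf(XBT_NIL, dev->otherend,
                     "max-slots-per-packet", "%u", &limit) != 1)
        limit = 18;     /* assumed historical default */
    return limit;
}

The frontend would feed whatever limit it reads into the arithmetic
above when choosing the TSO maximum to advertise to its stack.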

As stated previously, I've observed Windows issuing staggering numbers of
buffers to NDIS miniport drivers, so you will need to coalesce in a Windows
driver anyway. I'm not sure where the break-even point is, but I think it's
safe to say that, in the choice between using 1000 ring slots (the worst
case, with the resulting mapping overheads) and coalescing in the frontend,
coalescing is going to be the better option.
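
For concreteness, the coalescing itself amounts to something like the
sketch below: copy an arbitrarily long buffer list back to back into
page-sized slots, and bin the packet if it still will not fit. The
struct and limits here are hypothetical stand-ins, not the actual NDIS
or netfront code:

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define MAX_SLOTS 18

struct buf { const unsigned char *data; size_t len; };

/* Copy every input buffer, back to back, into packed page-sized
 * slots. Returns the number of slots used, or -1 if the packet
 * exceeds the slot limit (the "bin the packet" case). */
static int coalesce(const struct buf *bufs, size_t nbufs,
                    unsigned char slots[MAX_SLOTS][PAGE_SIZE])
{
    size_t slot = 0, off = 0, i;

    for (i = 0; i < nbufs; i++) {
        const unsigned char *p = bufs[i].data;
        size_t left = bufs[i].len;

        while (left) {
            size_t space = PAGE_SIZE - off;
            size_t chunk = left < space ? left : space;

            if (slot == MAX_SLOTS)
                return -1;      /* overflows what the ring will take */
            memcpy(&slots[slot][off], p, chunk);
            p += chunk;
            left -= chunk;
            off += chunk;
            if (off == PAGE_SIZE) {
                slot++;
                off = 0;
            }
        }
    }
    return (int)(slot + (off ? 1 : 0));
}

int main(void)
{
    static unsigned char pages[MAX_SLOTS][PAGE_SIZE];
    static unsigned char payload[600];
    struct buf bufs[100];
    size_t i;

    memset(payload, 0xab, sizeof(payload));
    for (i = 0; i < 100; i++)
        bufs[i] = (struct buf){ payload, sizeof(payload) };

    /* 100 buffers * 600 bytes = 60000 bytes -> 15 packed slots. */
    printf("slots used: %d\n", coalesce(bufs, 100, pages));
    return 0;
}

A large buffer list collapses into a handful of packed slots this way,
which is the trade-off argued for above: a few extra copies in exchange
for not consuming (and mapping) a ring slot per buffer.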

James

