Re: [Xen-devel] [PATCH net-next v2 6/9] xen-netback: Handle guests with too many frags
On 13/12/13 15:43, Wei Liu wrote:
> On Thu, Dec 12, 2013 at 11:48:14PM +0000, Zoltan Kiss wrote:
>> The Xen network protocol had an implicit dependency on MAX_SKB_FRAGS.
>> Netback has to handle guests sending up to XEN_NETBK_LEGACY_SLOTS_MAX
>> slots. To achieve that:
>> - create a new skb
>> - map the leftover slots to its frags (no linear buffer here!)
>> - chain it to the previous one through skb_shinfo(skb)->frag_list
>> - map them
>> - copy the whole thing into a brand new skb and send it to the stack
>> - unmap the two old skbs' pages
>
> Do you see a performance regression with this approach?

Well, it was pretty hard to reproduce that behaviour even with NFS, so I don't think it happens often enough to cause a noticeable performance regression. In any case, it would be about as slow as the current grant copy with coalescing, maybe a bit slower due to the unmapping, but at least we use a core network function to do the coalescing. Or, if you mean generic performance: if this problem doesn't appear, then no, I don't see a regression.
Ok, I've added this:
/* At this point shinfo->nr_frags is in fact the number of
* slots, which can be as large as XEN_NETBK_LEGACY_SLOTS_MAX.
*/
+	if (shinfo->nr_frags > MAX_SKB_FRAGS) {
+		if (shinfo->nr_frags > XEN_NETBK_LEGACY_SLOTS_MAX)
+			return NULL;
+		frag_overflow = shinfo->nr_frags - MAX_SKB_FRAGS;
OK

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel