
RE: [Xen-devel] Problem with MTU > 1500, ifconfig segmentation fault



I modified xen-2.0.7/linux-2.6.11-xen-sparse/arch/xen/kernel/skbuff.c,
borrowing the changes from the same file in the 3.0 branch. I omitted
the calls to "xen_create_contiguous_region" and
"xen_destroy_contiguous_region", since they appear to rely on newer
hypercalls. The resulting patch is pasted below. With it applied I can
ping with large frames (up to 8174 bytes). I have not tested beyond
"ping -s", so I am not sure this is the right way to do it.

Is the call to "xen_create_contiguous_region" necessary or is it a
performance enhancement?

Patch follows:

--- skbuff_orig_c       2005-09-11 22:55:51.000000000 -0400
+++ skbuff.c    2005-09-12 10:58:04.000000000 -0400
@@ -24,10 +24,24 @@ EXPORT_SYMBOL(__dev_alloc_skb);
 #define XEN_SKB_SIZE \
     ((PAGE_SIZE - sizeof(struct skb_shared_info)) & ~(SMP_CACHE_BYTES - 1))
 
+#define MAX_SKBUFF_ORDER 2
+static kmem_cache_t *skbuff_order_cachep[MAX_SKBUFF_ORDER + 1];
+
 struct sk_buff *__dev_alloc_skb(unsigned int length, int gfp_mask)
 {
     struct sk_buff *skb;
-    skb = alloc_skb_from_cache(skbuff_cachep, length + 16, gfp_mask);
+    int order;
+    length = SKB_DATA_ALIGN(length+16)+sizeof(struct skb_shared_info);
+
+    order = get_order(length);
+    if(order > MAX_SKBUFF_ORDER) {
+                printk(KERN_ALERT "Attempt to allocate order %d skbuff. "
+                       "Increase MAX_SKBUFF_ORDER.\n", order);
+                return NULL;
+     }
+
+    skb = alloc_skb_from_cache(
+               skbuff_order_cachep[order], length /*+ 16*/, gfp_mask);
     if ( likely(skb != NULL) )
         skb_reserve(skb, 16);
     return skb;
@@ -35,13 +49,29 @@ struct sk_buff *__dev_alloc_skb(unsigned
 
 static void skbuff_ctor(void *buf, kmem_cache_t *cachep, unsigned long unused)
 {
-    scrub_pages(buf, 1);
+    int order = 0;
+
+    while (skbuff_order_cachep[order] != cachep)
+                order++;
+
+    scrub_pages(buf, 1 << order);
 }
 
 static int __init skbuff_init(void)
 {
-    skbuff_cachep = kmem_cache_create(
-        "xen-skb", PAGE_SIZE, PAGE_SIZE, 0, skbuff_ctor, NULL);
+    static char name[MAX_SKBUFF_ORDER + 1][20];
+    unsigned long size;
+    int order;
+
+    for (order = 0; order <= MAX_SKBUFF_ORDER; order++) {
+             size = PAGE_SIZE << order;
+             sprintf(name[order], "xen-skb-%lu", size);
+             skbuff_order_cachep[order] = kmem_cache_create(
+                     name[order], size, size, 0, skbuff_ctor, NULL);
+    }
+
+    skbuff_cachep = skbuff_order_cachep[0];
+
     return 0;
 }
 __initcall(skbuff_init);


-----Original Message-----
From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx] 
Sent: Friday, September 09, 2005 4:24 PM
To: satish_raghunath@xxxxxxxxxxx
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Problem with MTU > 1500, ifconfig segmentation
fault



This is fixed in 3.0, but there are no plans to backport the fix to the
2.0 series right now. It may not be that hard, though -- the file
containing the Xen-specific dev_alloc_skb() may transfer straight over.

  -- Keir


On 9 Sep 2005, at 19:53, Satish Raghunath wrote:

> Hi all,
>
> I am using Xen 2.0.7. I have Broadcom NetXtreme BCM5704 Gigabit
> Ethernet (rev 02) cards, which support frames larger than 1500 bytes.
>
> However, when I boot into Xen and try to set the MTU to anything
> higher than 1500 (e.g., 4000 or 8000), I get a segmentation fault.
> After this fault, every command fails with a segmentation fault. I saw
> a similar bug report posted here:
>
> http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=152
>
> If there is a patch available for this, I would appreciate it very
> much if you can point me to it.
>
> I tried setting the default MTU in
> $XEN/linux-2.6.11-xen0/net/ethernet/eth.c to 9000, and that allowed
> the interface to have an MTU of 9000. But even then, ping packets with
> large sizes would not work.
>
> Any additional pointers would be highly appreciated.
>
> Thank you,
> Satish
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx http://lists.xensource.com/xen-devel




 

