xen-users

RE: [Xen-users] dom0 lost packets.

To: <brudas@xxxxxxxxxxx>, <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] dom0 lost packets.
From: "Ross S. W. Walker" <rwalker@xxxxxxxxxxxxx>
Date: Wed, 23 Apr 2008 13:12:11 -0400
Delivery-date: Wed, 23 Apr 2008 10:12:49 -0700
Importance: normal
In-reply-to: <38949.82.209.246.195.1208969606.squirrel@xxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Priority: normal
References: <38949.82.209.246.195.1208969606.squirrel@xxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcilYqWB/t2MWDyXTRehKNGTWq7ErgAAfNrg
Thread-topic: [Xen-users] dom0 lost packets.
brudas@xxxxxxxxxxx wrote:
> 
> I am trying to get VLANs and bonding working together for both dom0
> and domU. Packets sent to dom0 are being lost, while domU is fine.
> 
> Nightly stats for dom0:
> 52879 packets transmitted, 45293 received, 14% packet loss, time 52879599ms
> rtt min/avg/max/mdev = 0.144/0.224/717.306/5.129 ms
> 
> Nightly stats for domU:
> 52952 packets transmitted, 52952 received, 0% packet loss, time 52952554ms
> rtt min/avg/max/mdev = 0.157/0.209/3.166/0.048 ms
> 
> Also, I pinged a dom0 on a production server (which has no bonding) in
> the same rack and on the same switch and saw no packet loss. I see
> nothing strange in the logs.
> 
> System config:
> CentOS 5, Kernel 2.6.21, Xen 3.1.0
> 
> # brctl show
> bridge name     bridge id               STP enabled     interfaces
> br0             8000.001d0921ddee       no              bond0
> xenbrVLAN2000   8000.001d0921ddee       no              vif1.0
>                                                         bond0.2000
> # cat /proc/net/bonding/bond0
> Ethernet Channel Bonding Driver: v3.1.2 (January 20, 2007)
> 
> Bonding Mode: fault-tolerance (active-backup)
> Primary Slave: None
> Currently Active Slave: eth0
> MII Status: up
> MII Polling Interval (ms): 80
> Up Delay (ms): 0
> Down Delay (ms): 0
> 
> Slave Interface: eth0
> MII Status: up
> Link Failure Count: 0
> Permanent HW addr: 00:1d:09:21:dd:ee
> 
> Slave Interface: eth1
> MII Status: up
> Link Failure Count: 0
> Permanent HW addr: 00:1d:09:21:dd:f0
> 
> # cat /proc/net/vlan/config
> VLAN Dev name    | VLAN ID
> Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
> bond0.2000     | 2000  | bond0
> 
> I'm sorry if anyone gets this message twice; I sent it this morning
> and still have not received it back.

I have heard on the list that disabling checksum offloading on the NIC
with ethtool helps clear this up. I believe it is TX checksumming only,
and it is of course done on the physical interface in dom0.
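
Something along these lines should do it (just a sketch; eth0 and eth1
here are the bond slaves from your /proc/net/bonding output above, so
adjust to your own device names):

# ethtool -k eth0
# ethtool -K eth0 tx off
# ethtool -K eth1 tx off

The first command just shows the current offload settings; the -K calls
turn TX checksum offloading off on each slave. Note the change does not
survive a reboot, so you would need to re-apply it from rc.local or your
network scripts.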

-Ross



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
