[Xen-devel] Re: new netfront and occasional receive path lockup

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: new netfront and occasional receive path lockup
From: Gerald Turner <gturner@xxxxxxxxxx>
Date: Sat, 11 Sep 2010 18:00:57 -0700
Cancel-lock: sha1:7v4nZTIx167KB3iKCUaZlFukBuc=
Delivery-date: Sat, 11 Sep 2010 18:05:59 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: None
References: <1282495384.12843.11.camel@xxxxxxxxxxxxxxxxxxxx> <4C73166D.3030000@xxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A2A44184D@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20100909185058.GR2804@xxxxxxxxxxx> <4C8981E5.6010000@xxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A2A5ED71F@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Gnus/5.110006 (No Gnus v0.6) Emacs/21.4 (gnu/linux)
"Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> writes:

> Hi Jeremy and Pasi,
>
> I was frustrated that I couldn't reproduce this bug at my site.
>
> However, I investigated the code, and there is indeed one race
> condition that probably causes the bug.  See the attached patch.
>
> Could anybody who is seeing this bug help to try it?  Much appreciated!
>

Hello, I have experienced this problem as well: netfront with the
smartpoll code was causing my domU's bridged interfaces to fail.

I've been building a Xen server using Debian Squeeze, Xen 4.0.1-rc6.
For weeks the server had been running solid with just three domU's.  In
the last few days I significantly increased the number of domU's (13
total) and have been having terrible packet drop problems.  Randomly,
maybe after 10 to 60 minutes of uptime, a domU or two will fall victim
to bridge failure.  There's no syslog/dmesg output.  The only evidence
of the problem is visible in the network stats on dom0 (the domU vifX.X
interfaces show huge TX drops), and 'brctl showmacs' output is missing
the MAC addresses of the domU's that have failed.
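
For anyone wanting to check for the same symptoms, these are roughly the
commands I used on dom0 (vif3.0 and br0 are examples - substitute your
own vif and bridge names):

  $ ifconfig vif3.0       # look for a large 'dropped' count under TX
  $ brctl showmacs br0    # a failed domU's MAC address will be absent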

I'm not doing anything interesting with networking.  eth0/peth0 on dom0
with a static IP, vifX.0 on each domU, no DHCP, no firewall rules (other
than fail2ban), and a static IP assigned within each domU.

I'm using PV with the Debian -xen-amd64 flavor kernels in both dom0 and
the domU's (no interest in HVM).

I've made dozens of attempts to solve this:

  * Played with 'ethtool -K XXX tx off' on dom0, the domU's, and the
    physical interface.

  * Removed the 'network-bridge' setup from xend and set up 'br0' the
    Debian Way (see the sketch after this list).

  * Commented out 'iptables_setup' from the 'vif-bridge' script, which
    was producing lots of iptables noise.

  * Used 'mac=' in the domU vif configs.

  * Tried the latest vanilla 2.6.35.5 kernel (whose netfront driver is
    pre-smartpoll) - I didn't give this kernel enough time to break; I
    saw TX drops on boot and assumed the problem was still there, but my
    judgement was incorrect - all domU's get a few TX drops while the
    kernel boots (probably ARPs while vifX.X is up but before the domU
    ifup's its eth0 on boot).
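
For reference, the 'Debian Way' bridge setup mentioned above is
something like the following in /etc/network/interfaces on dom0 (the
addresses are placeholders for my real static assignment):

  auto br0
  iface br0 inet static
      address 192.0.2.10        # placeholder - your static dom0 IP
      netmask 255.255.255.0
      gateway 192.0.2.1         # placeholder
      bridge_ports eth0
      bridge_stp off
      bridge_fd 0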

Friday morning a fellow named 'Nrg_' on ##xen immediately diagnosed this
as possibly being related to the smartpoll bug in the netfront driver.

I examined the Debian linux-image-2.6.32-5-xen-amd64 package and
confirmed the netfront driver is patched with an earlier version of the
smartpoll code.
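
For what it's worth, confirming that is roughly a matter of fetching the
source package and grepping the driver (the unpacked directory name is
from memory, and 'smartpoll' is a guess at the identifier - adjust as
needed):

  $ apt-get source linux-image-2.6.32-5-xen-amd64
  $ grep -ri smartpoll linux-2.6-2.6.32/drivers/net/xen-netfront.c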

I manually merged Debian's kernel with Jeremy's updates to the netfront
driver from his git repository:

  $ git diff 5473680bdedb7a62e641970119e6e9381a8d80f4..3b966565a89659f938a4fd662c8475f0c00e0606

Deployed this new image on all domU's (except for two of them, kept as a
control group) and added xen_netfront.use_smartpoll=0 to the grub kernel
parameters.
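
Concretely, on each domU booting via grub2 that amounts to something
like this (assuming /etc/default/grub is where your kernel parameters
live):

  # in /etc/default/grub, append the parameter:
  GRUB_CMDLINE_LINUX="xen_netfront.use_smartpoll=0"

  # then regenerate the grub configuration:
  $ update-grub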

Problem solved.  Only the two domU's I left unpatched get victimized.
The rest of the hosts have been up for over a day and have not lost any
packets.

P.S. This is my first NNTP post through gmane; I have no idea whether it
will reach the list, keep Message-Id/References intact, and CC
Christophe, Jeremy, Dongxiao, et al.


> Jeremy Fitzhardinge wrote:
>>  On 09/10/2010 04:50 AM, Pasi Kärkkäinen wrote:
>>> On Wed, Aug 25, 2010 at 08:51:09AM +0800, Xu, Dongxiao wrote:
>>>> Hi Christophe,
>>>>
>>>> Thanks for finding and checking the problem.
>>>> I will try to reproduce the issue and check what caused the
>>>> problem.
>>>>
>>> Hello,
>>>
>>> Was this issue resolved? Some users have been complaining about
>>> "network freezing up" issues recently on ##xen on IRC.
>>
>> Yeah, I'll add a command-line parameter to disable smartpoll (and
>> leave it off by default).
>>
>>     J
>>
>>> -- Pasi
>>>
>>>> Thanks,
>>>> Dongxiao
>>>>
>>>> Jeremy Fitzhardinge wrote:
>>>>>  On 08/22/2010 09:43 AM, Christophe Saout wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I've been playing with some of the new pvops code, namely DomU
>>>>>> guest code.  What I've been observing on one of the virtual
>>>>>> machines is that the network (vif) is dying after about ten to
>>>>>> sixty minutes of uptime. The unfortunate thing here is that I can
>>>>>> only reproduce it on a production VM and have been unlucky so far
>>>>>> to trigger the bug on a test machine.  While this has not been
>>>>>> tragic - rebooting fixed the issue, unfortunately I can't spend
>>>>>> very much time on debugging after the issue pops up.
>>>>> Ah, OK.  I've seen this a couple of times as well.  And it just
>>>>> happened to me then...
>>>>>
>>>>>
>>>>>> Now, what is happening is that the receive path goes dead.  The
>>>>>> DomU can send packets to Dom0 and those are visible using tcpdump
>>>>>> on the Dom0 on the virtual interface, but not the other way
>>>>>> around.
>>>>> I hadn't got to that level of diagnosis, but I can confirm that
>>>>> that's what seems to be happening here too.
>>>>>
>>>>>> Now, I have done more than one change at a time (I'd like to
>>>>>> avoid trying to pin it down since I can only reproduce it on
>>>>>> a production machine, as I said, so suggestions are welcome), but
>>>>>> my suspicion is that it might have to do with the new "smart
>>>>>> polling" feature in xen/netfront.  Note that I have also updated
>>>>>> Dom0 to pull in the latest dom0/backend and netback changes, just
>>>>>> to make sure it's not due to an issue that has been fixed there,
>>>>>> but I'm still seeing the same.
>>>>> I agree.  I think I started seeing this once I merged smartpoll
>>>>> into netfront.
>>>>>
>>>>>     J
>>>>>
>>>>>> The production machine is a machine that doesn't have much
>>>>>> network load, but deals with a lot of small network requests (DNS
>>>>>> and smtp mostly).  A workload which is hard to reproduce on the
>>>>>> test machine. Heavy network load (NFS, FTP and so on) for days
>>>>>> hasn't triggered the problem.  Also, segmentation offloading and
>>>>>> similar settings don't have any effect.
>>>>>>
>>>>>> The machine has 2 physical and the VM 2 virtual CPUs, DomU has
>>>>>> PREEMPT enabled.
>>>>>>
>>>>>> I've been looking at the code to see if there might be a race
>>>>>> condition somewhere - something like a situation where the hrtimer
>>>>>> doesn't run while Dom0 believes the DomU is polling and therefore
>>>>>> doesn't emit an interrupt - but I'm afraid I don't know enough to
>>>>>> judge this (I mean, the spinlocks look safe to me).
>>>>>>
>>>>>> Do you have any suggestions what to try?  I can trigger the issue
>>>>>> on the production VM again, but debugging should not take more
>>>>>> than a few minutes if it happens.  Access is only possible via
>>>>>> the console.  Neither Dom0 nor the guest shows anything unusual in
>>>>>> the kernel messages, and both continue to behave normally after the
>>>>>> network goes dead (I'm also able to shut down the guest normally).
>>>>>>

-- 
Gerald Turner  Email: gturner@xxxxxxxxxx  JID: gturner@xxxxxxxxxxxxxxxxx
GPG: 0xFA8CD6D5  21D9 B2E8 7FE7 F19E 5F7D  4D0C 3FA0 810F FA8C D6D5


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel