
[Xen-devel] [BUG 2.6.32.y] Broken PV migration between hosts with different uptime, non-monotonic time?


I encountered the following bug when migrating a Linux PV domain on 
Xen-3.4.3 between hosts whose uptime differs by several minutes (3 
hosts, each ~5 minutes apart): When migrating from a host with lower uptime 
to a host with higher uptime, the VM loses its network connection for some 
time and then continues after some minutes (roughly equivalent to the 
difference in uptime?).
There are two different symptoms: either the VM becomes unpingable, or the VM 
is pingable but the ssh connection freezes; a while-loop dumping /proc/uptime 
freezes and then continues without a jump once the freeze is over.

When looking at the dmesg output of the domU, I also see a jump in the 
timestamps:
[1967742.320218] eth0: no IPv6 routers present
[1968779.217256] suspending xenstore...
[1968779.217358] trying to map vcpu_info 0 at ffff88000bcbc020, mfn 85e61e, offset 32
[1968779.217358] cpu 0 using vcpu_info at ffff88000bcbc020
[ 5655.842391] suspending xenstore...
[ 5655.842477] trying to map vcpu_info 0 at ffff88000bcbc020, mfn d5e61e, offset 32
[ 5655.842477] cpu 0 using vcpu_info at ffff88000bcbc020
[ 7745.941585] suspending xenstore...
[ 7745.941667] trying to map vcpu_info 0 at ffff88000bcbc020, mfn be4163, offset 32
[ 7745.941667] cpu 0 using vcpu_info at ffff88000bcbc020
[342272.197261] suspending xenstore...

If I revert the following commit (original from 2.6.36-rc1), the problem does 
not show in 2.6.32.y:

commit 8a22b9996b001c88f2bfb54c6de6a05fc39e177a
Author: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>
Date:   Mon Jul 12 11:49:59 2010 -0700

    xen: drop xen_sched_clock in favour of using plain wallclock time

    xen_sched_clock only counts unstolen time.  In principle this should
    be useful to the Linux scheduler so that it knows how much time a process
    actually consumed.  But in practice this doesn't work very well as the
    scheduler expects the sched_clock time to be synchronized between
    cpus.  It also uses sched_clock to measure the time a task spends
    sleeping, in which case "unstolen time" isn't meaningful.

    So just use plain xen_clocksource_read to return wallclock nanoseconds
    for sched_clock.

2.6.36 does not work, since 489fb49 and e7a3481 are missing: Without 
the "global synchronization point for pvclock" (AKA last_value) plus the fix 
to "reset it to 0 on resume", VMs migrate fine in the opposite direction 
(older = higher uptime -> newer = lower uptime), but the original direction 
(lower -> higher) now stalls for 5 minutes.

2.6.37 (which includes the above patches) works fine in both directions (I only 
see a 2 second network dropout for 2 VMs going lower -> higher). So something 
else must also have changed, which is still missing in 2.6.32.y so far.

I tried to understand all those clockevent, timer, pvclock, sched_clock() 
details, but now I'm stuck. To me it looks like xen_clocksource_read() is not 
monotonic across migration, which seems to break the assumption of 
sched_clock() being monotonic.

Has somebody else observed a similar problem and can provide a helpful hint?
Is there anything I can look at to get this issue solved?


PS: bisecting did not help much, since 2.6.32.y contains a lot of back-ports 
from 2.6.33, 35, 36 and 37.
2.6.33 needs 281ff33 # x86_64, cpa: Don't work hard in preserving kernel 2M 
mappings when using 4K already
2.6.33-rc1: c5cae66 fixes 65f6338  # do_suspend error handling
2.6.35-rc1: e7a3481 fixes 489fb49 # global sync point
2.6.37 needs ceff1a7 # /proc/kcore: fix seeking
Philipp Hahn           Open Source Software Engineer      hahn@xxxxxxxxxxxxx
Univention GmbH        be open.                       fon: +49 421 22 232- 0
Mary-Somerville-Str.1  D-28359 Bremen                 fax: +49 421 22 232-99

