
Re: [Xen-devel] Hypervisor error messages after xl block-detach with linux 3.18-rc5



On 11/24/2014 04:09 PM, Juergen Gross wrote:
> On 11/24/2014 11:59 AM, Juergen Gross wrote:
>> On 11/24/2014 11:20 AM, Jan Beulich wrote:
>>> On 24.11.14 at 10:55, <JGross@xxxxxxxx> wrote:
>>>> - Sometimes I see only NMI watchdog messages; looking into the hanging
>>>>   cpu state via the Xen debug keys I can see the cpu(s) in question are
>>>>   spinning in _raw_spin_lock():
>>>>   __handle_mm_fault()->__pte_alloc()->pmd_lock()->_raw_spin_lock()
>>>>   The hanging cpus were executing some random user processes (cron,
>>>>   bash, xargs); cr2 contained user addresses.

>>> Is this perhaps what
>>> http://lists.xenproject.org/archives/html/xen-devel/2014-11/msg02135.html
>>> appears to be about?

>> Hmm, I'm not sure.
>>
>> I'll try a 3.17 kernel to verify.

> Still seeing the issue, but less frequently. OTOH I just found in the
> lkml thread referenced above that 3.17 is showing that issue, too. :-(
>
> I'll try to set up a pv-variant of Linus' patch and test it...
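
As background to the trace quoted above, here is a minimal user-space sketch
of the split page-table lock pattern behind __pte_alloc()->pmd_lock()
(illustrative names only, not the actual mm code): the fault path allocates a
pte page and takes a per-pmd spinlock just to install it, and if that lock
never becomes available the cpu spins in _raw_spin_lock() until the NMI
watchdog complains.

/* User-space model of the lock/install/unlock step in the fault path.
 * Build with: gcc -O2 -pthread pte_alloc_model.c
 * All names are illustrative.
 */
#include <pthread.h>
#include <stdlib.h>

struct pmd_entry {
    pthread_spinlock_t ptl;   /* stands in for the per-pmd page-table lock */
    void *pte_page;           /* pte page installed under that lock */
};

/* Roughly the shape of __pte_alloc(): allocate first, then take the lock
 * only to publish the new pte page, so concurrent faults don't race. */
static int pte_alloc_model(struct pmd_entry *pmd)
{
    void *new_page = calloc(1, 4096);
    if (!new_page)
        return -1;

    pthread_spin_lock(&pmd->ptl);   /* the hung cpus were spinning here */
    if (!pmd->pte_page) {
        pmd->pte_page = new_page;   /* we won the race: install our page */
        new_page = NULL;
    }
    pthread_spin_unlock(&pmd->ptl);

    free(new_page);                 /* lost the race: discard the spare page */
    return 0;
}

int main(void)
{
    struct pmd_entry pmd = { .pte_page = NULL };

    pthread_spin_init(&pmd.ptl, PTHREAD_PROCESS_PRIVATE);
    return pte_alloc_model(&pmd);
}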

Okay, the test survived the night. It really seems to be the same issue.

I think I'm now seeing the qemu issue Ian mentioned:

[  140.182849] xen:grant_table: WARNING: g.e. 0x10 still in use!
[  140.182859] deferring g.e. 0x10 (pfn 0xffffffffffffffff)
[  140.182864] xen:grant_table: WARNING: g.e. 0xf still in use!
[  140.182866] deferring g.e. 0xf (pfn 0xffffffffffffffff)
...
[  140.183128] xen:grant_table: WARNING: g.e. 0x2a still in use!
[  140.183129] deferring g.e. 0x2a (pfn 0xffffffffffffffff)
[  142.182274] xen:grant_table: freeing g.e. 0x9
[  145.182284] xen:grant_table: freeing g.e. 0x44
[  147.182272] xen:grant_table: freeing g.e. 0x43
[  501.182282] xen:grant_table: g.e. 0x10 still pending
[  501.182315] xen:grant_table: g.e. 0xf still pending
...

I'll update qemu and try again...
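
For context on the grant-table messages above, a minimal user-space sketch of
the deferral pattern they point at (hypothetical names, not the actual
drivers/xen/grant-table.c code): when the frontend revokes a grant reference
that the other end (here apparently qemu) still has mapped, the backing page
cannot be freed immediately, so the free is deferred and retried later; if the
backend never unmaps, the reference is eventually reported as "still pending".

/* Simplified model of the deferred-free behaviour behind the
 * "still in use ... deferring ... freeing ... still pending" messages.
 * All names are illustrative; the real logic lives in the kernel's
 * grant-table code.
 */
#include <stdbool.h>
#include <stdio.h>

struct grant {                    /* stand-in for one grant-table entry */
    unsigned int ref;
    bool mapped_by_backend;       /* e.g. still mapped by qemu */
};

/* Try to revoke foreign access; fails while the backend still maps it. */
static bool try_end_foreign_access(struct grant *g)
{
    if (g->mapped_by_backend) {
        printf("WARNING: g.e. %#x still in use!\n", g->ref);
        return false;
    }
    return true;
}

/* Called when the frontend tears the device down (e.g. xl block-detach). */
static void end_foreign_access(struct grant *g)
{
    if (try_end_foreign_access(g)) {
        printf("freeing g.e. %#x\n", g->ref);
        /* safe to free the grant reference and its backing page now */
    } else {
        printf("deferring g.e. %#x\n", g->ref);
        /* the real code re-arms a timer and retries; if the retries
         * never succeed it reports the reference as "still pending" */
    }
}

int main(void)
{
    struct grant g = { .ref = 0x10, .mapped_by_backend = true };

    end_foreign_access(&g);       /* backend still maps it -> deferring */
    g.mapped_by_backend = false;  /* backend (qemu) finally unmaps */
    end_foreign_access(&g);       /* retry succeeds -> freeing */
    return 0;
}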


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

