Re: [Xen-devel] Resend: Linux 4.11-rc7: kernel BUG at drivers/xen/events/events_base.c:1221
On 25/04/17 08:57, Sander Eikelenboom wrote:
> On 25/04/17 08:42, Juergen Gross wrote:
>> On 25/04/17 08:35, Sander Eikelenboom wrote:
>>> (XEN) [2017-04-24 21:20:53.203] d0v0 Unhandled invalid opcode fault/trap [#6, ec=ffffffff]
>>> (XEN) [2017-04-24 21:20:53.203] domain_crash_sync called from entry.S: fault at ffff82d080358f70 entry.o#create_bounce_frame+0x145/0x154
>>> (XEN) [2017-04-24 21:20:53.203] Domain 0 (vcpu#0) crashed on cpu#0:
>>> (XEN) [2017-04-24 21:20:53.203] ----[ Xen-4.9-unstable x86_64 debug=y  Not tainted ]----
>>> (XEN) [2017-04-24 21:20:53.203] CPU:    0
>>> (XEN) [2017-04-24 21:20:53.203] RIP:    e033:[<ffffffff8255a485>]
>>
>> Can you please tell us symbol+offset for RIP?
>>
>> Juergen
>>
>
> Sure:
> # addr2line -e vmlinux-4.11.0-rc8-20170424-linus-doflr-xennext-boris+ ffffffff8255a485
> linux-linus/arch/x86/xen/enlighten_pv.c:288
>
> Which is:
> static bool __init xen_check_xsave(void)
> {
>         unsigned int err, eax, edx;
>
>         /*
>          * Xen 4.0 and older accidentally leaked the host XSAVE flag into guest
>          * view, despite not being able to support guests using the
>          * functionality. Probe for the actual availability of XSAVE by seeing
>          * whether xgetbv executes successfully or raises #UD.
>          */
> HERE -->        asm volatile("1: .byte 0x0f,0x01,0xd0\n\t" /* xgetbv */
>                      "xor %[err], %[err]\n"
>                      "2:\n\t"
>                      ".pushsection .fixup,\"ax\"\n\t"
>                      "3: movl $1,%[err]\n\t"
>                      "jmp 2b\n\t"
>                      ".popsection\n\t"
>                      _ASM_EXTABLE(1b, 3b)
>                      : [err] "=r" (err), "=a" (eax), "=d" (edx)
>                      : "c" (0));
>
>         return err == 0;
> }
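
For illustration, the quoted probe executes the raw xgetbv opcode and relies on an _ASM_EXTABLE fixup to recover if the instruction raises #UD. A minimal user-space sketch of the same probe-and-recover idea, using a SIGILL handler with sigsetjmp/siglongjmp in place of the kernel's exception table (the function and variable names below are illustrative and not taken from the kernel), could look like this:

#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>

static sigjmp_buf probe_env;

static void sigill_handler(int sig)
{
        /* xgetbv raised #UD (delivered as SIGILL): abort the probe */
        (void)sig;
        siglongjmp(probe_env, 1);
}

/* Returns 1 if xgetbv executed, 0 if it raised #UD. */
static int xgetbv_probe(void)
{
        struct sigaction sa = { .sa_handler = sigill_handler };
        struct sigaction old;
        uint32_t eax, edx;
        volatile int ok = 0;

        sigemptyset(&sa.sa_mask);
        sigaction(SIGILL, &sa, &old);

        if (sigsetjmp(probe_env, 1) == 0) {
                /* xgetbv encoded as raw bytes, as in the kernel snippet */
                asm volatile(".byte 0x0f, 0x01, 0xd0"
                             : "=a" (eax), "=d" (edx)
                             : "c" (0));
                (void)eax;
                (void)edx;
                ok = 1;
        }

        sigaction(SIGILL, &old, NULL);
        return ok;
}

int main(void)
{
        printf("xgetbv %s\n", xgetbv_probe() ? "executed" : "raised #UD");
        return 0;
}

Here the sigsetjmp/siglongjmp pair plays the role the exception-table entry plays in the kernel code: control resumes past the faulting instruction with the failure recorded, instead of the fault being fatal.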
I hoped so. :-)
I posted a patch to repair this a few minutes ago. Would you mind trying
it? See:
https://lists.xen.org/archives/html/xen-devel/2017-04/msg02925.html
Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel