
Re: [Xen-devel] [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic



Unfortunately the servers are provided "as is" by Rackspace, so this is not
something I can change unless it can be done from the terminal (unlikely).

Is there any performance loss from not using hardware performance counters?
Also, do you know the exact configure command? It was not recognised when I
tried.
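
(My guess, since HHVM builds with CMake rather than an autoconf script, is
that the flag has to be passed as a CMake define when generating the build,
something like

    cmake -DNO_HARDWARE_COUNTERS=1 . && make

but that is only an assumption on my part; NO_HARDWARE_COUNTERS is simply
the name mentioned below, and I have not been able to verify it here.)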

Thanks

On 20/09/2013 13:02, "Dietmar Hahn" <dietmar.hahn@xxxxxxxxxxxxxx> wrote:

>On Thursday, 19 September 2013, 15:02:38, Craig Carnell wrote:
>> Xen Version is 4.1.3
>> 
>> I'm not able to run xm; it asks for xen-utils 4.1, which I try to install
>>(xen-utils 4.2 gets installed instead), but it still can't find it.
>> 
>> Sorry!
>
>It seems your hhvm is running as a PV domU with cpl=3, and in this case
>the rdpmc leads to the general protection fault because there is no VPMU
>support for PV domains.
>What you can do is run your hhvm as an HVM domain. Then you should not
>get a panic.
>The other way is to build your hhvm without hardware performance counters,
>as Wei Liu already mentioned. This is the way to go for a Linux dom0, I
>think.
>
>Dietmar.
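
(For reference, in case someone reading this does control dom0: as I
understand it, running the guest as HVM is only a matter of the domain
configuration, roughly like the sketch below. The file name, disk path and
sizes are placeholders I made up, not taken from my setup.

    # /etc/xen/hhvm-hvm.cfg -- hypothetical example
    builder = 'hvm'
    name    = 'hhvm-hvm'
    memory  = 2048
    vcpus   = 2
    disk    = [ 'phy:/dev/vg0/hhvm-disk,hda,w' ]
    vif     = [ 'bridge=xenbr0' ]
    boot    = 'c'

started with something like "xm create /etc/xen/hhvm-hvm.cfg". Not an option
for me on Rackspace, unfortunately.)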
>
>> 
>> 
>> From: Dietmar Hahn <dietmar.hahn@xxxxxxxxxxxxxx>
>> Date: Thursday, 19 September 2013 12:51
>> To: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
>> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>, Craig Carnell <ccarnell@xxxxxxxxxx>
>> Subject: Re: [Xen-devel] [BUG] hhvm running on Ubuntu 13.04 with Xen
>> Hypervisor - linux kernel panic
>> 
>> 
>> On Thursday, 19 September 2013, 10:52:26, Wei Liu wrote:
>> > On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>> > > Hi,
>> > >
>> > > I am trying out HipHop VM (the PHP just-in-time compiler). My setup
>> > > is a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>> > > 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013
>> > > x86_64 x86_64 x86_64 GNU/Linux
>> > >
>> > > The cloud server uses the Xen Hypervisor.
>> > >
>> > > HipHop VM is compiled from source using the GitHub repo. When running
>> > > hhvm from the command line (without any options or PHP application)
>> > > the system immediately crashes, throwing Linux into a kernel panic
>> > > and thus death.
>> > >
>> > > I have reported this issue on the HipHop GitHub issue page:
>> > >
>> > > https://github.com/facebook/hiphop-php/issues/1065
>> > >
>> > > I am not sure if this is a Linux kernel bug or a Xen hypervisor bug.
>> >
>> > I'm not an expert on the VPMU stuff, but it seems that HHVM makes use
>> > of (virtual) hardware performance counters, which are not well
>> > supported at the moment, and that is what causes this problem.
>> >
>> > Compiling HHVM without hardware performance counter support might
>> > solve this problem:
>> >
>> > ./configure -DNO_HARDWARE_COUNTERS=1
>> >
>> > Wei.
>> >
>>
>> > > The output of /var/log/syslog:
>> > >
>> > > Sep 18 10:55:58 web kernel: [92118.674736] general protection fault: 0000 [#1] SMP
>> > > Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F) parport(F)
>> > > Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>> > > Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm Tainted: GF 3.8.0-30-generic #44-Ubuntu
>> > > Sep 18 10:55:58 web kernel: [92118.674795] RIP: e030:[<ffffffff81003046>] [<ffffffff81003046>] native_read_pmc+0x6/0x20
>> > > Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20 EFLAGS: 00010083
>> > > Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX: 0000000000000000 RCX: 0000000000000000
>> > > Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI: ffff8800f7c81900 RDI: 0000000000000000
>> > > Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08: 00000000000337d8 R09: ffff8800e933dcc0
>> > > Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11: 0000000000000246 R12: ffff8800f87ecc00
>> > > Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14: ffff8800f87ecd70 R15: 0000000000000010
>> > > Sep 18 10:55:58 web kernel: [92118.674844] FS: 00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
>> > > Sep 18 10:55:58 web kernel: [92118.674850] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
>> > > Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3: 00000000025cd000 CR4: 0000000000000660
>> > > Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>> > > Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> > > Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020, threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>> > > Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>> > > Sep 18 10:55:58 web kernel: [92118.674882] ffff8800026b9d58 ffffffff81024625 0000000000000000 ffff8800f87ecc00
>> > > Sep 18 10:55:58 web kernel: [92118.674893] ffff8800f7c8190c ffffffff811231a0 0000000000000005 ffff8800026b9d68
>> > > Sep 18 10:55:58 web kernel: [92118.674902] ffffffff81024689 ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>> > > Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>> > > Sep 18 10:55:58 web kernel: [92118.674920] [<ffffffff81024625>] x86_perf_event_update+0x55/0xb0
>> > > Sep 18 10:55:58 web kernel: [92118.674929] [<ffffffff811231a0>] ? perf_read+0x2f0/0x2f0
>> > > Sep 18 10:55:58 web kernel: [92118.674936] [<ffffffff81024689>] x86_pmu_read+0x9/0x10
>> > > Sep 18 10:55:58 web kernel: [92118.674942] [<ffffffff811232a6>] __perf_event_read+0x106/0x110
>> > > Sep 18 10:55:58 web kernel: [92118.674951] [<ffffffff810b9987>] smp_call_function_single+0x147/0x170
>> > > Sep 18 10:55:58 web kernel: [92118.674959] [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
>> > > Sep 18 10:55:58 web kernel: [92118.674966] [<ffffffff81122dda>] perf_event_read+0x10a/0x110
>> > > Sep 18 10:55:58 web kernel: [92118.674972] [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
>> > > Sep 18 10:55:58 web kernel: [92118.674979] [<ffffffff811240dd>] perf_event_reset+0xd/0x20
>> > > Sep 18 10:55:58 web kernel: [92118.674987] [<ffffffff8111ff08>] perf_event_for_each_child+0x38/0xa0
>> > > Sep 18 10:55:58 web kernel: [92118.674994] [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
>> > > Sep 18 10:55:58 web kernel: [92118.675001] [<ffffffff8112255a>] perf_ioctl+0xba/0x340
>> > > Sep 18 10:55:58 web kernel: [92118.675009] [<ffffffff811b1885>] ? fd_install+0x25/0x30
>> > > Sep 18 10:55:58 web kernel: [92118.675016] [<ffffffff811a60e9>] do_vfs_ioctl+0x99/0x570
>> > > Sep 18 10:55:58 web kernel: [92118.675023] [<ffffffff811a6651>] sys_ioctl+0x91/0xb0
>> > > Sep 18 10:55:58 web kernel: [92118.675031] [<ffffffff816d575d>] system_call_fastpath+0x1a/0x1f
>> > > Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
>> 
>> 
>> 
>> It panics on <0f> 33
>> The code of native_read_pmc():
>> ffffffff81030bc0 <native_read_pmc>:
>> ffffffff81030bc0: 89 f9 mov %edi,%ecx
>> ffffffff81030bc2: 0f 33 rdpmc
>> ffffffff81030bc4: 48 c1 e2 20 shl $0x20,%rdx
>> 
>> So it's the rdpmc instruction which leads to the panic.
>> In the Xen VPMU (on HVM) the rdpmc is not intercepted, I think.
>> On PV I'm not sure. Maybe check xm dmesg?
>> Which Xen version?
>> 
>> 
>> 
>> Dietmar.
>
>-- 
>Company details: http://ts.fujitsu.com/imprint.html
>
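
(The disassembly Dietmar quotes above can be reproduced on the guest,
assuming a vmlinux with symbols for the running kernel is available, for
example from Ubuntu's debug symbol (ddeb) packages:

    objdump -d vmlinux | grep -A 4 '<native_read_pmc>:'

The output should show the rdpmc (0f 33) instruction near the start of the
function, matching the <0f> 33 byte marked in the Code: line of the oops.)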



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

