
Re: [Xen-devel] __unlazy_fpu with Sandy Bridge?


  • To: Haitao Shan <maillists.shan@xxxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Thu, 28 Apr 2011 07:45:04 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
  • Delivery-date: Wed, 27 Apr 2011 23:45:59 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-topic: [Xen-devel] __unlazy_fpu with Sandy Bridge?

See xen-unstable:23248. I plumbed a call to xc_cpuid_config_xsave() into
xc_cpuid_pv_policy() in libxc/xc_cpuid_x86.c. Without this a recent Linux
kernel fails even more quickly: it detects XSAVE but then finds the
feature leaves are all zeroes, and panics.

Best guess is that filling in the leaves using exactly the same method as
for HVM guests (which is what xc_cpuid_config_xsave() was originally written
for) is wrong for some reason.
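
For reference, below is a minimal standalone sketch (not the libxc code
itself) that dumps the XSAVE feature leaves in question on the host. The
register layout of CPUID leaf 0xD, sub-leaf 0 is as documented in the
Intel SDM; the program is purely illustrative and assumes GCC's
<cpuid.h> helpers.

/* Sketch: dump the XSAVE feature leaves (CPUID leaf 0xD) that
 * xc_cpuid_config_xsave() is meant to expose to the guest. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID.1:ECX bit 26 advertises XSAVE support. */
    __cpuid(1, eax, ebx, ecx, edx);
    printf("XSAVE advertised: %s\n", (ecx & (1u << 26)) ? "yes" : "no");

    /* Leaf 0xD, sub-leaf 0: supported XCR0 bits (EDX:EAX) and the
     * save-area sizes for the current and maximal XCR0 (EBX, ECX). */
    __cpuid_count(0xd, 0, eax, ebx, ecx, edx);
    printf("xfeatures supported:      0x%x%08x\n", edx, eax);
    printf("save area (current XCR0): %u bytes\n", ebx);
    printf("save area (all features): %u bytes\n", ecx);

    return 0;
}

If a PV guest sees XSAVE set in CPUID.1:ECX but gets all zeroes back for
the 0xD sub-leaves, it has no valid save-area size to work with, which
fits the quick panic described above.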

 -- Keir

On 28/04/2011 02:19, "Haitao Shan" <maillists.shan@xxxxxxxxx> wrote:

> I will have a look. But Keir, could you please share some context?
> Any changes made to Xsave/CPUID so far?
>  
> Shan Haitao
> 2011/4/27 Keir Fraser <keir.xen@xxxxxxxxx>
>> On 27/04/2011 03:58, "Konrad Rzeszutek Wilk" <konrad.wilk@xxxxxxxxxx> wrote:
>> 
>>> On Tue, Apr 26, 2011 at 09:57:58PM -0400, Konrad Rzeszutek Wilk wrote:
>>>> I bought over the weekend a Sandy Bridge machine and quite often
>>>> when I launch a guest I get this:
>>> 
>>> Which disappears if I do 'xsave=0'. Looks like there is still some
>>> lingering bug?
>>> (I did build it with the patch that Keir posted).
>> 
>> I may have plumbed the XSAVE CPUID stuff incorrectly for PV guests. Intel
>> probably need to take a look.
>> 
>>  -- Keir
>> 
>>>> 
>>>> [    1.519320] modprobe used greatest stack depth: 6348 bytes left
>>>> [    1.528966] udevd (1158): /proc/1158/oom_adj is deprecated, please use /proc/1158/oom_score_adj instead.
>>>> 
>>>> [    1.610819] BUG: unable to handle kernel paging request at cb5b007f
>>>> [    1.610839] IP: [<c102b88a>] __unlazy_fpu+0x20/0x84
>>>> [    1.610854] *pdpt = 000000000b49f027 *pde = 000000000c0b1067 *pte = 800000000b5b0061
>>>> [    1.610874] Oops: 0003 [#1] SMP
>>>> [    1.610886] last sysfs file: /sys/devices/virtual/tty/ptyp9/uevent
>>>> [    1.610896] Modules linked in: xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs
>>>> [    1.610934]
>>>> [    1.610941] Pid: 1424, comm: ifup Not tainted 2.6.39-rc4yes_xen_blkdev-00489-gfa5424e #1
>>>> [    1.610959] EIP: 0061:[<c102b88a>] EFLAGS: 00010086 CPU: 1
>>>> [    1.610969] EIP is at __unlazy_fpu+0x20/0x84
>>>> [    1.610977] EAX: ffffffff EBX: ead8c4e0 ECX: cb602700 EDX: ffffffff
>>>> [    1.610987] ESI: eb3d3180 EDI: cb5afd40 EBP: cb60de60 ESP: cb60de5c
>>>> [    1.610997]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
>>>> [    1.611007] Process ifup (pid: 1424, ti=cb60c000 task=cb602700 task.ti=cb5c8000)
>>>> [    1.611017] Stack:
>>>> [    1.611023]  cb602700 cb60de88 c102b938 ead8c4e0 eb3d5480 0060224c 00000001 eb3d5480
>>>> [    1.611055]  cb4adc00 eb3d5480 ead8c4e0 cb5c9f50 c13d45ce cb60df1c 00000282 5fb55ce4
>>>> [    1.611055]  00000000 c15e7480 ead8c4e0 c15e7480 c15e7480 ead8c758 cb60df1c c15e7480
>>>> [    1.611055] Call Trace:
>>>> [    1.611055]  [<c102b938>] __switch_to+0x40/0xfa
>>>> [    1.611055]  [<c13d45ce>] schedule+0x5fb/0x667
>>>> [    1.611055]  [<c1029d43>] ? xen_restore_fl_direct_reloc+0x4/0x4
>>>> [    1.611055]  [<c10c4b05>] ? free_hot_cold_page+0xf8/0x100
>>>> [    1.611055]  [<c102a21c>] ? get_phys_to_machine+0x18/0x4c
>>>> [    1.611055]  [<c10296cf>] ? xen_force_evtchn_callback+0xf/0x14
>>>> [    1.611055]  [<c1029d4c>] ? check_events+0x8/0xc
>>>> [    1.611055]  [<c1029d43>] ? xen_restore_fl_direct_reloc+0x4/0x4
>>>> [    1.611055]  [<c13d5792>] ? _raw_spin_unlock_irqrestore+0x14/0x17
>>>> [    1.611055]  [<c1074a7f>] ? add_wait_queue+0x30/0x35
>>>> [    1.611055]  [<c1061112>] ? do_wait+0x183/0x1e1
>>>> [    1.611055]  [<c10611f8>] ? sys_wait4+0x88/0xa1
>>>> [    1.611055]  [<c105fa70>] ? wait_noreap_copyout+0xdf/0xdf
>>>> [    1.611055]  [<c1061224>] ? sys_waitpid+0x13/0x15
>>>> [    1.611055]  [<c13daa98>] ? sysenter_do_call+0x12/0x28
>>>> [    1.611055] Code: 31 ff 5a 89 f8 59 5b 5e 5f 5d c3 55 89 c1 89 e5 57 8b 40 04 f6 40 0c 01 74 6b b0 01 84 c0 74 1f 83 c8 ff 8b b9 38 03 00 00 89 c2 <0f> ae 37 8b 81 38 03 00 00 f6 80 00 02 00 00 01 75 18 eb 2e b0
>>>> [    1.611055] EIP: [<c102b88a>] __unlazy_fpu+0x20/0x84 SS:ESP 0069:cb60de5c
>>>> [    1.611055] CR2: 00000000cb5b007f
>>>> [    1.611055] ---[ end trace 0e00f93ab96a1012 ]---
>>>> [    1.611055] Fixing recursive fault but reboot is needed!
>>>> 
>>>> 
>>>> Has anybody seen something similar to this?
>>>> 
>> 
>> 
>> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

