[Xen-devel] [PATCH] Fix qemu-dm segfault when multiple HVM domains ---Was: qemu-dm segfault with multiple HVM domains?

To: "John Clemens" <jclemens@xxxxxxxxxxxxxxx>, "Li, Xin B" <xin.b.li@xxxxxxxxx>
Subject: [Xen-devel] [PATCH] Fix qemu-dm segfault when multiple HVM domains ---Was: qemu-dm segfault with multiple HVM domains?
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Mon, 27 Feb 2006 17:20:42 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 27 Feb 2006 09:21:19 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcY3/URjwHEKzpi7RuGQ9G6YMTyAmgDemHLQ
Thread-topic: [PATCH] Fix qemu-dm segfault when multiple HVM domains ---Was: qemu-dm segfault with multiple HVM domains?
Hi John,
        Can you try the attached patch?

        This issue can be reproduced on an SMP platform while domain 0
is UP. The reason is that, after finishing a DMA request, the DMA
thread triggers the interrupt and then clears the callback function.
When the guest gets the interrupt, it checks the callback function; if
the callback is still set, the guest triggers another DMA request. So
if the guest's check of the callback happens between the interrupt
trigger and the callback clear in the DMA thread, the result is a call
through a NULL function pointer. With a UP domain 0 on an SMP platform,
this situation can be reproduced easily.
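
        For illustration only, a minimal C sketch of the two orderings
follows. The names here are invented for the sketch (they are not the
identifiers touched by the attached ide_thread.patch); the real fix is
just the reordering, described after the sketch.

#include <stdio.h>
#include <stddef.h>

/* Simplified model of the IDE DMA completion state. DMAState, dma_cb,
 * irq_raised and dma_complete_* are invented for this sketch. */
typedef struct {
    void (*dma_cb)(void *opaque);  /* callback the guest path re-checks */
    void *opaque;
    int irq_raised;                /* stands in for asserting the guest IRQ */
} DMAState;

/* Buggy ordering: the interrupt is raised before the callback is
 * cleared. On SMP, the guest's handler can run inside that window,
 * observe a callback the DMA thread is just clearing, and call through
 * NULL -- matching the "rip 0000000000000000" segfaults quoted below. */
static void dma_complete_buggy(DMAState *s)
{
    s->irq_raised = 1;   /* guest handler may run from here on ... */
    s->dma_cb = NULL;    /* ... racing with this clear */
}

/* Reordered completion: finish all teardown first, then make raising
 * the interrupt the very last step of the DMA thread, so the guest can
 * never observe a half-completed request. */
static void dma_complete_fixed(DMAState *s)
{
    s->dma_cb = NULL;    /* tear down the request first */
    s->irq_raised = 1;   /* raise the interrupt last */
}

int main(void)
{
    DMAState s = { NULL, NULL, 0 };
    dma_complete_buggy(&s);   /* same end state, but racy on SMP */
    s.irq_raised = 0;
    dma_complete_fixed(&s);
    printf("irq_raised=%d, callback cleared=%d\n",
           s.irq_raised, s.dma_cb == NULL);
    return 0;
}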

        This patch fixes the issue by moving the interrupt request to
the end of the DMA thread.

Thanks,
Yunhong Jiang
>-----Original Message-----
>From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
>[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
>John Clemens
>Sent: Thursday, February 23, 2006 6:12 AM
>To: Li, Xin B
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [Xen-devel] qemu-dm segfault with multiple HVM domains?
>
>
>Thanks, glad to know I'm not the only one seeing this...  I've opened 
>bug 542.
>
>http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=542
>
>john.c
>
>-- 
>John Clemens                   jclemens@xxxxxxxxxxxxxxx
>
>On Thu, 23 Feb 2006, Li, Xin B wrote:
>
>>> Just verifying that with cs 8932 I still see this problem.  I'm
>>> able to start multiple Windows domains, but either immediately or
>>> over time all but one of the qemu-dm processes segfault.  This only
>>> appears to be a problem when I start multiple Windows domains; a
>>> single domain seems to work fine.
>>>
>>> qemu-dm[4961]: segfault at 0000000000000000 rip 0000000000000000 rsp
>>> 0000000040800198 error 14
>>> qemu-dm[4963]: segfault at 0000000000000000 rip 0000000000000000 rsp
>>> 0000000040800198 error 14
>>>
>>
>> Thanks, I have met this issue too, but did not pay much attention
>> to it :-(
>> Can you please file a bug?
>> -Xin
>>
>>> john.c
>>>
>>> -- 
>>> John Clemens                        jclemens@xxxxxxxxxxxxxxx
>>>
>>>
>>
>

Attachment: ide_thread.patch
Description: ide_thread.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel