WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-ia64-devel

RE: [Xen-ia64-devel] [PATCH] Fix domain reboot bug

To: "Tristan Gingold" <Tristan.Gingold@xxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] [PATCH] Fix domain reboot bug
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Thu, 30 Mar 2006 18:36:00 +0800
Delivery-date: Thu, 30 Mar 2006 10:37:30 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcZT3W4arLLQNMnsRXifY96mTFNOYgAByIMw
Thread-topic: [Xen-ia64-devel] [PATCH] Fix domain reboot bug
>From: Tristan Gingold
>Sent: 30 March 2006 17:40
>
>On Thursday 30 March 2006 11:25, Zhang, Xiantao wrote:
>> Actually, the domain reboot issue is not caused by our previous patch
>> to fix schedule_tail; that patch instead helped uncover a severe
>> HOST_SMP plus domain destroy bug.
>>
>> The major reason is that the VHPT table for dom0/domU is currently per
>> LP, while domain destroy only issues vhpt_flush on the current LP (the
>> one dom0 is running on). So the VHPT table is not flushed on the LP on
>> which the destroyed domU was running.
>>
>> The mechanism of domain reboot is to kill the current domain and
>> create a new domain with the same configuration. Since region id
>> recycling was added last time with domain destroy support, the newly
>> created domain will inherit the same region id as the previous one. In
>> this case, the stale entries in the VHPT table will make the new domU
>> halt.
>>
>> Before our schedule_tail patch was applied, domU kept the same pta
>> value as the idle domain when first created, in which the vhpt walker
>> is disabled. Because we use bvt as the default scheduler, a context
>> switch never happens as long as domU is runnable. That means domU had
>> the vhpt DISABLED for its whole life cycle, so even with the vhpt on
>> that LP unflushed, domU still ran correctly.
>>
>> So we need to send an IPI to the target LP to flush the right vhpt
>> table. In particular, based on our previous patch for schedule_tail,
>> domU can gain performance by enabling the vhpt walker.
>If I understand the patch correctly, you flush all the vhpt when a
>domain is destroyed. Isn't it a little bit too heavy?
>
>Tristan.

Yes, that's too heavy. We hope the flush can be made more precise once 
guest SMP support is fully ready. At that point, people can rely on 
domain->domain_dirty_cpumask to decide which LPs should receive the IPI. 
For example, currently domain_dirty_cpumask is not even updated at 
domain switch. So the most conservative way is chosen here: flush all 
LPs at domain destroy, just like flush_tlb_all. That also ensures guest 
SMP keeps working in the early phase, before everything is sorted out. :-)

Thanks,
Kevin

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel