RE: [Xen-ia64-devel] [PATCH] Fix domain reboot bug
>From: Tristan Gingold
>Sent: 30 March 2006 17:40
>
>On Thursday 30 March 2006 11:25, Zhang, Xiantao wrote:
>> Actually, the domain reboot issue is not caused by our previous patch
>> for schedule_tail; that patch instead helped expose a severe HOST_SMP
>> plus domain destroy bug.
>>
>> The root cause is that the VHPT table for dom0/domU is currently per
>> LP, while domain destroy issues vhpt_flush only on the current LP
>> (where dom0 is running). So the VHPT table is not flushed on the LP
>> where the destroyed domU was running.
>>
>> Domain reboot works by killing the current domain and creating a new
>> domain with the same configuration. Since region id recycling was
>> added along with domain destroy support, the newly created domain
>> inherits the same region id as the previous one. In this case, the
>> stale entries in the VHPT table make the new domU hang.
>>
>> Before our schedule_tail patch was applied, domU kept the same pta
>> value as the idle domain when first created, with the vhpt walker
>> disabled. Because we use bvt as the default scheduler, a context
>> switch never happens as long as domU is runnable. That means domU had
>> the vhpt DISABLED for its whole life cycle, so even though the vhpt
>> on that LP was not flushed, domU still ran correctly.
>>
>> So we need to send an IPI to the target LP to flush the right vhpt
>> table. Moreover, with our previous schedule_tail patch, domU gains
>> performance by enabling the vhpt walker.
>If I understand the patch correctly, you flush all the VHPTs when a
>domain is destroyed. Isn't that a little bit too heavy?
>
>Tristan.
Yes, that's too heavy. We hope the flush can be made more accurate once
guest SMP support is fully ready. At that point, we can rely on
domain->domain_dirty_cpumask to decide which LPs should receive the IPI.
For example, domain_dirty_cpumask is currently not even updated at
domain switch. So the most conservative approach is taken here: flush
all LPs at domain destroy, just like flush_tlb_all. That also keeps
guest SMP working in this early phase, before everything is sorted
out. :-)
Thanks,
Kevin
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel