WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-ia64-devel

RE: [Xen-ia64-devel] [PATCH] bug fix new_tlbflush_clock_period()

To: "Isaku Yamahata" <yamahata@xxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] [PATCH] bug fix new_tlbflush_clock_period()
From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Date: Mon, 5 Feb 2007 11:47:28 +0800
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 04 Feb 2007 19:46:51 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20070205034313.GE10566%yamahata@xxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdI18sgtmvydTpxQsGsPnHDYBKAkAAAC+kA
Thread-topic: [Xen-ia64-devel] [PATCH] bug fix new_tlbflush_clock_period()
Isaku Yamahata wrote on Feb 5, 2007 at 11:43:
> On Mon, Feb 05, 2007 at 11:34:59AM +0800, Xu, Anthony wrote:
>> Before calling local_vhpt_flush(), we need to make sure the domain is
>> not a VTI domain and does not use a per-vcpu VHPT.
> 
> local_vhpt_flush() always flushes the VHPT associated with the pcpu, not the vcpu.

If HAS_PERVCPU_VHPT(current->domain) is true, or VMX_DOMAIN(current) is
true, vhpt_paddr may be 0, so local_vhpt_flush() would erase the 64KB of
memory starting at machine address 0.


- Anthony

> 
> DEFINE_PER_CPU (unsigned long, vhpt_paddr);
> DEFINE_PER_CPU (unsigned long, vhpt_pend);
>
> local_vhpt_flush(void)
> {
>         __vhpt_flush(__ia64_per_cpu_var(vhpt_paddr));
> }
> 
> 
>> +static void
>> +tlbflush_clock_local_flush(void *unused)
>> +{
>> +    if (!VMX_DOMAIN(current) && !HAS_PERVCPU_VHPT(current->domain)) {
>> +        local_vhpt_flush();
>> +    }
>> +    local_flush_tlb_all();
>> +}
>> +
>> 
>> - Anthony
>> 
>> Isaku Yamahata wrote on Feb 5, 2007 at 10:56:
>>> On Mon, Feb 05, 2007 at 10:02:53AM +0800, Xu, Anthony wrote:
>>>> Isaku Yamahata wrote on Feb 5, 2007 at 9:45:
>>>>> Hi Kouya.
>>>>> Good catch!
>>>>> Although this patch was already committed and I introduced the bug,
>>>>> a VTI domain also relies on the tlb flush clock.
>>>>> (See flush_vtlb_for_context_switch().)
>>>>> 
>>>>> So we should do:
>>>>>   if (!test_bit(_VCPUF_initialize))
>>>>>           continue;
>>>>>   if (VMX_DOMAIN(v))
>>>>>           <flush all hash and collision chains of v>
>>>>>   else
>>>>>           vcpu_vhpt_flush()
>>>>> 
>>>>> Or
>>>>> 
>>>>>   disable the tlb flush clock usage in
>>>>> flush_vtlb_for_context_switch().
>>>>> 
>>>> 
>>>> Hi Isaku,
>>>> 
>>>> Why do we need to call vcpu_vhpt_flush()?
>>>> IMO we only need to call __local_flush_tlb_all() if we use a
>>>> per-vcpu VHPT. Can you elaborate?
>>> 
>>> That's right.
>>> When I wrote that, I tried to apply the tlb flush clock not only to
>>> the mTLB and VHPT but also to the per-vcpu VHPT. But it isn't used
>>> for the per-vcpu VHPT, so I removed the related bogus code.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel