xen-devel

Re: [Xen-devel] [PATCH] When flush tlb , we need consider the cpu_online_map

To: Jan Beulich <JBeulich@xxxxxxxxxx>, Yunhong Jiang <yunhong.jiang@xxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] When flush tlb , we need consider the cpu_online_map
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Mon, 29 Mar 2010 15:33:11 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 29 Mar 2010 07:34:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4BB0BF4D0200007800037763@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcrPPu8ogIZutj03T5OTWemC01hHvwADdKsg
Thread-topic: [Xen-devel] [PATCH] When flush tlb , we need consider the cpu_online_map
User-agent: Microsoft-Entourage/12.24.0.100205

Sounds good. Can you please re-spin the patch, Yunhong? I will drop your
original patch for now.

 -- Keir

On 29/03/2010 13:55, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:

>>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> 29.03.10 14:00 >>>
>> When flushing a TLB mask, we need to consider the cpu_online_map. The same
>> applies to the EPT flush as well.
> 
> While the idea is certainly correct, doing this more efficiently seems
> quite desirable to me, especially when NR_CPUS is large:
> 
>> --- a/xen/arch/x86/hvm/vmx/vmx.c Sat Mar 27 16:01:35 2010 +0000
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c Mon Mar 29 17:49:51 2010 +0800
>> @@ -1235,6 +1235,9 @@ void ept_sync_domain(struct domain *d)
>>      * unnecessary extra flushes, to avoid allocating a cpumask_t on the
>>      * stack.
>>      */
>>     d->arch.hvm_domain.vmx.ept_synced = d->domain_dirty_cpumask;
>> +    cpus_and(d->arch.hvm_domain.vmx.ept_synced,
>> +             d->arch.hvm_domain.vmx.ept_synced,
>> +             cpu_online_map);
> 
> The added code can be combined with the pre-existing line:
> 
>     cpus_and(d->arch.hvm_domain.vmx.ept_synced,
>              d->domain_dirty_cpumask, cpu_online_map);
> 
>>     on_selected_cpus(&d->arch.hvm_domain.vmx.ept_synced,
>>                      __ept_sync_domain, d, 1);
>> }
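
For reference, a minimal sketch of how the tail of ept_sync_domain() would
read with Jan's combined cpus_and() in place of the plain assignment plus the
patch's extra intersection; the surrounding lines are taken from the quoted
hunk, with the earlier part of the function elided:

    void ept_sync_domain(struct domain *d)
    {
        /* ... checks and the comment from the quoted hunk elided ... */

        /* Copy and restrict to online CPUs in one pass over the mask. */
        cpus_and(d->arch.hvm_domain.vmx.ept_synced,
                 d->domain_dirty_cpumask, cpu_online_map);

        on_selected_cpus(&d->arch.hvm_domain.vmx.ept_synced,
                         __ept_sync_domain, d, 1);
    }

The single cpus_and() performs the copy and the intersection together, so the
NR_CPUS-wide mask is walked once instead of twice.
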
>> --- a/xen/arch/x86/smp.c Sat Mar 27 16:01:35 2010 +0000
>> +++ b/xen/arch/x86/smp.c Mon Mar 29 17:47:25 2010 +0800
>> @@ -229,6 +229,7 @@ void flush_area_mask(const cpumask_t *ma
>>     {
>>         spin_lock(&flush_lock);
>>         cpus_andnot(flush_cpumask, *mask, *cpumask_of(smp_processor_id()));
>> +        cpus_and(flush_cpumask, cpu_online_map, flush_cpumask);
> 
> Here, first doing the full-mask operation and then clearing the one
> extra bit is less overhead:
> 
>         cpus_and(flush_cpumask, *mask, cpu_online_map);
>         cpu_clear(smp_processor_id(), flush_cpumask);
> 
>>         flush_va      = va;
>>         flush_flags   = flags;
>>         send_IPI_mask(&flush_cpumask, INVALIDATE_TLB_VECTOR);
> 
> Jan
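
Likewise, a sketch of the locked section of flush_area_mask() with Jan's
reordering applied, pieced together from the quoted hunk (declarations and
the rest of the function elided):

        spin_lock(&flush_lock);
        /* One full-mask pass: restrict the request to online CPUs. */
        cpus_and(flush_cpumask, *mask, cpu_online_map);
        /* Then a single-bit operation to drop the local CPU. */
        cpu_clear(smp_processor_id(), flush_cpumask);
        flush_va      = va;
        flush_flags   = flags;
        send_IPI_mask(&flush_cpumask, INVALIDATE_TLB_VECTOR);
        /* ... wait for acknowledgement and unlock, as in the existing
         * code; that part is not shown in the quoted hunk. */

Compared with the patch's cpus_andnot() followed by a second cpus_and(), this
walks the full mask only once; cpu_clear() then touches a single bit, which
is what makes the difference when NR_CPUS is large.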
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel