[Xen-devel] Re: next->vcpu_dirty_cpumask checking at the top of context_switch()

To: Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: [Xen-devel] Re: next->vcpu_dirty_cpumask checking at the top of context_switch()
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Thu, 16 Apr 2009 16:59:44 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 16 Apr 2009 09:00:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49E767F5.76EA.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acm+piR+v8U8J6EgQ+KM8kPL02RJCAABjdEs
Thread-topic: next->vcpu_dirty_cpumask checking at the top of context_switch()
User-agent: Microsoft-Entourage/12.17.0.090302

How big an NR_CPUS are we talking about? Is the overhead measurable, or is
this a premature micro-optimisation?

 -- Keir
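
For a rough sense of scale (illustrative arithmetic only, not from the thread):
a bitmap-based cpumask_t costs NR_CPUS/8 bytes per on-stack copy, rounded up to
whole unsigned longs, so for example an NR_CPUS=4096 build pays 512 bytes for
every copy. A standalone sketch, not Xen code:

    /* Illustrative only: per-copy stack cost of a bitmap cpumask for a few
     * hypothetical NR_CPUS values. */
    #include <stdio.h>

    #define BITS_PER_LONG    (8 * sizeof(unsigned long))
    #define CPUMASK_BYTES(n) ((((n) + BITS_PER_LONG - 1) / BITS_PER_LONG) * \
                              sizeof(unsigned long))

    int main(void)
    {
        unsigned int nr_cpus[] = { 32, 256, 4096 };
        unsigned int i;

        for ( i = 0; i < sizeof(nr_cpus) / sizeof(nr_cpus[0]); i++ )
            printf("NR_CPUS=%u -> %zu bytes per on-stack cpumask\n",
                   nr_cpus[i], (size_t)CPUMASK_BYTES(nr_cpus[i]));

        return 0;
    }

Whether that per-copy cost, multiplied by however many copies are live on one
stack at a time, is measurable is exactly the question above.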

On 16/04/2009 16:16, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> In an attempt to create a patch to remove some of the cpumask copying
> (in order to reduce stack usage when NR_CPUS is huge), one of the obvious
> things to do was to change function parameters to pointer-to-cpumask.
> However, doing so for flush_area_mask() creates the unintended side
> effect of triggering the WARN_ON() at the top of send_IPI_mask_flat(),
> apparently because next->vcpu_dirty_cpumask can occasionally change
> between the call site of flush_tlb_mask() in context_switch() and that
> low-level routine.
> 
> That by itself certainly is not a problem; what puzzles me are the
> redundant !cpus_empty() checks prior to the call to flush_tlb_mask(), as
> well as the fact that, if I'm hitting a possible timing window here, I
> can't see why it shouldn't be possible to hit the (albeit much smaller)
> window between the second !cpus_empty() check and the point where the
> cpumask gets fully copied to the stack as flush_tlb_mask()'s argument.
> 
> The bottom line question is: can't the second !cpus_empty() check go away
> altogether, and shouldn't the argument passed to flush_tlb_mask() be
> dirty_mask instead of next->vcpu_dirty_cpumask?
> 
> Thanks for any insights,
> Jan
> 
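
For readers without the tree to hand, here is a minimal sketch of the shape
Jan is describing, paraphrased from the thread rather than copied from the
Xen source (dirty_mask, next->vcpu_dirty_cpumask, cpus_empty() and
flush_tlb_mask() are the identifiers named above; everything else is
assumption):

    /*
     * Paraphrase of the context_switch() pattern under discussion; not the
     * actual Xen source.  dirty_mask is an on-stack snapshot, while
     * next->vcpu_dirty_cpumask is the live mask that can still change.
     */
    cpumask_t dirty_mask = next->vcpu_dirty_cpumask;    /* snapshot */

    if ( !cpus_empty(dirty_mask) )                      /* first check: snapshot */
    {
        /* ... */
        if ( !cpus_empty(next->vcpu_dirty_cpumask) )    /* second check: live mask */
            flush_tlb_mask(next->vcpu_dirty_cpumask);   /* live mask passed */
    }

While flush_tlb_mask() takes its cpumask_t by value, the live mask is copied
to the stack at the call site, so it can only change in the window before that
copy is made. Once the parameter becomes a pointer-to-cpumask, the callee
reads the live mask all the way down in send_IPI_mask_flat(), and a change in
that longer window is what apparently trips the WARN_ON(). Jan's question is
whether the second check can go away and the snapshot dirty_mask be passed
instead.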



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
