
xen-devel

[Xen-devel] Re: Question about x86/mm/gup.c's use of disabled interrupts

To: Avi Kivity <avi@xxxxxxxxxx>
Subject: [Xen-devel] Re: Question about x86/mm/gup.c's use of disabled interrupts
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Thu, 19 Mar 2009 10:16:57 -0700
Cc: Nick Piggin <nickpiggin@xxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Linux Memory Management List <linux-mm@xxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>
Delivery-date: Thu, 19 Mar 2009 10:17:25 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49C21473.2000702@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <49C148AF.5050601@xxxxxxxx> <49C16411.2040705@xxxxxxxxxx> <49C1665A.4080707@xxxxxxxx> <49C16A48.4090303@xxxxxxxxxx> <49C17230.20109@xxxxxxxx> <49C17880.7080109@xxxxxxxxxx> <49C17BD8.6050609@xxxxxxxx> <49C17E22.9040807@xxxxxxxxxx> <49C18487.1020703@xxxxxxxx> <49C21473.2000702@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (X11/20090105)
Avi Kivity wrote:
>> And the hypercall could result in no Xen-level IPIs at all, so it could be very quick by comparison to an IPI-based Linux implementation, in which case the flag polling would be particularly harsh.
>
> Maybe we could bring these optimizations into Linux as well. The only thing Xen knows that Linux doesn't is if a vcpu is not scheduled; all other information is shared.

I don't think there's a guarantee that just because a vcpu isn't running now, it won't need a tlb flush. If a pcpu runs vcpu 1 -> idle -> vcpu 1, then there's no need for it to do a tlb flush, but the hypervisor can force a flush when it reschedules vcpu 1 (if the tlb hasn't already been flushed by some other means).

(I'm not sure to what extent Xen implements this now, but I wouldn't want to over-constrain it.)
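
For concreteness, the kind of deferral I mean, as a sketch (all names here are invented pseudo-Xen, not actual hypervisor code):

struct vcpu {
	bool running;
	bool tlb_flush_pending;
};

/* Remote flush request: if the vcpu isn't running, just mark it. */
static void flush_vcpu_tlb(struct vcpu *v)
{
	if (!v->running)
		v->tlb_flush_pending = true;	/* defer until reschedule */
	else
		send_flush_ipi(v);		/* assumed helper */
}

/* On reschedule, honour any deferred flush before the vcpu runs again. */
static void reschedule_vcpu(struct vcpu *v)
{
	if (v->tlb_flush_pending) {
		flush_tlb_for(v);		/* assumed helper */
		v->tlb_flush_pending = false;
	}
	v->running = true;
}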

Also, the straightforward implementation of "poll until all target cpus' flags are clear" may never make progress, so you'd have to "scan flags, remove busy cpus from set, repeat until all cpus done".
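
Something like this, say (just a sketch; gup_fast_flag is a hypothetical per-cpu flag set around the gup_fast walk, not anything in the tree today):

static DEFINE_PER_CPU(int, gup_fast_flag);	/* set while inside gup_fast */

/*
 * Retire each cpu from the pending set the moment it is observed
 * outside gup_fast, so cpus re-entering gup_fast can't stall us forever.
 */
static void wait_for_gup_fast(struct cpumask *pending)
{
	int cpu;

	while (!cpumask_empty(pending)) {
		for_each_cpu(cpu, pending)
			if (!per_cpu(gup_fast_flag, cpu))
				cpumask_clear_cpu(cpu, pending);
		cpu_relax();
	}
}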

All annoying because this race is pretty unlikely, and it seems a shame to slow down all tlb flushes to deal with it. Some kind of global "doing gup_fast" counter would let flush_tlb_others bypass the check, at the cost of putting a couple of atomic ops around the outside of gup_fast.
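
Roughly (again just a sketch; flush_via_hypercall stands in for whatever the pv backend actually does, and a real version would need memory barriers around the walk):

static atomic_t gup_fast_count = ATOMIC_INIT(0);

/* gup_fast would bracket its irq-disabled page-table walk: */
static void gup_fast_enter(void) { atomic_inc(&gup_fast_count); }
static void gup_fast_exit(void)  { atomic_dec(&gup_fast_count); }

/* flush_tlb_others can then skip the poll in the common case: */
static void flush_tlb_others_sketch(struct cpumask *mask)
{
	if (atomic_read(&gup_fast_count) == 0) {
		flush_via_hypercall(mask);	/* nobody is in gup_fast */
		return;
	}
	wait_for_gup_fast(mask);		/* slow path: poll as above */
	flush_via_hypercall(mask);
}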

> The nice thing about local_irq_disable() is that it scales so well.

Right. But it effectively puts the burden on the tlb-flusher to check the state (implicitly, by trying to send an interrupt). Putting an explicit poll in gets the same effect, but it's pure overhead just to deal with the gup race.

I'll put a patch together and see how it looks.

   J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
