To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] Re: Question about x86/mm/gup.c's use of disabled interrupts
From: Avi Kivity <avi@xxxxxxxxxx>
Date: Thu, 19 Mar 2009 01:05:06 +0200
Cc: Nick Piggin <nickpiggin@xxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Linux Memory Management List <linux-mm@xxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>
Delivery-date: Wed, 18 Mar 2009 16:05:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49C17BD8.6050609@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <49C148AF.5050601@xxxxxxxx> <49C16411.2040705@xxxxxxxxxx> <49C1665A.4080707@xxxxxxxx> <49C16A48.4090303@xxxxxxxxxx> <49C17230.20109@xxxxxxxx> <49C17880.7080109@xxxxxxxxxx> <49C17BD8.6050609@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (X11/20090105)
Jeremy Fitzhardinge wrote:
> Avi Kivity wrote:
>>> Hm, awkward if flush_tlb_others doesn't IPI...
>>
>> How can it avoid flushing the tlb on cpu [01]? If it's gup_fast()ing a
>> pte, it may as well load it into the tlb.
>
> xen_flush_tlb_others uses a hypercall rather than an IPI, so none of the
> logic which depends on there being an IPI will work.

Right, of course, that's what we were talking about. I thought you meant
optimizations to avoid IPIs if an mm never visited a cpu.
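
[For readers of the archive: a minimal sketch of the native interlock under
discussion. This is a simplified illustration, not the actual x86/mm/gup.c
code, and the function name is made up.]

#include <linux/irqflags.h>     /* local_irq_save/local_irq_restore */
#include <linux/mm_types.h>     /* struct page */

/*
 * gup_fast() walks the page tables with interrupts disabled.  The unmap
 * path clears the PTEs and then calls flush_tlb_others(), which on native
 * hardware sends an IPI and waits for every target cpu to acknowledge it.
 * A cpu inside this region cannot take that IPI until it re-enables
 * interrupts, so the page tables being walked cannot be freed under it.
 */
static int gup_fast_sketch(unsigned long start, int nr_pages,
                           struct page **pages)
{
        unsigned long flags;
        int nr = 0;

        local_irq_save(flags);          /* blocks the TLB flush IPI */
        /* ... walk pgd/pud/pmd/pte and take references on pages ... */
        local_irq_restore(flags);       /* a pending flush_tlb_others()
                                           on another cpu can now finish */
        return nr;
}

With xen_flush_tlb_others() doing a hypercall instead of sending an IPI,
the wait-for-acknowledge step is exactly the part that disappears, which
is the problem being discussed here.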


>>> Simplest fix is to make gup_get_pte() a pvop, but that does seem like
>>> putting a red flag in front of an inner-loop hotspot, or something...
>>>
>>> The per-cpu tlb-flush exclusion flag might really be the way to go.
>>
>> I don't see how it will work, without changing Xen to look at the flag?
>>
>> local_irq_disable() is used here to lock out a remote cpu, I don't see
>> why deferring the flush helps.
>
> Well, no, not deferring. Making xen_flush_tlb_others() spin waiting for
> "doing_gup" to clear on the target cpu. Or add an explicit notion of a
> "pte update barrier" rather than implicitly relying on the tlb IPI
> (which is extremely convenient when available...).
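
[A rough sketch of that proposal, for concreteness; "doing_gup" and both
helper functions are hypothetical names, not anything in the tree.]

#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <asm/barrier.h>        /* smp_mb() */
#include <asm/processor.h>      /* cpu_relax() */

static DEFINE_PER_CPU(int, doing_gup);

/* gup_fast() side: bracket the pagetable walk with a per-cpu flag. */
static void gup_walk_sketch(void)
{
        this_cpu_write(doing_gup, 1);
        smp_mb();               /* flag visible before the walk begins */
        /* ... walk page tables, take page references ... */
        smp_mb();
        this_cpu_write(doing_gup, 0);
}

/* xen_flush_tlb_others() side: the PTEs were cleared before we got here,
 * so a walker that starts later sees the cleared entries; we only need
 * to wait out walkers already inside their critical section. */
static void wait_for_gup_sketch(const struct cpumask *cpus)
{
        unsigned int cpu;

        for_each_cpu(cpu, cpus)
                while (per_cpu(doing_gup, cpu))
                        cpu_relax();

        /* ... then issue the flush hypercall as before ... */
}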

Pick up a percpu flag from all cpus and spin on each?  Nasty.

You could use the irq enabled flag; it's available and what native spins on (but also means I'll need to add one if I implement this).
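
[A sketch of that alternative, again purely illustrative. It assumes the
guest can read each target vcpu's virtual interrupt-enable state from
vcpu_info in the shared info page, and that the Linux cpu number equals
the Xen vcpu id; the real code should not rely on either.]

#include <linux/cpumask.h>
#include <asm/processor.h>              /* cpu_relax() */
#include <xen/interface/xen.h>          /* struct vcpu_info */
#include <asm/xen/hypervisor.h>         /* HYPERVISOR_shared_info */

/*
 * Under Xen PV, "interrupts disabled" for a vcpu means its per-vcpu
 * evtchn_upcall_mask is non-zero.  Waiting for the mask to clear on each
 * target vcpu mirrors what the native IPI path implicitly waits for.
 */
static void xen_flush_tlb_others_sketch(const struct cpumask *cpus)
{
        unsigned int cpu;

        for_each_cpu(cpu, cpus) {
                struct vcpu_info *v =
                        &HYPERVISOR_shared_info->vcpu_info[cpu];

                while (v->evtchn_upcall_mask)   /* vcpu may be in gup_fast() */
                        cpu_relax();            /* barrier forces a re-read */
        }

        /* ... now issue the TLB flush hypercall as before ... */
}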

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
