WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] [PATCH] NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ is not registered.
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Tue, 30 Jan 2007 12:35:50 +0900
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 29 Jan 2007 19:35:11 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <51CFAB8CB6883745AE7B93B3E084EBE26F7BB1@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20070129102905.GB25482%yamahata@xxxxxxxxxxxxx> <51CFAB8CB6883745AE7B93B3E084EBE26F7BB1@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Tue, Jan 30, 2007 at 09:46:04AM +0800, Xu, Anthony wrote:
> Isaku Yamahata wrote on Jan 29, 2007 18:29:
> > 
> > How about the following example?
> > For simplicity, we consider only local_flush_tlb_all().
> > (The similar argument can be applied to vcpu_vhpt_flush())
> > 
> > suppose domM has two vcpus, vcpu0, vcpu1.
> >     domN has one vcpu, vcpu2.
> > 
> > - case 1
> >   vcpu0 and vcpu1 are running on same pcpu.
> >   vcpu0 runs.
> >   context switch <<<< local_flush_tlb_all() is necessary here
> >   vcpu1 runs.
> > 
> > - case 2
> >   vcpu0, vcpu1 and vcpu2 are running on the same pcpu
> >   vcpu0 runs
> >   context switch
> >   vcpu2 runs
> >   vcpu2 issues local_tlb_flush().
> >   context switch <<< local_flush_tlb_all() can be skipped.
> I can understand this. Yes, this local_flush_tlb_all() can be skipped,
> but that is because vcpu2 issues local_tlb_flush().
> My question is: why do we need new_tlbflush_clock_period?

Because the counter is finite.
If we could ignore counter overflow, we could simply check which
counter is bigger.
But when overflow occurs (i.e. counter == 0 after an increment),
things become complicated. That is the reason for new_tlbflush_clock_period.
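
The skip decision from case 2 above can be sketched with such a finite clock. This is only an illustration of the mechanism under discussion; all names here (flush_clock, cpu_flush_time, stamp_tlb_insert, and so on) are hypothetical, not the actual Xen symbols:

```c
#include <stdint.h>

/* Minimal sketch of a TLB-flush clock with a finite counter.
 * Each pcpu remembers the clock value of its last full flush; each
 * vcpu remembers the clock value at which it last inserted a TLB
 * entry on that pcpu.  At context switch, the flush can be skipped
 * iff every entry the incoming vcpu might have inserted predates
 * the pcpu's last full flush. */

static uint32_t flush_clock = 1;     /* global, finite counter */
static uint32_t cpu_flush_time = 1;  /* stamp of this pcpu's last full flush */

uint32_t stamp_tlb_insert(void)      /* a vcpu inserts an entry "now" */
{
    return flush_clock;
}

void local_flush_all(void)           /* a full flush advances the clock */
{
    /* If ++flush_clock wrapped to 0 here, older stamps would compare
     * as "newer" than fresh ones; the clock period would have to be
     * restarted and everything flushed -- which is what the
     * NEW_TLBFLUSH_CLOCK_PERIOD softirq in the patch title is for. */
    cpu_flush_time = ++flush_clock;
}

int flush_needed(uint32_t vcpu_stamp)
{
    /* Entries inserted at or after the last flush may still be live. */
    return vcpu_stamp >= cpu_flush_time;
}
```

In case 2 above, vcpu2's own local_tlb_flush() advances cpu_flush_time past vcpu0's and vcpu1's stamps, so the flush at the next context switch compares as unnecessary and is skipped.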

Probably another approach to addressing overflow is to use a signed
comparison like Linux's jiffies time_after().
But we can't assume the distance between the two counters is small enough.
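
That jiffies-style trick can be sketched as follows (time_after is modeled on the Linux macro; the concrete values are illustrative). It stays correct only while the two stamps are less than half the counter range apart, which is exactly the assumption that cannot be made for the tlbflush clock:

```c
#include <stdint.h>

/* Wraparound-tolerant "a is later than b", jiffies style:
 * interpret the unsigned difference as a signed value. */
#define time_after(a, b)  ((int32_t)((uint32_t)(b) - (uint32_t)(a)) < 0)
```

The comparison survives a wrap as long as the stamps are close (e.g. stamp 2 is correctly "after" stamp 0xffffffff), but silently gives the wrong answer once the true distance exceeds half the range.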


> > You can confirm its effect by the perf-counters,
> > tlbflush_clock_cswitch_skip, flush_vtlb_for_context_switch and
> > tlbflush_clock_cswitch_purge.
> > Please note that local_flush_tlb_all() (or vcpu_vhpt_flush()) is
> > called on every grant table unmapping without TLB insert tracking.
> Currently, grant table unmapping does not purge anything,
> because in flush_tlb_mask(current->domain->domain_dirty_cpumask)
> domain_dirty_cpumask is always 0.

It does.
destroy_grant_host_mapping()
  => domain_page_flush_and_put()
     => domain_flush_vtlb_all()
        or
        domain_flush_vtlb_track_entry()
Yes, this execution path is somewhat confusing.
I think that flush_tlb_mask(current->domain->domain_dirty_cpumask)
should be replaced with something like arch_flush_tlb(current).

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel