
Re: [Xen-ia64-devel] [RFC] SMP issues

On Wednesday 14 June 2006 at 10:30, Tristan Gingold wrote:
> On Wednesday 14 June 2006 at 07:58, Isaku Yamahata wrote:
> > Hello.
>
> [...]
>
> > - races between global tlb purge and tlb insert
> >   This is a race between reading/writing vcpu->arch.{d, i}tlb or a VHPT
> >   entry.  When a vcpu is about to insert a tlb entry, another vcpu may
> >   purge the tlb cache globally.  Inserting a tlb entry
> >   (vcpu_itc_no_srlz()) or a global tlb purge (domain_flush_vtlb_range()
> >   and domain_flush_vtlb_all()) can't update vcpu->arch.{d, i}tlb, the
> >   VHPT and the mTLB atomically.  So there is a race here.
> >   Use a sequence lock to avoid this race.
> >   After inserting a tlb entry, check the sequence lock and retry the
> >   insert if needed.  This means that when a global tlb purge and a tlb
> >   insert are issued simultaneously, the tlb insert always happens after
> >   the global tlb purge.
> >
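
For illustration, the sequence-lock scheme described just above could look
roughly like the sketch below.  This is only a sketch, not the actual patch:
the seqlock calls are written Linux-style, and the tlb_purge_lock field, the
purge helper and the simplified vcpu_itc_no_srlz() arguments are assumed
names.

    /*
     * Sketch only -- not the actual Xen code.  Assumes a seqlock_t
     * tlb_purge_lock field added to struct domain; helper names and the
     * simplified vcpu_itc_no_srlz() arguments are illustrative.
     */

    /* Purge side: bump the sequence counter around the global purge. */
    static void global_tlb_purge(struct domain *d, u64 vaddr, u64 range)
    {
        write_seqlock(&d->tlb_purge_lock);     /* odd count: purge running */
        purge_all_vcpu_tlbs(d, vaddr, range);  /* vcpu->arch.{d,i}tlb, VHPT, mTLB */
        write_sequnlock(&d->tlb_purge_lock);   /* even count: purge done */
    }

    /* Insert side: redo the insertion if a purge overlapped it. */
    static void vcpu_itc_with_retry(struct vcpu *v, u64 vaddr, u64 pte, u64 itir)
    {
        unsigned int seq;

        do {
            seq = read_seqbegin(&v->domain->tlb_purge_lock);
            vcpu_itc_no_srlz(v, vaddr, pte, itir);  /* vTLB/VHPT/mTLB insert */
        } while (read_seqretry(&v->domain->tlb_purge_lock, seq));
        /* If a purge ran in between, the entry is inserted again, so the
         * insert logically happens after the purge. */
    }
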
> >   There was an attempt to resolve this race by checking only the
> >   vcpu->arch.{d, i}tlb.p bit.  However, it was incomplete because it
> >   didn't take care of the VHPT.
>
> I don't agree with the last paragraph.
>
> During a flush, the p bit was cleared first and then the VHPT entry.
> Races between Xen reads and Xen writes were avoided because the clear is
> atomic.
> There is no race between the VHPT walker and Xen because either the walker
> reads before the VHPT clear (as if the read occurred before the ptc.ga) or
> fails (as if the access occurred after the ptc.ga).
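
To make that ordering concrete, a flush along the lines described above
would be roughly the sketch below.  The field layout, the barrier and the
helper names are only illustrative, not the real Xen/ia64 ones; the point
is the order of the stores.

    /*
     * Sketch only -- field layout, barrier and helper names are
     * illustrative, not the real Xen/ia64 code.
     */
    static void flush_vtlb_entry(struct vcpu *v, u64 vaddr)
    {
        /* 1. Clear the p bit of the cached vTLB entries first.  Each clear
         *    is assumed to be a single atomic store, so a Xen reader either
         *    still sees the old valid entry (as if it read before the
         *    ptc.ga) or sees p == 0 and misses (as if it read after it). */
        v->arch.dtlb.p = 0;
        v->arch.itlb.p = 0;
        wmb();                        /* order the p-bit clears before ... */

        /* 2. ... invalidating the VHPT entry the hardware walker uses.
         *    A concurrent walk either hits the not-yet-cleared entry or
         *    faults, matching the ptc.ga semantics described above. */
        vhpt_invalidate(vaddr);

        /* 3. Finally purge the machine TLB. */
        machine_tlb_purge(vaddr, PAGE_SHIFT);
    }
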
Just a word after re-reading the message.

I don't want to be polemical.  I'd just like to establish the facts ;-)

Tristan.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
