On Friday 07 April 2006 08:15, Tian, Kevin wrote:
> From: Tristan Gingold [mailto:Tristan.Gingold@xxxxxxxx]
[...]
> >+ again:
> > fault = vcpu_translate(current,address,is_data,0,&pteval,&itir,&iha);
> >- if (fault == IA64_NO_FAULT) {
> >+ if (fault == IA64_NO_FAULT || fault == IA64_USE_TLB) {
> > pteval = translate_domain_pte(pteval,address,itir);
> >
> > vcpu_itc_no_srlz(current,is_data?2:1,address,pteval,-1UL,(itir>>2)
> >&0x3f);
> >+ if (fault == IA64_USE_TLB && !current->arch.dtlb.pte.p) {
>
> Sorry for my question, but I really can't convince myself that such
> code is immune from a race. :-) It's possible for the other side to
> clear the present bit of the current dtlb right after the above check
> passes. In that case, the TC entry is still inserted into the TLB.
> Since the logic can't prevent all the cases, that check seems
> excessive here.
I don't agree. If a vcpu_ptc_ga clears the dtlb after this check, it will
also execute the ptc.ga afterwards, which purges the TC entry we have just
inserted, so the stale entry cannot survive the purge.
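To make the ordering explicit, here is how I read the sequence (a sketch
only, not the actual vcpu_ptc_ga(); vcpu_clear_tr_entry() is a placeholder
name and I am simplifying the arguments of ia64_global_tlb_purge):

/* Sketch of the ordering the argument relies on. */
void vcpu_ptc_ga_sketch(VCPU *vcpu, UINT64 vadr, UINT64 addr_range)
{
        /* 1. Clear the present bit first, so that vcpu_translate() stops
         *    returning IA64_USE_TLB for this range and the check added in
         *    the fault path skips the itc. */
        vcpu_clear_tr_entry(&vcpu->arch.dtlb);

        /* 2. Only then purge the machine TLB.  A TC entry that the fault
         *    handler inserted during the window you describe is wiped out
         *    here, so it cannot outlive the ptc.ga. */
        ia64_global_tlb_purge(vadr, vadr + addr_range, PAGE_SHIFT);
}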
> >-static inline int vcpu_match_tr_entry(TR_ENTRY *trp, UINT64 ifa,
> >UINT64 rid)
> >-{
> >- return trp->p && trp->rid == rid
> >+static inline int vcpu_match_tr_entry_no_p(TR_ENTRY *trp, UINT64 ifa,
> >UINT64 rid)
> >+{
> >+ return trp->rid == rid
> > && ifa >= trp->vadr
> > && ifa <= (trp->vadr + (1L << trp->ps) - 1);
> > }
>
> Why do you need the no_p version? In your patch, no one calls the
> no_p version directly.
See just below!
> >- if (/* is_data && */ vcpu_match_tr_entry(trp,address,rid)) {
> >- if (vcpu->domain==dom0 && !in_tpa) *pteval =
> >trp->page_flags;
> >+ pte = trp->pte;
> >+ if (/* is_data && */ pte.p
> >+ && vcpu_match_tr_entry_no_p(trp,address,rid)) {
>
> The above change seems no different from the original code.
It uses _no_p.
The aim is to read page_flags only once!
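If it helps, here is the idea in isolation (a toy model, not the real
TR_ENTRY or vcpu_translate(); all the names below are made up):

#include <stdint.h>

/* Toy model of a TR entry: the relevant part is a single word whose low
 * bit plays the role of the present bit. */
struct tr_entry {
        volatile uint64_t page_flags;   /* bit 0 = present (p) */
        uint64_t vadr, ps, rid;
};

/* Range/rid match that ignores the present bit (the _no_p idea). */
static inline int match_no_p(const struct tr_entry *trp, uint64_t ifa,
                             uint64_t rid)
{
        return trp->rid == rid
            && ifa >= trp->vadr
            && ifa <= trp->vadr + (1UL << trp->ps) - 1;
}

/* Read page_flags exactly once; the present check and the value handed
 * back to the caller come from that single read, so a concurrent clear
 * of the present bit cannot slip in between the two. */
static int lookup(const struct tr_entry *trp, uint64_t ifa, uint64_t rid,
                  uint64_t *pteval)
{
        uint64_t pte = trp->page_flags;         /* the single read */

        if ((pte & 1) && match_no_p(trp, ifa, rid)) {
                *pteval = pte;
                return 1;
        }
        return 0;
}

In the original code the p check and the *pteval = trp->page_flags were two
separate reads of the entry, which is exactly the window a concurrent
vcpu_ptc_ga could hit.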
> >+void vhpt_flush_address_remote(int cpu,
> >+ unsigned long vadr, unsigned long addr_range)
> >+{
> >+ while ((long)addr_range > 0) {
> >+ /* Get the VHPT entry. */
> >+ unsigned int off = ia64_thash(vadr) - VHPT_ADDR;
> >+ volatile struct vhpt_lf_entry *v;
> >+ v =__va(per_cpu(vhpt_paddr, cpu) + off);
> >+ v->ti_tag = INVALID_TI_TAG;
> > addr_range -= PAGE_SIZE;
> > vadr += PAGE_SIZE;
> > }
>
> Concerns here:
> - The per-LP vhpt table is shared by all domains, and the entry
> sitting at the hash position may not belong to the target domain.
> This way you may end up purging an unrelated entry of another domain,
> so a check of the tag value is required.
Yes. Call this an optimization: purging an unrelated entry is harmless (it
only costs the other domain a VHPT refill), so the tag check would avoid
spurious purges rather than fix a correctness problem.
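For reference, the check would look something like this (a sketch of the
loop only; I assume ia64_ttag() computed on the issuing CPU gives the tag
the entry was inserted under, which holds as long as the issuing vcpu's
region registers carry the right rid):

        /* Only invalidate the hash entry if it belongs to the address
         * being purged; otherwise the slot holds a translation for some
         * other address (possibly another domain) and is left alone. */
        while ((long)addr_range > 0) {
                unsigned int off = ia64_thash(vadr) - VHPT_ADDR;
                volatile struct vhpt_lf_entry *v =
                        __va(per_cpu(vhpt_paddr, cpu) + off);

                if (v->ti_tag == ia64_ttag(vadr))
                        v->ti_tag = INVALID_TI_TAG;

                addr_range -= PAGE_SIZE;
                vadr += PAGE_SIZE;
        }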
> - Once you start to check the tag value, a read_seqlock-style process
> is required here, since the target LP may be in the middle of
> modifying that very hash entry.
This optimization is getting tedious...
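Just to spell out what that would mean, something like the following would
be needed in place of the plain invalidation (purely hypothetical: the
current vhpt_lf_entry has no 'seq' field, and the memory barriers are left
out):

        /* Hypothetical protocol: the owning LP bumps v->seq to an odd
         * value before rewriting the entry and back to even afterwards;
         * the remote purger retries until its tag check and invalidation
         * happened around a stable, even counter. */
        unsigned long seq;

        for (;;) {
                seq = v->seq;
                if (seq & 1) {                  /* owner mid-update: retry */
                        cpu_relax();
                        continue;
                }
                if (v->ti_tag != ia64_ttag(vadr))
                        break;                  /* slot holds another mapping */
                v->ti_tag = INVALID_TI_TAG;     /* purge our entry */
                if (v->seq == seq)
                        break;                  /* nothing changed under us */
                /* the owner rewrote the slot meanwhile: check again */
        }

And even then this only covers the retry on the reader side; the remote
store to ti_tag still races with the owner's own store, which is why the
"optimization" quickly stops being cheap.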
> - When the hash tlb gains collision-chain support, this remote purge
> becomes more dangerous, since there will be linked-list operations
> with no lock protection.
Hash tlb requires more analysis.
> Last point: so far your patch only covers the vtlb and the vhpt. The
> machine TLB purge is not covered yet, which is however the most direct
> place that needs handling. :-)
I don't understand.
vcpu_ptc_ga still calls ia64_global_tlb_purge, which issues the ptc.ga.
Thank you for your comments.
Tristan.
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel