Re: [Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
On Friday 21 April 2006 09:27, Xu, Anthony wrote:
> From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
>
> >[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Tristan
> >Gingold
> >Sent: 2006-04-21 15:24
> >To: xen-devel@xxxxxxxxxxxxxxxxxxx; xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> >Subject: [Xen-ia64-devel] flush_tlb_mask and grant_table on ia64
> >
> >Hi,
> >
> >On IA64, flushing the whole TLB is very expensive: it means a CPU TLB flush
> > plus clearing 16MB of memory (the virtual TLB).
> >However, flushing an address range is rather cheap. Flushing an address
> > range on all processors is also cheap (no IPI).
> >
> >Unfortunately, the Xen common code flushes the whole TLB after unmapping a
> >grant reference.
>
> Agreed
>
> >Currently, this is not done on IA64 because domain_dirty_cpumask is never
> > set (bug!).
> >
> >We could flush the TLB by range within destroy_grant_host_mapping. But then
> > we would need to disable the flush_tlb_mask call.
> >
> >What is the best solution?
>
> It depends on the coverage of the VHPT and on the coverage of the purged pages.
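
To make the trade-off concrete, here is a minimal sketch of the two flush
strategies under discussion. The names local_purge_all_tlb, purge_tlb_range
and vhpt_base are hypothetical stand-ins for illustration, not the actual
Xen/ia64 symbols:

/*
 * Illustrative sketch only -- helpers and constants below are
 * hypothetical stand-ins, not real Xen/ia64 code.
 */
#include <string.h>

#define VHPT_SIZE   (16UL << 20)   /* 16MB virtual hash page table */
#define PAGE_SHIFT  14             /* 16KB pages, a typical ia64 config */
#define PAGE_SIZE   (1UL << PAGE_SHIFT)

extern void *vhpt_base;                       /* per-cpu software TLB (VHPT) */
extern void local_purge_all_tlb(void);        /* hypothetical: ptc.e loop    */
extern void purge_tlb_range(unsigned long va, unsigned long size); /* ptc.ga */

/* Whole-TLB flush: purge the CPU TLB and clear the entire 16MB VHPT. */
static void flush_whole_tlb_sketch(void)
{
    local_purge_all_tlb();
    memset(vhpt_base, 0, VHPT_SIZE);
}

/*
 * Range flush: purge only the addresses that were just unmapped.
 * On ia64, ptc.ga broadcasts the purge to all processors, so no IPI
 * is needed to reach remote CPUs.
 */
static void flush_tlb_range_sketch(unsigned long va, unsigned long nr_pages)
{
    unsigned long i;

    for ( i = 0; i < nr_pages; i++ )
        purge_tlb_range(va + (i << PAGE_SHIFT), PAGE_SIZE);
}

The first form is what the common code's flush_tlb_mask effectively costs on
ia64 today; the second scales with the handful of pages actually unmapped.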
From my point of view, the problem is not the number of frames to be purged. I
suppose only a few pages are unmapped per unmap_grant_ref call (although I
may be wrong here).
Rather, the problem as I see it is how to make the Xen common code more
arch-neutral.
Tristan.
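
One way the arch-neutrality question could be approached is a per-arch
predicate that the common grant-table code consults before issuing the global
flush. This is only a sketch; arch_grant_unmap_needs_global_flush() is a
hypothetical hook, not an existing Xen interface:

/* common/grant_table.c (sketch) */
static void gnttab_unmap_flush_sketch(struct domain *d)
{
    if ( arch_grant_unmap_needs_global_flush() )
        flush_tlb_mask(d->domain_dirty_cpumask);   /* existing behaviour */
    /*
     * else: the architecture already purged the exact address range
     * inside destroy_grant_host_mapping() (cheap on ia64, no IPIs).
     */
}

/* hypothetical ia64 arch header (sketch) */
static inline int arch_grant_unmap_needs_global_flush(void)
{
    return 0;
}

/* hypothetical x86 arch header (sketch) */
static inline int arch_grant_unmap_needs_global_flush(void)
{
    return 1;
}

With a hook of this kind, x86 keeps today's flush_tlb_mask behaviour while
ia64 opts out, because destroy_grant_host_mapping has already purged the
exact range.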