>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 08.05.08 14:11 >>>
>On 8/5/08 12:13, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>>> Nor am I convinced about how much potential time-saving
>>> there is to be had here.
>>
>> I'm not seeing any time saving here. The other thing I brought up
>> was just an unrelated item pointing out potential for code
>> simplification.
>
>Ah, yes, I see.
>
>The approach looks plausible. I think in its current form it will leave
>zombie L2/L3 pages hanging around and the domain will never actually
>properly die (e.g., still will be visible with the 'q' key). Because
>although you do get around to doing free_lX_table(), the type count and ref
>count of the L2/L3 pages will not drop to zero because the dead L3/L4 page
>never actually dropped its references properly.
Indeed, the extended version below avoids this.
>In actuality, since we know that we never have 'cross-domain' pagetable type
>references, we should actually be able to zap pagetable reference counts to
>zero. The only reason we don't do that right now is really because it
>provides good debugging info to see whether a domain's refcounts have got
>screwed up. But that would not prevent us doing something faster for NDEBUG
>builds, at least.
I still think it would be better not to simply zap the counts, but to drop them incrementally through the proper interface:
Index: 2008-05-08/xen/arch/x86/domain.c
===================================================================
--- 2008-05-08.orig/xen/arch/x86/domain.c	2008-05-07 12:21:36.000000000 +0200
+++ 2008-05-08/xen/arch/x86/domain.c	2008-05-09 12:05:18.000000000 +0200
@@ -1725,6 +1725,23 @@ static int relinquish_memory(
         if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
             put_page(page);
 
+        y = page->u.inuse.type_info;
+
+        /*
+         * Forcibly drop reference counts of page tables above top most (which
+         * were skipped to prevent long latencies due to deep recursion - see
+         * the special treatment in free_lX_table()).
+         */
+        if ( type < PGT_root_page_table &&
+             unlikely(((y + PGT_type_mask) &
+                       (PGT_type_mask|PGT_validated)) == type) ) {
+            BUG_ON((y & PGT_count_mask) >= (page->count_info & PGC_count_mask));
+            while ( y & PGT_count_mask ) {
+                put_page_and_type(page);
+                y = page->u.inuse.type_info;
+            }
+        }
+
         /*
          * Forcibly invalidate top-most, still valid page tables at this point
          * to break circular 'linear page table' references. This is okay
@@ -1732,7 +1749,6 @@ static int relinquish_memory(
          * is now dead. Thus top-most valid tables are not in use so a non-zero
          * count means circular reference.
          */
-        y = page->u.inuse.type_info;
         for ( ; ; )
         {
             x = y;
@@ -1896,6 +1912,9 @@ int domain_relinquish_resources(struct d
         /* fallthrough */
 
     case RELMEM_done:
+        ret = relinquish_memory(d, &d->page_list, PGT_l1_page_table);
+        if ( ret )
+            return ret;
         break;
 
     default:
Index: 2008-05-08/xen/arch/x86/mm.c
===================================================================
--- 2008-05-08.orig/xen/arch/x86/mm.c 2008-05-08 12:13:40.000000000 +0200
+++ 2008-05-08/xen/arch/x86/mm.c 2008-05-08 13:04:13.000000000 +0200
@@ -1341,6 +1341,9 @@ static void free_l3_table(struct page_in
     l3_pgentry_t *pl3e;
     int i;
 
+    if ( d->arch.relmem == RELMEM_dom_l3 )
+        return;
+
     pl3e = map_domain_page(pfn);
 
     for ( i = 0; i < L3_PAGETABLE_ENTRIES; i++ )
@@ -1364,6 +1367,9 @@ static void free_l4_table(struct page_in
     l4_pgentry_t *pl4e = page_to_virt(page);
     int i;
 
+    if ( d->arch.relmem == RELMEM_dom_l4 )
+        return;
+
     for ( i = 0; i < L4_PAGETABLE_ENTRIES; i++ )
         if ( is_guest_l4_slot(d, i) )
             put_page_from_l4e(pl4e[i], pfn);
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel