
Re: [Xen-devel] [PATCH] x86: fix domain cleanup



>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 28.10.08 11:25 >>>
>On 28/10/08 10:05, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>>> Ah, looks like it's been broken since the preemptible page_type patch went
>>> in. Perhaps the tail of free_page_type() should go into __put_page_type(),
>>> as it's not needed by the call site in relinquish_memory(): the caller
>>> doesn't really hold a type reference to be dropped; and the logic for being
>>> preempted doesn't apply since relinquish_memory() requests no preemption.
>> 
>> It doesn't at present, but it should (in place of
>> DOMAIN_DESTRUCT_AVOID_RECURSION), including for the
>> put_page_and_type() earlier in that function. But of course, it may
>> still turn out that cleaning up after preemption here must be handled
>> differently from the __put_page_type() case. I'll give moving that part
>> (and removing the put_page() added yesterday) a try.
>
>__put_page_type() is already a complex function, so let's define a
>__put_final_page_type() containing a call to free_page_type() plus the
>current tail of free_page_type(). __put_page_type() can call that;
>relinquish_memory() can call free_page_type() directly.
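
Presumably something along these lines (a sketch only; the tail
bookkeeping here, i.e. the TLB timestamp and the atomicity note, stands
in for whatever free_page_type() currently ends with, and -EINTR vs.
-EAGAIN is taken to mean "preempted before any teardown" vs. "partially
torn down"):

static int __put_final_page_type(
    struct page_info *page, unsigned long type, int preemptible)
{
    int rc = free_page_type(page, type, preemptible);

    if ( rc == 0 )
    {
        /* Teardown complete: stamp for TLB flushing, then drop the
         * type count. No atomic update needed, as nobody else can
         * modify type_info at this point. */
        page->tlbflush_timestamp = tlbflush_current_time();
        wmb();
        page->u.inuse.type_info--;
    }
    else if ( rc == -EINTR )
    {
        /* Preempted before any teardown took place: the page remains
         * fully validated. */
        wmb();
        page->u.inuse.type_info |= PGT_validated;
    }
    else
    {
        /* Preempted mid-teardown: mark the page so the operation can
         * be picked up again later. */
        BUG_ON(rc != -EAGAIN);
        wmb();
        page->u.inuse.type_info |= PGT_partial;
    }

    return rc;
}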

Will do it that way for submission. In testing with that code inlined in
__put_page_type(), I can confirm that this closes the memory leak. It
(obviously) doesn't address the crash when a PGT_partial page is
encountered hanging off of a page table being cleaned up by that
explicit call to free_page_type(), executed as a side effect of
DOMAIN_DESTRUCT_AVOID_RECURSION. The real question, of course, is
whether it's worthwhile trying to fix that, or better to do away with it
altogether by utilizing the 'real' preemption.
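
Roughly like this, perhaps (again just a sketch: the dead-page check is
shown but the forcible-invalidation handling for top-level tables is
elided, put_page_and_type_preemptible() is assumed to be the preemptible
put from that series, and the caller, i.e.
domain_relinquish_resources(), would have to treat -EAGAIN as "re-invoke
later", continuation-style):

static int relinquish_memory(
    struct domain *d, struct list_head *list, unsigned long type)
{
    struct list_head *ent;
    struct page_info *page;
    int ret = 0;

    /* Use a recursive lock, as we may enter free_domheap_page(). */
    spin_lock_recursive(&d->page_alloc_lock);

    ent = list->next;
    while ( ent != list )
    {
        page = list_entry(ent, struct page_info, list);

        /* Take a reference so the page cannot vanish under our feet. */
        if ( unlikely(!get_page(page, d)) )
        {
            /* Someone else is already freeing this page. */
            ent = ent->next;
            continue;
        }

        if ( test_and_clear_bit(_PGT_pinned, &page->u.inuse.type_info) )
        {
            ret = put_page_and_type_preemptible(page, 1);
            switch ( ret )
            {
            case 0:
                break;
            case -EAGAIN:
            case -EINTR:
                /* Preempted mid-put: re-pin and let the caller retry. */
                set_bit(_PGT_pinned, &page->u.inuse.type_info);
                put_page(page);
                ret = -EAGAIN;
                goto out;
            default:
                BUG();
            }
        }

        if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
            put_page(page);

        /* Follow the list chain and /then/ drop our reference, which
         * may free the page. */
        ent = ent->next;
        put_page(page);

        if ( hypercall_preempt_check() )
        {
            ret = -EAGAIN;
            goto out;
        }
    }

 out:
    spin_unlock_recursive(&d->page_alloc_lock);
    return ret;
}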

Jan

