
Re: [Xen-devel] [PATCH 8 of 9] Modify all internal p2m functions to use the new fine-grained locking



Hey,
> Hi,
>
> At 07:24 -0700 on 02 Nov (1320218674), andres@xxxxxxxxxxxxxxxx wrote:
>> >> +/* No matter what value you get out of a query, the p2m has been
>> >> + * locked for that range. No matter what you do, you need to drop
>> >> + * those locks.
>> >> + * You need to pass back the mfn obtained when locking, not the new
>> >> + * one, as the refcount of the original mfn was bumped. */
>> >
>> > Surely the caller doesn't need to remember the old MFN for this?
>> > After all, the whole point of the lock was that nobody else could
>> > change the p2m entry under our feet!
>> >
>> > In any case, I think there needs to be a big block comment a bit
>> > further up that describes what all this locking and refcounting
>> > does, and why.
>>
>> A comment will be added. I was being doubly paranoid. I can undo the
>> get_page/put_page of the old mfn; I'm not 100% behind it.
>
> I meant to suggest that the p2m code should be able to do the
> get_page/put_page without the caller remembering the mfn, since by
> definition it should be able to look it up in the unlock, knowing no-one
> else can have changed it.

Not necessarily. How about sharing? Or paging out? Tricky, tricky. I guess
the easy fix is to tell drop_p2m not to put_page in those cases.
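
(A minimal sketch of what that fix could look like -- hypothetical
code, not the patch itself: p2m_is_shared()/p2m_is_paging() are the
existing type predicates, but drop_p2m() and p2m_unlock_range() are
made-up names here:)

    /* Drop the locks taken by a p2m query.  Skip the put_page() for
     * shared or paged-out entries, whose backing mfn may have changed
     * under us, per the above. */
    static inline void drop_p2m(struct p2m_domain *p2m, unsigned long gfn,
                                mfn_t mfn, p2m_type_t t)
    {
        if ( !p2m_is_shared(t) && !p2m_is_paging(t) && mfn_valid(mfn) )
            put_page(mfn_to_page(mfn));
        p2m_unlock_range(p2m, gfn, gfn);   /* hypothetical unlock helper */
    }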

Andres

>
>> I don't think these names are the most terrible -- we've all seen far
>> worse :) I mean, the naming encodes the arguments, and I don't see an
>> intrinsic advantage to
>> gfn_to_mfn(d, g, t, p2m_guest, p2m_unlocked)
>> over
>> gfn_to_mfn_guest_unlocked(d,g,t)
>
> Yep, it's definitely not the worst. :)  It's really just a question of
> verbosity in the headers.
>
> Tim.
>
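
(For the record, a sketch of how the long names could be kept cheap
in the header: one generic accessor plus one-line wrappers.  The
generic gfn_to_mfn_type() and the p2m_lock_t argument are hypothetical
here; p2m_guest/p2m_unlocked are from the example above:)

    /* One generic accessor parameterised on query type and locking... */
    mfn_t gfn_to_mfn_type(struct domain *d, unsigned long gfn,
                          p2m_type_t *t, p2m_query_t q, p2m_lock_t l);

    /* ...and thin wrappers encoding the common argument combinations,
     * so call sites stay short without bloating the header much. */
    #define gfn_to_mfn_guest_unlocked(d, g, t) \
        gfn_to_mfn_type((d), (g), (t), p2m_guest, p2m_unlocked)
    #define gfn_to_mfn_query_locked(d, g, t) \
        gfn_to_mfn_type((d), (g), (t), p2m_query, p2m_locked)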





 

