
Re: [Xen-devel] [PATCH v3 2/4] x86/mem_sharing: introduce and use page_lock_memshr instead of page_lock



>>> On 30.04.19 at 18:03, <george.dunlap@xxxxxxxxxx> wrote:
> On 4/30/19 4:06 PM, Jan Beulich wrote:
>>>>> On 30.04.19 at 16:43, <george.dunlap@xxxxxxxxxx> wrote:
>>> On 4/30/19 9:44 AM, Jan Beulich wrote:
>>>>>>> On 30.04.19 at 10:28, <tamas@xxxxxxxxxxxxx> wrote:
>>>>> On Tue, Apr 30, 2019 at 1:15 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>> I've outlined a solution already: Make a mem-sharing private variant
>>>>>> of page_{,un}lock(), derived from the PV ones (but with pieces
>>>>>> dropped you don't want/need).
>>>>>
>>>>> Well, that's what I already did here in this patch. No?
>>>>
>>>> No - you've retained a shared _page_{,un}lock(), whereas my
>>>> suggestion was to have a completely independent pair of
>>>> functions in mem_sharing.c. The only thing needed by both PV
>>>> and HVM would then be the PGT_locked flag.
>>>
>>> But it wasn't obvious to me how the implementations of the actual lock
>>> function would be different.  And there's no point in having two
>>> identical implementations; in fact, it would be harmful.
>> 
>> The main difference would be the one that Tamas is after - not
>> doing the checking that we do for PV. Whether other bits could
>> be dropped for a mem-sharing special variant I don't know (yet).
> 
> The "checking" being that the type count doesn't go to 0?
> 
> It's not just page_lock() that does that checking; it's also
> _put_page_type().  We can't really change one but leave the other alone.

No, I mean the extra debug checking (current_locked_page_*()).
See his patch as to what he keeps for mem-sharing, and what he
drops.

Jan
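
For context, a minimal sketch of what such a mem_sharing-private pair could
look like, derived from the PV page_{,un}lock() but with the
current_locked_page_*() debug tracking dropped. The function names and the
exact set of retained checks here are illustrative, not taken from the patch
under discussion:

static bool mem_sharing_page_lock(struct page_info *page)
{
    unsigned long x, nx;

    do {
        /* Wait for any current holder of PGT_locked to release it. */
        while ( (x = page->u.inuse.type_info) & PGT_locked )
            cpu_relax();
        /* Take a type reference and set PGT_locked in one go. */
        nx = x + (1 | PGT_locked);
        if ( !(x & PGT_validated) ||
             !(x & PGT_count_mask) ||
             !(nx & PGT_count_mask) )
            return false;
    } while ( cmpxchg(&page->u.inuse.type_info, x, nx) != x );

    /* Unlike the PV variant, no current_locked_page_set() tracking here. */
    return true;
}

static void mem_sharing_page_unlock(struct page_info *page)
{
    unsigned long x, nx, y = page->u.inuse.type_info;

    do {
        x = y;
        ASSERT((x & PGT_count_mask) && (x & PGT_locked));
        /* Drop PGT_locked and the type reference taken on lock. */
        nx = x - (1 | PGT_locked);
        /* We must not drop the last type reference here. */
        ASSERT(nx & PGT_count_mask);
    } while ( (y = cmpxchg(&page->u.inuse.type_info, x, nx)) != x );
}

Only the PGT_locked bit in type_info is then shared knowledge between this
and the PV code, as suggested above.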




 

