
Re: [Xen-devel] [PATCH v2] x86/domain_page: implement pure per-vCPU mapping infrastructure



On 21.02.2020 15:58, Wei Liu wrote:
> On Fri, Feb 21, 2020 at 03:55:28PM +0100, Jan Beulich wrote:
>> On 21.02.2020 15:36, Wei Liu wrote:
>>> On Fri, Feb 21, 2020 at 02:31:08PM +0100, Jan Beulich wrote:
>>>> On 21.02.2020 13:52, Xia, Hongyan wrote:
>>>>> On Fri, 2020-02-21 at 11:50 +0000, Wei Liu wrote:
>>>>>> On Thu, Feb 06, 2020 at 06:58:23PM +0000, Hongyan Xia wrote:
>>>>>>> +    if ( hashmfn != mfn && !vcache->refcnt[idx] )
>>>>>>> +        __clear_bit(idx, vcache->inuse);
>>>>>>
>>>>>> Also, please flush the linear address here and the other __clear_bit
>>>>>> location.
>>>>>
>>>>> I flush when a new entry takes a slot. Yeah, it's probably better
>>>>> to flush earlier, whenever a slot is no longer in use.
>>>>
>>>> Question is whether such individual flushes aren't actually
>>>> more overhead than a single flush covering all previously
>>>> torn down entries, done at suitable points (see the present
>>>> implementation).
>>>
>>> I asked to flush because I was paranoid about leaving a stale entry
>>> behind after the slot is reclaimed. I think the address will be
>>> flushed when a new entry is inserted.
>>>
>>> So the question would be whether we care about leaving a stale entry in
>>> place until a new one is inserted.
>>
>> Well, we apparently don't have an issue with such today, so
>> unless explicitly stated to the contrary I think any replacement
>> implementation can and should make the same assumptions /
>> guarantees.
> 
> In that case, Hongyan's current implementation should be fine. Flushing
> is deferred to the last possible moment -- right before the next use.

Well, in a way. That's still not what the current implementation
does.

Jan
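
For readers skimming the archive: below is a minimal, self-contained C
sketch of the two flushing strategies being weighed in this thread. It is
not the actual Xen code; the vcache layout and the helpers slot_to_va(),
set_slot_pte() and flush_tlb_one() are hypothetical stand-ins for the real
hypervisor primitives, and the bit helpers are simplified local versions.

#include <stdint.h>

#define MAPCACHE_ENTRIES 32

typedef uint64_t mfn_t;

struct vcache {
    unsigned long inuse;               /* bitmap of occupied slots */
    uint8_t refcnt[MAPCACHE_ENTRIES];  /* per-slot reference counts */
    mfn_t mfn[MAPCACHE_ENTRIES];       /* MFN currently mapped in each slot */
};

/* Simplified local stand-ins for Xen's bitmap primitives. */
static inline void __set_bit(int nr, unsigned long *addr)
{
    *addr |= 1UL << nr;
}

static inline void __clear_bit(int nr, unsigned long *addr)
{
    *addr &= ~(1UL << nr);
}

/* Stubs standing in for the real mapcache/TLB primitives. */
static char mapcache_area[MAPCACHE_ENTRIES][4096];
static void *slot_to_va(unsigned int idx) { return mapcache_area[idx]; }
static void set_slot_pte(unsigned int idx, mfn_t mfn) { (void)idx; (void)mfn; }
static void flush_tlb_one(const void *va) { (void)va; }

/*
 * Eager variant (what the review comment asked about): flush the linear
 * address as soon as the slot is vacated, so no stale TLB entry survives.
 * Assumes map/release calls are balanced, i.e. refcnt > 0 on entry.
 */
static void vcache_release_eager(struct vcache *v, unsigned int idx)
{
    if ( !--v->refcnt[idx] )
    {
        __clear_bit(idx, &v->inuse);
        flush_tlb_one(slot_to_va(idx));
    }
}

/*
 * Deferred variant (what the patch does): leave the stale translation in
 * place at release time and flush only when the slot is reused for a
 * different MFN.
 */
static void *vcache_map_deferred(struct vcache *v, unsigned int idx, mfn_t mfn)
{
    if ( v->mfn[idx] != mfn )
    {
        flush_tlb_one(slot_to_va(idx));  /* purge the old translation */
        v->mfn[idx] = mfn;
        set_slot_pte(idx, mfn);
    }
    __set_bit(idx, &v->inuse);
    v->refcnt[idx]++;
    return slot_to_va(idx);
}

The trade-off is exactly the one discussed above: the eager variant pays
one per-address flush every time a slot is vacated, while the deferred
variant batches that cost into the next reuse, at the price of a window in
which a stale (but no longer reachable through the cache API) translation
lingers in the TLB.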
