
Ping: [PATCH] x86/AMD: also determine L3 cache size



On 29.04.2021 11:21, Jan Beulich wrote:
> On 16.04.2021 16:21, Andrew Cooper wrote:
>> On 16/04/2021 14:20, Jan Beulich wrote:
>>> For Intel CPUs we record the L3 cache size, hence we should also do
>>> so for AMD and the like.
>>>
>>> While making these additions, also make sure (throughout the function)
>>> that we don't needlessly overwrite prior values when the new value to be
>>> stored is zero.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> ---
>>> I have to admit though that I'm not convinced the sole real use of the
>>> field (in flush_area_local()) is a good one - flushing an entire L3's
>>> worth of lines via CLFLUSH may not be more efficient than using WBINVD.
>>> But I didn't measure it (yet).
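
[ For illustration, a minimal sketch of what the quoted description amounts
to, assuming AMD's CPUID leaf 0x80000006, which reports the L2 size in KiB
in ECX[31:16] and the L3 size in 512 KiB units in EDX[31:18].  The cpuid()
helper and the cache_info fields below are made up for the example and are
not Xen's actual interfaces. ]

#include <stdint.h>

/* Hypothetical container for previously recorded sizes. */
struct cache_info {
    unsigned int l2_kb, l3_kb;
};

static inline void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b,
                         uint32_t *c, uint32_t *d)
{
    asm volatile ( "cpuid"
                   : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                   : "0" (leaf) );
}

static void record_amd_cache_sizes(struct cache_info *ci)
{
    uint32_t eax, ebx, ecx, edx;

    cpuid(0x80000006, &eax, &ebx, &ecx, &edx);

    /* Only store non-zero values, so prior valid data isn't clobbered. */
    if ( ecx >> 16 )
        ci->l2_kb = ecx >> 16;              /* L2 size, in KiB */

    if ( edx >> 18 )
        ci->l3_kb = (edx >> 18) * 512;      /* L3 size, in 512 KiB units */
}
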
>>
>> WBINVD always needs a broadcast IPI to work correctly.
>>
>> CLFLUSH and friends let you do this from a single CPU, using cache
>> coherency to DTRT with the line, wherever it is.
>>
>>
>> Looking at that logic in flush_area_local(), I don't see how it can be
>> correct.  The WBINVD path is a decomposition inside the IPI, but in the
>> higher level helpers, I don't see how the "area too big, convert to
>> WBINVD" can be safe.
>>
>> All users of FLUSH_CACHE are flush_all(), except two PCI
>> Passthrough-restricted cases. MMUEXT_FLUSH_CACHE_GLOBAL looks to be
>> safe, while vmx_do_resume() has very dubious reasoning, and is dead code
>> I think, because I'm not aware of a VT-x capable CPU without WBINVD-exiting.
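
[ A minimal sketch of the trade-off being discussed, not Xen's actual
flush_area_local(): the line size, the l3_size_bytes threshold and the
wbinvd_on_all_cpus() helper are assumptions made for the example. ]

#include <stddef.h>

#define CACHE_LINE_SIZE 64              /* assumed; real code derives it from CPUID */

extern void wbinvd_on_all_cpus(void);   /* hypothetical broadcast-IPI helper */

static inline void clflush(const void *p)
{
    asm volatile ( "clflush %0" :: "m" (*(const char *)p) : "memory" );
}

static void flush_cache_range(const void *va, size_t size, size_t l3_size_bytes)
{
    const char *p = va;

    if ( l3_size_bytes && size >= l3_size_bytes )
    {
        /*
         * More than an L3's worth of lines: fall back to WBINVD.  WBINVD
         * only acts on the invoking CPU's caches, so it has to be sent to
         * every CPU via IPI.
         */
        wbinvd_on_all_cpus();
        return;
    }

    /*
     * CLFLUSH is coherency-aware: issued on one CPU, it evicts the line
     * from every cache in the system, so no IPI is needed.
     */
    for ( ; p < (const char *)va + size; p += CACHE_LINE_SIZE )
        clflush(p);

    asm volatile ( "mfence" ::: "memory" );   /* order the flushes */
}
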
> 
> Besides my prior question on your reply, may I also ask what all of
> this means for the patch itself? After all, so far you have been
> replying only to the post-commit-message remark.

As with the other patch I've just pinged again: unless I hear back on the
patch itself by then, I intend to commit this the week after next, if need
be without any acks.

Jan
