
Re: [PATCH 9/9] xen/x86: track dirty pCPU caches for a given vCPU


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 16 May 2025 09:25:34 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Delivery-date: Fri, 16 May 2025 07:25:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 15.05.2025 12:52, Roger Pau Monné wrote:
> On Mon, May 12, 2025 at 05:38:07PM +0200, Jan Beulich wrote:
>> On 06.05.2025 14:55, Roger Pau Monné wrote:
>>> On Tue, May 06, 2025 at 12:16:00PM +0100, Andrew Cooper wrote:
>>>> On 06/05/2025 9:31 am, Roger Pau Monne wrote:
>>>>> When a guest is allowed access to cache control operations, such tracking
>>>>> prevents having to issue a system-wide cache flush, and instead flushes
>>>>> just the pCPUs where the vCPU has been scheduled since the last flush.
>>>>>
>>>>> Note that domain-wide flushes accumulate the dirty caches from all the
>>>>> vCPUs, but clearing the vCPU masks would require pausing all vCPUs, which
>>>>> seems overkill.  Instead leave the vCPU dirty masks as-is; worst case this
>>>>> results in redundant flushes in further calls.
>>>>>
>>>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
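
(For reference, the per-vCPU tracking being described could look roughly like
the sketch below.  The dirty_cache_mask field and the function names are
illustrative only, not taken from the patch; cpumask_set_cpu(),
smp_processor_id() and flush_mask()/FLUSH_CACHE are Xen's existing interfaces.)

    /* Illustrative sketch, assuming a hypothetical cpumask_t dirty_cache_mask
     * field in struct vcpu: record the current pCPU whenever the vCPU runs
     * there, so a later guest cache flush can be limited to those pCPUs. */
    static void vcpu_mark_cache_dirty(struct vcpu *v)
    {
        cpumask_set_cpu(smp_processor_id(), &v->dirty_cache_mask);
    }

    static void vcpu_flush_dirty_caches(const struct vcpu *v)
    {
        /* Flush caches only on the pCPUs this vCPU has run on, instead of
         * issuing a system-wide flush.  Per the description above, the mask
         * is deliberately left as-is rather than cleared. */
        flush_mask(&v->dirty_cache_mask, FLUSH_CACHE);
    }
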
>>>>
>>>> I'm afraid this doesn't work.
>>>>
>>>> Unlike TLBs, dirty cachelines can move sideways, e.g. by foreign or grant
>>>> mapping, but also naturally because of how cache coherency works.
>>>
>>> Does such sideways movement also imply that a local WB{NO,}INVD on native
>>> could be equally bogus?
>>>
>>> According to the SDM, cache lines can indeed move between processor
>>> caches, but the memory controller must always snoop such moves and
>>> flush the data to memory:
>>>
>>> "Here, the processor with the valid data may pass the data to the
>>> other processors without actually writing it to system memory;
>>> however, it is the responsibility of the memory controller to snoop
>>> this operation and update memory."
>>>
>>> So a cache line moving sideways will always be propagated to memory as
>>> part of the move, and hence the data in the previous pCPU's cache will
>>> always hit memory.
>>
>> But that's only one of the two aspects of a flush. The other is to ensure
>> the respective data isn't in any (covered) cache anymore. IOW dirty-ness (as
>> the title has it) isn't a criterion, unless of course you mean "dirty" in
>> a sense different from what it means in the cache coherency model.
> 
> Given the direct map, and the fact that the CPU can speculatively load
> entries into the cache, I'm not sure there's much Xen can effectively do
> ATM to ensure a certain cache line is not in any cache anymore.
> 
> It would possibly get better if/when the direct map is removed, but
> even then Xen (or dom0) might map guest memory for emulation or IO
> purposes.  Then Xen/dom0 would need to issue a wbinvd after unmapping
> the memory, to ensure the cache is clean in case a vCPU from a domain
> is scheduled there.
> 
> IMO, realistically, it is very unlikely for Xen to be able to ensure
> that after a guest wbinvd there are no guest-owned cache lines in any
> CPU cache, even if such a wbinvd is propagated to all host CPUs.

Well, see Andrew's reply to one of my two "restrict guest-induced WBINVD"
patches. What you say effectively supports the statement I make in the
descriptions there ("We allow its use for writeback purposes only
anyway, ..."). Which Andrew says is wrong for at least future use cases,
possibly even today's. IOW I think we first need to find some common
ground on the underlying goals / principles.

Jan



 

