
Re: Ping: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request



On 17/02/2021 08:30, Jan Beulich wrote:
Paul (or others), thoughts?

On 04.02.2021 12:24, Jan Beulich wrote:
On 04.02.2021 10:26, Paul Durrant wrote:
From: Jan Beulich <jbeulich@xxxxxxxx>
Sent: 02 February 2021 15:15

XENMEM_decrease_reservation isn't the only means by which pages can get
removed from a guest, yet all removals ought to be signaled to qemu. Put
setting of the flag into the central p2m_remove_page() underlying all
respective hypercalls as well as a few similar places, mainly in PoD
code.

Additionally there's no point sending the request for the local domain
when the domain acted upon is a different one. The latter domain's ioreq
server mapcaches need invalidating. We assume that domain to be paused
at the point the operation takes place, so sending the request in this
case happens from the hvm_do_resume() path, which as one of its first
steps calls handle_hvm_io_completion().

Even without the remote operation aspect a single domain-wide flag
doesn't do: Guests may e.g. decrease-reservation on multiple vCPU-s in
parallel. Each of them needs to issue an invalidation request in due
course, in particular because exiting to guest context should not happen
before the request was actually seen by (all) the emulator(s).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: Preemption related adjustment split off. Make flag per-vCPU. More
     places to set the flag. Also handle acting on a remote domain.
     Re-base.

I'm wondering if a per-vCPU flag is overkill, actually. We just need
to make sure that we don't miss sending an invalidation where
multiple vCPUs are in play. The mapcache in the emulator is global,
so issuing an invalidate for every vCPU is going to cause an
unnecessary storm of ioreqs, isn't it?

The only time a truly unnecessary storm would occur is when
mapcache invalidation gets triggered by a remote domain for an
already running guest. This should be a pretty rare event, so I
think the storm in this case ought to be tolerable.

Could we get away with a per-domain atomic counter?

Possible, but quite involved afaict: The potential storm above
is the price to pay for the simplicity of the model. It is
important to note that while we don't need all of the vCPU-s
to send these ioreqs, we need all of them to wait for the
request(s) to be acked. And this waiting is what we get "for
free" using the approach here, whereas we'd need to introduce
new logic for this with an atomic counter (afaict at least).

Jan



Ok, let's take the patch as-is then.

Reviewed-by: Paul Durrant <paul@xxxxxxx>




 

