
Re: [Xen-devel] [PATCH v2 09/27] ARM: GICv3: introduce separate pending_irq structs for LPIs



Hi Stefano,

On 27/03/17 18:44, Stefano Stabellini wrote:
On Mon, 27 Mar 2017, Julien Grall wrote:
Hi,

On 27/03/17 10:02, Andre Przywara wrote:
On 24/03/17 17:26, Stefano Stabellini wrote:
On Fri, 24 Mar 2017, Andre Przywara wrote:
I am afraid that this would lead to situations where we needlessly
allocate and deallocate pending_irqs. Under normal load I'd expect to
have something like zero to three LPIs pending at any given point in
time (mostly zero, to be honest).
So this would lead to a situation where *every* LPI that becomes pending
triggers a memory allocation - in the hot path. Hence the pool idea. So
if we are going to shrink the pool, I'd stop at something like five
entries, so as not to penalize the common case.
Does that sound useful?

Not answering the question directly here; instead, I will summarize the face-to-face
discussion I had with Andre this morning.

So allocating the pending_irq in the IRQ path is not a solution, because memory
allocation must not happen in IRQ context; see the ASSERT(!in_irq()) in
_xmalloc.
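
To illustrate the constraint, here is a minimal self-contained sketch; the
in_irq() flag tracking and the alloc_no_irq() helper are stand-ins for
illustration, not Xen's real primitives:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdlib.h>

    static bool irq_active;                /* would be set on IRQ entry, cleared on exit */
    static bool in_irq(void) { return irq_active; }

    /* Stand-in for _xmalloc(): allocation is refused in interrupt context. */
    static void *alloc_no_irq(size_t size)
    {
        assert(!in_irq());                 /* mirrors the ASSERT(!in_irq()) in _xmalloc */
        return malloc(size);
    }

    static void irq_handler(void)
    {
        irq_active = true;
        /* Calling alloc_no_irq() here would trip the assertion, which is
         * why a pending_irq must already exist by the time an LPI fires. */
        irq_active = false;
    }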

Regardless of the ASSERT, it would also increase the time to handle and forward an
interrupt whenever no free pending_irq is available, because a new one would
have to be allocated first. Lastly, we have no way to tell the guest "try again"
if Xen is running out of memory.

The outcome of the discussion is to pre-allocate the pending_irq structs when a
device is assigned to a domain. We know the maximum number of events supported
by a device, and 1 event = 1 LPI.

This may allocate more memory than strictly necessary (a pending_irq is 56
bytes), but at least we don't need to allocate on the fly and can report an
error at assignment time.

One could argue that we could allocate at MAPTI time instead, to limit the
amount of memory allocated up front. However, as we are so far unable to
rate-limit or defer the execution of the command queue, a guest could
potentially flood it with MAPTI commands and monopolize the pCPU for a long time.

It makes a lot of sense to keep the allocation out of the irq path.
However, I am wondering if the allocation/deallocation of pending_irq
structs could be done at the point the vLPIs are enabled/disabled,
instead of at device assignment time.

I am not sure what you mean by enabling/disabling vLPIs. Do you mean when the guest enables/disables them, or when the guest assigns a vLPI to a (deviceID, event) pair via MAPTI?

In either case the guest could potentially flood us, and it would take a lot of time to allocate/free memory for each vLPI modified. Hence why I didn't suggest it and said: "One could argue that we could allocate on MAPTI to limit the allocation...".


In any case, there should be code for the deallocation. If we keep the
allocation at device assignment time, the corresponding deallocation
should happen when a device is removed from the guest (even if we cannot
test that case well now).

It was implied in my answer :).
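
For completeness, the matching teardown could look like the sketch below,
continuing the hypothetical structures from the earlier sketch, with free()
standing in for xfree():

    /* Called at device removal time; after this no event can be
     * translated for the device, so the entries are unreachable. */
    static void its_device_remove(struct its_device *dev)
    {
        free(dev->pend_irqs);
        dev->pend_irqs = NULL;
        dev->nr_events = 0;
    }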

Cheers,

--
Julien Grall
