
Re: [Xen-devel] [PATCH v2 08/27] ARM: GICv3 ITS: introduce host LPI array



On 04/03/2017 08:30 PM, Andre Przywara wrote:
Hi,

Hi Andre,

On 23/03/17 19:08, Julien Grall wrote:
 /*
+ * On the host ITS @its, map @nr_events consecutive LPIs.
+ * The mapping connects a device @devid and event @eventid pair to LPI @lpi,
+ * increasing both @eventid and @lpi to cover the number of requested LPIs.
+ */
+int gicv3_its_map_host_events(struct host_its *its,
+                              uint32_t devid, uint32_t eventid, uint32_t lpi,
+                              uint32_t nr_events)
+{
+    uint32_t i;
+    int ret;
+
+    for ( i = 0; i < nr_events; i++ )
+    {
+        /* For now we map every host LPI to host CPU 0 */
+        ret = its_send_cmd_mapti(its, devid, eventid + i, lpi + i, 0);
+        if ( ret )
+            return ret;
+        ret = its_send_cmd_inv(its, devid, eventid + i);

So the spec allows up to 32KB events per device. As all the LPIs will be
routed to CPU0 (i.e. collection 0), it would be more efficient to do an
INVALL. I would be happy to defer that to post Xen 4.9, but the rest
needs to be fixed.

I tried INVALL and it didn't work, at least on the model. I can't see
why, so I kept the individual INVs in.
I have the patch still lying around, so we can revisit this later.

Can you add a TODO comment please?
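
For illustration, the INVALL variant being discussed could look roughly like
the sketch below; its_send_cmd_invall() is an assumed helper name here,
mirroring the existing its_send_cmd_*() wrappers rather than anything in the
patch:

    for ( i = 0; i < nr_events; i++ )
    {
        /* For now we map every host LPI to host CPU 0 (collection 0). */
        ret = its_send_cmd_mapti(its, devid, eventid + i, lpi + i, 0);
        if ( ret )
            return ret;
    }

    /*
     * TODO: a single INVALL on collection 0 should be cheaper than one INV
     * per event, but it did not work on the model; revisit after Xen 4.9.
     */
    return its_send_cmd_invall(its, 0);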


     hw_its = gicv3_its_find_by_doorbell(host_doorbell);
     if ( !hw_its )
@@ -574,6 +639,11 @@ int gicv3_its_map_guest_device(struct domain *d,
     if ( !dev )
         goto out_unlock;

+    dev->host_lpis = xzalloc_array(uint32_t,
+                                   DIV_ROUND_UP(nr_events, LPI_BLOCK));

Rather than having DIV_ROUND_UP spread everywhere, would it not be easier
to round up nr_events once and for all?

I'd rather keep nr_events around as the actual number.
I think we might look into actually limiting the allocation later.

Why? This number will likely be a power of two because of how the ITS works (the number of events per device is expressed as a number of EventID bits). You would also have to keep multiple different values around, which will make the code more complicated to read...
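
For illustration, rounding once could look roughly like the sketch below
(ROUNDUP() here stands for the usual round-up-to-a-multiple helper and is an
assumption, not something taken from the patch):

    /* Round up once, so every later computation can use nr_events directly. */
    nr_events = ROUNDUP(nr_events, LPI_BLOCK);

    dev->host_lpis = xzalloc_array(uint32_t, nr_events / LPI_BLOCK);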


+/* Must be called with host_lpis_lock held. */

Again, this is a call for adding an ASSERT in the function.

This comment is more like lock documentation, to give code writers
guidance on how the locking should be handled here.
I am not convinced that having ASSERTs in *static* functions is really
useful.

Well, you seem to assume that you will be the only one to modify this code, and it is very easy to miss a comment by mistake.

So the ASSERT would catch such an error. Give me a reason why the ASSERT is a bad idea and I will re-think my position.
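
For illustration, such an ASSERT would look roughly like this (the function
name is made up for the example):

    /* Must be called with host_lpis_lock held. */
    static uint32_t alloc_host_lpi_block(void)
    {
        /* Turn the lock comment into a runtime check in debug builds. */
        ASSERT(spin_is_locked(&host_lpis_lock));

        /* ... actual allocation ... */
        return 0;
    }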

[...]

The algorithm does not seem to have changed since the previous version.
Looking at it again, my understanding is that you will always try to allocate
forward. So if the current chunk is full, you will allocate the next one
rather than checking whether a previous chunk has space available. This
will result in allocating more memory than necessary.

In v4 I amended the code slightly to move next_lpi outside of the
function. When we now free an LPI block, we check whether next_lpi was
pointing right after this block and adjust it in that case.

Similarly, unused chunks could be freed to save memory.

No, we can't, because this would break the lockless property. A user
(an incoming LPI) would find the first-level pointer and go into that
block, so we can never replace this block pointer now. That smells
like a case for RCU, but I am not sure Xen can properly handle this case.
But with the above optimization (adjusting next_lpi on freeing a block),
I am pretty sure this isn't a problem for now, especially for Dom0.
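
For illustration, the freeing path described above could look roughly like
the sketch below (names are made up for the example; the first-level block
pointer itself is deliberately left in place for lockless readers):

    static void free_lpi_block(uint32_t first_lpi)
    {
        spin_lock(&host_lpis_lock);

        /*
         * If nothing was allocated after this block, pull next_lpi back so
         * the freed range is handed out again by the next allocation.
         */
        if ( next_lpi == first_lpi + LPI_BLOCK )
            next_lpi = first_lpi;

        spin_unlock(&host_lpis_lock);
    }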

Then please document it, because otherwise it will be easy to forget to revisit this in the future.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

