
Re: [Xen-devel] [PATCH V2 09/10] xen/arm: io: Use binary search for mmio handler lookup



Hi Shanker,

On 27/06/16 15:50, Shanker Donthineni wrote:
On 06/27/2016 08:31 AM, Julien Grall wrote:
On 26/06/16 18:48, Shanker Donthineni wrote:
As the number of I/O handlers increases, the overhead of a linear lookup
also increases. The system might have a maximum of 144 mmio handlers
(assuming CONFIG_NR_CPUS=128). In the worst case, finding a matching
handler would require 144 iterations. Now it is time to change from a
linear search (complexity O(n)) to a binary search (complexity O(log n))
to reduce the mmio handler lookup overhead.

Signed-off-by: Shanker Donthineni <shankerd@xxxxxxxxxxxxxx>
---
  xen/arch/arm/io.c | 50 +++++++++++++++++++++++++++++++++++++++-----------
  1 file changed, 39 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index a5b2c2d..abf49fb 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -20,6 +20,7 @@
  #include <xen/lib.h>
  #include <xen/spinlock.h>
  #include <xen/sched.h>
+#include <xen/sort.h>
  #include <asm/current.h>
  #include <asm/mmio.h>

@@ -70,23 +71,38 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
handler->priv);
  }

-int handle_mmio(mmio_info_t *info)
+const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t addr)
  {
-    struct vcpu *v = current;
-    int i;
-    const struct mmio_handler *handler = NULL;
      const struct vmmio *vmmio = &v->domain->arch.vmmio;
+    const struct mmio_handler *handler = vmmio->handlers;
+    unsigned int eidx = vmmio->num_entries;
+    unsigned int midx = eidx / 2;
+    unsigned int sidx = 0;

-    for ( i = 0; i < vmmio->num_entries; i++ )
+    /* Do binary search for matching mmio handler */
+    while ( sidx != midx )
      {
-        handler = &vmmio->handlers[i];
-
-        if ( (info->gpa >= handler->addr) &&
-             (info->gpa < (handler->addr + handler->size)) )
-            break;
+        if ( addr < handler[midx].addr )
+            eidx = midx;
+        else
+            sidx = midx;
+        midx = sidx + (eidx - sidx) / 2;

This binary search can be simplified. For instance, why do you want to
compute midx at the end of the loop rather than at the beginning? That
would avoid having "unsigned int midx = eidx / 2" before the loop.


Let me try a "do { } while()" loop to simplify the above logic.

Please don't re-invent your own binary search implementation; use a known one, such as the one implemented by Linux (see bsearch).
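For reference, a minimal sketch of what that could look like, assuming a
Linux-style bsearch() helper (key, base, number of elements, element size,
comparison callback) were made available in Xen; cmp_mmio_handler is a
hypothetical name:

    /* Comparison callback: the key is the guest physical address being
     * accessed, the element is a handler.  Returns <0, 0 or >0 depending
     * on whether the address is below, inside or above the handler's range. */
    static int cmp_mmio_handler(const void *key, const void *elem)
    {
        paddr_t addr = *(const paddr_t *)key;
        const struct mmio_handler *handler = elem;

        if ( addr < handler->addr )
            return -1;

        if ( addr >= handler->addr + handler->size )
            return 1;

        return 0;
    }

    const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t addr)
    {
        const struct vmmio *vmmio = &v->domain->arch.vmmio;

        return bsearch(&addr, vmmio->handlers, vmmio->num_entries,
                       sizeof(vmmio->handlers[0]), cmp_mmio_handler);
    }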


      }

-    if ( i == vmmio->num_entries )
+    if ( (addr >= handler[sidx].addr) &&
+         (addr < (handler[sidx].addr + handler[sidx].size)) )
+        return handler + sidx;

Please use a temporary variable for handler[sidx], so the code is easier
to read.
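For illustration, the final range check with a temporary could read
(a sketch only):

    const struct mmio_handler *h = &handler[sidx];

    if ( (addr >= h->addr) && (addr < (h->addr + h->size)) )
        return h;

    return NULL;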

+
+    return NULL;
+}
+
+int handle_mmio(mmio_info_t *info)
+{
+    const struct mmio_handler *handler;
+    struct vcpu *v = current;
+
+    handler = find_mmio_handler(v, info->gpa);

I would have expected some locking here. Could you explain why it is
safe to look up the handler with your solution?

For what it's worth, there was no locking before because
register_mmio_handler was always adding the handler at the end of the
array. This is no longer true because you are sorting the array.


The function register_mmio_handler() is called only in the dom0/domU
domain build code path. We don't need locking here unless some code
inserts mmio handlers at runtime.

The current I/O handler implementation is thread-safe: register_mmio_handler() takes the spinlock, and the lookup itself is lock-less. We may have people building products on top of Xen that call register_mmio_handler() while the domain is running, so I will nack any patch that causes a regression in this behaviour.
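One possible direction, sketched here under the assumption that vmmio->lock
is converted to a read/write lock (register_mmio_handler() would then take
it for writing around the insert-and-sort, and the trap path for reading),
reusing the hypothetical cmp_mmio_handler() comparator sketched earlier:

    const struct mmio_handler *find_mmio_handler(struct vcpu *v, paddr_t addr)
    {
        struct vmmio *vmmio = &v->domain->arch.vmmio;
        const struct mmio_handler *handler;

        /* Readers only exclude concurrent insert-and-sort, not each other. */
        read_lock(&vmmio->lock);

        handler = bsearch(&addr, vmmio->handlers, vmmio->num_entries,
                          sizeof(vmmio->handlers[0]), cmp_mmio_handler);

        read_unlock(&vmmio->lock);

        return handler;
    }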

Regards,

--
Julien Grall
