[Xen-devel] [PATCH 15/27 v9] xen/arm: vpl011: Add a new console_evtchn_unmask function in xenconsole



This patch introduces a new console_evtchn_unmask function. It unmasks a
console's event channel once the rate-limit period for which the channel
was masked has expired.

As an optimization, the two for loops in handle_io() have been merged.

The first loop iterated over all domains and unmasked any domain event
channels that had been rate limited for the specified duration.

The second loop added the event channel fd and the tty fd of each domain
to the poll list.

Merging the two loops lets both operations be done in a single pass over
the domain list instead of two; a sketch of the merged loop follows.

Signed-off-by: Bhupinder Thakur <bhupinder.thakur@xxxxxxxxxx>
Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
---
CC: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>

Changes since v5:
- Split this change out into a separate patch.

 tools/console/daemon/io.c | 44 +++++++++++++++++++++++++++-----------------
 1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
index a0b35da..2dcaee6 100644
--- a/tools/console/daemon/io.c
+++ b/tools/console/daemon/io.c
@@ -117,6 +117,11 @@ struct domain {
 
 static struct domain *dom_head;
 
+static inline bool console_enabled(struct console *con)
+{
+       return con->local_port != -1;
+}
+
 static int write_all(int fd, const char* buf, size_t len)
 {
        while (len) {
@@ -908,6 +913,27 @@ static void handle_tty_write(struct console *con)
        }
 }
 
+static void console_evtchn_unmask(struct console *con, void *data)
+{
+       long long now = (long long)data;
+
+       if (!console_enabled(con))
+               return;
+
+       /* CS 16257:955ee4fa1345 introduces a 5ms fuzz
+        * for select(), it is not clear poll() has
+        * similar behavior (returning a couple of ms
+        * sooner than requested) as well. Just leave
+        * the fuzz here. Remove it with a separate
+        * patch if necessary */
+       if ((now+5) > con->next_period) {
+               con->next_period = now + RATE_LIMIT_PERIOD;
+               if (con->event_count >= RATE_LIMIT_ALLOWANCE)
+                       (void)xenevtchn_unmask(con->xce_handle, con->local_port);
+               con->event_count = 0;
+       }
+}
+
 static void handle_ring_read(struct domain *dom)
 {
        xenevtchn_port_or_error_t port;
@@ -1142,23 +1168,7 @@ void handle_io(void)
                for (d = dom_head; d; d = d->next) {
                        struct console *con = &d->console;
 
-                       /* CS 16257:955ee4fa1345 introduces a 5ms fuzz
-                        * for select(), it is not clear poll() has
-                        * similar behavior (returning a couple of ms
-                        * sooner than requested) as well. Just leave
-                        * the fuzz here. Remove it with a separate
-                        * patch if necessary */
-                       if ((now+5) > con->next_period) {
-                               con->next_period = now + RATE_LIMIT_PERIOD;
-                               if (con->event_count >= RATE_LIMIT_ALLOWANCE) {
-                                       (void)xenevtchn_unmask(con->xce_handle, con->local_port);
-                               }
-                               con->event_count = 0;
-                       }
-               }
-
-               for (d = dom_head; d; d = d->next) {
-                       struct console *con = &d->console;
+                       console_evtchn_unmask(con, (void *)now);
 
                        maybe_add_console_evtchn_fd(con, (void *)&next_timeout);
 
-- 
2.7.4

