
[RFC PATCH 05/10] preempt: add try_preempt() function


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Tue, 23 Feb 2021 02:34:57 +0000
  • Accept-language: en-US
  • Cc: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Tue, 23 Feb 2021 02:35:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHXCYx5ySuVthNLHUKZmgWg9dU+LA==
  • Thread-topic: [RFC PATCH 05/10] preempt: add try_preempt() function

This function can be used to preempt code running in hypervisor mode.
Generally, there are two reasons to preempt while in HYP mode:

1. An IRQ arrived. This may have woken a vCPU with a higher scheduling
   priority.
2. Exit from an atomic context. While we were in the atomic context,
   the state of the system may have changed and we need to reschedule.

It is very inefficient to call the scheduler every time we leave an
atomic context, so a very simple optimization is used. There are cases
when we *know* that there might be a reason for preemption. One
example is an IRQ. In that case we call try_preempt(true). This forces
a rescheduling if we are outside an atomic context, or else ensures
that the scheduler will be called right after leaving the atomic
context. The latter is done by calling try_preempt(false) when we
leave the atomic context: try_preempt(false) checks whether
try_preempt(true) was called while in the atomic context, and calls
the scheduler only in that case.

Also, the macro preempt_enable_no_sched() is introduced. It is meant
to be used by the scheduler itself, because we don't want to initiate
rescheduling from inside scheduler code.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
---
 xen/common/preempt.c      | 32 +++++++++++++++++++++++++++++++-
 xen/include/xen/preempt.h |  8 ++++++++
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/xen/common/preempt.c b/xen/common/preempt.c
index ad61c8419a..98699aaa1f 100644
--- a/xen/common/preempt.c
+++ b/xen/common/preempt.c
@@ -4,6 +4,7 @@
  * Track atomic regions in the hypervisor which disallow sleeping.
  * 
  * Copyright (c) 2010, Keir Fraser <keir@xxxxxxx>
+ * Copyright (c) 2021, EPAM Systems
  * 
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
@@ -21,13 +22,42 @@
 
 #include <xen/preempt.h>
 #include <xen/irq.h>
+#include <xen/sched.h>
+#include <xen/wait.h>
 #include <asm/system.h>
 
 DEFINE_PER_CPU(atomic_t, __preempt_count);
+DEFINE_PER_CPU(unsigned int, need_reschedule);
 
 bool_t in_atomic(void)
 {
-    return atomic_read(&preempt_count()) || in_irq() || local_irq_is_enabled();
+    return atomic_read(&preempt_count()) || in_irq();
+}
+
+void try_preempt(bool force)
+{
+    /*
+     * If caller wants us to call the scheduler, but we are in atomic
+     * context - update the flag. We will try preemption upon exit
+     * from atomic context.
+     */
+    if ( force && in_atomic() )
+    {
+        this_cpu(need_reschedule) = 1;
+        return;
+    }
+
+    /* idle vCPU schedules via soft IRQs */
+    if ( unlikely(system_state != SYS_STATE_active) ||
+         in_atomic() ||
+         is_idle_vcpu(current) )
+        return;
+
+    if ( force || this_cpu(need_reschedule) )
+    {
+        this_cpu(need_reschedule) = 0;
+        wait();
+    }
 }
 
 #ifndef NDEBUG
diff --git a/xen/include/xen/preempt.h b/xen/include/xen/preempt.h
index e217900d6e..df7352a75e 100644
--- a/xen/include/xen/preempt.h
+++ b/xen/include/xen/preempt.h
@@ -4,6 +4,7 @@
  * Track atomic regions in the hypervisor which disallow sleeping.
  * 
  * Copyright (c) 2010, Keir Fraser <keir@xxxxxxx>
+ * Copyright (c) 2021, EPAM Systems
  */
 
 #ifndef __XEN_PREEMPT_H__
@@ -15,6 +16,8 @@
 
 DECLARE_PER_CPU(atomic_t, __preempt_count);
 
+void try_preempt(bool force);
+
 #define preempt_count() (this_cpu(__preempt_count))
 
 #define preempt_disable() do {                  \
@@ -23,6 +26,11 @@ DECLARE_PER_CPU(atomic_t, __preempt_count);
 
 #define preempt_enable() do {                   \
     atomic_dec(&preempt_count());               \
+    try_preempt(false);                         \
+} while (0)
+
+#define preempt_enable_no_sched() do {          \
+    atomic_dec(&preempt_count());               \
 } while (0)
 
 bool_t in_atomic(void);
-- 
2.29.2
