WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

[Xen-devel] [PATCH] Fix deadlock in schedule.c at TRACE mode

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH] Fix deadlock in schedule.c at TRACE mode
From: NISHIGUCHI Naoki <nisiguti@xxxxxxxxxxxxxx>
Date: Thu, 24 Apr 2008 13:34:58 +0900
Delivery-date: Wed, 23 Apr 2008 21:35:36 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.12 (Windows/20080213)
Hi,

In schedule.c, schedule() and sched_adjust() call trace functions while
holding the schedule_lock in each CPU's schedule_data. When trace
buffers are enabled, the trace function (__trace_var()) may call
vcpu_wake() via send_guest_global_virq(). In that case, vcpu_wake()
tries to acquire the already-held schedule_lock, and a deadlock occurs.

The attached patch fixes this problem by issuing the trace calls only
after the lock has been dropped.

Signed-off-by: Naoki Nishiguchi <nisiguti@xxxxxxxxxxxxxx>

Regards,
Naoki Nishiguchi
diff -r 77dec8732cde xen/common/schedule.c
--- a/xen/common/schedule.c     Wed Apr 23 16:58:44 2008 +0100
+++ b/xen/common/schedule.c     Thu Apr 24 11:19:25 2008 +0900
@@ -605,11 +605,13 @@ long sched_adjust(struct domain *d, stru
     if ( d == current->domain )
         vcpu_schedule_lock_irq(current);
 
-    if ( (ret = SCHED_OP(adjust, d, op)) == 0 )
-        TRACE_1D(TRC_SCHED_ADJDOM, d->domain_id);
+    ret = SCHED_OP(adjust, d, op);
 
     if ( d == current->domain )
         vcpu_schedule_unlock_irq(current);
+
+    if ( ret == 0 )
+        TRACE_1D(TRC_SCHED_ADJDOM, d->domain_id);
 
     for_each_vcpu ( d, v )
     {
@@ -654,6 +656,7 @@ static void schedule(void)
     struct schedule_data *sd;
     struct task_slice     next_slice;
     s32                   r_time;     /* time for new dom to run */
+    uint64_t              prev_state_time, next_state_time;
 
     ASSERT(!in_irq());
     ASSERT(this_cpu(mc_state).flags == 0);
@@ -682,14 +685,10 @@ static void schedule(void)
         return continue_running(prev);
     }
 
-    TRACE_2D(TRC_SCHED_SWITCH_INFPREV,
-             prev->domain->domain_id,
-             now - prev->runstate.state_entry_time);
-    TRACE_3D(TRC_SCHED_SWITCH_INFNEXT,
-             next->domain->domain_id,
-             (next->runstate.state == RUNSTATE_runnable) ?
-             (now - next->runstate.state_entry_time) : 0,
-             r_time);
+    /* Temporarily save the period of previous runstate. */
+    prev_state_time = now - prev->runstate.state_entry_time;
+    next_state_time = (next->runstate.state == RUNSTATE_runnable) ?
+                      (now - next->runstate.state_entry_time) : 0;
 
     ASSERT(prev->runstate.state == RUNSTATE_running);
     vcpu_runstate_change(
@@ -705,6 +704,12 @@ static void schedule(void)
     next->is_running = 1;
 
     spin_unlock_irq(&sd->schedule_lock);
+
+    /* Avoid deadlock by calling the trace function after unlock. */
+    TRACE_2D(TRC_SCHED_SWITCH_INFPREV,
+             prev->domain->domain_id, prev_state_time);
+    TRACE_3D(TRC_SCHED_SWITCH_INFNEXT,
+             next->domain->domain_id, next_state_time, r_time);
 
     perfc_incr(sched_ctx);
 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel