Re: [Xen-devel] dom0 hang

George Dunlap wrote:
On Thu, Jul 2, 2009 at 4:19 AM, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
dom0 hang:
   vcpu0 is trying to wake up a task and in try_to_wake_up() calls
   task_rq_lock(). Since the task has its cpu set to 1, vcpu0 gets the
   runq lock for vcpu1. Next it calls resched_task(), which results in
   sending an IPI to vcpu1. For that, vcpu0 gets into the
   HYPERVISOR_event_channel_op HCALL and is waiting for it to return.
   Meanwhile, vcpu1 has started running and is spinning on its runq lock
   in "schedule(): spin_lock_irq(&rq->lock);", the lock that vcpu0 is
   holding while it waits for the HCALL to return.
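
   As an aside, the shape of the hang can be modeled in user space.
   This is just a sketch with made-up names (a pthread spinlock standing
   in for the runq lock, pause() standing in for the HCALL that never
   returns); build with -lpthread:

       #include <pthread.h>
       #include <stdio.h>
       #include <unistd.h>

       static pthread_spinlock_t runq1_lock;  /* cpu1's runq lock */

       static void *vcpu0_thread(void *arg)
       {
           pthread_spin_lock(&runq1_lock);    /* task_rq_lock(): task on cpu1 */
           printf("vcpu0: holding runq1 lock, doing IPI hypercall\n");
           pause();                           /* HCALL never returns: vcpu0
                                                 is never scheduled by xen */
           pthread_spin_unlock(&runq1_lock);  /* never reached */
           return NULL;
       }

       static void *vcpu1_thread(void *arg)
       {
           sleep(1);                          /* let vcpu0 take the lock first */
           printf("vcpu1: spin_lock_irq(&rq->lock) in schedule()\n");
           pthread_spin_lock(&runq1_lock);    /* spins here forever */
           return NULL;
       }

       int main(void)
       {
           pthread_t t0, t1;
           pthread_spin_init(&runq1_lock, PTHREAD_PROCESS_PRIVATE);
           pthread_create(&t0, NULL, vcpu0_thread, NULL);
           pthread_create(&t1, NULL, vcpu1_thread, NULL);
           pthread_join(t1, NULL);            /* never returns: the hang */
           return 0;
       }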

   As I had noticed before, vcpu0 never gets scheduled in xen. So
   looking further into xen:

xen:
   Both vcpus are on the same runq, in this case cpu1's. But the
   priority of vcpu1 has been set to CSCHED_PRI_TS_BOOST. As a result,
   the scheduler always picks vcpu1, and vcpu0 is starved. Also, I see in
   kdb that the scheduler timer is not set on cpu0; that would've
   allowed csched_load_balance() to kick in on cpu0. [Also, on
   cpu1 the accounting timer, csched_tick, is not set. Although
   csched_tick() is running on cpu0, it only checks the runq for cpu0.]
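
   (To spell out why BOOST starves: the runq is kept sorted by pri and
   csched_schedule() effectively takes the head, so with no accounting
   running, a BOOST vcpu wins every arbitration. A toy model, with
   made-up names but the same relative pri ordering as sched_credit.c:)

       #include <stdio.h>

       enum { PRI_BOOST = 0, PRI_UNDER = -1, PRI_OVER = -2 };  /* higher wins */

       struct vcpu { const char *name; int pri; };

       /* stand-in for taking the head of a pri-sorted runq */
       static struct vcpu *pick(struct vcpu *runq, int n)
       {
           struct vcpu *best = &runq[0];
           for (int i = 1; i < n; i++)
               if (runq[i].pri > best->pri)
                   best = &runq[i];
           return best;
       }

       int main(void)
       {
           struct vcpu runq[] = { { "vcpu0", PRI_UNDER },
                                  { "vcpu1", PRI_BOOST } };
           for (int t = 0; t < 3; t++)        /* vcpu1 wins forever */
               printf("tick %d: run %s\n", t, pick(runq, 2)->name);
           return 0;
       }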

   Looks like c/s 19500 changed csched_schedule() so that no scheduler
   timer gets armed when the idle vcpu is picked (ret.time of -1), which
   would explain the missing s_timer on cpu0:

-    ret.time = MILLISECS(CSCHED_MSECS_PER_TSLICE);
+    ret.time = (is_idle_vcpu(snext->vcpu) ?
+                -1 : MILLISECS(CSCHED_MSECS_PER_TSLICE));

 The quickest fix for us would be to just back that out.


 BTW, just a comment on the following (all in sched_credit.c):

     /* on wakeup: an UNDER, non-parked vcpu is promoted straight to BOOST */
     if ( svc->pri == CSCHED_PRI_TS_UNDER &&
          !(svc->flags & CSCHED_FLAG_VCPU_PARKED) )
     {
         svc->pri = CSCHED_PRI_TS_BOOST;
     }

 combined with

     /* in csched_schedule(): anything above OVER is taken straight off
      * the runq to run */
     if ( snext->pri > CSCHED_PRI_TS_OVER )
         __runq_remove(snext);

     Setting CSCHED_PRI_TS_BOOST as the pri of a vcpu seems dangerous. To
     me, since csched_schedule() never checks the time accumulated by a
     vcpu at pri CSCHED_PRI_TS_BOOST, that is the same as pinning a vcpu
     to a pcpu. If that vcpu never makes progress, the system has
     essentially lost a physical cpu. Optionally, csched_schedule() should
     always check the cpu time accumulated and reduce the priority over
     time (rough sketch below). I can't tell right off if it already does
     that, or something like that :)...  my 2 cents.
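
     Something like this is what I mean (pseudocode, not a patch;
     boost_start is a made-up field recording when the vcpu entered
     BOOST):

         /* in csched_schedule(), before picking snext: demote a vcpu
          * that has sat at BOOST for more than a timeslice */
         if ( scurr->pri == CSCHED_PRI_TS_BOOST &&
              now - scurr->boost_start > MILLISECS(CSCHED_MSECS_PER_TSLICE) )
         {
             scurr->pri = CSCHED_PRI_TS_UNDER;  /* don't let BOOST pin a pcpu */
         }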

Hmm... what's supposed to happen is that eventually a timer tick will
interrupt vcpu1.  If vcpu1 is set to be "active", then it will be
debited 10ms worth of credit.  Eventually it will go into OVER, and
lose BOOST.  If it's "inactive", then when the tick happens it will
be set to "active" and debited 10ms just the same, putting it directly
into OVER (and thus also losing BOOST).
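
In sched_credit.c terms, roughly (a simplified model of the tick path,
not the actual code; names and values here are illustrative):

    enum pri { PRI_BOOST, PRI_UNDER, PRI_OVER };

    struct vcpu_acct { int active; int credit; enum pri pri; };

    #define CREDITS_PER_TICK 100          /* ~10ms worth of credit */

    static void on_tick(struct vcpu_acct *v)
    {
        if (!v->active)
            v->active = 1;                /* inactive: the tick activates it... */
        v->credit -= CREDITS_PER_TICK;    /* ...and debits it all the same */
        if (v->credit < 0)
            v->pri = PRI_OVER;            /* goes OVER, and thus loses BOOST */
    }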

Can you see if the timer ticks are still happening, and perhaps put
some tracing in to verify that what I described above is happening?
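
(For the tracing, something as blunt as this would do; just a sketch,
assuming svc->credit is still an atomic_t as in sched_credit.c:)

    /* e.g. in csched_tick() / csched_vcpu_acct() */
    printk("csched_tick: cpu%u pri=%d credit=%d\n",
           cpu, svc->pri, atomic_read(&svc->credit));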

 -George


George,

Is that in csched_acct()? Looks like that's somehow gotten removed. If
true, then maybe that's the fundamental problem to chase.

Here's what the timer queue looks like when hung (neither cpu is in a
scheduler function):

[0]xkdb> dtrq
CPU[00]: NOW:0x00003f2db9af369e
 1: exp=0x00003ee31cb32200 fn:csched_tick data:0000000000000000
 2: exp=0x00003ee347ece164 fn:time_calibration data:0000000000000000
 3: exp=0x00003ee69a28f04b fn:mce_work_fn data:0000000000000000
 4: exp=0x00003f055895e25f fn:plt_overflow data:0000000000000000
 5: exp=0x00003ee353810216 fn:rtc_update_second data:ffff83007f0226d8

CPU[01]: NOW:0x00003f2db9af369e
 1: exp=0x00003ee30b847988 fn:s_timer_fn data:0000000000000000
 2: exp=0x00003f1b309ebd45 fn:pmt_timer_callback data:ffff83007f022a68


thanks
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
