
[Xen-devel] [PATCH 1/2] x86 hvm: don't set the periodic timer again until its IRQ is delivered.



Modern Windows OSes (e.g. XP, 2003, 2008) use neither the PIT timer
nor cpu#0's LAPIC timer after boot. Despite that, Xen busily keeps
emulating them, which is inefficient.

With this patch, re-arming a timer is deferred while its IRQ is masked.

The reasons why pt_timer_fn() now simply calls vcpu_kick() are:
- a pt_irq_masked() check here would be redundant: pt_update_irq()
  performs the same check anyway.
- pt_timer_fn() most likely runs on the same processor as
  pt->vcpu->processor, so vcpu_kick() rarely has to send an IPI.
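
For illustration, the resulting tick lifecycle, condensed from the two
hunks below into one simplified sketch (locking detail and the
missed-ticks mode handling in pt_intr_post() are elided):

    /* Timer expiry: only record the tick and kick the vcpu.
     * No pt_irq_masked() check here: pt_update_irq() performs it
     * anyway when the vcpu next tries to inject an interrupt. */
    static void pt_timer_fn(void *data)
    {
        struct periodic_time *pt = data;

        pt_lock(pt);
        pt->pending_intr_nr++;
        pt->do_not_freeze = 0;
        vcpu_kick(pt->vcpu);    /* usually local, so rarely an IPI */
        pt_unlock(pt);
    }

    /* IRQ delivery path (in pt_intr_post()): the timer is re-armed
     * only once no tick is left pending, so a masked IRQ keeps the
     * timer stopped instead of firing uselessly. */
    pt->scheduled += pt->period;
    pt_process_missed_ticks(pt);
    ...
    if ( pt->pending_intr_nr == 0 )
        set_timer(&pt->timer, pt->scheduled);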

Signed-off-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>

# HG changeset patch
# User Kouya Shimura <kouya@xxxxxxxxxxxxxx>
# Date 1253081156 -32400
# Node ID 864f20763d03ef989501fbc43d74034027679a56
# Parent  d2a32e24fe504b9626e6732b4f213c7cb1bc8b57
x86 hvm: don't set the periodic timer again until its IRQ is delivered.

Modern Windows OSes (e.g. XP, 2003, 2008) use neither the PIT timer
nor cpu#0's LAPIC timer after boot. Despite that, Xen busily keeps
emulating them, which is inefficient.

With this patch, re-arming a timer is deferred while its IRQ is masked.

The reasons why pt_timer_fn() now simply calls vcpu_kick() are:
- a pt_irq_masked() check here would be redundant: pt_update_irq()
  performs the same check anyway.
- pt_timer_fn() most likely runs on the same processor as
  pt->vcpu->processor, so vcpu_kick() rarely has to send an IPI.

Signed-off-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>

diff -r d2a32e24fe50 -r 864f20763d03 xen/arch/x86/hvm/vpt.c
--- a/xen/arch/x86/hvm/vpt.c    Wed Sep 09 16:39:41 2009 +0100
+++ b/xen/arch/x86/hvm/vpt.c    Wed Sep 16 15:05:56 2009 +0900
@@ -187,8 +187,11 @@ void pt_restore_timer(struct vcpu *v)
 
     list_for_each_entry ( pt, head, list )
     {
-        pt_process_missed_ticks(pt);
-        set_timer(&pt->timer, pt->scheduled);
+        if ( pt->pending_intr_nr == 0 )
+        {
+            pt_process_missed_ticks(pt);
+            set_timer(&pt->timer, pt->scheduled);
+        }
     }
 
     pt_thaw_time(v);
@@ -205,15 +208,7 @@ static void pt_timer_fn(void *data)
     pt->pending_intr_nr++;
     pt->do_not_freeze = 0;
 
-    if ( !pt->one_shot )
-    {
-        pt->scheduled += pt->period;
-        pt_process_missed_ticks(pt);
-        set_timer(&pt->timer, pt->scheduled);
-    }
-
-    if ( !pt_irq_masked(pt) )
-        vcpu_kick(pt->vcpu);
+    vcpu_kick(pt->vcpu);
 
     pt_unlock(pt);
 }
@@ -302,6 +297,9 @@ void pt_intr_post(struct vcpu *v, struct
     }
     else
     {
+        pt->scheduled += pt->period;
+        pt_process_missed_ticks(pt);
+
         if ( mode_is(v->domain, one_missed_tick_pending) ||
              mode_is(v->domain, no_missed_ticks_pending) )
         {
@@ -313,6 +311,9 @@ void pt_intr_post(struct vcpu *v, struct
             pt->last_plt_gtime += pt->period;
             pt->pending_intr_nr--;
         }
+
+        if ( pt->pending_intr_nr == 0 )
+            set_timer(&pt->timer, pt->scheduled);
     }
 
     if ( mode_is(v->domain, delay_for_missed_ticks) &&