
RE: [Xen-devel] [Patch] Fix the slow wall clock time issue in x64 SMP Vista


  • To: "Keir Fraser" <keir@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Cui, Dexuan" <dexuan.cui@xxxxxxxxx>
  • Date: Wed, 10 Jan 2007 21:09:56 +0800
  • Delivery-date: Wed, 10 Jan 2007 05:09:47 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-topic: [Xen-devel] [Patch] Fix the slow wall clock time issue in x64 SMP Vista

>I cannot see the logic behind the change though. If the guest has set up a
>timeout, and the timeout is in the past, it makes sense to deliver
>immediately.
Could it be that some guests deliberately set up a timeout that is in the "past"?
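
To restate the idea behind my change compactly, here is a minimal standalone
sketch (not the actual Xen code; deliver_now and freq are names I made up here,
with freq standing in for h->tsc_freq in the snippet below):

#include <stdbool.h>
#include <stdint.h>

/* Sketch: compute the distance to the deadline in unsigned tick arithmetic;
 * a signed cast reveals "in the past". A deadline stale by less than
 * freq >> 10 ticks (~0.977 ms) is treated as "just expired" and delivered
 * immediately; anything staler is not delivered right away. */
static bool deliver_now(uint64_t tn_cmp, uint64_t cur_tick, uint64_t freq)
{
    int64_t scheduled = (int64_t)(tn_cmp - cur_tick);

    if ( scheduled >= 0 )
        return false;   /* deadline still in the future: let the timer run */

    return (cur_tick - tn_cmp) < (freq >> 10);
}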
Below are a patch snippet and some output from booting x64 SMP Vista
(my CPU runs at 2.133 GHz; the results are printed about one line per second); it seems
x64 SMP Vista is doing some kind of adjustment using HPET's timer1.

+    scheduled = tn_cmp - cur_tick;
+    if ( (int64_t)scheduled < 0 )
+    {
+        printk("t%u_cmp= %lu, cur_tick= %lu, scheduled = negative %lu \n",
+               tn, tn_cmp, cur_tick, -scheduled);
+        passed = cur_tick - tn_cmp;
+
+        /* The number of HPET ticks that corresponds to 1/(2^10) second,
+         * i.e., 0.9765625 milliseconds. */
+        #define hpet_tiny_time_span (h->tsc_freq >> 10)
+        if ( passed < hpet_tiny_time_span )
+            scheduled = 0;
+        else
(XEN) t1_cmp=          0, cur_tick= 1947268012, scheduled = negative 1947268012
(XEN) t1_cmp= 2147483648, cur_tick= 4258824668, scheduled = negative 2111341020
(XEN) t1_cmp=          0, cur_tick= 1993055268, scheduled = negative 1993055268
(XEN) t1_cmp= 2147483648, cur_tick= 3917908772, scheduled = negative 1770425124
(XEN) t1_cmp=          0, cur_tick= 1858024708, scheduled = negative 1858024708
(XEN) t1_cmp= 2147483648, cur_tick= 4000005532, scheduled = negative 1852521884
(XEN) t1_cmp=          0, cur_tick= 2089118628, scheduled = negative 2089118628
(XEN) t1_cmp= 2147483648, cur_tick= 4012630076, scheduled = negative 1865146428
(XEN) t1_cmp=          0, cur_tick= 2003908668, scheduled = negative 2003908668
(XEN) t1_cmp= 2147483648, cur_tick= 4075156388, scheduled = negative 1927672740
(XEN) t1_cmp=          0, cur_tick= 1819502692, scheduled = negative 1819502692
(XEN) t1_cmp=          0, cur_tick= 3661310188, scheduled = negative 3661310188
(XEN) t1_cmp= 2147483648, cur_tick= 4135112804, scheduled = negative 1987629156
(XEN) t1_cmp=          0, cur_tick= 1642903140, scheduled = negative 1642903140
(XEN) t1_cmp=          0, cur_tick= 3728809316, scheduled = negative 3728809316
(XEN) t1_cmp= 2147483648, cur_tick= 4161725460, scheduled = negative 2014241812
(XEN) t1_cmp=          0, cur_tick= 1722051132, scheduled = negative 1722051132
(XEN) t1_cmp= 2147483648, cur_tick= 3950346516, scheduled = negative 1802862868
(XEN) t1_cmp=          0, cur_tick= 1904943348, scheduled = negative 1904943348
(XEN) t1_cmp= 2147483648, cur_tick= 4128329004, scheduled = negative 1980845356
(XEN) t1_cmp=          0, cur_tick= 2065056964, scheduled = negative 2065056964
(XEN) t1_cmp= 2147483648, cur_tick= 4150274868, scheduled = negative 2002791220
(XEN) t1_cmp=          0, cur_tick= 2139854452, scheduled = negative 2139854452
(XEN) t1_cmp= 2147483648, cur_tick= 4146800116, scheduled = negative 1999316468
(XEN) t1_cmp=          0, cur_tick= 2131365484, scheduled = negative 2131365484
(XEN) t1_cmp= 2147483648, cur_tick= 4270226844, scheduled = negative 2122743196
(XEN) t1_cmp=          0, cur_tick= 2146031844, scheduled = negative 2146031844
(XEN) t1_cmp= 2147483648, cur_tick= 4279941940, scheduled = negative 2132458292
(XEN) t1_cmp=          0, cur_tick= 2119305068, scheduled = negative 2119305068
(XEN) t1_cmp= 2147483648, cur_tick= 4289293244, scheduled = negative 2141809596
(XEN) t1_cmp=          0, cur_tick= 2115127940, scheduled = negative 2115127940
(XEN) t1_cmp= 2147483648, cur_tick= 4272576276, scheduled = negative 2125092628
(XEN) t1_cmp=          0, cur_tick= 2138209348, scheduled = negative 2138209348
(XEN) t1_cmp= 2147483648, cur_tick= 4172725236, scheduled = negative 2025241588
(XEN) t1_cmp=          0, cur_tick= 2124112860, scheduled = negative 2124112860
(XEN) t1_cmp= 2147483648, cur_tick= 4232138948, scheduled = negative 2084655300
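
Note that t1_cmp alternates between 0 and 2147483648, i.e. 0x80000000 = 2^31:
Vista appears to run timer1 with a period of 2^31 ticks by toggling the
comparator's top bit, which at 2.133 GHz is about 1.007 s and matches the
roughly once-per-second prints. A quick standalone check of what the first two
samples imply (not Xen code; it assumes the tick rate is the 2.133 GHz TSC
frequency, i.e. the h->tsc_freq used above):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const double freq = 2.133e9;     /* assumed ticks per second (TSC rate) */
    const uint64_t samples[][2] = {  /* { t1_cmp, cur_tick } from the log */
        {          0u, 1947268012u },
        { 2147483648u, 4258824668u },
    };

    for ( unsigned i = 0; i < 2; i++ )
    {
        uint64_t t1_cmp = samples[i][0], cur_tick = samples[i][1];
        int64_t scheduled = (int64_t)(t1_cmp - cur_tick); /* as in the patch */

        printf("lag = %lld ticks = %.1f ms\n",
               (long long)-scheduled, (double)-scheduled * 1e3 / freq);
        /* Prints ~912.9 ms and ~989.8 ms: the deadlines are most of a
         * second in the past, far beyond the ~0.977 ms grace window, which
         * suggests immediate delivery is not what the guest expects. */
    }
    return 0;
}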

BTW, I cannot reproduce the "Warning: many lost ticks." message now. :(

 -- Dexuan

-----Original Message-----
From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx] 
Sent: 10 January 2007 20:02
To: Cui, Dexuan; Keir Fraser; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [Patch] Fix the slow wall clock time issue in x64 SMP Vista

On 10/1/07 11:53, "Cui, Dexuan" <dexuan.cui@xxxxxxxxx> wrote:

> 1) I made the change mainly for timer1 of HPET. Actually I don't know exactly
> how x64 SMP Vista uses HPET's timer1 to adjust the wall clock time; but
> without the patch, HPET's timer1 can generate and inject interrupts at a
> frequency of around 20 kHz, and the wall clock time would become slow.

I cannot see the logic behind the change though. If the guest has set up a
timeout, and the timeout is in the past, it makes sense to deliver
immediately.

 -- Keir
