Re: [Xen-devel] Re: Fix for get_s_time()
Dan, Keir:

Here is where I stand on the overhead of (hpet) read_64_main_counter()
for the version layered on get_s_time() with the max function, compared
to a version that goes to the hardware each time.
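For reference, by "the max function" I mean a monotonic wrapper along
these lines. This is only a sketch; stime_to_counter() and last_count
are illustrative names, not the actual Xen symbols:

    #include <stdint.h>

    extern uint64_t get_s_time(void);              /* ns since boot */
    extern uint64_t stime_to_counter(uint64_t ns); /* ns -> HPET ticks */

    static volatile uint64_t last_count;           /* last value returned */

    uint64_t read_64_main_counter(void)
    {
        uint64_t now = stime_to_counter(get_s_time());
        uint64_t last, ret;

        /* Never return a value smaller than one already handed out. */
        do {
            last = last_count;
            ret  = (now > last) ? now : last;
        } while (__sync_val_compare_and_swap(&last_count, last, ret) != last);

        return ret;
    }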
There are two histograms, each with 100 buckets, and each bucket is 64
cycles wide. There are 1991 cycles per usec on this box, so a bucket is
roughly 32 ns. Bucket 99 is the overflow bucket: it holds all events
where overhead >= (99*64) cycles.
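The bucketing itself is just the measured cycle delta shifted down by 6
(64 cycles per bucket) and clamped to bucket 99. Roughly like this
(illustrative code, not the instrumentation I actually ran):

    #include <stdint.h>

    #define NBUCKETS     100
    #define BUCKET_SHIFT 6              /* 64 cycles per bucket */

    extern uint64_t read_64_main_counter(void);

    static uint64_t histogram[NBUCKETS];

    static inline uint64_t rdtsc_cycles(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
    }

    static void sample_overhead(void)
    {
        uint64_t t0 = rdtsc_cycles();
        (void)read_64_main_counter();   /* the call being measured */
        uint64_t cycles = rdtsc_cycles() - t0;

        unsigned bucket = cycles >> BUCKET_SHIFT;
        if (bucket >= NBUCKETS)
            bucket = NBUCKETS - 1;      /* bucket 99 holds the overflow */
        histogram[bucket]++;
    }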
Layered on stime, the overhead is probably lower on average.
Both histograms are bi-modal, but the going-to-the-hardware one has a
much stronger second mode. As we have discussed, the cost of going to
the hardware can vary quite a bit from platform to platform.
I optimized the code around read_64_main_counter() over stime quite a
bit, but I'm sure there is room for improvement.
-Dave
read_64_main_counter() on stime:
(VMM) cycles per bucket 64
(VMM)
(VMM) 0: 0 78795 148271 21173 15902 47704 89195 121962
(VMM) 8: 83632 51848 17531 12987 10976 8816 9120 8608
(VMM) 16: 5685 3972 3783 2518 1052 710 608 469
(VMM) 24: 277 159 83 46 34 23 19 16
(VMM) 32: 9 6 7 3 4 8 5 6
(VMM) 40: 9 7 14 13 17 25 22 29
(VMM) 48: 25 19 35 27 30 26 21 23
(VMM) 56: 17 24 12 27 22 18 10 22
(VMM) 64: 19 16 16 16 28 18 23 16
(VMM) 72: 22 22 12 14 21 19 17 19
(VMM) 80: 18 14 10 14 11 12 8 18
(VMM) 88: 16 10 17 14 10 8 11 11
(VMM) 96: 10 10 0 175
read_64_main_counter() going to the hardware:
(VMM) cycles per bucket 64
(VMM)
(VMM) 0: 92529 148423 27850 12532 28042 43336 60516 59011
(VMM) 8: 36895 14043 8162 6857 7794 7401 5099 2986
(VMM) 16: 1636 1066 796 592 459 409 314 248
(VMM) 24: 206 195 138 97 71 45 35 34
(VMM) 32: 33 36 40 40 25 26 25 26
(VMM) 40: 37 23 18 30 27 30 34 44
(VMM) 48: 38 19 25 23 23 25 21 27
(VMM) 56: 28 24 43 80 220 324 568 599
(VMM) 64: 610 565 611 699 690 846 874 788
(VMM) 72: 703 542 556 613 605 603 559 500
(VMM) 80: 485 493 512 578 561 594 575 614
(VMM) 88: 759 851 895 856 807 770 719 958
(VMM) 96: 1127 1263 0 18219