All right, I just finished upgrading my i386 Fedora 8 system to x86_64,
thanks to the new Core 2 Duo processor. (Actually, it was a new install,
since apparently there is no clean upgrade path from one arch to another.)
After restoring my old configuration, I was pleased to see that my system
behaved exactly the same as before, with no extra quirks. If it weren't for
'yum update' offering both arches, I wouldn't be able to tell the difference,
though I haven't explored multimedia much yet. Even my kernel compiles are
simpler, since I no longer have to specify 'rpmbuild --target=i686' explicitly;
there are no subarches to worry about.
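(For the record, the only difference in the kernel builds is the --target flag;
the spec file path below is just an example, yours may live elsewhere:

    # old 32-bit dom0: force the i686 subarch
    rpmbuild -bb --target=i686 ~/rpmbuild/SPECS/kernel.spec

    # new x86_64 dom0: the default target is already correct
    rpmbuild -bb ~/rpmbuild/SPECS/kernel.spec
)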
So let's see whether it's any faster. Note - I'm testing the same gplpv
version (0.8.9), just before and after the processor & software upgrade.
Current configuration:
Equipment: Core 2 Duo 5600, 1.83 GHz per core, 2 MB cache, SATA drive configured
for UDMA/100
System: Fedora 8 64-bit, xen 3.1.2, xen.gz 3.1.3, dom0 kernel 2.6.21
Tested hvm: XP Pro SP2 (2002) 32-bit w/512 MB RAM, file-backed vbd on local disk,
tested w/ iometer 2006-07-27 (1 GB \iobw.tst, 5 min run) & iperf 2.0.2 (1 min run)
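(The file-backed vbd is nothing special - roughly the usual disk line in the
hvm config file, with the image path below being only a placeholder:

    disk = [ 'file:/var/lib/xen/images/winxp.img,ioemu:hda,w' ]
)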
Note: I'm no longer using iperf 1.7.0, since I discovered that iperf 2.0.2
comes with Fedora 8. First, the old iometer numbers from the old 32-bit
processor, with the domu & dom0 worker threads running at the same time:
pattern 4k, 50% read, 0% random
dynamo on? | io/s | MB/s | Avg. i/o time (ms) | max i/o time (ms) | %CPU
domu w/gplpv| 331.5 | 1.29 | 232.29 | 0 | 35.63
domu w/qemu | 166.1 | 0.65 | 9.67 | 0 | 35.09
dom0 w/4Gb | 1088.3 | 4.25 | 0.92 | 487.4 | 0
dom0 w/4Gb | 1118.0 | 4.37 | 0.89 | 181.3 | 0
(2nd dom0 row taken while the domu was booted w/o the /gplpv option)
pattern 32k, 50% read, 0% random
domu w/gplpv| 166.0 | 5.19 | 7.98 | 0 | 29.85
domu w/qemu | 100.4 | 3.14 | 21.09 | 0 | 35.93
dom0 w/4Gb | 61.8 | 1.93 | 16.14 | 1492.3 | 0
dom0 w/4Gb | 104.9 | 3.28 | 9.54 | 906.6 | 0
And now the new numbers:
pattern 4k, 50% read, 0% random
dynamo on? | io/s | MB/s | Avg. i/o time (ms) | max i/o time (ms) | %CPU
domu w/gplpv| 417.5 | 1.63 | 7.39 | 0 | 27.29
domu w/qemu | 155.4 | 0.60 | -4.60 | 0 | 29.23
dom0 w/2Gb | 891.6 | 3.48 | 1.12 | 574.4 | 0
dom0 w/2Gb | 1033.1 | 4.04 | 0.97 | 242.4 | 0
(2nd dom0 row taken while the domu was booted w/o the /gplpv option)
pattern 32k, 50% read, 0% random
domu w/gplpv| 228.6 | 7.15 | -4.65 | 0 | 21.64
domu w/qemu | 120.4 | 3.76 | 83.63 | 0 | 28.50
dom0 w/2Gb | 42.0 | 1.31 | 23.80 | 2084.7 | 0
dom0 w/2Gb | 88.3 | 2.76 | 11.32 | 1267.3 | 0
There are significant improvements in gplpv io/s, MB/s, avg. i/o time,
and %CPU. There are modest decreases in dom0 performance, and modest
improvements for qemu.
Now running one domain's thread at a time, with any other domains running
the 'idle' task. First the old numbers (with the new processor, but the
32-bit dom0):
gplpv 0.8.9:
4k pattern | 1026.6 | 4.01 | 39.37 | 0 | 49.70
32k pattern | 311.1 | 9.72 | 45.33 | 0 | 26.21
dom0:
4k pattern | 1376.7 | 5.38 | 0.73 | 365.7 | 0
32k pattern | 165.9 | 5.19 | 6.02 | 226.6 | 0
and now the new:
gplpv 0.8.9:
4k pattern | 1170.0 | 4.57 | 7.16 | 0 | 41.34
32k pattern | 287.0 | 8.97 | -30.85 | 0 | 23.39
dom0:
4k pattern | 1376.7 | 5.38 | 0.73 | 365.7 | 0
32k pattern | 1484.3 | 5.80 | 0.67 | 314.4 | 0
The differences are insignificant for single-thread execution. Since the
underlying disk has not changed, just the processor and software, this is not
unexpected. However, it was nice to see multi-thread performance improve
(which is more software-dependent), even if it was just for gplpv.
As far as 'iperf -c dom0-name -t 60' goes, the old numbers (for 1.7.0) are:
realtek: 10 Mb/s
gplpv (old processor): 25 Mb/s
gplpv (new processor): 32 Mb/s
and the new numbers (for 2.0.2) are:
realtek: 0.5 Mb/s
gplpv (new processor): 4 Mb/s
Huh?!? Ok, let's try iperf 1.7.0 again:
realtek: 9.1 Mb/s
gplpv (new processor): 33.6 Mb/s
That's interesting - guess I'll be sticking with 1.7.0 after all! (Btw, by
adding the -r option, I get nearly identical write speeds from dom0 to the
gplpv domu, but 2-6x faster for qemu.)
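(For anyone who wants to repeat the iperf runs, it's just the stock
server/client pair - the host name below is a placeholder:

    # in dom0
    iperf -s

    # in the domu, 60 second run; -r adds the reverse (dom0 -> domu) direction
    iperf -c dom0-name -t 60 -r
)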
I'll look at 0.9.0 later, and if there are significant differences from 0.8.9,
I'll report to the list.