Hi,
What are the mount options (/etc/fstab) on the DomU?
In the xen-tools "partitions.d" example, "sync" is used, which gives poor
performance.
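For comparison, a DomU /etc/fstab without "sync" might look roughly like
this (device names and filesystem type are assumptions, adjust to your layout):

# hypothetical DomU fstab -- "defaults" uses asynchronous writes,
# whereas "sync" forces every write to hit the disk immediately
/dev/xvda1   /      ext3   defaults,errors=remount-ro   0   1
/dev/xvdb    none   swap   sw                           0   0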
Olivier
Stefan Below wrote:
 
This is the domU config file:
kernel = "/usr/lib/xen/boot/pv-grub-x86_64.gz"
memory      = '2048'
vcpus=2
disk        = [
                 'phy:/dev/vg0/tsIbex-disk,xvda,w',
                 'phy:/dev/vg0/tsIbex-swap,xvdb,w',
             ]
extra = "(hd0)/boot/grub/menu.lst"
name        = 'tsIbex'
vif         = [ 'ip=192.xxx.xxx.xxx,mac=xx:xx:xx:xx:xx' ]
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'
Dom0 Hardware and configuration:
Q6600 with 8GB RAM
sw RAID10,f2 layout with 4 disks (I know, hardware RAID is
better...) (storage for DomU via LVM)
sw RAID1 with 4 disks for the Dom0 root partition
Debian Lenny, Xen 3.3.1, with kernel 2.6.26-1-xen-amd64.
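(For reference, the md/LVM stack can be sanity-checked from Dom0 roughly
like this; the md device name is an assumption, the VG name is taken from
the config above:)

cat /proc/mdstat              # RAID levels, f2 layout and member disks
mdadm --detail /dev/md0       # state and resync status of the RAID10 array
lvs vg0                       # LVM volumes backing the DomU (tsIbex-disk, tsIbex-swap)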
Anything else I should provide?
Thanks,
Stefan
 Could use some more detail - DomU config and a description of the 
storage and hardware would be good.
Best Regards
Nathan Eisenberg
Sr. Systems Administrator
Atlas Networks, LLC
support@xxxxxxxxxxxxxxxx
http://support.atlasnetworks.us/portal
-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Stefan Below 
Sent: Tuesday, June 16, 2009 12:50 PM
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] DomU IO issue
Hello,
I have a big I/O issue with my PV guest (Ubuntu 9.04). When I write
or copy large files, the disk performance is very slow.
This is my iostat output. It looks like the data is not being read
and written at the same time.
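(Columns such as rMB/s and wMB/s indicate extended statistics in megabytes;
the output below was presumably gathered with something along these lines:)

iostat -x -m 1    # extended per-device statistics, throughput in MB/s, sampled every second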
File copy in DomU (PV guest, LVM):
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 1718,00 2816,00     6,71    11,00     8,00     0,00    0,00   0,00   0,00
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,00    0,00    1,00    0,00    0,00   99,00
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 2232,67    0,00     8,72     0,00     8,00     0,00    0,00   0,00   0,00
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,00    0,00    1,98    0,00    0,99   97,03
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 1079,00 4959,00     4,21    19,37     8,00     0,00    0,00   0,00   0,00
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,00    0,00    1,00    0,00    0,00   99,00
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 2866,00    0,00    11,20     0,00     8,00     0,00    0,00   0,00   0,00
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,99    0,00    0,99    1,98    0,00   96,04
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 2719,00    0,00    10,62     0,00     8,00     0,00    0,00   0,00   0,00
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,00    0,00    0,00    0,00    0,00  100,00
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 1264,00    0,00     4,94     0,00     8,00     0,00    0,00   0,00   0,00
File copy from Dom0 shows normal behavior:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,99    0,00   13,86   78,22    0,00    6,93
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 13504,00 13325,00    52,75    52,05     8,00     0,00    0,00   0,00   0,00
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,00    0,00   10,00   81,00    0,00    9,00
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 11155,45 11163,37    43,58    43,61     8,00     0,00    0,00   0,00   0,00
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0,00    0,00   15,84   76,24    0,99    6,93
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
md0               0,00     0,00 12226,00 12302,00    47,76    48,05     8,00     0,00    0,00   0,00   0,00
I am running Xen 3.3.1 on a Debian Lenny Dom0, kernel 2.6.26-1-xen-amd64.
Thanks a lot
Stefan
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users