
Re: [Xen-devel] 4.2.1: Poor write performance for DomU.



On 20/02/13 03:10, Steven Haigh wrote:
> Hi guys,
> 
> Firstly, please CC me in to any replies as I'm not a subscriber these days.
> 
> I've been trying to debug a problem with Xen 4.2.1 where I am unable to 
> achieve more than ~50MB/sec sustained sequential write to a disk. The 
> DomU is configured as such:

Since you mention 4.2.1 explicitly, is this a performance regression
from previous versions? (4.2.0 or the 4.1 branch)

> name            = "zeus.vm"
> memory          = 1024
> vcpus           = 2
> cpus            = "1-3"
> disk            = [ 'phy:/dev/RAID1/zeus.vm,xvda,w', 'phy:/dev/vg_raid6/fileshare,xvdb,w' ]
> vif             = [ "mac=02:16:36:35:35:09, bridge=br203, vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ]
> bootloader      = "pygrub"
> 
> on_poweroff     = 'destroy'
> on_reboot       = 'restart'
> on_crash        = 'restart'
> 
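Just to rule out the obvious first: with that disk line both LVs should be
exported as PV vbds (blkback in dom0, blkfront in the guest). Assuming the xm
toolstack you show further down, something like the following in dom0 should
confirm the devices really are attached through blkback:

# list the virtual block devices attached to the guest
xm block-list zeus.vm
# and check that the PV backend is actually loaded in dom0
lsmod | grep blkback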
> I have tested the underlying LVM config by mounting 
> /dev/vg_raid6/fileshare from within the Dom0 and running bonnie++ as a 
> benchmark:
> 
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> xenhost.lan.crc. 2G   667  96 186976  21 80430  14   956  95 290591  26 373.7   8
> Latency             26416us     212ms     168ms   35494us   35989us   83759us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> xenhost.lan.crc.id. -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>                files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                   16 14901  32 +++++ +++ 19672  39 15307  34 +++++ +++ 18158  37
> Latency             17838us     141us     298us     365us     133us     296us
> 
> ~186MB/sec write, ~290MB/sec read. Awesome.
> 
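Out of interest, can you confirm the exact bonnie++ invocation, so we know both
runs are directly comparable? Judging by the output (2G data size, 16*1024
small files) I'd assume something along these lines:

# sequential block write/read plus create/delete phases
# -s 2G: data file size, roughly 2x RAM so the page cache can't absorb the writes
# -n 16: 16*1024 files for the create/delete tests
bonnie++ -d /mnt/fileshare -s 2G -n 16 -u root

(with -d pointing at wherever the LV was mounted for the dom0 run)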
> I then started a single DomU which gets passed /dev/vg_raid6/fileshare 
> through as xvdb. It is then mounted in /mnt/fileshare/. I then ran 
> bonnie++ again in the DomU:
> 
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> zeus.crc.id.au   2G   658  96 50618   8 42398  10  1138  99 267568  30 494.9  11
> Latency             22959us     226ms     311ms   14617us   41816us   72814us
> Version  1.96       ------Sequential Create------ --------Random Create--------
> zeus.crc.id.au      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>                files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                   16 21749  59 +++++ +++ 31089  73 23283  64 +++++ +++ 31114  75
> Latency             18989us     164us     928us     480us      26us      87us
> 
> ~50MB/sec write, ~267MB/sec read. Not so awesome.

We are currently working on improving the speed of the PV block drivers. I
will look into this difference between the read and write speeds, but I would
guess it is due to the size of the requests/ring.
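If you want to check the request-size theory yourself: with the classic blkif
protocol each request carries at most 11 segments (44KiB), and that limit shows
up in what blkfront advertises to the guest block layer. A rough sketch of what
I would look at inside the DomU (the scratch file name below is just an example):

# maximum request size the guest block layer will issue to xvdb, in KiB;
# with the current ring protocol this is typically capped at 44
cat /sys/block/xvdb/queue/max_sectors_kb
cat /sys/block/xvdb/queue/max_hw_sectors_kb

# sequential write that bypasses the guest page cache, so it measures the
# blkfront -> blkback path rather than caching behaviour
dd if=/dev/zero of=/mnt/fileshare/ddtest bs=1M count=2048 oflag=direct

If the O_DIRECT dd shows the same ~50MB/s, you are most likely hitting the
request/ring limits rather than anything filesystem related.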

> 
> /dev/vg_raid6/fileshare exists as an LV on /dev/md2:
> 
> # lvdisplay vg_raid6/fileshare
>    --- Logical volume ---
>    LV Path                /dev/vg_raid6/fileshare
>    LV Name                fileshare
>    VG Name                vg_raid6
>    LV UUID                cwC0yK-Xr56-WB5v-10bw-3AZT-pYj0-piWett
>    LV Write Access        read/write
>    LV Creation host, time xenhost.lan.crc.id.au, 2013-02-18 20:59:40 +1100
>    LV Status              available
>    # open                 1
>    LV Size                2.50 TiB
>    Current LE             655360
>    Segments               1
>    Allocation             inherit
>    Read ahead sectors     auto
>    - currently set to     1024
>    Block device           253:5
> 
> 
> md2 : active raid6 sdd[4] sdc[0] sde[1] sdf[5]
>        3907026688 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
> 
> Here's a quick output of 'xm info' - although it's running its full VM load 
> now:
> # xm info
> host                   : xenhost.lan.crc.id.au
> release                : 3.7.9-1.el6xen.x86_64
> version                : #1 SMP Mon Feb 18 14:46:35 EST 2013
> machine                : x86_64
> nr_cpus                : 4
> nr_nodes               : 1
> cores_per_socket       : 4
> threads_per_core       : 1
> cpu_mhz                : 3303
> hw_caps                : bfebfbff:28100800:00000000:00003f40:179ae3bf:00000000:00000001:00000000
> virt_caps              : hvm
> total_memory           : 8116
> free_memory            : 1346
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 2
> xen_extra              : .1
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : unavailable
> xen_commandline        : dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin
> cc_compiler            : gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4)
> cc_compile_by          : mockbuild
> cc_compile_domain      : crc.id.au
> cc_compile_date        : Sat Feb 16 19:16:38 EST 2013
> xend_config_format     : 4
> 
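One other thing from the output above: dom0 is restricted to a single pinned
vcpu (dom0_max_vcpus=1 dom0_vcpus_pin), and all the blkback work is done in
dom0. It would be worth watching dom0's CPU usage while the guest benchmark is
running, to rule out dom0 simply being CPU bound; a simple way (batch mode, one
sample per second):

# 30 one-second samples of per-domain CPU usage while the DomU test runs
xentop -b -d 1 -i 30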
> In a nutshell, does anyone know *why* I am only able to get ~50MB/sec 
> sequential writes to the DomU? It certainly isn't a problem getting 
> normal speeds to the LV while mounted in the Dom0.
> 
> All OSes are Scientific Linux 6.3. The Dom0 runs packages from my 
> kernel-xen repo (http://au1.mirror.crc.id.au/repo/el6/x86_64/). The DomU 
> is completely stock packages.
> 
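Another quick way to test the request-size theory without changing anything:
run the benchmark in the guest and, at the same time, watch the request sizes
arriving at the LV in dom0. lvdisplay above shows the LV as block device 253:5,
i.e. dm-5:

# extended device stats every 2 seconds; watch avgrq-sz (in sectors) and wMB/s
# for dm-5 during the DomU run, then compare with the same test run in dom0
iostat -xm 2

If the average request size going to dm-5 is much smaller during the DomU run
than during the dom0 run, that points straight at the blkfront/blkback request
size rather than at the RAID or LVM layers.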


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

