Re: [Xen-devel] 4.2.1: Poor write performance for DomU.


  • To: xen-devel@xxxxxxxxxxxxx
  • From: Steven Haigh <netwiz@xxxxxxxxx>
  • Date: Wed, 13 Mar 2013 01:08:21 +1100
  • Delivery-date: Tue, 12 Mar 2013 14:09:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

On 13/03/13 00:04, Konrad Rzeszutek Wilk wrote:
> So still filesystem. Fio can do it on a block level.
>
> What does 'xenstore-ls' show you and 'losetup -a'? I am really
> curious as to whether that file you are providing to the guest as
> a disk is being handled via 'loop' or via 'QEMU'.


>> I've picked out what I believe is the most relevant from xenstore-ls
>> that belongs to the DomU in question:

> Great.
> .. snip..
>>         params = "/dev/vg_raid6/fileshare"
>>         mode = "w"
>>         online = "1"
>>         frontend-id = "1"
>>         type = "phy"
>>         physical-device = "fd:5"
>>         hotplug-status = "connected"
>>         feature-flush-cache = "1"
>>         feature-discard = "0"
>>         feature-barrier = "1"
>>         feature-persistent = "1"
>>         sectors = "5368709120"
>>         info = "0"
>>         sector-size = "512"
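
To answer the loop-vs-QEMU question: type = "phy" with physical-device =
"fd:5" means blkback is bound directly to the block device - fd:5 is the
major:minor in hex, and 0xfd = 253 lines up with device-mapper's usual
dynamic major, i.e. the LVM volume rather than a loop device or QEMU. A
quick way to double-check (a sketch, assuming the same paths as above):

        # decode the hex major:minor that blkback recorded (fd:5)
        printf '%d:%d\n' 0xfd 0x5      # prints 253:5
        # confirm it matches the LV's device numbers
        ls -lH /dev/vg_raid6/fileshare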

> OK, so the flow of data from the guest is:
>
>         bonnie++ -> FS -> xen-blkfront -> xen-blkback -> LVM -> RAID6 ->
>         multiple disks.
>
> Any way you can restructure this to be:
>
>         fio -> xen-blkfront -> xen-blkback -> one disk from the raid.
>
> to see if the issue is in the "LVM -> RAID6" part or the "bonnie++ -> FS"
> part? Is the CPU load quite high when you do these writes?

Maybe I'm missing something, but running this directly from the Dom0 gives a data path of:

        bonnie++ -> FS -> LVM -> RAID6

These figures came in at well over 200MB/sec read and well over 100MB/sec write.

This only takes xen-blkfront and xen-blkback out of the path - which I thought was the aim?

Or is the point of this to make sure that we can replicate it with a single disk, and that it isn't some weird interaction between blkfront/blkback and the LVM/RAID6?

CPU usage doesn't seem to be a limiting factor - I certainly don't see massive load while writing.
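
For reference, something like this is what I'd run for the raw-disk test
(a sketch only - /dev/xvdb is an illustrative guest device name, and
writing to it destroys whatever is on that disk):

        # sequential writes straight to the block device, no filesystem in the path
        fio --name=rawwrite --filename=/dev/xvdb --rw=write --bs=1M \
            --size=4G --direct=1 --ioengine=libaio --iodepth=8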


> What are the RAID6 disks you have? How many?

The RAID6 is made up of 4 x 2TB 7200RPM Seagate SATA drives - here's the smartctl info from one of them:

Model Family:     Seagate SV35
Device Model:     ST2000VX000-9YW164
Serial Number:    Z1E10QQJ
LU WWN Device Id: 5 000c50 04dd3a1f1
Firmware Version: CV13
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical

Then in /proc/mdstat:
md2 : active raid6 sdd[4] sdc[0] sdf[5] sde[1]
      3907026688 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
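
(One thing worth keeping in mind for the write numbers: with 4 disks in
RAID6 and a 128k chunk, a full data stripe is only two data chunks wide,
so any write smaller than that forces a read-modify-write of the parity:

        full data stripe = (4 disks - 2 parity) x 128k chunk = 256k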

I decided to use whole disks so that I don't run into alignment issues.

The VG is using 4MiB extents, so that should be fine too:
# vgdisplay vg_raid6
  --- Volume group ---
  VG Name               vg_raid6
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.64 TiB
  PE Size               4.00 MiB
  Total PE              953863
  Alloc PE / Size       688640 / 2.63 TiB
  Free  PE / Size       265223 / 1.01 TiB
  VG UUID               md7G8X-F2mT-JBQa-f5qm-TN4O-kOqs-KWHGR1
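
Quick arithmetic to back that up - the 4MiB extent size divides evenly
by both the 4KiB physical sector size and the 128KiB md chunk:

        # 4MiB PE vs the 4KiB physical sector and 128KiB md chunk
        echo $(( 4194304 % 4096 ))      # 0 - PE boundaries land on physical sectors
        echo $(( 4194304 % 131072 ))    # 0 - and on md chunk boundaries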

--
Steven Haigh

Email: netwiz@xxxxxxxxx
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

