WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-users

RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow

To: "DOGUET Emmanuel" <Emmanuel.DOGUET@xxxxxxxx>, "Fajar A. Nugraha" <fajar@xxxxxxxxx>
Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
From: "Joris Dobbelsteen" <Joris@xxxxxxxxxxxxxxxxxxxxx>
Date: Thu, 26 Feb 2009 13:42:54 -0000
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 26 Feb 2009 05:43:55 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <7309E5BCEDC4DC4BA820EF9497269EAD0461B244@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902120602t1be864acm684fbe6b8f0f18aa@xxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B246@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902122011x541c63eewe33fe0ef922cd0c9@xxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B24D@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B24F@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902132122n409d71ceg2c19e3ec70f52f45@xxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B292@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B295@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B2A3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmOZD92bBQc2PkGRXO88dTpflQeLgIEGQ+wAAhDz9AAMFhFIAAtPW8V
Thread-topic: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
I don't have any experience with tuning a system like this, so I can only make a few deductions:
 
1) Hardware RAID does not scale with the number of spindles, which is rather strange. Performance is also rather low, which makes me wonder whether this is also the case without Xen (plain Linux only). Either that, or your configuration doesn't work out (cache might indeed help).
2) Software RAID scales with the number of spindles, which seems OK.
3) domU speeds bear no consistent relation to the speeds attained in dom0...
 
The question is why...
 
The only reasonable way to figure this out seems to be doing traces in the kernel. You need an expert who has a clue about what exactly is going on under the hood, especially since lots and lots of software layers are stacked. The results point to some kind of feature interaction that the software does not like. What Xen adds to the system is communication between domains, using some kind of buffering and probably copying. On top of that sits an entire Linux I/O scheduler and more.
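
A first step short of full kernel tracing could be to watch the dom0 block devices while the domU benchmark runs, e.g. with iostat or blktrace (just a sketch, assuming the sysstat and blktrace packages are installed; the cciss device name is only an example):

   iostat -x 5
       (per-device utilisation, queue size and await, run in dom0)
   blktrace -d /dev/cciss/c0d0 -o - | blkparse -i -
       (per-request trace of the backend device)

If the physical device sits mostly idle while the domU is writing, the time is probably being lost somewhere in the frontend/backend path rather than on the disks.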
 
If you can spend the time, it might be interesting to see whether Ubuntu 8.04 LTS or Debian 5.0 does any better, as these have a newer (or at least different) kernel than RHEL. There was recently an announcement of a Debian 5.0 based Xen LiveCD that might work(tm). You could also try with a different domU first, which is probably a lot easier. This way we can maybe isolate the problem domain a bit more.
 
- Joris
 

From: DOGUET Emmanuel [mailto:Emmanuel.DOGUET@xxxxxxxx]
Sent: Wed 25-Feb-2009 18:03
To: DOGUET Emmanuel; Fajar A. Nugraha
Cc: xen-users; Joris Dobbelsteen
Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow


I have finished my tests on 3 servers. On each one we lose some bandwidth with Xen. On our 10 platforms ... we always lose some bandwidth; I think that's normal. Perhaps it's just the benchmark method that differs?

I have made a benchmark (write only) comparing hardware and software RAID under Xen (see attachment).

Linux software RAID is always faster than the HP RAID. I must also try the "512MB + Cache Write" option for the HP RAID.

So my problems seem to be here.
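
If it is a Smart Array controller, the hpacucli utility can at least confirm whether the battery-backed write cache is installed and enabled before re-running the benchmark (just a suggestion; the exact output fields vary by model and firmware):

   hpacucli ctrl all show config detail
       (look for the "Cache Board Present", "Total Cache Size" and "Cache Status" lines)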


-------------------------
HP DL 380
Quad core
-------------------------
Test: dd if=/dev/zero of=TEST bs=4k count=1250000



              Hardware     Hardware     Software     Software
               RAID 5       RAID 5       RAID 5       RAID 5
              4 x 146G     8 x 146G     4 x 146G     8 x 146G
dom0
(1024MB,
 1 CPU)       32 MB/s      22 MB/s      88 MB/s (*)  144 MB/s (*)

domU
(512MB,
 1 CPU)        8 MB/s       5 MB/s      34 MB/s      31 MB/s

domU
(4096MB,
 2 CPU)       --             7 MB/s     51 MB/s      35 MB/s



*: I don't understand this difference.
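
A side note on the method: without a sync at the end, dd reports its speed before the page cache is flushed, so these figures can be somewhat inflated. Re-running with one of the standard GNU dd options below (just a suggestion) would make the dom0 and domU numbers easier to compare:

Test: dd if=/dev/zero of=TEST bs=4k count=1250000 conv=fdatasync
      dd if=/dev/zero of=TEST bs=4k count=1250000 oflag=direct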


Does this performance seem good to you?




         Best regards.




>-----Original Message-----
>From: DOGUET Emmanuel
>Sent: Tuesday 24 February 2009 17:50
>To: DOGUET Emmanuel; Fajar A. Nugraha
>Cc: xen-users; Joris Dobbelsteen
>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs
>native performance: Xen I/O is definitely super super super slow
>
>To summarize:
>
>on RAID 0
>
>       dom0: 80 MB/s   domU: 56 MB/s           Loss: 30%
>
>on RAID 1
>
>       dom0: 80 MB/s   domU: 55 MB/s           Loss: 32%
>
>on RAID 5:
>
>       dom0: 30 MB/s   domU: 9 MB/s            Loss: 70%
>
>
>
>So the loss seems to be "exponential"?
>
>
>
>>-----Original Message-----
>>From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
>>[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On behalf of
>>DOGUET Emmanuel
>>Sent: Tuesday 24 February 2009 14:22
>>To: Fajar A. Nugraha
>>Cc: xen-users; Joris Dobbelsteen
>>Subject: RE: [Xen-users] Re: Xen Disk I/O performance vs
>>native performance: Xen I/O is definitely super super super slow
>>
>>
>>I have made another test on another server (DL 380)
>>
>>And same thing!
>>
>>I always use this test:
>>
>>dd if=/dev/zero of=TEST bs=4k count=1250000
>>
>>(be careful with memory cache)
>>
>>
>>TEST WITH 2 RAID 5 (including system on RAID 5, 3 x 146G + 3 x 146G)
>>---------------------------------------------------------------
>>
>> dom0: 1 GB, 1 CPU, 2 RAID 5
>>
>>        rootvg (c0d0p1): 4596207616 bytes (4.6 GB) copied, 158.284 seconds, 29.0 MB/s
>>        datavg (c0d1p1): 5120000000 bytes (5.1 GB) copied, 155.414 seconds, 32.9 MB/s
>>
>>domU: 512 MB, 1 CPU, on system LVM/RAID 5 (rootvg)
>>
>>        5120000000 bytes (5.1 GB) copied, 576.923 seconds, 8.9 MB/s
>>
>>domU: 512 MB, 1 CPU, on data LVM/RAID 5 (datavg)
>>
>>        5120000000 bytes (5.1 GB) copied, 582.611 seconds, 8.8 MB/s
>>
>>domU: 512 MB, 1 CPU, on the same RAID without LVM
>>
>>        5120000000 bytes (5.1 GB) copied, 808.957 seconds, 6.3 MB/s
>>
>>
>>TEST WITH RAID 0 (dom0 system on RAID 1)
>>---------------------------------------
>>
>>dom0   1 GB RAM, 1 CPU
>>
>>        on system (RAID 1):
>>        3955544064 bytes (4.0 GB) copied, 57.4314 seconds, 68.9 MB/s
>>
>>        on direct HD (RAID 0 on cciss), no LVM
>>        5120000000 bytes (5.1 GB) copied, 62.5497 seconds, 81.9 MB/s
>>
>>dom0   4 GB RAM, 4 CPU
>>
>>
>>
>>domU:  4 GB, 4 CPU
>>
>>        on direct HD (RAID 0), no LVM.
>>        5120000000 bytes (5.1 GB) copied, 51.2684 seconds, 99.9 MB/s
>>
>>
>>domU: 4 GB, 4 CPU, same HD but ONE LVM on it
>>
>>        5120000000 bytes (5.1 GB) copied, 51.5937 seconds, 99.2 MB/s
>>
>>
>>TEST with only ONE RAID 5 (6 x 146G)
>>------------------------------------
>>
>>dom0 : 1024MB - 1 CPU (RHEL 5.3)
>>
>>        5120000000 bytes (5.1 GB) copied, 231.113 seconds, 22.2 MB/s
>>
>>
>>512MB - 1 CPU
>>        5120000000 bytes (5.1 GB) copied, 1039.42 seconds, 4.9 MB/s
>>
>>
>>512MB - 1 CPU - ONLY 1 VBD [LVM] (root, no swap)
>>
>>        (too slow ..stopped :P)
>>        4035112960 bytes (4.0 GB) copied, 702.883 seconds, 5.7 MB/s
>>
>>512MB - 1 CPU - On a file (root, no swap)
>>
>>        1822666752 bytes (1.8 GB) copied, 2753.91 seconds, 662 kB/s
>>
>>4GB - 2 CPU
>>        5120000000 bytes (5.1 GB) copied, 698.681 seconds, 7.3 MB/s
>>
>>
>>
>>
>>>-----Original Message-----
>>>From: Fajar A. Nugraha [mailto:fajar@xxxxxxxxx]
>>>Sent: Saturday 14 February 2009 06:23
>>>To: DOGUET Emmanuel
>>>Cc: xen-users
>>>Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native
>>>performance: Xen I/O is definitely super super super slow
>>>
>>>2009/2/13 DOGUET Emmanuel <Emmanuel.DOGUET@xxxxxxxx>:
>>>>
>>>>
>>>> I have mounted the domU partition on dom0 for testing and it's OK.
>>>> But the same partition on the domU side is slow.
>>>>
>>>> Strange.
>>>
>>>Strange indeed. At least that ruled out hardware problems :)
>>>Could you try with a "simple" domU?
>>>- 1 vcpu
>>>- 512 M memory
>>>- only one vbd
>>>
>>>this should isolate whether or not the problem is on your particular
>>>domU (e.g. some config parameter actually makes domU slower).
>>>
>>>Your config file should have only a few lines, like this:
>>>
>>>memory = "512"
>>>vcpus=1
>>>disk = ['phy:/dev/rootvg/bdd-root,xvda1,w' ]
>>>vif = [ "mac=00:22:64:A1:56:BF,bridge=xenbr0" ]
>>>vfb =['type=vnc']
>>>bootloader="/usr/bin/pygrub"
>>>
>>>Regards,
>>>
>>>Fajar
>>>
>>
>>_______________________________________________
>>Xen-users mailing list
>>Xen-users@xxxxxxxxxxxxxxxxxxx
>>http://lists.xensource.com/xen-users
>>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users