xen-users
RE: [Xen-users] RESOLVED: bad I/O performance with HP Smart Array RAID
>
> Ok, sorry for having thought this might be a Xen issue.
>
> Several Smart Array controllers are known for their really bad disk
> write performance. The controller on the machine in question is an
> E200i. I have seen many complaints from users that around 10-12 MB/s
> write throughput is about what one can expect from this controller,
> and even an unofficial quote from HP support classifying this problem
> as a "feature".
>
> So far there seems to be no real solution - a few users have been
> able to improve performance by enabling the drive write caches
> (which is of course dangerous). My controller does not seem to
> support this option at all.
>
> The only real solution seems to be to get rid of the controller and
> either use the disks on a normal SATA controller with software RAID
> (where you can expect 50-100 MB/s - a real performance boost) or buy
> a less trashy RAID controller.
>
> I do not know yet what we are going to do. What I do know so far is
> that we must avoid many HP so-called servers whose I/O performance is
> that of a desktop PC from the mid-90s.
>
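(For anyone who wants to try the drive write cache workaround mentioned above: on disks the kernel exposes directly - e.g. once they are on a plain SATA controller - hdparm can toggle it. Smart Array controllers usually hide the physical disks from the OS, so this likely only applies after ditching the E200. /dev/sda is a placeholder for your disk.)

```shell
# Query the current on-disk write cache setting (needs root + hdparm).
hdparm -W /dev/sda
# Enable it - faster writes, but anything sitting in the cache is lost
# on power failure, so don't do this without a UPS on a box you care about.
hdparm -W1 /dev/sda
```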
I had an E200 RAID controller on SATA disks with RAID1, and the
performance was horrible. I can quantify 'horrible', though:
. hdparm -tT showed some impressive figures (comparable to raw disk speed)
. under light load it worked pretty well
. under heavy load things slowed down to a crawl. Literally.
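(For context on the hdparm numbers: -tT only measures sequential reads, which the E200 handles fine - it says nothing about writes, which is exactly where it falls over. Something like the following, with /dev/sda as a placeholder:)

```shell
# -T: cached reads (really memory/bus bandwidth)
# -t: buffered sequential reads from the device itself
# Needs root; run on an otherwise idle machine for meaningful numbers.
hdparm -tT /dev/sda
```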
My workload was a Symantec Backup Exec restore to a DomU from a
physical machine across a 1G network. Performance started off around
800 Mbytes/minute and quickly dropped to around 100 Mbytes/minute. Then I
switched from file: to tap:aio and it dropped to around 12 Mbytes/minute.
It was as if the more data I tried to push through it, the slower it went.
I thought I had found the problem when I noticed that the network
adapter and the RAID controller were sharing an interrupt - both devices
were going flat out during the restore - but changing that made little
difference.
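(If anyone wants to check for the same thing on their own box, shared IRQs are visible in /proc/interrupts - the device names below are just examples:)

```shell
# One line per IRQ; the driver names at the end show who uses it.
# More than one name on a line means those devices share the interrupt.
cat /proc/interrupts
# e.g.  16:  1234567  IO-APIC-fasteoi  eth0, cciss0   <- shared IRQ
```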
Eventually I put the disks back onto the onboard SATA (it's an ML115)
and did software RAID1, and restore performance went back up to
600-800 Mbytes/minute.
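(The software RAID1 was plain Linux md; roughly like this - the device names and the choice of ext3 are illustrative, not my exact commands:)

```shell
# Mirror two partitions on the onboard SATA ports into /dev/md0.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
cat /proc/mdstat     # watch the initial resync progress
mkfs.ext3 /dev/md0   # then format and use like any block device
```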
Write cache might have made a difference (I didn't have any), but I
wasn't about to fork out a few hundred dollars for what is essentially a
dev/test box.
The E200 has a much, much shorter I/O queue than some of its bigger
siblings. I suspect the Linux driver just isn't tuned for it properly.
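(You can see what queue depth the kernel is actually using per device in sysfs - the path below assumes a disk that shows up as sda; cciss logical drives appear under a different name:)

```shell
# Number of commands the SCSI layer will keep outstanding on the device.
cat /sys/block/sda/device/queue_depth
```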
We have other Xen servers with the higher end HP RAID controllers with
battery backed write cache and they work flawlessly.
James
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users