I agree about the number of disks. Your idea of getting a bigger chassis is something we are looking at for a customer at the moment.
Our main Xen pool connects to EqualLogic arrays over iSCSI. Something I have been reading about recently is taking a large chassis with 8 or 10 disk bays, installing Xen on a pair of mirrored drives, and then installing Openfiler on the same machine. You can then present dynamic disks to your Xen dom0, share those disks with another dom0 should you want to, and add more disks in the future as you need more space. There are a few articles out there about doing this, but I haven’t personally tried it yet, although I have used Openfiler without issue before.
Obviously, if you already have the machine this might not work for you, but Openfiler is a cheap way of building a good-sized NAS/SAN with iSCSI capability.
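As a rough sketch of what the dom0 side of that setup looks like, assuming Openfiler is already exporting a LUN (the IP address and IQN below are hypothetical placeholders, not from this thread):

```shell
# Discover and log in to an iSCSI target exported by Openfiler,
# then carve the resulting block device into per-domU volumes.
# 192.168.1.50 and the IQN are placeholder values.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2010-09.com.example:storage.xenvols -p 192.168.1.50 --login

# The LUN shows up as a new SCSI disk (e.g. /dev/sdb); putting LVM
# on it lets each domU get its own logical volume.
pvcreate /dev/sdb
vgcreate xenvg /dev/sdb
lvcreate -L 20G -n domu1-disk xenvg
```

With the volume group in place, a domU config can then point its disk at phy:/dev/xenvg/domu1-disk.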
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jeff Sturm
Sent: 13 September 2010 18:21
To: admin@xxxxxxxxxxx; xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
Agreed with what’s said below. Traditional (Winchester) disk drives are incredibly slow relative to the rest of your computing environment (CPU, memory, network, etc.). You can’t do much with a pair of disks. 20-30 VMs won’t work very well unless they each do very little I/O.
A cheap way to get more I/O throughput is to buy a big chassis and stuff it with as many disks as you can. The size of the disks isn’t important; the quantity is. 15k drives often aren’t cost-effective in such arrangements. Most server chassis are optimized for PCI expansion and airflow, not storage, so an external chassis is often a necessity. If cost is a factor, you can buy a chassis that can be shared across two or more dom0s.
In our environments, we tend to run anywhere from 4 up to about 8 domUs per dom0, and no more than 4 dom0s per disk array. So one of our disk arrays (typically 14 disks in RAID10, plus a spare) may serve from 16 to 32 domUs. Performance overall is good with our workload.
It also helps to tune your Linux domUs to reduce I/O. I’ve found a few simple tricks that help:
- Mount ext3 partitions with “noatime”
- Configure syslogd not to sync file writes
- Get rid of disk-intensive packages like mlocate
- Use tmpfs for small, volatile file storage (e.g. /tmp)
Other tricks may be possible depending on the types of user applications you operate.
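Concretely, the tweaks listed above look something like this (a sketch against a typical Linux domU; device names, paths, and package managers vary by distro):

```shell
# /etc/fstab: mount ext3 with noatime to skip access-time writes,
# and put /tmp on tmpfs so small volatile files never hit disk.
#   /dev/xvda1  /     ext3   noatime,errors=remount-ro  0 1
#   tmpfs       /tmp  tmpfs  size=256m,mode=1777        0 0

# /etc/syslog.conf: a leading "-" on the file name tells classic
# syslogd not to sync after every write to that log file.
#   *.info;mail.none;authpriv.none    -/var/log/messages

# Remove disk-intensive packages such as mlocate (its nightly
# updatedb run walks the whole filesystem).
apt-get remove mlocate    # or: yum remove mlocate
```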
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of admin@xxxxxxxxxxx
Sent: Monday, September 13, 2010 12:35 PM
Subject: RE: [Xen-users] Hardware performance question : Disk RPM speed &XenPerformance
Each 7200 RPM drive is good for about 100 IOPS; each 15k RPM SAS drive can usually handle 200 IOPS. I would not personally try to run 20-30 VMs from two SATA drives, because it would almost surely lead to poor performance. But I am basing that statement on the type of I/O I typically see in our environment. Your VMs might generate totally different amounts of disk I/O than mine do, so you may or may not need to worry about disk I/O; it really depends on what each VM is doing. One idea would be to measure the IOPS and graph them using MRTG. Start with a few VMs and measure them for a few weeks to get an idea of how much total disk I/O is needed before moving all of the VMs into production. Once you have actually measured the disk I/O for a while, you can make an informed decision.
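The back-of-the-envelope math here can be sketched as follows. The per-drive figures are the rules of thumb from this thread, and the 14-disk RAID10 array shared by 32 domUs is just an illustrative configuration, not a recommendation:

```shell
# Rough IOPS budget for a disk array shared by many domUs.
# RAID10: reads are spread over all spindles, while each logical
# write costs two physical writes, roughly halving write capacity.
disks=14          # spindles in the array
per_disk=100      # ~100 IOPS per 7200 RPM drive (rule of thumb)
domus=32          # guests sharing the array

read_iops=$((disks * per_disk))          # 1400
write_iops=$((disks * per_disk / 2))     # 700

echo "Array read capacity:  $read_iops IOPS"
echo "Array write capacity: $write_iops IOPS"
echo "Per-domU read budget: $((read_iops / domus)) IOPS"
```

Comparing the per-domU budget against the IOPS you actually measure with MRTG tells you whether the array is sized sensibly for your workload.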
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of kevin
Sent: Monday, September 13, 2010 10:45 AM
Subject: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance
I am a relatively new user of Xen virtualization, so you’ll have to forgive the simplistic nature of my question.
I have a Dell PowerEdge R410 server (dual quad-core CPUs + 32 GB RAM). I plan to use this server with Xen.
The ‘dilemma’ I am having is whether or not to replace the two 500 GB 7.2k RPM drives that came with the server with faster 300 GB 15k RPM drives. Obviously, faster-spinning drives are generally better, but I am trying to avoid investing $1,000 more in these drives unless it is absolutely necessary.
From the Xen documentation, I couldn’t get a clear idea of how disk writes and disk speed might become a bottleneck when 20-30 VMs are ultimately running on the box.
Does anyone have any experience or advice to share? Ultimately I don’t mind spending the extra money to replace the drives, but I would love to hear your thoughts on what kind of actual performance increase I might expect.