I'd like to share some results from my configuration.
I have this setup (I'm not interested in raw peak performance but in
reliability, so I can accept low MB/s and prefer to rely on RAID6, for
example):
1 Infortrend iSCSI Array A16E-G2130-4 with:
- 1GB DDR cache
- 7 x 500GB SATA II Seagate ST3500630NS with 16MB cache (no budget for SAS)
- one of the logical volumes (about 1TB) is currently bound to a single LUN
1 DELL PE1950 server with:
- QLogic iSCSI HBA QLA 4060C
- 2 x Quad core x5335 2GHz Xeon
- 8GB RAM
- 2 x 73GB SAS drives (raid1 software)
1 DELL 2716 gigabit switch in the middle
The array, the HBA and the switch are all jumbo-frame enabled, and CHAP
authentication is configured.
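As a side note, a quick way to check that jumbo frames really survive the
whole path array-switch-host (the interface name and portal IP below are
placeholders, not my actual values):

```shell
# 8972 = 9000-byte MTU - 20 (IP header) - 8 (ICMP header).
# "-M do" forbids fragmentation, so the ping only succeeds if every hop
# actually passes 9000-byte frames.
ip link set dev eth1 mtu 9000
ping -c 3 -M do -s 8972 192.168.1.100
```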
I've successfully installed Xen 3.1.0 from source (kernel 2.6.18-xen)
and the QLogic ISP4XXX iSCSI Host Bus Adapter driver 5.01.00.08-2.
I have successfully installed a Windows XP HVM, Scientific Linux CERN
(SLC) 4 and 3 HVMs (RedHat EL 4 and 3), and other non-HVM machines.
The volume imported from the iSCSI array is used with LVM: each HVM
domain has an 8GB drive partitioned with 1GB of swap and the rest on /.
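For reference, a minimal sketch of how such per-domU volumes can be
carved out of the imported iSCSI volume with LVM (the device, volume
group and domain names here are placeholders, not my actual ones):

```shell
# Put the iSCSI block device under LVM control and create an 8GB
# logical volume for one HVM guest.
pvcreate /dev/sdb
vgcreate vg_iscsi /dev/sdb
lvcreate -L 8G -n slc4-01 vg_iscsi

# The guest then sees the LV as its hda, via a line like this in the
# domU config file:
#   disk = [ 'phy:/dev/vg_iscsi/slc4-01,hda,w' ]
```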
These are the bonnie++ figures (Per Chr column, write and read):
iSCSI dom0: w: 54M, r: 47M (no domU)
Local disk dom0: w: 59M, r: 51M (3 idle domU)
HVM single : w: 18M, r: 37M
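For completeness, the kind of invocation used (flags are indicative;
the path is a placeholder, and -s should be at least twice the RAM of
the domain running the test so the page cache doesn't distort results):

```shell
# -d: test directory, -s: test file size in MB, -u: run as this user.
# With 8GB RAM in dom0 a 16GB file is needed there; inside a small domU
# a much smaller -s suffices.
bonnie++ -d /mnt/test -s 16384 -u nobody
```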
I then launched bonnie++ on two separate (cloned) SLC4 HVMs:
HVM 1 : w: 8M, r: 33M
HVM 2 : w: 8.5M, r: 38M
The same with three HVMs:
HVM 1 : w: 4.4M, r: 11M
HVM 2: crashed, lost ssh, error "hda: lost interrupt", needed "xm reboot"
HVM 3: crashed, lost ssh, error "hda: lost interrupt", needed "xm reboot"
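When a guest hangs like this, what I mean by "xm reboot" is roughly the
following (the domain name is a placeholder); if "xm reboot" itself
hangs because the domain no longer responds, the domain has to be
force-destroyed and recreated from its config file:

```shell
# Tear down the unresponsive HVM domain and start it again.
xm destroy slc4-02
xm create /etc/xen/slc4-02.cfg
```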
So, where is the limit of my configuration?
How do you think this could scale in real applications (rather than
just stressing it with bonnie++)?
I expected HVM performance to be slow. What actually puzzles me is that
iSCSI performance is not much worse than local disk performance (which
may simply mean that local disk performance is really poor!).
Any suggestion on this is appreciated.
Thank you in advance,
Xen-users mailing list