For sure, the first things to define are the tools and methods used to perform the
tests... and yes, we have to test block sizes, disk areas, and caching effects, to cite
just some of the variables involved, but we also need to test VM colocation effects and
the overall storage system overhead (we are not aiming at testing disk performance;
our purpose is to test software/hardware storage systems in virtualized environments).
Anyway, when I talk about sharing test results, I'm thinking about tests that stress a hardware
configuration using different approaches, e.g. LVM over an iSCSI PV compared to VM images over NFS.
That's because I think "absolute" tests of a single configuration are not so useful:
from comparisons on the same hardware we can be more confident that the results we get are still
valid on a similar (clearly not exactly the same!) configuration.
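As a rough sketch of what such a comparison could look like, the loop below prints identical fio invocations for two backends, so the resulting numbers are directly comparable. It assumes fio is installed; the paths /dev/vg0/testlv (an LV on an iSCSI PV) and /mnt/nfs/vm.img (a VM image on an NFS mount) are placeholder names, and the script only prints the commands rather than running them:

```shell
#!/bin/sh
# Sketch: emit identical fio jobs for two hypothetical storage backends.
# Only --filename changes between runs, so results stay comparable.
# Paths are placeholders -- substitute your own iSCSI LV and NFS image.
for target in /dev/vg0/testlv /mnt/nfs/vm.img; do
    echo "fio --name=compare --filename=$target --rw=randrw --rwmixread=70 \
--bs=4k --direct=1 --ioengine=libaio --iodepth=16 --runtime=300 \
--time_based --group_reporting"
done
```

Keeping every job parameter fixed and varying only the target is what makes the comparison meaningful on the same hardware.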
2011/1/26 Christian Zoffoli <czoffoli@xxxxxxxxxxx>
On 26/01/2011 15:49, Roberto Bifulco wrote:
> Considering the large use of Xen in enterprises, it can help lots of us
> in designing the right infrastructure!

fio is the way: http://git.kernel.dk/?p=fio.git;a=summary
With it you can simply test any storage / RAID level / setup, and you can
compare them all when choosing the right solution.
But please, check every block size, all the disk areas, and so on.
If you test only some areas you can see overestimated or underestimated
results because of caches, better mechanical performance on some areas,
and so on.
If you test everything, you get an average result that is close to reality.
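A minimal sketch of such a sweep, assuming fio is installed: /dev/sdX is a placeholder device, the block-size and offset lists are arbitrary examples (percentage offsets require a reasonably recent fio), and the loop only prints the commands it would run:

```shell
#!/bin/sh
# Sketch: sweep several block sizes and disk regions so cache effects and
# per-zone mechanical differences average out instead of skewing results.
# "/dev/sdX" is a placeholder; this loop only prints the fio commands.
dev=/dev/sdX
for bs in 4k 64k 1m; do
    for off in 0 25% 50% 75%; do
        echo "fio --name=sweep-bs${bs}-off${off} --filename=$dev \
--rw=randread --bs=$bs --offset=$off --size=10g --direct=1 \
--ioengine=libaio --iodepth=32 --runtime=60 --time_based"
    done
done
```

Averaging across all twelve runs is what gives the "close to reality" figure, instead of a number dominated by the fast outer tracks or a warm cache.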
Sequential transfer numbers typically look pretty... but they are not what
you are searching for.
Random IOPS (read / write / combined) are what you are searching for.
I agree that real workloads are a mix, but random IOPS are closer
to reality.
What you have to search for are the worst values: they are what you need
in order to find out how big your I/O cake is.
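To illustrate keeping the worst case, here is a tiny sketch; the per-region IOPS figures are made up for the example, not measured results:

```shell
#!/bin/sh
# Sketch: given random-IOPS results from different disk areas (numbers
# below are purely illustrative), keep the worst one -- that floor is
# what you can actually provision against.
results="8200 7900 4100 6300"
worst=$(printf '%s\n' $results | sort -n | head -1)
echo "worst-case random IOPS: $worst"
```

Sizing against that floor, rather than the average, keeps the slowest regions of the disk from becoming a surprise under real load.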
Roberto Bifulco, Ph.D. Student
robertobifulco.it
COMICS Lab - www.comics.unina.it
Xen-users mailing list