Pasi Kärkkäinen wrote:
On Sat, Jun 05, 2010 at 06:59:51PM -0400, Miles Fidelman wrote:
I've been doing some experimenting to see how far I can push some old
hardware into a virtualized environment - partially to see how much use
I can get out of the hardware, and partially to learn more about the
behavior of, and interactions between, software RAID, LVM, DRBD, and Xen.
Is your disk/partition alignment properly set up? Doing it wrong can
cause bad performance, and it's easy to get wrong with VMs.
Can you say a little more about what you mean by "properly set up" vs.
not properly set up?
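"Alignment" here means partition (and LVM/RAID) start offsets landing on chunk boundaries rather than straddling them. A minimal sketch of the check, assuming 512-byte sectors (the helper function and sample sector values are illustrative, not from this thread):

```shell
# A start sector is 1 MiB-aligned when divisible by 2048
# (2048 sectors * 512 bytes = 1 MiB). Old fdisk defaults put the
# first partition at sector 63, which straddles RAID chunk and
# stripe boundaries and can hurt I/O performance badly.
check_alignment() {
    start=$1
    if [ $((start % 2048)) -eq 0 ]; then
        echo "start=$start: aligned"
    else
        echo "start=$start: MISALIGNED"
    fi
}

check_alignment 63      # historical DOS/fdisk default
check_alignment 2048    # modern 1 MiB default
# On a live system, read the real start sectors from sysfs, e.g.:
#   cat /sys/block/sda/sda1/start
```

The same concern applies inside the domU: a partition table written onto an LVM-backed virtual disk can be misaligned relative to the underlying RAID stripe even when dom0's own partitions are fine.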
As I've started experimenting with adding additional domUs, in various
configurations, I've found that my mail server can get into a state
where it's spending almost all of its cycles in an i/o wait state (95%
and higher as reported by top). This is particularly noticeable when I
run a backup job (essentially a large tar job that reads from the root
volume and writes to the backup volume). The domU grinds to a halt.
Is that iowait measure in the guest, or in dom0?
iowait suffers ONLY in the guest.
When I run stress tests, iowait (in the guest) jumps considerably when:
- running a benchmark (bonnie++) in dom0, on either host (to be
expected, given that dom0 gets priority)
- running bonnie++ in the guest with the iowait problems
Running bonnie++ in another guest does not impact the iowaits.
Again, run "iostat 1" in both the domU and dom0, and compare the results.
Also run "xm top" in dom0 to monitor the overall CPU usage.
"xm top" shows very little CPU load.
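When comparing the two views, the number to watch is the iowait percentage - the "wa" column in vmstat, "%iowait" in iostat. A small sketch of pulling it out of vmstat-style output with awk (the sample line is made up; the field position follows vmstat's standard layout):

```shell
# vmstat prints 16 whitespace-separated fields per sample line;
# the last cpu field here ('wa') is the percentage of CPU time
# spent waiting on I/O.
sample=' 1  0      0 123456   7890 234567    0    0   200   900  300  400  2  3  0 95'
iowait=$(echo "$sample" | awk '{print $NF}')
echo "iowait: ${iowait}%"
# Live usage (skipping the two header lines):
#   vmstat 1 | awk 'NR>2 {print $16}'
```

Running this in the domU and in dom0 at the same time makes it obvious which side is actually stalled on I/O.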
iostat (and vmstat) are what really helped me track things down; and
after doing a lot of googling on "performance tuning" and "iowait" I
came across the suggestion to add "noatime" to my mount options - it
brought my iowait times way down and sped up performance.
you learn something new every day :-)
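For reference, the mount-option change lives in the options column of /etc/fstab; the device names and filesystem types below are illustrative, not taken from this setup:

```
# noatime stops every file read from generating an inode-timestamp write
/dev/xvda1  /        ext3  defaults,noatime  0  1
/dev/xvdb1  /backup  ext3  defaults,noatime  0  2
```

An already-mounted filesystem can be switched without a reboot with: mount -o remount,noatime /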
Thanks again, to all,
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra
Xen-users mailing list