* Konrad Rzeszutek Wilk (konrad.wilk@xxxxxxxxxx) wrote:
> Keir, Dan, Mathieu, Chris, Mukesh,
> Doubt it. Your best bet to figure this out is to play with ftrace, or
> perf trace. But I don't know how well they work with Xen nowadays - Jeremy
> and Mathieu Desnoyers poked it a bit and I think I overheard that Mathieu got
> it working?
I did port LTTng to the Xen hypervisor in a past life, but I did not
have time to maintain this port in parallel with the Linux kernel LTTng.
So I doubt those bits would be very useful today, as a new port would be
needed for compatibility with the newer LTTng tools.
If you can afford to use older Xen hypervisors with older Linux kernels
and old LTTng/LTTV versions, then you could gather a synchronized trace
across the hypervisor/Dom0/DomUs, but it would require some work for
recent Xen versions.
Currently, we've been focusing our efforts on tracing KVM, which
works very well. We support analysis of traces taken from different
host/guest domains, as long as the TSCs are synchronized.
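As a hypothetical aside (not from the original mail): one quick way to sanity-check the TSC assumption on a Linux guest is to look at which clocksource the kernel selected, which it exposes through sysfs. Seeing "tsc" there suggests the kernel judged the TSC stable enough to use; anything else (e.g. "xen", "kvm-clock", "hpet") means cross-domain timestamps would need extra care.

```shell
#!/bin/sh
# Hypothetical check: print the clocksource the kernel selected.
# "tsc" suggests the kernel considers the TSC usable; the sysfs path
# is standard on Linux but may be absent in minimal environments.
f=/sys/devices/system/clocksource/clocksource0/current_clocksource
if [ -r "$f" ]; then
    cat "$f"
else
    echo "clocksource interface not available"
fi
```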
So an option here would be to deploy LTTng on both your dom0 and domU
kernels, gather traces of both in parallel while you run your workload,
and compare the resulting traces (load both dom0 and domU traces into
one trace set within lttv). Comparing the I/O behavior with a bare-metal
trace should give a good insight into what's different.
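The dom0/domU tracing step above might look roughly like the following on each domain; this is a sketch against the modern lttng-tools CLI, and the session name, output path, and event selection are illustrative assumptions, not something specified in the mail.

```shell
#!/bin/sh
# Hypothetical per-domain LTTng kernel tracing session (run on dom0 and
# on each domU, then load the resulting traces together for comparison).
if command -v lttng >/dev/null 2>&1; then
    lttng create io-trace --output=/tmp/io-trace   # session/output names are illustrative
    lttng enable-event -k --syscall --all          # all kernel syscalls
    lttng enable-event -k 'block_*'                # block-layer I/O events
    lttng start
    # ... run the I/O workload under test here ...
    lttng stop
    lttng destroy
    result="trace written to /tmp/io-trace"
else
    result="lttng-tools not installed; nothing traced"
fi
echo "$result"
```

Running the same session on bare metal with the same event list gives the third trace to diff against.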
At least you'll be able to follow the path taken by each I/O request,
except for what's happening in Xen, which will be a black box.
Operating System Efficiency R&D Consultant
Xen-devel mailing list