This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] unconditionally enable the trace buffer

To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] unconditionally enable the trace buffer
From: Rob Gardner <rob.gardner@xxxxxx>
Date: Fri, 28 Oct 2005 00:45:37 -0600
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Delivery-date: Fri, 28 Oct 2005 06:42:50 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D32E62B@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D32E62B@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.2 (Windows/20050317)
Ian Pratt wrote:

* ability to turn on/off via hypercall
Not currently implemented, but would not be difficult to add.
Just as an aside I'm not sure this matters: From what Rob's told me, having the (inline) trace() calls in there produces the same overhead whether tracing is active or not. I guess it makes sense; once you've incurred the overhead of having the function there and evaluating the "is tracing on" conditional, you might as well have stored a few values also ;-)
We could take a cache miss on reading the trace buffer producer counter and
the trace buffer base address, so it's not totally a done deal.

I don't have a big worry about performance, but I'd feel more
comfortable to see a more realistic assessment of overhead. Comparing
results from ttcp in a domU with a 128k socket buffer and the MTU set to 552
bytes should do it.

I just completed some benchmarks using ttcp as Ian suggested. I was surprised by the results, but essentially, Ian's intuition in this case may be right on the money. I tested three cases:
 1. trace buffers turned off, i.e., not compiled in
 2. trace buffers compiled in and turned on
 3. trace buffers compiled in but disabled
Cases 1 and 3 show no performance difference at all, but case 2 does show a non-trivial degradation.

Details of the benchmarking: ttcp using 128k buffers to send data to a domU from another machine over a gigabit network. 2 GB were transferred in each run, and 10 runs were performed for each case. Dom0 and domU together used up just about all of a CPU, and experienced 2000-2500 context switches per second to handle 40,000-50,000 I/Os per second. Please note that the additional trace events for XenMon were enabled in this system (case 2), so each of those ~50,000 I/Os per second generated a trace record!

I conclude the following from this:
- there is no penalty for simply having the trace buffer code compiled into Xen
- there is a penalty for enabling tracing
- therefore, the ability to turn tracing on/off via a hypercall is definitely important


Xen-devel mailing list