[Xen-devel] Re: how to avoid lost trace records?

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: how to avoid lost trace records?
From: Olaf Hering <olaf@xxxxxxxxx>
Date: Sat, 20 Nov 2010 21:21:22 +0100
In-reply-to: <20101119154652.GA11544@xxxxxxxxx>
References: <20101119154652.GA11544@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.20 (2009-06-14)
On Fri, Nov 19, Olaf Hering wrote:

> 
> Today I inspected the xenalyze and dump-raw output and noticed a huge
> number of lost trace records, even when booted with tbuf_size=200:
> 
> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
> 274438:R p 5 o000000000063ffd4    1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]
> ...
> That means more than 740K lost entries on cpu5,3,2,1,0.
> Is this expected?

After reading the sources more carefully, it's clear now.
There are a few constraints:

If booted with tbuf_size=N, tracing starts right away and fills up the
buffer before xentrace attaches and starts collecting its content, so
entries will be lost.
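
To get an idea of how bad the losses are, the lost-record events (event id
0x1f001, the same id grepped for above) can be counted per CPU in the
dump-raw output. This is only a rough sketch and assumes the field layout
of the dump-raw lines quoted above (third field is the CPU number):

 # count how many lost-record events (id 0x1f001) each CPU emitted
 grep -w 1f001 log.sles11_6.xentrace.txt.dump-raw | \
     awk '{ n[$3]++ } END { for (c in n) print "cpu " c ": " n[c] " lost-record events" }'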

At first I just ran 'xentrace -e all > output', which filled up the whole
disk during my testing. So I changed the collection to write into a
compressed file instead:

 # mknod pipe p
 # gzip -v9 < pipe > output.gz &
 # xentrace -e all pipe &

This means xentrace will stall until gzip has made room in the pipe, which
also means xentrace can't collect more data from the trace buffer while it
is waiting. That is the reason for the lost entries.

Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the gzip
compression level so that gzip empties the pipe faster.
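
For reference, the trace.c edit itself is roughly the following one-liner.
This is only a sketch: it assumes T_INFO_PAGES is still a plain #define in
xen/common/trace.c, so check the definition in your tree (and rebuild the
hypervisor afterwards) before relying on it:

 # bump the number of t_info pages from 2 to 16 (assumed #define, verify first)
 sed -i 's/#define T_INFO_PAGES[[:space:]]*2/#define T_INFO_PAGES 16/' xen/common/trace.c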

 # mknod pipe p
 # nice -n -19 gzip -v1 < pipe > output.gz &
 # nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &


With these changes there are no more lost entries, even with more than one
guest running.


Olaf


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel