
Re: [Xen-devel] Re: how to avoid lost trace records?


  • To: George Dunlap <dunlapg@xxxxxxxxx>, Olaf Hering <olaf@xxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Mon, 22 Nov 2010 13:46:40 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Is there a good reason that T_INFO_PAGES cannot be specified dynamically by
the toolstack when enabling tracing? It doesn't seem particularly necessary
for this piece of policy to be expressed statically within the hypervisor.
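
(For comparison, the per-cpu buffer size is already runtime policy: xentrace
hands it to the hypervisor when tracing is enabled, via its real -S option.
The second line below is purely hypothetical; no --t-info-pages option exists
today, it only illustrates what moving the policy out of trace.c might look
like from the toolstack side:)

 # xentrace -S 2031 -e all output.bin            # buffer size picked at trace-enable time
 # xentrace --t-info-pages 16 -e all output.bin  # hypothetical knob for the metadata area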

 -- Keir

On 22/11/2010 11:53, "George Dunlap" <dunlapg@xxxxxxxxx> wrote:

> Olaf,
> 
> Dang, 8 megs per cpu -- but I guess that's really not so much overhead
> on a big machine; and it's definitely worth getting around the lost
> records issue.  Send the T_INFO_PAGES patch to the list, and see what
> Keir thinks.
> 
> There's probably a way to modify xenalyze to start up gzip
> directly; that may not be a bad idea.
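> 
> A rough shell-level sketch of the same effect, wrapping Olaf's named-pipe
> approach from below (the script name, paths and defaults are illustrative):
> 
>  #!/bin/sh
>  # trace-gz.sh -- gzip xentrace output on the fly via a FIFO
>  set -e
>  mask=${1:-all}; out=${2:-output.gz}
>  pipe=$(mktemp -u)             # pick an unused path for the FIFO
>  mkfifo "$pipe"
>  trap 'rm -f "$pipe"' EXIT     # remove the FIFO when the script exits
>  gzip -1 < "$pipe" > "$out" &  # fast compression keeps the pipe drained
>  xentrace -e "$mask" "$pipe"   # runs until interrupted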
> 
>  -George
> 
> On Sat, Nov 20, 2010 at 8:21 PM, Olaf Hering <olaf@xxxxxxxxx> wrote:
>> On Fri, Nov 19, Olaf Hering wrote:
>> 
>>> 
>>> Today I inspected the xenalyze and dump-raw output and noticed a
>>> huge number of lost trace records, even when booted with tbuf_size=200:
>>> 
>>> grep -wn 1f001 log.sles11_6.xentrace.txt.dump-raw
>>> 274438:R p 5 o000000000063ffd4    1f001 4 t0000006d215b3c6b [ b6aed 57fff 9e668fb6 51 ]
>> ...
>>> That means more than 740K lost entries (TRC_LOST_RECORDS, event id
>>> 0x1f001) on cpus 5, 3, 2, 1 and 0.
>>> Is this expected?
>> 
>> After reading the sources more carefully, it's clear now.
>> There are a few constraints:
>> 
>> If booted with tbuf_size=N, tracing starts right away and fills up the
>> buffer until xentrace starts collecting its contents, so early entries
>> are lost.
>> 
>> At first I just ran xentrace -e all > output, which filled up the whole
>> disk during my testing. So I changed the setup to collect the output into
>> a compressed file:
>> 
>>  # mknod pipe p                   # create a named pipe (FIFO)
>>  # gzip -v9 < pipe > output.gz &  # compress everything written to the pipe
>>  # xentrace -e all pipe &         # feed all trace records into the pipe
>> 
>> This means xentrace will stall until gzip has made room in the pipe,
>> and while it waits it cannot collect more data from the trace buffer.
>> That is the reason for the lost entries.
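>>
>> (An untested alternative sketch: put a large userspace buffer between
>> xentrace and gzip, so that short compression stalls don't block xentrace.
>> This assumes the pv utility is installed; the 256m buffer size is
>> illustrative:)
>>
>>  # mknod pipe p
>>  # pv -q -B 256m < pipe | gzip -1 > output.gz &
>>  # xentrace -e all pipe &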
>> 
>> Now I changed T_INFO_PAGES in trace.c from 2 to 16, and reduced the
>> compression level to speed up gzip emptying the pipe.
>> 
>>  # mknod pipe p
>>  # nice -n -19 gzip -v1 < pipe > output.gz &                    # fastest gzip, high priority
>>  # nice -n -19 xentrace -s 1 -S 2031 -e $(( 0x10f000 )) pipe &  # 1 ms polls, bigger buffers
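>>
>> (For reference: -s 1 makes xentrace sleep only 1 ms between buffer polls,
>> -S 2031 sets a per-cpu trace buffer of 2031 pages, roughly 8 MB assuming
>> 4 KiB pages, and event mask 0x10f000 selects the TRC_MEM class.)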
>> 
>> 
>> With this setup there are no more lost entries, even with more than one
>> guest running.
>> 
>> 
>> Olaf
>> 
>> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
