
Re: [Xen-devel] xentrace, xenalyze



On 24/02/16 15:24, Paul Sujkov wrote:
>> I think actually the first thing you might need to do is to get the
>> xentrace infrastructure working on ARM
> 
> Already done that. It requires some patches to memory manager, timer and
> policies. I guess I should upstream them, though.
> 
>> After that, the next thing would be to add the equivalent of VMEXIT and
>> VMENTRY traces in the hypervisor on ARM guest exit and entry
> 
> It seems that this is already covered as well. At least, I get a pretty
> decent (and correct, if I supply the timer frequency instead of the CPU
> frequency to xenalyze - this is where it differs from x86) trace info
> summary.

You mean, you have local patches you haven't upstreamed?  Or they're
already upstream?  (If the latter, I don't see the trace definitions in
xen/include/public/trace.h...)

If I could see those traces I could give you better advice about how to
integrate them into xenalyze (and possibly how to change them so they
fit better into what xenalyze does).

> 
>> add in extra tracing information
>> add support for analyzing that data to xenalyze
> 
> And, well, these are exactly the steps I can really use some help with :)
> Are there any examples of parsing some additional custom trace with
> xenalyze?

So at the basic level, xenalyze has a "dump" mode, which just attempts
to print out the trace records it sees in the file, in a human readable
format, in the order in which they originally happened (even across
physical cores / processors).

To get *that* working, you just need to hook your new trace records into
the "triage" in xenalyze.c:process_record().
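
That triage is basically one big switch on the top-level trace class, so
a new ARM class just becomes another case.  Very roughly (TRC_ARM_MAIN
and arm_process() below are names I've made up for illustration; check
how the existing classes are dispatched in your tree):

    /* In xenalyze.c:process_record() -- rough sketch only. */
    switch ( ri->evt.main )
    {
        /* ... existing cases for the other trace classes ... */
    case TRC_ARM_MAIN:           /* Hypothetical new top-level class */
        arm_process(p);          /* Dump and/or summarize ARM records */
        break;
    }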

But the real power of xenalyze is to aggregate information about how
many vmexits of a particular type happened, and how long we spent (in
cycles) doing each one.

The basic data structure for this is struct event_cycles_summary.  You
keep one such struct for every separate type of event whose cycle cost
you want to be able to classify.  As you go through the trace file,
whenever that event happens, you call update_summary() with a pointer to
the event struct and the number of cycles it took.

Then when you're done processing the whole file, you call
PRINT_SUMMARY() with a pointer to the summary struct, along with the
printf-style format and arguments you want printed before the summary
information.
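
In (heavily simplified) code the pattern is roughly the following.  Treat
it as a sketch: ARM_EXIT_REASON_MAX and arm_exit_reason_name[] are
invented for the example, and the exact struct name and the
update_summary() / PRINT_SUMMARY() signatures should be checked against
your copy of xenalyze.c:

    /* One summary per exit reason we want to classify (illustrative). */
    static struct event_cycles_summary exit_summary[ARM_EXIT_REASON_MAX];

    /* While processing the file: whenever an exit of type 'reason' is
     * known to have taken 'cycles' cycles, fold it into its summary. */
    static void arm_exit_accumulate(int reason, long long cycles)
    {
        update_summary(&exit_summary[reason], cycles);
    }

    /* After the whole file has been processed: print each summary,
     * preceded by a printf-style header.  (Check whether PRINT_SUMMARY
     * wants the struct itself or a pointer to it.) */
    static void arm_exit_report(void)
    {
        int i;

        for ( i = 0; i < ARM_EXIT_REASON_MAX; i++ )
            PRINT_SUMMARY(exit_summary[i], "  %-24s ",
                          arm_exit_reason_name[i]);
    }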

So the next step, after getting the ARM equivalent of TRC_HVM_VMEXIT and
TRC_HVM_VMENTRY set up, would be to get the equivalent of
hvm_vmexit_process() and hvm_vmentry_process() (and hvm_close_vmexit())
set up.

You'd probably want to start by creating a new structure, arm_data,
adding it to the vcpu_data struct (beside hvm_data and pv_data), and of
course adding a new VCPU_DATA_ARM enumeration value.
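
Something along these lines (the arm_data fields are placeholders I've
invented; look at how hvm_data and pv_data currently hang off struct
vcpu_data before copying this):

    /* Per-vcpu ARM state, analogous to struct hvm_data. */
    struct arm_data {
        int exit_reason;                /* Why we last trapped into Xen */
        unsigned long long exit_tsc;    /* When that trap happened      */
    };

    enum {
        VCPU_DATA_NONE = 0,
        VCPU_DATA_HVM,
        VCPU_DATA_PV,
        VCPU_DATA_ARM,                  /* New */
    };

    struct vcpu_data {
        /* ... existing fields ... */
        int data_type;                  /* VCPU_DATA_* */
        union {
            struct hvm_data hvm;
            struct pv_data pv;
            struct arm_data arm;        /* New */
        };
    };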

The basic processing cycle goes like this (sketched in code after the list):
* vmexit: Store the information about the vmexit in v->hvm_data
* Other HVM traces: add more information about what happened in v->hvm_data
* vmentry: Calculate the length of this event (vmentry.tsc -
vmexit.tsc), figure out all the different summaries which correspond to
this event, and call update_summary() on each of them.
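
For ARM that works out to roughly the following (sketch only; the
function names and the way the exit reason is pulled out of the record
are placeholders, and exit_summary[] is the array from the earlier
sketch):

    /* ARM equivalent of TRC_HVM_VMEXIT: just remember what happened and
     * when; don't account anything yet. */
    static void arm_vmexit_process(struct record_info *ri,
                                   struct vcpu_data *v)
    {
        v->arm.exit_reason = ri->d[0];  /* However your trace encodes it */
        v->arm.exit_tsc = ri->tsc;
    }

    /* ARM equivalent of TRC_HVM_VMENTRY: now we know how long the whole
     * exit took, so account it to every summary it belongs to. */
    static void arm_vmentry_process(struct record_info *ri,
                                    struct vcpu_data *v)
    {
        if ( !v->arm.exit_tsc || ri->tsc < v->arm.exit_tsc )
            return;                     /* No matching exit seen */

        update_summary(&exit_summary[v->arm.exit_reason],
                       ri->tsc - v->arm.exit_tsc);
        /* ...plus per-domain / per-subtype summaries, if you keep any */

        v->arm.exit_tsc = 0;
    }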

One subtlety to introduce here: it's not uncommon to enter into Xen due
to a vmexit, do something on behalf of a guest, and then get scheduled
out to run some other vcpu.  The simplistic "vmexit -> vmentry"
calculation would account this time waiting for the cpu as time
processing the event -- which is not what you want.  So xenalyze has a
concept of "closing" a vmexit which happens when the vmexit is logically
finished.  hvm_close_vmexit() is called either from hvm_vmentry_process(),
or from the runstate-change processing when it detects a vcpu switching
to the "runnable" state.

OK, hopefully that gives you enough to start with. :-)

 -George

