
RE: [Xen-devel] [RFC][PATCH 1/3] [XEN] Use explicit bit sized fields for exported xentrace data.



> I see two other options:
>  * Pre-allocate a block of data and fill it in
>  * Allocate a struct on the stack, and copy it all at once.
> 
> In the first case you'd do something like the following:
>   struct {
>      [trace layout]
>   } *trec;
> 
>   trec=trace_var(TRC_TYPE, sizeof(*trec), [maybe some other info]);
> 
>   /* Fill in trec->* */
> 
> The second case looks similar:
>  struct {
>      [trace layout]
>  } trec;
> 
>  /* Fill in trec.*  */
> 
>  trace_var(TRC_TYPE, &trec, sizeof(trec), [maybe other info]);
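
(For concreteness, a minimal sketch of how the two quoted conventions
might look at a call site.  The record layout, TRC_FOO and both
function signatures are assumptions for illustration, not existing Xen
interfaces; the two options are alternatives, shown together only for
comparison.)

#include <stddef.h>
#include <stdint.h>

#define TRC_FOO 0x0001          /* hypothetical event type */

struct t_rec_foo {              /* hypothetical trace layout */
    uint32_t domid;
    uint64_t mfn;
};

/* Option 1: reserve space in the trace buffer, fill it in place. */
extern void *trace_reserve(uint32_t type, size_t len);
/* Option 2: build the record locally, copy it in one call. */
extern void trace_var(uint32_t type, const void *rec, size_t len);

void trace_foo(uint32_t domid, uint64_t mfn)
{
    /* Option 1: no copy, but the record cannot wrap the buffer end. */
    struct t_rec_foo *trec = trace_reserve(TRC_FOO, sizeof(*trec));
    if ( trec != NULL )
    {
        trec->domid = domid;
        trec->mfn = mfn;
    }

    /* Option 2: one extra copy, but the trace code sees the whole
     * record at once and can wrap it around transparently. */
    struct t_rec_foo rec = { .domid = domid, .mfn = mfn };
    trace_var(TRC_FOO, &rec, sizeof(rec));
}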

The best way of doing it would be to have trace functions that take a
format string and a variable number of arguments. The actual trace
record written in the buffer would just contain the record type and
the length of the record, followed by the variable data. The format
string would be written out in a separate segment, enabling it to be
extracted and used by the trace post-processing tool to pretty-print
the records.
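
(As a rough illustration of how that could be wired up: the
TRACE_FMT() macro, the trace_fmt_record() helper, the ".trace_fmt"
section name and the six-argument bound are all assumptions, not
existing Xen code.  The record stores the format string's address,
which the post-processing tool resolves against the section it pulled
out of the xen image.)

#include <stdarg.h>
#include <stdint.h>

/* Hypothetical low-level writer: emits type, length and raw data. */
extern void __insert_record(uint32_t type, const void *data, uint32_t len);

/* Keep the format string out of the trace buffer: place it in its own
 * ELF section and record only its address. */
#define TRACE_FMT(type, fmt, ...)                                       \
    do {                                                                \
        static const char __tfmt[]                                      \
            __attribute__((__section__(".trace_fmt"), __used__)) = fmt; \
        trace_fmt_record((type), __tfmt, ##__VA_ARGS__);                \
    } while ( 0 )

/* Pack one 32-bit word per conversion found in the format string. */
void trace_fmt_record(uint32_t type, const char *fmt, ...)
{
    struct {
        uint64_t fmt_addr;          /* resolved by the post-processor */
        uint32_t args[6];           /* assumed per-record bound */
    } rec = { .fmt_addr = (uintptr_t)fmt };
    unsigned int n = 0;
    const char *p;
    va_list ap;

    va_start(ap, fmt);
    for ( p = fmt; *p != '\0' && n < 6; p++ )
    {
        if ( *p != '%' )
            continue;
        if ( p[1] == '%' )          /* literal "%%" takes no argument */
            p++;
        else
            rec.args[n++] = va_arg(ap, uint32_t);
    }
    va_end(ap);

    __insert_record(type, &rec,
                    sizeof(rec.fmt_addr) + n * sizeof(uint32_t));
}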

Ian
 
> The second case involves an extra copy, but that shouldn't be a big
> deal.  It has the advantage of being self-contained, and the trace
> code can make the record "wrap around" transparently.
> 
> The first means no copying, but it also means no "wrap around"; if
> there's not enough room at the end of a buffer, the space would just
> have to be left empty.  That's probably not such a big deal, though.
> The bigger problem comes if several "open" trace records happen at
> once.  It's technically possible that the trace buffer will wrap
> around before a function is done writing to its original buffer.
> 
> In both of these cases, the common "TRACE_nD" macros can be left, I
> think.  We might want to add "TRACE_nDL" for 64-bit values, and then
> let those who need more flexible trace structures call trace_var()
> directly.
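
(For instance, a 64-bit TRACE_2DL could be a thin wrapper over the
trace_var() call from the second option above; the name and layout
here are guesses mirroring the existing TRACE_2D convention.)

#include <stddef.h>
#include <stdint.h>

extern void trace_var(uint32_t type, const void *rec, size_t len);

/* Hypothetical 64-bit counterpart of TRACE_2D: pack two u64s on the
 * stack and hand the completed record to trace_var(). */
#define TRACE_2DL(_e, _d1, _d2)                     \
    do {                                            \
        struct { uint64_t d1, d2; } _r;             \
        _r.d1 = (uint64_t)(_d1);                    \
        _r.d2 = (uint64_t)(_d2);                    \
        trace_var((_e), &_r, sizeof(_r));           \
    } while ( 0 )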
> 
> This way of doing things also has the advantage that the trace record
> can be defined in a public header somewhere, and used by user-space
> analysis tools as well as the hypervisor tracing code.
> 
> Thoughts?
> 
>  -George
> 
> On 12/5/06, Mark Williamson <mark.williamson@xxxxxxxxxxxx> wrote:
> > > There's no reason not to make the trace format more flexible.
> > > There's a question about how you represent trace points in the Xen
> > > code though, when the format is no longer a list of fixed size
> > > integers.
> >
> > I can see two main possibilities.  One involving a variadic function
> > and one involving mega macros of doom.
> >
> > One possibility would be a trace() function taking a variable number
> > of arguments, i.e.
> >
> > void trace(type, unsigned char data1, unsigned char data2, ... etc)
> >
> > And a set of arch-defined macros (or at least bitness / endian
> > defined macros).  E.g. on x86 we might have:
> >
> > #define TRACE_U16(d) ((unsigned char)((d) & 255)), ((unsigned char)((d) >> 8))
> >
> > We'd need to verify whether the extra processing had a measurable
> > performance impact, however.
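
(A call site would then flatten to a list of bytes; the event number
and fields here are invented for illustration.)

    /* TRACE_U16() expands to two unsigned char arguments, so the
     * variadic trace() just sees a flat byte list. */
    trace(TRC_IO_WRITE, TRACE_U16(port), TRACE_U16(val));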
> >
> > Another alternative would be to make the array of trace buffers
> > globally accessible and then use a set of macros for the trace()
> > instead of an inline function.  The macros could then look something
> > like (pseudocode):
> >
> > struct trace_record {
> >     u32 type;
> >     u32 data_len;
> >     char data[];
> > };
> >
> > char *trace_buffer[NR_CPUS];
> >
> > #define open_trace(type)                                          \
> >     do {                                                          \
> >         disable local irqs                                        \
> >         struct trace_record *record = (struct trace_record *)     \
> >             &trace_buffer[cpu][producer_idx];                     \
> >         record->type = (u32)(type);                               \
> >         record->data_len = 0;
> >
> > #define trace_u16(data)                                           \
> >     *(u16 *)&record->data[record->data_len] = (data);             \
> >     record->data_len += sizeof(u16);
> >
> > ... etc for different data types, with appropriate variations if
> > necessary for different platforms ...
> >
> > #define close_trace()                                             \
> >     inc producer counter by sizeof(struct trace_record) +         \
> >         record->data_len, for userspace to see                    \
> >     re-enable local irqs                                          \
> >     } while(0)
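
(A trace point built from those macros would then read as below; the
event number and fields are invented for illustration.)

    open_trace(TRC_SCHED_SWITCH);
    trace_u16(prev_vcpu_id);
    trace_u16(next_vcpu_id);
    close_trace();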
> >
> >
> > Things become unhappy here because there'd need to be some kind of
> > bounds checking to determine whether we need to wrap to the
> > beginning of the trace buffer again.  The alternatives as I see them
> > would be either:
> >
> > a) include code in each data macro to check if we'd reached the end
> > of the buffer and wrap the data appropriately (see the sketch after
> > this list), or
> > b) include code that'll simply copy everything we've built so far to
> > the beginning of the trace buffer and start again.
> >
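(A sketch of option (a), keeping the cpu / producer_idx / trace_buffer
names assumed in the pseudocode above; memcpy_wrap(), TRACE_BUF_SIZE
and rec_off are made up for illustration.)

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: copy into a ring buffer, splitting the copy
 * when it would run past the end. */
static inline void memcpy_wrap(char *buf, size_t size, size_t off,
                               const void *src, size_t len)
{
    size_t first;

    off %= size;
    first = (len < size - off) ? len : size - off;
    memcpy(buf + off, src, first);
    memcpy(buf, (const char *)src + first, len - first);
}

/* Option (a) version of a data macro: every write goes through the
 * wrapping copy, at the cost of a bounds check on each field.
 * rec_off is a local running offset into the record being built. */
#define trace_u16_wrap(d)                                           \
    do {                                                            \
        uint16_t _v = (d);                                          \
        memcpy_wrap(trace_buffer[cpu], TRACE_BUF_SIZE,              \
                    producer_idx + rec_off, &_v, sizeof(_v));       \
        rec_off += sizeof(_v);                                      \
    } while ( 0 )
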
> > Either way is going to be ugly and unpleasant.  Also, we have the
> > problem of not knowing whether we're going to wrap OR run out of
> > space until we're part way through the trace record, although in
> > this instance, I guess we could just change to create a "missed
> > data" record.
> >
> > I think the first approach (variadic function) above is probably
> > nicer.  We can always make a few macros to make common cases (e.g.
> > recording a type and a single u64 of data) less verbose.
> >
> > Any thoughts?
> >
> > Cheers,
> > Mark
> >
> > --
> > Dave: Just a question. What use is a unicycle with no seat?  And no
> > pedals!
> > Mark: To answer a question with a question: What use is a skateboard?
> > Dave: Skateboards have wheels.
> > Mark: My wheel has a wheel!
> >
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

