
Re: [Xen-devel] [PATCH] x86/emul: Split exception handling out of invoke_stub()



On 25/01/18 10:49, Jan Beulich wrote:
>>>> On 24.01.18 at 19:16, <andrew.cooper3@xxxxxxxxxx> wrote:
>> For a release build, bloat-o-meter reports:
>>
>>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-5111 (-5111)
>>   function                                     old     new   delta
>>   x86_emulate                               126458  121347   -5111
>>
>> or in other words, a 4% reduction in code size from this change alone.
>>
>> While shuffling things around, drop the use of __LINE__,
> While the rest of the change is fine, I continue to object to the
> removal of __LINE__ here - afaict it is awkward to reconstruct the
> line number when being presented just the address. At the very
> least you'd have to run a tool like addr2line, which assumes you
> have the correct binary to hand (which is not very likely based on
> my experience). However much I can agree that line numbers get
> in the way of live patching, there are cases where problem
> analysis is quite a bit harder without them. And this is an example
> thereof.

The point of printing the instruction stream in the WARNING is that it
uniquely identifies the invoke_stub() call site, just as the __LINE__
information does, and this rearrangement makes __LINE__ awkward to use:
we'd need another __XEN__-guarded local variable on the stack.
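
To illustrate (a hypothetical stand-alone sketch, not the actual
x86_emulate.h code): once the exception handling is split out of
invoke_stub(), __LINE__ evaluated in the handler no longer names the
faulting call site, so each site would have to capture it into a local
first.  In the real code that local would additionally need to sit
behind #ifdef __XEN__, hence the extra variable mentioned above.

    #include <stdio.h>

    /* Out-of-line handler: it cannot know which call site faulted
     * unless the line number is passed in explicitly. */
    static void handle_stub_exception(unsigned int line)
    {
        printf("exception from stub invoked at line %u\n", line);
    }

    /* Hypothetical stand-in for invoke_stub(). */
    #define invoke_stub_sketch()                                   \
        do {                                                       \
            unsigned int stub_line_ = __LINE__; /* extra local */  \
            /* ... execute the stub; on a fault: */                \
            handle_stub_exception(stub_line_);                     \
        } while ( 0 )

    int main(void)
    {
        invoke_stub_sketch();
        return 0;
    }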

The tradeoff for livepatching is how likely we are to have a
livepatchable security issue which modifies something in x86_emulate.c
in a way that perturbs __LINE__, vs the utility of using __LINE__ in
the first place.
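
As a stand-alone illustration of that livepatching cost (generic C, not
Xen code): any diagnostic which embeds __LINE__ means an otherwise
unrelated edit earlier in the file changes the constant compiled into
the diagnostic, so tooling comparing binaries sees the containing
function as modified even though its logic is untouched.

    #include <stdio.h>

    static void emul_warning(void)
    {
        /* Inserting or deleting any line above this point changes
         * the immediate operand compiled in here, and hence this
         * function's binary image, with no change in behaviour. */
        printf("warning at %s:%d\n", __FILE__, __LINE__);
    }

    int main(void)
    {
        emul_warning();
        return 0;
    }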

All uses of __LINE__ here are part of x86_emulate(), but we have had
issues in the past which were fixed exclusively in the x86_decode() path.

~Andrew
