
Re: [Xen-devel] [PATCH v4 7/9] livepatch: NOP if func->new_[addr] is zero.



>>> On 06.09.16 at 22:05, <konrad.wilk@xxxxxxxxxx> wrote:
> On Wed, Aug 24, 2016 at 03:13:18AM -0600, Jan Beulich wrote:
>> >>> On 24.08.16 at 04:22, <konrad.wilk@xxxxxxxxxx> wrote:
>> > The NOP functionality will NOP out the code at
>> > 'old_addr' (or at the address resolved from 'name')
>> > if 'new_addr' is zero.
>> > The purpose of this is to NOP out calls, such as:
>> > 
>> >  e8 <4-bytes-offset>
>> > 
>> > (a 5-byte insn), or on ARM a 4-byte branch insn.
>> > But on x86 we could also NOP instructions that are
>> > much shorter or longer (up to 15 bytes).
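>> > 
>> > A minimal sketch of such a NOP entry (values and the target
>> > name are purely illustrative):
>> > 
>> >  struct livepatch_func nop_func = {
>> >      .name = "some_hook",  /* hypothetical symbol to patch */
>> >      .old_addr = NULL,     /* zero: resolve 'name' instead */
>> >      .new_addr = NULL,     /* zero: NOP rather than replace */
>> >      .new_size = 5,        /* bytes to NOP - the e8 call insn */
>> >  };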
>> 
>> And we could NOP multiple instructions in one go, i.e. the new
>> boundary you introduce is still arbitrary.
> 
> True.
> 
> I am limited by the 'struct livepatch_func' -> opaque[31] size.
> 
> I figured an OK limit would be the maximum platform instruction size.
> That is what the design document mentions too:
> " then `new_size` determines how many instruction bytes to NOP (up to
> platform limitations)."
> 
> But in reality it could be up to 31 bytes - unless I rework 'opaque'
> to hold a pointer to some bigger structure - but if I do that
> then this gets complicated.
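> 
> For reference, the struct is roughly (layout abbreviated from this
> series):
> 
>  struct livepatch_func {
>      const char *name;    /* Name of the function to patch. */
>      void *new_addr;      /* Zero here means: NOP instead. */
>      void *old_addr;
>      uint32_t new_size;   /* For a NOP: how many bytes to NOP. */
>      uint32_t old_size;
>      uint8_t version;
>      uint8_t opaque[31];  /* Original insn bytes are stashed here. */
>  };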
> 
> Keep in mind that the person writing the payload can have multiple
> 'struct livepatch_func' entries - so you could NOP a stream of,
> say, 30 bytes using two 'struct livepatch_func' entries.
> 
> If we allow the NOP region to be up to the size of 'opaque', then
> you could NOP a stream of instructions up to 62 bytes with two
> 'struct livepatch_func' entries. Though to keep this from blowing
> up on ARM I would say the size has to be a multiple of 4 there.
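> 
> To illustrate the splitting (addresses purely made up): NOPing a
> 56-byte region with two entries, each a multiple of 4 so the same
> payload shape would also be valid on ARM:
> 
>  struct livepatch_func nops[] = {
>      { .old_addr = (void *)0xffff82d080200000UL, /* made-up address */
>        .new_addr = NULL,  /* NOP */
>        .new_size = 28 },  /* 28 % 4 == 0 */
>      { .old_addr = (void *)0xffff82d08020001cUL, /* previous + 28 */
>        .new_addr = NULL,
>        .new_size = 28 },
>  };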
> 
> Do you have a preference on this?

Well, if a restriction on the size keeps the code meaningfully
simpler, then I'd prefer fully leveraging opaque's size. Even
better, of course, would be to not place such a restriction at
all. An option would be to leave the limit in for now, but track
a work item for it to get eliminated.

Jan

