
Re: [Xen-devel] [PATCH v3 1/9] livepatch: Clear .bss when payload is reverted



>>> On 19.08.16 at 10:37, <ross.lagerwall@xxxxxxxxxx> wrote:
> On 08/15/2016 04:10 PM, Jan Beulich wrote:
>>>>> On 15.08.16 at 16:29, <konrad.wilk@xxxxxxxxxx> wrote:
>>> On Mon, Aug 15, 2016 at 04:27:38AM -0600, Jan Beulich wrote:
>>>>>>> On 14.08.16 at 23:52, <konrad.wilk@xxxxxxxxxx> wrote:
>>>>> @@ -1034,6 +1047,9 @@ static int revert_payload(struct payload *data)
>>>>>      list_del_rcu(&data->applied_list);
>>>>>      unregister_virtual_region(&data->region);
>>>>>
>>>>> +    /* And clear the BSS for subsequent operation. */
>>>>> +    memset(data->bss, 0, data->bss_size);
>>>>
>>>> Instead of doing it in two places, how about moving this into
>>>> apply_payload()?
>>>
>>> I was thinking of that too, but then it occurred to me that between the
>>> 'check' and 'apply' operations the BSS is left uninitialized. And if we
>>> ever crash (right as somebody scheduled the livepatch) one could form the
>>> opinion that it is due to the .bss having garbage. Or more importantly,
>>> the hooks may not have run, but all their values would look as if they
>>> had been initialized.
>>>
>>> And then spend a considerable amount of time debugging something which in
>>> reality is not a problem?
>>>
>>> Perhaps put it in multiple places :-)
>>
>> Well, you and Ross know. Afaic I'd always do things in just one
>> place if that's possible.
>>
> 
> Yes, I'd also prefer it in just a single place.
> 
> However, assuming that there is only a single BSS section is wrong. 
> Especially when compiling with -fdata-sections (as livepatch-build-tools 
> does), a payload can have multiple BSS sections.

Good point, and it makes the argument for dealing with that in just one
place even stronger.
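
(For illustration only: a rough sketch of what a single clearing helper
might look like if the payload tracked every SHT_NOBITS section it found
as an array of pointers and sizes. The structure and field names below
are hypothetical, not the current livepatch code.)

#include <stddef.h>
#include <string.h>

/* Hypothetical bookkeeping: one entry per .bss* (SHT_NOBITS) section
 * discovered while loading the payload. */
struct payload_bss {
    void *start;    /* start of the section in the loaded payload */
    size_t size;    /* size of the section in bytes */
};

struct payload {
    struct payload_bss *bss;    /* array of BSS sections */
    unsigned int n_bss;         /* number of entries in bss[] */
    /* ... other fields elided ... */
};

/* Clear every BSS section.  Calling this from apply_payload() alone
 * means the payload always starts from a zeroed state, however many
 * times it has been applied and reverted before. */
static void clear_payload_bss(struct payload *data)
{
    unsigned int i;

    for ( i = 0; i < data->n_bss; i++ )
        memset(data->bss[i].start, 0, data->bss[i].size);
}

Whether it is also worth calling this on revert, to avoid the stale-data
confusion Konrad describes above, is a separate trade-off.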

Jan



 

