Re: [Xen-devel] [RFC] KEXEC: allocate crash note buffers at boot time v3



On 01/12/11 12:56, Jan Beulich wrote:
>>>> On 01.12.11 at 13:29, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> +static spinlock_t crash_notes_lock = SPIN_LOCK_UNLOCKED;
> Please use DEFINE_SPINLOCK() here.
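
For reference, the statically-initialised form would be a one-liner (a
sketch using Xen's standard macro for defining and initialising a
static lock in one step):

    /* Define and initialise the lock together. */
    static DEFINE_SPINLOCK(crash_notes_lock);
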
>
>> +    register_keyhandler('C', &crashdump_trigger_keyhandler);
>> +
>> +    /* If no crash area, no need to allocate space for notes. */
>> +    if ( 0 == kexec_crash_area.size )
>> +        return 0;
> Wouldn't it make sense to switch the order of these?
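
If I read that correctly, the reordered version would look something
like this (a sketch, assuming the intent is that a system without a
crash area skips registering the keyhandler as well):

    /* Nothing to do without a crash area. */
    if ( kexec_crash_area.size == 0 )
        return 0;

    register_keyhandler('C', &crashdump_trigger_keyhandler);
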
>
>> +    crash_notes = xmalloc_bytes(nr_cpu_ids * sizeof(void*));
> Please use xmalloc_array() here.
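
i.e. something like this (keeping the void * element type from the
quoted line):

    /* Let xmalloc_array() derive the size from element type and count. */
    crash_notes = xmalloc_array(void *, nr_cpu_ids);
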
>
>> +    if ( !crash_notes[nr] && 0 != kexec_init_cpu_notes(nr) )
> The first check is pointless - the function will return zero if the
> allocation was already done.

I forgot to mention in my previous email: short-circuit evaluation
should prevent the kexec_init_cpu_notes() call from actually being made
when the pointer is already set.
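
That is, && evaluates left to right and stops at the first false
operand, so with the quoted condition the call is only ever made for a
still-NULL pointer:

    /* kexec_init_cpu_notes(nr) runs only when crash_notes[nr] is NULL. */
    if ( !crash_notes[nr] && 0 != kexec_init_cpu_notes(nr) )
        return -ENOMEM;  /* error value assumed for illustration */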

> Further, you shouldn't take a lock around a call to xmalloc() or the
> like unless absolutely necessary. It is pretty simple to avoid here -
> you really only need to lock around the storing of the pointer and
> maybe the setup_note() calls (but be careful with returning -ENOMEM:
> you shouldn't return it if the allocation fails but you then find,
> under the lock, that a pointer was already set by another CPU).
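
A sketch of the allocate-outside-the-lock pattern described above,
including the -ENOMEM subtlety (names are assumed from the quoted
patch; sizeof_cpu_notes() is the hypothetical helper suggested below):

    static int kexec_init_cpu_notes(unsigned int nr)
    {
        /* Allocate with no lock held. */
        void *note = xmalloc_bytes(sizeof_cpu_notes(nr));
        int ret = 0;

        spin_lock(&crash_notes_lock);

        if ( crash_notes[nr] != NULL )
        {
            /* Another CPU set the pointer first; our own allocation
             * failure, if any, is harmless, so do not return -ENOMEM. */
        }
        else if ( note == NULL )
            ret = -ENOMEM;
        else
        {
            /* setup_note() calls on 'note' would go here, under the
             * lock. */
            crash_notes[nr] = note;
            note = NULL; /* ownership moved into crash_notes[] */
        }

        spin_unlock(&crash_notes_lock);

        /* Frees the losing allocation; xfree(NULL) is a no-op. */
        xfree(note);
        return ret;
    }
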
>
> Finally, one thing I failed to notice on the previous version - the
> nr_bytes calculation is now done twice. It should probably be moved
> into a helper function, especially since you said you intend to add
> stuff here subsequently.
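
Such a helper might look like this (a sketch: the helper name and the
note contents are illustrative, since the full nr_bytes calculation is
not quoted here):

    /* One place to compute the per-CPU note buffer size, so later
     * additions only need to touch this function. */
    static size_t sizeof_cpu_notes(unsigned int cpu)
    {
        /* Illustrative contents: a CORE note plus a Xen-specific note. */
        return sizeof_note("CORE", sizeof(ELF_Prstatus)) +
               sizeof_note("Xen", sizeof(crash_xen_core_t));
    }
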
>
> Jan
>

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

