
Re: [Xen-devel] [PATCH v2] x86: correct socket_cpumask allocation



On Thu, Jul 09, 2015 at 10:41:55AM +0100, Jan Beulich wrote:
> >>> On 09.07.15 at 10:26, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
> > @@ -748,8 +758,9 @@ static int cpu_smpboot_alloc(unsigned int cpu)
> >          goto oom;
> >      per_cpu(stubs.addr, cpu) = stub_page + STUB_BUF_CPU_OFFS(cpu);
> >  
> > -    if ( !socket_cpumask[socket] &&
> > -         !zalloc_cpumask_var(socket_cpumask + socket) )
> > +    if ( secondary_socket_cpumask == NULL &&
> > +         (secondary_socket_cpumask = _xzalloc(nr_cpumask_bits / 8,
> > +                                              sizeof(long))) == NULL )
> 
> This is horrible since completely type-unsafe, and correct only
> because _xmalloc() happens to allocate more space than requested
> if the size isn't a multiple of MEM_ALIGN. And it makes me realize why
> on IRC I first suggested xzalloc_array(): That would at least have
> taken care of that latent bug. And remember that I did _not_
> suggest _xzalloc(), but xzalloc().
> 
> Taken together I think we should stay with using zalloc_cpumask_var(),
> and introduce zap_cpumask_var() (storing NULL in the big NR_CPUS
> case and doing nothing in the small one).

Apart from zap_cpumask_var(), there is also a need to check whether a
cpumask_var_t is NULL, and expressing that check in a way that satisfies the
compiler is awkward in the small NR_CPUS case, where cpumask_var_t is an
array rather than a pointer.
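
For reference, a minimal sketch of what that could look like, assuming the
existing cpumask_var_t split in xen/include/xen/cpumask.h (a cpumask_t
pointer for large NR_CPUS, a one-element array otherwise); the
cpumask_available() name below is only illustrative, not an existing helper:

#if NR_CPUS > 2 * BITS_PER_LONG
/* cpumask_var_t is a cpumask_t pointer: really clear and check it. */
static inline void zap_cpumask_var(cpumask_var_t *mask)
{
    *mask = NULL;
}
static inline bool_t cpumask_available(cpumask_var_t mask)
{
    return mask != NULL;
}
#else
/* cpumask_var_t is an embedded one-element array: nothing to clear,
 * and it can never be NULL. */
static inline void zap_cpumask_var(cpumask_var_t *mask)
{
}
static inline bool_t cpumask_available(cpumask_var_t mask)
{
    return 1;
}
#endif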

> Should I be overlooking
> something that still prevents this from building in both cases, the
> above allocation should be changed to at least be type safe (and I
> guess I'd rather waste a few bytes here than see you add fragile
> casts or some such).

So this is the solution that was finally adopted; the new version has already been sent out.
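
For the record, a sketch of that type-safe variant (the variable name follows
the quoted hunk; the rest is illustrative rather than copied from v3):

static cpumask_t *secondary_socket_cpumask;

/* in cpu_smpboot_alloc(): allocate a whole cpumask_t up front, possibly
 * wasting a few bytes when NR_CPUS is small, but with no casts and no
 * manual nr_cpumask_bits sizing. */
if ( secondary_socket_cpumask == NULL &&
     (secondary_socket_cpumask = xzalloc(cpumask_t)) == NULL )
    goto oom;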

Chao



 

