
Re: [Xen-devel] [PATCH 1/6] libxl: libxl_device_disk_local_attach return a new libxl_device_disk



On Tue, 27 Mar 2012, Ian Jackson wrote:
> Stefano Stabellini writes ("[PATCH 1/6] libxl: libxl_device_disk_local_attach return a new libxl_device_disk"):
> > -char * libxl_device_disk_local_attach(libxl_ctx *ctx, libxl_device_disk *disk)
> > +char * libxl_device_disk_local_attach(libxl_ctx *ctx, const libxl_device_disk *disk,
> > +        libxl_device_disk **new_disk)
> 
> Long line.
> 
> Shouldn't we make this a libxl-internal function?  If we do that we
> can allocate the new libxl_device_disk from the gc.
> 
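
For concreteness, the internal variant could look roughly like this (a
sketch only; the libxl__device_disk_local_attach name and the exact
parameter list are illustrative, not settled):

    /* Sketch: an internal variant taking the caller's gc, so the disk
     * copy is gc-allocated and released whenever the gc is disposed of. */
    char *libxl__device_disk_local_attach(libxl__gc *gc,
                                          const libxl_device_disk *in_disk,
                                          libxl_device_disk **new_disk)
    {
        libxl_device_disk *disk = libxl__zalloc(gc, sizeof(*disk));

        *disk = *in_disk;    /* shallow copy; dup the strings afterwards */

        /* ... string duplication and the attach logic as in the patch,
         * setting *new_disk = disk and returning the device path ... */
    }
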
> > +    if (tmpdisk->pdev_path != NULL)
> > +        tmpdisk->pdev_path = strdup(tmpdisk->pdev_path);
> > +    if (tmpdisk->script != NULL)
> > +        tmpdisk->script = strdup(tmpdisk->script);
> > +    tmpdisk->vdev = NULL;
> 
> Perhaps you should put my "malloc always fails" patch at the bottom of
> your series?  That means you can write
>    tmpdisk->pdev_path = libxl__strdup(0, tmpdisk->pdev_path);
> which will crash more sanely if malloc fails.
> 
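
If I follow, the difference is roughly this (assuming libxl__strdup
passes a NULL input through and aborts on allocation failure rather
than returning NULL):

    /* Plain strdup: each call needs a NULL-input guard, and a really
     * careful caller would also have to check for a NULL result. */
    if (tmpdisk->pdev_path != NULL)
        tmpdisk->pdev_path = strdup(tmpdisk->pdev_path);

    /* libxl__strdup: aborts the process on allocation failure (and, as
     * assumed above, passes a NULL input straight through), so the call
     * is unconditional and the result needs no checking. */
    tmpdisk->pdev_path = libxl__strdup(0, tmpdisk->pdev_path);
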
> > -    switch (disk->backend) {
> > +    switch (tmpdisk->backend) {
> 
> If you renamed the formal parameter rather than introducing a new
> variable this would all be a lot easier to read (both the patch, and
> the resulting code).
> 
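
That is, if I understand correctly, something along these lines
(in_disk is just an example name), so the unchanged body keeps reading
naturally:

    char *libxl_device_disk_local_attach(libxl_ctx *ctx,
                                         const libxl_device_disk *in_disk,
                                         libxl_device_disk **new_disk)
    {
        libxl_device_disk *disk;      /* local copy reuses the old name */

        /* ... initialise disk as a copy of *in_disk ... */

        switch (disk->backend) {      /* this line needs no change */
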
> >   out:
> > -    if (dev != NULL)
> > +    if (dev != NULL) {
> >          ret = strdup(dev);
> > +        tmpdisk->vdev = strdup(tmpdisk->vdev);
> > +    }
> >      GC_FREE;
> >      return ret;
> 
> This leaks tmpdisk if the function fails.  Or, alternatively, you
> expect the caller to always free *new_disk even if _attach fails,
> which is very counterintuitive.
> 
> This would be solved if you made this an internal function and used
> the gc.

Right, I'll do that.
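
Roughly, with the copy allocated from the gc the exit path could then
become something like this (sketch only; gc, dev, ret and disk as in
the reworked function):

     out:
        if (dev != NULL) {
            ret = libxl__strdup(gc, dev);
            disk->vdev = libxl__strdup(gc, disk->vdev);
            *new_disk = disk;     /* hand the copy out only on success */
        }
        /* nothing to free on failure: disk and its strings live in the
         * gc and are released when the caller disposes of it */
        return ret;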
