Re: [Xen-devel] [PATCH v4 1/2] libxl: Introduce functions to add and remove USB devices to an HVM guest

On Wed, 2013-04-17 at 11:02 +0100, George Dunlap wrote:
> On 17/04/13 10:58, Ian Campbell wrote:
> > On Tue, 2013-04-16 at 19:00 +0100, Roger Pau Monné wrote:
> >
> >>> +    libxl_usb_path = libxl__sprintf(gc, "%s/usb", libxl_path);
> >> I would recommend to use GCSPRINTF instead, it already checks for errors
> > GCSPRINTF and libxl__sprintf have the same amount of error checking.
> >
> > The more important point is that the libxl memory allocation
> > functions/macros all panic on ENOMEM so there is no need for code in
> > general to check for allocation errors.
> >
> > i.e. This block...
> >>> +    if (!libxl_usb_path) {
> >>> +        LIBXL__LOG(ctx, LIBXL__LOG_ERROR, "cannot allocate create paths");
> >>> +        rc = ERROR_FAIL;
> >>> +        goto out;
> >>> +    }
> > ...is unneeded. I didn't check the rest of the patch, but I imagine
> > you are in the habit of checking errors and can now remove a bunch
> > of code ;-)
> OK -- well in that block in particular, it's following suit with the 
> rest of the surrounding code, which does exactly the same thing.

That code predates the panic on ENOMEM behaviour.

>   As I 
> said in another e-mail to roger about error checking in this function, I 
> think that it's more important for programmers to be able to see this is 
> exactly the same as all the rest.  At some point there should probably 
> be a clean-up to remove the check from all of them, but I don't think 
> that's worth the risk at this point in the release cycle.

Rather than having a massive flag day, we are taking the approach of
fixing code as it is changed, and part of that is not introducing new
"incorrect" code.

> I can put cleaning up this function on my to-do list for the 4.4 dev 
> cycle if you want.

That would also be good, of course.

