
Re: [Xen-devel] [PATCH v4 7/8] xl/libxl: implement QDISK libxl_device_disk_local_attach



Stefano Stabellini writes ("[Xen-devel] [PATCH v4 7/8] xl/libxl: implement QDISK libxl_device_disk_local_attach"):
> diff --git a/tools/libxl/libxl_internal.c b/tools/libxl/libxl_internal.c
> index ac73070..bb75491 100644
> --- a/tools/libxl/libxl_internal.c
> +++ b/tools/libxl/libxl_internal.c
> @@ -510,6 +510,7 @@ _hidden char * libxl__device_disk_local_attach(libxl__gc 
> *gc,
...
> +                    if (tmpdisk->vdev == NULL) {
> +                        LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
> +                                "libxl__alloc_vdev failed\n");

LIBXL__LOG doesn't need an additional "\n".  You have several of
these.  Also you may prefer my new LOG convenience macro which is a
lot less verbose - look, no wrapping needed:
                           LOG(ERROR, "libxl__alloc_vdev failed");
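
i.e. the whole hunk above becomes something like this (untested, just to
show the shape; LOG picks up the gc that is in scope, and the logger
supplies the newline itself):

                    if (tmpdisk->vdev == NULL) {
                        LOG(ERROR, "libxl__alloc_vdev failed");
                        /* error path otherwise as in your patch */
                    }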

> +    switch (disk->backend) {
> +        case LIBXL_DISK_BACKEND_QDISK:
> +            if (disk->vdev != NULL) {
> +                libxl_device_disk_remove(gc->owner, LIBXL_TOOLSTACK_DOMID,
> +                        disk, 0);
> +                return libxl_device_disk_destroy(gc->owner,
> +                        LIBXL_TOOLSTACK_DOMID, disk);
> +            }
> +            break;
> +        default:
> +            /*
> +             * Nothing to do for PHYSTYPE_PHY.
> +             * For other device types assume that the blktap2 process is
> +             * needed by the soon to be started domain and do nothing.
> +             */
> +            break;

I guess what I meant with my previous comment is that it might be
better to have _local_attach return some kind of context/state struct,
bigger than the libxl__device_disk*, that would be passed to _detach.
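
Roughly this sort of shape, say (untested sketch, all the names here are
invented just to illustrate, not a concrete API proposal):

    typedef struct {
        libxl_device_disk *disk;   /* the disk that was locally attached */
        char *diskpath;            /* path handed back to the caller */
        /* ...plus whatever backend-specific state _detach needs, e.g.
         * whether a frontend was set up in LIBXL_TOOLSTACK_DOMID */
    } libxl__disk_local_state;

    _hidden char *libxl__device_disk_local_attach(libxl__gc *gc,
                                                  libxl__disk_local_state *dls);
    _hidden int libxl__device_disk_local_detach(libxl__gc *gc,
                                                libxl__disk_local_state *dls);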

But this will do.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
