[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v4 08/10] libxl: call hotplug scripts for disk devices from libxl



Roger Pau Monne writes ("[PATCH v4 08/10] libxl: call hotplug scripts for disk 
devices from libxl"):
> Since most of the needed work is already done in previous patches,
> this patch only contains the necessary code to call hotplug scripts
> for disk devices, that should be called when the device is added or
> removed from a guest.

Thanks.

> +static void device_hotplug(libxl__egc *egc, libxl__ao_device *aodev)
> +{
...
> +    /* Check if we have to execute hotplug scripts for this device
> +     * and return the necessary args/env vars for execution */
> +    hotplug = libxl__get_hotplug_script_info(gc, aodev->dev, &args, &env,
> +                                             aodev->action);
> +    switch (hotplug) {
...
> +    default:
> +        /* everything else is an error */
> +        LOG(ERROR, "unable to get args/env to execute hotplug script for "
> +                   "device %s", libxl__device_backend_path(gc, aodev->dev));
> +        rc = ERROR_FAIL;

Again, `rc = hotplug' perhaps ?

> +    /* fork and execute hotplug script */
> +    aodev->pid = libxl__ev_child_fork(gc, &aodev->child,
> +                                      device_hotplug_fork_cb);
> +    if (aodev->pid == -1) {
> +        LOG(ERROR, "unable to fork");
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +
> +    if (!aodev->pid) {
> +        /* child */
> +        libxl__exec(gc, -1, -1, -1, args[0], args, env);
> +        /* notreached */
> +        abort();
> +    }
> +
> +    if (!libxl__ev_child_inuse(&aodev->child)) {
> +        /* hotplug launch failed */
> +        LOG(ERROR, "unable to launch hotplug script for device %s", be_path);
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }

This isn't right.  After a successful libxl__ev_child_fork, the child
structure is definitely in use.  So an assert would be appropriate but
really this last stanza can just be removed.

> +    return;
> +
> +out:
> +    libxl__ev_time_deregister(gc, &aodev->ev);
> +    aodev->rc = rc;
> +    aodev->callback(egc, aodev);
> +    return;
> +}

Shouldn't something set aodev->state to FINISHED in this error path ?

It might be better to have a helper function to call when declaring
the aodev finished.  That helper function would also do the
_ev_time_deregister, and assert !libxl__ev_child_inuse.


> +static void device_hotplug_fork_cb(libxl__egc *egc, libxl__ev_child *child,
> +                                   pid_t pid, int status)

This shouldn't be called `fork_cb'.  It's a callback that happens when
the child dies, not when it forks.  I would call it `death_cb' or
`child_cb' or something.

> +static void device_hotplug_timeout_cb(libxl__egc *egc, libxl__ev_time *ev,
> +                                      const struct timeval *requested_abs)
> +{
> +    libxl__ao_device *aodev = CONTAINER_OF(ev, *aodev, ev);
> +    STATE_AO_GC(aodev->ao);
> +
> +    if (libxl__ev_child_inuse(&aodev->child)) {
> +        if (kill(aodev->pid, SIGKILL)) {
> +            LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
> +                                aodev->what, (unsigned long)aodev->pid);
> +            goto out;
> +        }
> +    }
> +
> +out:
> +    libxl__ev_time_deregister(gc, &aodev->ev);
> +    return;

You could profitably move that deregistration to the top of the
function.  That would save the goto.

Also libxl__ev_child_inuse should always be true here, surely ?  After
all we shouldn't have a timeout running unless we also have the
child.  So you should assert libxl__ev_child_inuse.

> diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> index 45b776c..da5b02b 100644
> --- a/tools/libxl/libxl_internal.h
> +++ b/tools/libxl/libxl_internal.h
...
> +    /* device hotplug execution */
> +    pid_t pid;
> +    char *what;
> +    libxl__ev_time ev;
> +    libxl__ev_child child;

Is the pid inside child not sufficient ?

> diff --git a/tools/libxl/libxl_linux.c b/tools/libxl/libxl_linux.c
> index 925248b..98cd25f 100644
> --- a/tools/libxl/libxl_linux.c
> +++ b/tools/libxl/libxl_linux.c
> @@ -25,3 +25,101 @@ int libxl__try_phy_backend(mode_t st_mode)
> 
>      return 1;
>  }
> +
> +/* Hotplug scripts helpers */
> +
> +static char **get_hotplug_env(libxl__gc *gc, libxl__device *dev)
> +{
...
> +    int nr = 0, arraysize = 9;

A nit, but here we see memetic leakage from the tyranny of
-Wdeclaration-after-statement.  Surely the definition of arraysize
would be better placed:

> +    script = libxl__xs_read(gc, XBT_NULL,
> +                            GCSPRINTF("%s/%s", be_path, "script"));
> +    if (!script) {
> +        LOGEV(ERROR, errno, "unable to read script from %s", be_path);
> +        return NULL;
> +    }
> +

Here:

       const int arraysize = 9;

> +    GCNEW_ARRAY(env, arraysize);
> +    env[nr++] = "script";
> +    env[nr++] = script;
> +    env[nr++] = "XENBUS_TYPE";
> +    env[nr++] = libxl__strdup(gc, type);
> +    env[nr++] = "XENBUS_PATH";
> +    env[nr++] = GCSPRINTF("backend/%s/%u/%d", type, dev->domid, dev->devid);
> +    env[nr++] = "XENBUS_BASE_PATH";
> +    env[nr++] = "backend";
> +    env[nr++] = NULL;
> +    assert(nr == arraysize);

> diff --git a/tools/libxl/libxl_netbsd.c b/tools/libxl/libxl_netbsd.c
> index 9e0ed6d..0f2cdaa 100644
> --- a/tools/libxl/libxl_netbsd.c
> +++ b/tools/libxl/libxl_netbsd.c
> @@ -24,3 +24,12 @@ int libxl__try_phy_backend(mode_t st_mode)
> 
>      return 0;
>  }
> +
> +/* Hotplug scripts caller functions */
> +
> +int libxl__get_hotplug_script_info(libxl__gc *gc, libxl__device *dev,
> +                                   char ***args, char ***env,
> +                                   libxl__device_action action)
> +{
> +    return 0;
> +}
> \ No newline at end of file

That last complaint from diff should be fixed :-).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
