
Re: [Xen-devel] [PATCH v6 07/11] libxl_dm: Pre-open QMP socket for QEMU



Thanks for this patch.  I have a feeling that I have already commented
(perhaps informally) on a few aspects of it, but the message was not
marked `replied' in my MUA, so I thought I would formally review it.
Apologies if my comments are, effectively, duplicates.


Anthony PERARD writes ("[PATCH v6 07/11] libxl_dm: Pre-open QMP socket for QEMU"):
> This patch move the creation of the QMP unix socket from QEMU to libxl.

             moves

> But libxl doesn't rely on this yet.
> 
> When starting QEMU with dm_restrict=1, pre-open the QMP socket before
> exec QEMU. That socket will be usefull to findout if QEMU is ready, and

                                 useful     find out

> pre-opening it means that libxl can connect to it without waiting for
> QEMU to create it.
> 
> The pre-openning is conditionnal, based on the use of dm_restrict

      pre-opening     conditional

> because it is using a new command line option of QEMU, and dm_restrict
> support in QEMU is newer.



> @@ -539,6 +539,9 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
>      libxl_domain_create_info *info = &d_config->c_info;
>      libxl_domain_build_info *b_info = &d_config->b_info;
>  
> +    /* Attempt to initialise libxl__domain_build_state */
> +    state->dm_monitor_fd = -1;

See my comments below.

> +static int libxl__pre_open_qmp_socket(libxl__gc *gc, libxl_domid domid,
> +                                      int *fd_r)
> +{
> +    int rc, r;
> +    int fd;
> +    struct sockaddr_un un;
> +    const char *path = libxl__qemu_qmp_path(gc, domid);
> +
> +    assert(fd_r != NULL);

This assertion is not really of any use: if fd_r were NULL we would
dereference it, and crash, in due course anyway.

> +    r = listen(fd, 1);

What is the reasoning behind the choice of 1 for the listen queue
(backlog) parameter?
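
(For context for other readers: the general shape of such a pre-open
helper is roughly as below.  This is a minimal POSIX sketch with
illustrative names, not the actual patch code or libxl's usual
error-handling style.)

/* Minimal sketch: create, bind and listen on the QMP socket in libxl,
 * then hand the listening fd to QEMU on exec. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int pre_open_qmp_socket(const char *path, int *fd_r)
{
    struct sockaddr_un un;
    int fd;

    if (strlen(path) >= sizeof(un.sun_path))
        return -1;                      /* path too long for sun_path */

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&un, 0, sizeof(un));
    un.sun_family = AF_UNIX;
    strncpy(un.sun_path, path, sizeof(un.sun_path) - 1);

    unlink(path);                       /* clear any stale socket file */
    if (bind(fd, (struct sockaddr *)&un, sizeof(un)) < 0 ||
        listen(fd, 1) < 0) {            /* the backlog of 1 asked about above */
        close(fd);
        return -1;
    }

    *fd_r = fd;                         /* QEMU inherits this listening fd */
    return 0;
}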

>  static int libxl__build_device_model_args_new(libxl__gc *gc,
>                                          const char *dm, int guest_domid,
>                                          const libxl_domain_config *guest_config,
> @@ -944,10 +991,16 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
>                        GCSPRINTF("%d", guest_domid), NULL);
>  
>      flexarray_append(dm_args, "-chardev");
> -    flexarray_append(dm_args,
> -                     GCSPRINTF("socket,id=libxl-cmd,"
> -                               "path=%s,server,nowait",
> -                               libxl__qemu_qmp_path(gc, guest_domid)));
> +    if (state->dm_monitor_fd >= 0) {
> +        flexarray_append(dm_args,
> +            GCSPRINTF("socket,id=libxl-cmd,fd=%d,server,nowait",
> +                      state->dm_monitor_fd));

I think I suggested (IRL perhaps, and perhaps after you posted this)
that you might add some asserts to the other
..._build_device_model_args_... functions.
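
For example, something like this near the top of the old-style args
builder (an illustrative fragment only; exact placement and wording up
to you):

    /* The pre-opened QMP socket is only wired up in the "new" args
     * builder; make sure a caller who set one up finds out. */
    assert(state->dm_monitor_fd < 0);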

> @@ -2000,6 +2053,7 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
>      if (ret)
>          goto out;
>  
> +    d_state->dm_monitor_fd = -1;

This function calls libxl__domain_make, which you have just changed to
also set this field.  The situation is now very confusing, I think.

I think it would probably be better to introduce a new function to
initialise a libxl__domain_build_state, which is called every time one
comes into existence.  Probably as a separate patch.
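
Something along these lines, perhaps (the name and exact contents are
only a sketch of the idea, assuming the usual libxl_internal.h
environment, not a prescription):

/* Hypothetical initialiser, called wherever a libxl__domain_build_state
 * comes into existence, so every field starts from a known value. */
static void libxl__domain_build_state_init(libxl__domain_build_state *state)
{
    memset(state, 0, sizeof(*state));
    state->dm_monitor_fd = -1;    /* no pre-opened QMP socket yet */
}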

> @@ -2408,6 +2470,7 @@ out_close:
>      if (logfile_w >= 0) close(logfile_w);
>  out:
>      if (dm_state_fd >= 0) close(dm_state_fd);
> +    if (state->dm_monitor_fd >= 0) close(state->dm_monitor_fd);

I think this cleanup of `state' needs to be split out.  There should
be a dispose function called everywhere a state ceases to exist.
See above about _init.
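
I.e. something like this (again only a sketch, name hypothetical):

/* Hypothetical counterpart of _init above: called everywhere a
 * libxl__domain_build_state ceases to exist, so fds are neither
 * leaked nor closed twice. */
static void libxl__domain_build_state_dispose(libxl__domain_build_state *state)
{
    if (state->dm_monitor_fd >= 0) {
        close(state->dm_monitor_fd);
        state->dm_monitor_fd = -1;
    }
}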

Thanks,
Ian.

