
Re: [Xen-devel] [PATCH v4 12/16] libxl: use vchan for QMP access with Linux stubdomain



On Tue, Jan 14, 2020 at 9:42 PM Marek Marczykowski-Górecki
<marmarek@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> Access to QMP of QEMU in Linux stubdomain is possible over a vchan
> connection. Handle the actual vchan connection in a separate process
> (vchan-socket-proxy). This simplifies integration with QMP (already
> quite complex), and also allows preliminary filtering of (potentially
> malicious) QMP input.
> Since only one client can be connected to the vchan server at a time,
> and this is not enforced by libxenvchan itself, additional client-side
> locking is needed. It is implicitly implemented by vchan-socket-proxy,
> as it handles only one connection at a time. Note that qemu supports
> only one simultaneous client on a control socket anyway (but in the
> UNIX socket case, it enforces this server-side), so this doesn't add
> any extra limitation.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> ---
> Changes in v4:
>  - new patch, in place of both "libxl: use vchan for QMP access ..."
> ---
>  tools/configure.ac           |   9 ++-
>  tools/libxl/libxl_dm.c       | 159 ++++++++++++++++++++++++++++++++++--
>  tools/libxl/libxl_internal.h |   1 +-
>  3 files changed, 161 insertions(+), 8 deletions(-)
>
> diff --git a/tools/configure.ac b/tools/configure.ac
> index 8d86c42..20bbdbf 100644
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -192,6 +192,15 @@ AC_SUBST(qemu_xen)
>  AC_SUBST(qemu_xen_path)
>  AC_SUBST(qemu_xen_systemd)
>
> +AC_ARG_WITH([stubdom-qmp-proxy],
> +    AS_HELP_STRING([--with-stubdom-qmp-proxy@<:@=PATH@:>@],
> +        [Use supplied binary PATH as a QMP proxy into stubdomain]),[

Thanks for making it configurable :)
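
Since it's an AC_ARG_WITH option, the flag users actually pass is
--with-stubdom-qmp-proxy, e.g. (path here purely illustrative):

    ./configure --with-stubdom-qmp-proxy=/usr/local/bin/vchan-socket-proxy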

> +    stubdom_qmp_proxy="$withval"
> +],[
> +    stubdom_qmp_proxy="$bindir/vchan-socket-proxy"
> +])
> +AC_DEFINE_UNQUOTED([STUBDOM_QMP_PROXY_PATH], ["$stubdom_qmp_proxy"], [QMP proxy path])
> +
>  AC_ARG_WITH([system-seabios],
>      AS_HELP_STRING([--with-system-seabios@<:@=PATH@:>@],
>         [Use system supplied seabios PATH instead of building and installing
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index 528ca3e..23ac7e4 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -1183,7 +1183,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
>                        "-xen-domid",
>                        GCSPRINTF("%d", guest_domid), NULL);
>
> -    /* There is currently no way to access the QMP socket in the stubdom */
> +    /* QMP access to qemu running in stubdomain is done over vchan, stubdomain setup it itself */

I think this would be clearer:
/* QMP access to qemu running in stubdomain is done over vchan.  The
 * stubdomain init script adds the appropriate monitor options for
 * vchan-socket-proxy. */

In the block below (the !is_stubdom branch), the -no-shutdown option is
added to qemu, so it will not be passed for a Linux stubdomain.
-no-shutdown
       Don't exit QEMU on guest shutdown, but instead only stop the
       emulation.  This allows for instance switching to monitor to commit
       changes to the disk image.

It's something I noticed, but I don't know if it matters to us.
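
If we decide it does matter, I'd expect the fix to be roughly moving
the append out of the !is_stubdom branch, something like this (sketch,
untested):

    /* Keep -no-shutdown for the stubdomain case too, so qemu only
     * stops emulation instead of exiting on guest shutdown. */
    flexarray_append(dm_args, "-no-shutdown");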

>      if (!is_stubdom) {
>          flexarray_append(dm_args, "-chardev");
>          if (state->dm_monitor_fd >= 0) {
> @@ -2178,6 +2178,23 @@ static void stubdom_pvqemu_unpaused(libxl__egc *egc,

<snip>

> @@ -2460,24 +2477,150 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
>              goto out;
>      }
>
> +    sdss->qmp_proxy_spawn.ao = ao;
> +    if (libxl__stubdomain_is_linux(&guest_config->b_info)) {
> +        spawn_qmp_proxy(egc, sdss);
> +    } else {
> +        qmp_proxy_spawn_outcome(egc, sdss, 0);
> +    }
> +
> +    return;
> +
> +out:
> +    assert(ret);
> +    qmp_proxy_spawn_outcome(egc, sdss, ret);
> +}
> +
> +static void spawn_qmp_proxy(libxl__egc *egc,
> +                            libxl__stub_dm_spawn_state *sdss)
> +{
> +    STATE_AO_GC(sdss->qmp_proxy_spawn.ao);
> +    const uint32_t guest_domid = sdss->dm.guest_domid;
> +    const uint32_t dm_domid = sdss->pvqemu.guest_domid;
> +    const char *dom_path = libxl__xs_get_dompath(gc, dm_domid);
> +    char **args;
> +    int nr = 0;
> +    int rc, logfile_w, null;
> +
> +    if (access(STUBDOM_QMP_PROXY_PATH, X_OK) < 0) {
> +        LOGED(ERROR, guest_domid, "qmp proxy %s is not executable", STUBDOM_QMP_PROXY_PATH);
> +        rc = ERROR_FAIL;
> +        goto out;
> +    }
> +
> +    sdss->qmp_proxy_spawn.what = GCSPRINTF("domain %d device model qmp proxy", guest_domid);
> +    sdss->qmp_proxy_spawn.pidpath = GCSPRINTF("%s/image/qmp-proxy-pid", dom_path);
> +    sdss->qmp_proxy_spawn.xspath = GCSPRINTF("%s/image/qmp-proxy-state", dom_path);

Since this is the vchan-socket-proxy in dom0, should it write to
"device-model/%u/qmp-proxy-state" underneath dom0?

> +
> +    sdss->qmp_proxy_spawn.timeout_ms = LIBXL_DEVICE_MODEL_START_TIMEOUT * 1000;
> +    sdss->qmp_proxy_spawn.midproc_cb = libxl__spawn_record_pid;
> +    sdss->qmp_proxy_spawn.confirm_cb = qmp_proxy_confirm;
> +    sdss->qmp_proxy_spawn.failure_cb = qmp_proxy_startup_failed;
> +    sdss->qmp_proxy_spawn.detached_cb = qmp_proxy_detached;
> +
> +    const int arraysize = 6;
> +    GCNEW_ARRAY(args, arraysize);
> +    args[nr++] = STUBDOM_QMP_PROXY_PATH;
> +    args[nr++] = GCSPRINTF("--state-path=%s", sdss->qmp_proxy_spawn.xspath);
> +    args[nr++] = GCSPRINTF("%u", dm_domid);
> +    args[nr++] = GCSPRINTF("%s/device-model/%u/qmp-vchan", dom_path, 
> guest_domid);

Thinking of OpenXT's qmp-helper, this path isn't useful.  But it is
for vchan-socket-proxy, so qmp-helper could just be changed to ignore it.

> +    args[nr++] = (char*)libxl__qemu_qmp_path(gc, guest_domid);

qmp-helper takes just the stub_domid and domid, and uses the domid only
to generate the above path; taking the path directly would be cleaner.
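
(That is, today it'd be invoked as something like "qmp-helper
<stub_domid> <domid>"; a hypothetical "qmp-helper <stub_domid>
<qmp-socket-path>" would let libxl pass the libxl__qemu_qmp_path()
result through unchanged.)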

> +    args[nr++] = NULL;
> +    assert(nr == arraysize);
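
For reference, the resulting invocation should look roughly like this
(domids and the QMP socket path are illustrative, assuming the default
run dir):

    vchan-socket-proxy \
        --state-path=/local/domain/6/image/qmp-proxy-state \
        6 \
        /local/domain/6/device-model/5/qmp-vchan \
        /var/run/xen/qmp-libxl-5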

This generally looks good.

Regards,
Jason

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

