
Re: [Xen-devel] [PATCH RFC 3/6] COLO-Proxy: Setup userspace colo-proxy on secondary side





On 01/28/2017 01:05 AM, Wei Liu wrote:
On Thu, Jan 26, 2017 at 02:36:06PM +0800, Zhang Chen wrote:
In this patch we add a function to close the
kernel COLO-Proxy on the secondary side.

Signed-off-by: Zhang Chen <zhangchen.fnst@xxxxxxxxxxxxxx>
---
  tools/libxl/libxl_colo_restore.c |  9 +++++++--
  tools/libxl/libxl_create.c       |  9 +++++++--
  tools/libxl/libxl_types.idl      |  1 +
  tools/libxl/xl_cmdimpl.c         | 18 +++++++++++++++---
  4 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/tools/libxl/libxl_colo_restore.c b/tools/libxl/libxl_colo_restore.c
index 6a96328..1d42539 100644
--- a/tools/libxl/libxl_colo_restore.c
+++ b/tools/libxl/libxl_colo_restore.c
@@ -774,8 +774,13 @@ static void colo_setup_checkpoint_devices(libxl__egc *egc,
      STATE_AO_GC(crs->ao);

-    cds->device_kind_flags = (1 << LIBXL__DEVICE_KIND_VIF) |
-                             (1 << LIBXL__DEVICE_KIND_VBD);
+    if (crs->cps.is_userspace_proxy) {
+        cds->device_kind_flags = (1 << LIBXL__DEVICE_KIND_VBD);
+    } else {
+        cds->device_kind_flags = (1 << LIBXL__DEVICE_KIND_VIF) |
+                                 (1 << LIBXL__DEVICE_KIND_VBD);
+    }
+
Style issue.

I will fix this in the next version.


      cds->callback = colo_restore_setup_cds_done;
      cds->ao = ao;
      cds->domid = crs->domid;
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index e3bc257..d230ecd 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -1609,6 +1609,7 @@ static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
                              uint32_t *domid, int restore_fd, int send_back_fd,
                              const libxl_domain_restore_params *params,
                              const char *colo_proxy_script,
+                            const bool userspace_colo_proxy,
                              const libxl_asyncop_how *ao_how,
                              const libxl_asyncprogress_how *aop_console_how)
  {
@@ -1633,6 +1634,7 @@ static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
      cdcs->dcs.callback = domain_create_cb;
      cdcs->dcs.domid_soft_reset = INVALID_DOMID;
      cdcs->dcs.colo_proxy_script = colo_proxy_script;
+    cdcs->dcs.crs.cps.is_userspace_proxy = userspace_colo_proxy;
      libxl__ao_progress_gethow(&cdcs->dcs.aop_console_how, aop_console_how);
      cdcs->domid_out = domid;
@@ -1821,7 +1823,7 @@ int libxl_domain_create_new(libxl_ctx *ctx, libxl_domain_config *d_config,
  {
      unset_disk_colo_restore(d_config);
      return do_domain_create(ctx, d_config, domid, -1, -1, NULL, NULL,
-                            ao_how, aop_console_how);
+                            false, ao_how, aop_console_how);
  }

  int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
@@ -1832,16 +1834,19 @@ int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
                                  const libxl_asyncprogress_how *aop_console_how)
  {
      char *colo_proxy_script = NULL;
+    bool userspace_colo_proxy = false;

      if (params->checkpointed_stream == LIBXL_CHECKPOINTED_STREAM_COLO) {
          colo_proxy_script = params->colo_proxy_script;
+        userspace_colo_proxy = libxl_defbool_val(params->userspace_colo_proxy);
I think I'm going to ask for a bit of cleanup here.

You don't actually need the values of colo_proxy_script and
userspace_colo_proxy here.

So instead of having both values here, I suggest:

1. provide a patch to refactor the existing code so that do_domain_create
    doesn't take colo_proxy_script anymore. It should be able to do
    cdcs->dcs.colo_proxy_script = params->colo_proxy_script (rough sketch
    below).
2. rework this patch on top of that patch.
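
For 1, something along these lines is what I have in mind (untested sketch;
it assumes userspace_colo_proxy has been given a default via
libxl_defbool_setdefault() by the time we get here, so that
libxl_defbool_val() is safe to call):

    static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
                                uint32_t *domid, int restore_fd, int send_back_fd,
                                const libxl_domain_restore_params *params,
                                const libxl_asyncop_how *ao_how,
                                const libxl_asyncprogress_how *aop_console_how)
    {
        ...
        /* params is NULL for plain domain creation (libxl_domain_create_new) */
        if (params) {
            cdcs->dcs.colo_proxy_script = params->colo_proxy_script;
            cdcs->dcs.crs.cps.is_userspace_proxy =
                libxl_defbool_val(params->userspace_colo_proxy);
        } else {
            cdcs->dcs.colo_proxy_script = NULL;
            cdcs->dcs.crs.cps.is_userspace_proxy = false;
        }
        ...
    }

Then libxl_domain_create_new and libxl_domain_create_restore just pass params
through and the extra arguments disappear.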

Does this make sense? Let me know if this is not feasible because I've
missed something obvious.

Good idea, I will follow your comments and send a new patch in the next version.


          set_disk_colo_restore(d_config);
      } else {
          unset_disk_colo_restore(d_config);
      }

      return do_domain_create(ctx, d_config, domid, restore_fd, send_back_fd,
-                            params, colo_proxy_script, ao_how, aop_console_how);
+                            params, colo_proxy_script, userspace_colo_proxy,
+                            ao_how, aop_console_how);
  }
int libxl_domain_soft_reset(libxl_ctx *ctx,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 1bd2057..89c2c9d 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -390,6 +390,7 @@ libxl_domain_restore_params = Struct("domain_restore_params", [
      ("checkpointed_stream", integer),
      ("stream_version", uint32, {'init_val': '1'}),
      ("colo_proxy_script", string),
+    ("userspace_colo_proxy", libxl_defbool),
I suppose you can use LIBXL_HAVE_COLO_USERSPACE_PROXY for this whole
series.
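
I.e. add a feature macro to libxl.h following the existing LIBXL_HAVE_*
convention, roughly (the wording is only a sketch):

    /*
     * LIBXL_HAVE_COLO_USERSPACE_PROXY
     *
     * If this is defined, libxl_domain_restore_params contains the
     * userspace_colo_proxy field, which selects the userspace COLO
     * proxy instead of the kernel one.
     */
    #define LIBXL_HAVE_COLO_USERSPACE_PROXY 1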

If I understand correctly, using LIBXL_HAVE_COLO_USERSPACE_PROXY for the
whole series means making the COLO userspace proxy the default option, so
users would no longer have an easy way (the remus commands) to use the COLO
kernel proxy. The goal of this series is to provide another (recommended)
way to run the COLO proxy. We want to keep the COLO kernel proxy
functionality for now (maybe someone can maintain it), and once the
userspace proxy performs better than the kernel proxy, we will make the
userspace proxy the default option.
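
With the new field, a caller that wants the userspace proxy opts in
explicitly. This is not the exact xl code, just to illustrate the intended
use of the field:

    libxl_domain_restore_params params;

    libxl_domain_restore_params_init(&params);
    params.checkpointed_stream = LIBXL_CHECKPOINTED_STREAM_COLO;
    /* explicit opt-in; if left unset it should default to the kernel proxy */
    libxl_defbool_set(&params.userspace_colo_proxy, true);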


Thanks
Zhang Chen



Since this series touches a lot of common code, I would like you to
confirm you've tested configurations without COLO enabled. Basic VM
lifecycle operations like create, save/restore and migration should
still work.

Wei.




--
Thanks
Zhang Chen




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

