
Re: [Xen-devel] [PATCH V4 7/8] COLO-Proxy: Use socket to get checkpoint event.



On Wed, Mar 15, 2017 at 10:02:46AM +0800, Zhang Chen wrote:
> 
> 
> On 03/14/2017 07:24 PM, Wei Liu wrote:
> > On Mon, Mar 06, 2017 at 10:59:25AM +0800, Zhang Chen wrote:
> > > We use kernel colo proxy's way to get the checkpoint event
> > > from qemu colo-compare.
> > > Qemu colo-compare needs to add an API to support this (I will add this in qemu).
> > > Qemu side patch:
> > >   https://lists.nongnu.org/archive/html/qemu-devel/2017-02/msg07265.html
> > > 
> > > Signed-off-by: Zhang Chen <zhangchen.fnst@xxxxxxxxxxxxxx>
> > Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> > 
> > But see below.
> > 
> > > @@ -289,8 +393,19 @@ int colo_proxy_checkpoint(libxl__colo_proxy_state *cps,
> > >        * event.
> > >        */
> > >       if (cps->is_userspace_proxy) {
> > > -        usleep(timeout_us);
> > > -        return 0;
> > > +        ret = colo_userspace_proxy_recv(cps, recvbuff, timeout_us);
> > > +        if (ret <= 0) {
> > > +            ret = 0;
> > > +            goto out1;
> > > +        }
> > > +
> > > +        if (!strcmp(recvbuff, "DO_CHECKPOINT")) {
> > > +            ret = 1;
> > > +        } else {
> > > +            LOGD(ERROR, ao->domid, "receive qemu colo-compare checkpoint error");
> > > +            ret = 0;
> > > +        }
> > > +        goto out1;
> > >       }
> > >       size = colo_proxy_recv(cps, &buff, timeout_us);
> > > @@ -318,4 +433,7 @@ int colo_proxy_checkpoint(libxl__colo_proxy_state *cps,
> > >   out:
> > >       free(buff);
> > >       return ret;
> > > +
> > > +out1:
> > Perhaps try to come up with a better name than out1? Subsequent patch is
> > welcome.
> > 
> 
> How about changing 'out1' to 'out_userspace_proxy'?
> If that's OK, I will send a patch for it.
> 

How about the following patch instead? Compile test only.

In general I would like the code to stick to the coding style.

--->8---
From 0a87defaad529c02babe24055d5782b74d3a38e3 Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@xxxxxxxxxx>
Date: Wed, 15 Mar 2017 10:50:19 +0000
Subject: [PATCH] libxl/colo: unified exit path for colo_proxy_checkpoint

Slightly refactor the code to have only one exit path for the said
function.

Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: zhangchen.fnst@xxxxxxxxxxxxxx
---
 tools/libxl/libxl_colo_proxy.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_colo_proxy.c b/tools/libxl/libxl_colo_proxy.c
index c3d55104ea..5475f7ea32 100644
--- a/tools/libxl/libxl_colo_proxy.c
+++ b/tools/libxl/libxl_colo_proxy.c
@@ -375,7 +375,7 @@ typedef struct colo_msg {
 int colo_proxy_checkpoint(libxl__colo_proxy_state *cps,
                           unsigned int timeout_us)
 {
-    uint8_t *buff;
+    uint8_t *buff = NULL;
     int64_t size;
     struct nlmsghdr *h;
     struct colo_msg *m;
@@ -396,7 +396,7 @@ int colo_proxy_checkpoint(libxl__colo_proxy_state *cps,
         ret = colo_userspace_proxy_recv(cps, recvbuff, timeout_us);
         if (ret <= 0) {
             ret = 0;
-            goto out1;
+            goto out;
         }
 
         if (!strcmp(recvbuff, "DO_CHECKPOINT")) {
@@ -405,14 +405,16 @@ int colo_proxy_checkpoint(libxl__colo_proxy_state *cps,
             LOGD(ERROR, ao->domid, "receive qemu colo-compare checkpoint error");
             ret = 0;
         }
-        goto out1;
+        goto out;
     }
 
     size = colo_proxy_recv(cps, &buff, timeout_us);
 
     /* timeout, return no checkpoint message. */
-    if (size <= 0)
-        return 0;
+    if (size <= 0) {
+        ret = 0;
+        goto out;
+    }
 
     h = (struct nlmsghdr *) buff;
 
@@ -433,7 +435,4 @@ int colo_proxy_checkpoint(libxl__colo_proxy_state *cps,
 out:
     free(buff);
     return ret;
-
-out1:
-    return ret;
 }
-- 
2.11.0

