Re: [Xen-devel] [PATCH 4 of 5] tools/libxl: Control network buffering in remus callbacks
On 26/08/2013 00:45, rshriram@xxxxxxxxx wrote:
> # HG changeset patch
> # User Shriram Rajagopalan <rshriram@xxxxxxxxx>
> # Date 1377473611 25200
> # Node ID c6804ccfe660cb9c373c5f53a8996d0443316684
> # Parent 4b23104828c09218aa7f9fbde1578bb6706e13d6
> tools/libxl: Control network buffering in remus callbacks
>
> This patch constitutes the core network buffering logic
> and does the following:
> a) create a new network buffer when the domain is suspended
> (remus_domain_suspend_callback)
> b) release the previous network buffer pertaining to the
> committed checkpoint (remus_domain_checkpoint_dm_saved)
>
> Signed-off-by: Shriram Rajagopalan <rshriram@xxxxxxxxx>
>
> diff -r 4b23104828c0 -r c6804ccfe660 tools/libxl/libxl_dom.c
> --- a/tools/libxl/libxl_dom.c Sun Aug 25 16:33:29 2013 -0700
> +++ b/tools/libxl/libxl_dom.c Sun Aug 25 16:33:31 2013 -0700
> @@ -1213,12 +1213,28 @@ int libxl__toolstack_save(uint32_t domid
>
> /*----- remus callbacks -----*/
>
> +/* REMUS TODO: Issue disk checkpoint reqs. */
Why does this comment need to move?
> static int libxl__remus_domain_suspend_callback(void *data)
> {
> - /* REMUS TODO: Issue disk and network checkpoint reqs. */
> - return libxl__domain_suspend_common_callback(data);
> + libxl__save_helper_state *shs = data;
> + libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
> + libxl__remus_ctx *remus_ctx = dss->remus_ctx;
> + int is_suspended = 0;
> + STATE_AO_GC(dss->ao);
> +
> + is_suspended = libxl__domain_suspend_common_callback(data);
> + if (!remus_ctx->netbuf_ctx)
Split to a new line, please.
> return is_suspended;
> +
> + if (is_suspended) {
> + if (libxl__remus_netbuf_start_new_epoch(gc, dss->domid,
> + remus_ctx))
> + return !is_suspended;
> + }
> +
> + return is_suspended;
is_suspended is logically a boolean, so it should be bool_t. Unhelpfully,
libxl__domain_suspend_common_callback() returns an int, although its
implementation strictly returns 0 on failure and 1 on success.
IMO, it would probably be best to have "bool_t is_suspended" set to
"!!libxl__domain_suspend_common_callback()", at which point the subsequent
"return !is_suspended" doesn't look quite so suspect, although that is just
a matter of style.
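
A rough sketch of what I mean (completely untested; plain bool from
stdbool is an assumption about the preferred type here, everything else
is taken from the patch):

static int libxl__remus_domain_suspend_callback(void *data)
{
    libxl__save_helper_state *shs = data;
    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
    libxl__remus_ctx *remus_ctx = dss->remus_ctx;
    STATE_AO_GC(dss->ao);

    /* Collapse the int return into a strict boolean. */
    bool is_suspended = !!libxl__domain_suspend_common_callback(data);

    if (!remus_ctx->netbuf_ctx)
        return is_suspended;

    if (is_suspended) {
        /* Failing to start a new buffer epoch turns success into failure. */
        if (libxl__remus_netbuf_start_new_epoch(gc, dss->domid, remus_ctx))
            return !is_suspended;
    }

    return is_suspended;
}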
~Andrew
> }
>
> +/* REMUS TODO: Deal with disk. */
> static int libxl__remus_domain_resume_callback(void *data)
> {
> libxl__save_helper_state *shs = data;
> @@ -1229,7 +1245,6 @@ static int libxl__remus_domain_resume_ca
> if (libxl__domain_resume(gc, dss->domid, /* Fast Suspend */1))
> return 0;
>
> - /* REMUS TODO: Deal with disk. Start a new network output buffer */
> return 1;
> }
>
> @@ -1256,10 +1271,34 @@ static void libxl__remus_domain_checkpoi
> static void remus_checkpoint_dm_saved(libxl__egc *egc,
> libxl__domain_suspend_state *dss, int rc)
> {
> - /* REMUS TODO: Wait for disk and memory ack, release network buffer */
> - /* REMUS TODO: make this asynchronous */
> - assert(!rc); /* REMUS TODO handle this error properly */
> - usleep(dss->remus_ctx->interval * 1000);
> + /* REMUS TODO: Wait for disk and explicit memory ack (through restore
> + callback from remote) before releasing network buffer. */
> + libxl__remus_ctx *remus_ctx = dss->remus_ctx;
> + struct timespec epoch;
> + int ret;
> + STATE_AO_GC(dss->ao);
> +
> + if (rc) {
> + LOG(ERROR, "Failed to save device model. Terminating Remus..");
> + libxl__xc_domain_saverestore_async_callback_done(egc,
> + &dss->shs, rc);
> + return;
> + }
> +
> + if (remus_ctx->netbuf_ctx) {
> + ret = libxl__remus_netbuf_release_prev_epoch(gc, dss->domid,
> + remus_ctx);
> + if (ret) {
> + libxl__xc_domain_saverestore_async_callback_done(egc,
> + &dss->shs,
> + ret);
> + return;
> + }
> + }
> +
> + epoch.tv_sec = remus_ctx->interval / 1000; /* interval is in ms */
> + epoch.tv_nsec = remus_ctx->interval * 1000L * 1000L;
> + nanosleep(&epoch, 0);
> libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
> }
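One further note on the interval handling above: tv_nsec must stay below
one second (1,000,000,000 ns) or nanosleep() rejects the timespec with
EINVAL, so a millisecond interval is normally split into whole seconds
plus the sub-second remainder. A minimal sketch of that conversion (not
part of this patch; the helper name is purely illustrative):

/* Convert a checkpoint interval in milliseconds into a struct timespec
 * suitable for nanosleep(). */
static void interval_ms_to_timespec(int interval_ms, struct timespec *ts)
{
    ts->tv_sec  = interval_ms / 1000;                    /* whole seconds */
    ts->tv_nsec = (interval_ms % 1000) * 1000L * 1000L;  /* remainder, in ns */
}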
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel