WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-devel] Re: [PATCH v2] tools/libxc: Document checkpoint compression

To: Shriram Rajagopalan <rshriram@xxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH v2] tools/libxc: Document checkpoint compression in xg_save_restore.h
From: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Date: Mon, 27 Jun 2011 09:54:02 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Delivery-date: Mon, 27 Jun 2011 01:54:43 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1547d25fe51b0c84b908.1309035057@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <8bef913a3c4d14d22460.1308791079@xxxxxxxxxxxxxxxxxxx> <1547d25fe51b0c84b908.1309035057@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Sat, 2011-06-25 at 21:50 +0100, Shriram Rajagopalan wrote:
> # HG changeset patch
> # User Shriram Rajagopalan <rshriram@xxxxxxxxx>
> # Date 1309034380 25200
> # Node ID 1547d25fe51b0c84b908ca4fce02568f77b431b0
> # Parent  c31e9249893d309655a8e739ba2ecb07d2c0148b
> tools/libxc: Document checkpoint compression in xg_save_restore.h
> 
> Add comments to xg_save_restore.h explaining changes in Remus
> wire protocol when checkpoint compression is enabled.
> 
> Signed-off-by: Shriram Rajagopalan <rshriram@xxxxxxxxx>

Thanks Shriram.

Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

> 
> diff -r c31e9249893d -r 1547d25fe51b tools/libxc/xg_save_restore.h
> --- a/tools/libxc/xg_save_restore.h   Sat Jun 18 20:52:07 2011 -0700
> +++ b/tools/libxc/xg_save_restore.h   Sat Jun 25 13:39:40 2011 -0700
> @@ -67,7 +67,7 @@
>   *
>   *   consists of p2m_size bytes comprising an array of xen_pfn_t sized entries.
>   *
> - * BODY PHASE
> + * BODY PHASE - Format A (for live migration or Remus without compression)
>   * ----------
>   *
>   * A series of chunks with a common header:
> @@ -87,6 +87,113 @@
>   *
>   * If chunk type is 0 then body phase is complete.
>   *
> + *
> + * BODY PHASE - Format B (for Remus with compression)
> + * ----------
> + *
> + * A series of chunks with a common header:
> + *   int              : chunk type
> + *
> + * If the chunk type is +ve then the chunk contains an array of PFNs
> + * corresponding to guest memory, and the type value is the number of PFNs
> + * in the batch:
> + *
> + *     unsigned long[]  : PFN array, length == number of pages in batch
> + *                        Each entry consists of XEN_DOMCTL_PFINFO_*
> + *                        in bits 31-28 and the PFN number in bits 27-0.
> + *
> + * If the chunk type is -ve then chunk consists of one of a number of
> + * metadata types.  See definitions of XC_SAVE_ID_* below.
> + *
> + * If the chunk type is -ve and equals XC_SAVE_ID_COMPRESSED_DATA, then the
> + * chunk consists of compressed page data, in the following format:
> + *
> + *     unsigned long        : Size of the compressed chunk to follow
> + *     compressed data :      variable length data of size indicated above.
> + *                            This chunk consists of compressed page data.
> + *                            The number of pages in one chunk depends on
> + *                            the amount of space available in the sender's
> + *                            output buffer.
> + *
> + * Format of compressed data:
> + *   compressed_data = (BEGIN_PAGE,<deltas>*) | (FULL_PAGE,4096 bytes)
> + *   delta           = <+ve offset in page (2byte), value (4byte)>
> + *   BEGIN_PAGE      = a dummy delta with offset = -100 and value = 0
> + *   FULL_PAGE       = a dummy delta with offset = -101 and value = 0
> + *
> + * If chunk type is 0 then body phase is complete.
> + *
> + * There can be one or more chunks with type XC_SAVE_ID_COMPRESSED_DATA,
> + * containing compressed pages. The compressed chunks are collated to form
> + * one single compressed chunk for the entire iteration. The number of pages
> + * present in this final compressed chunk will be equal to the total number
> + * of valid PFNs specified by the +ve chunks.
> + *
> + * On the sender side, compressed pages are inserted into the output stream
> + * in the same order as they would have been if the compression logic were
> + * absent.
> + *
> + * Until the last iteration, the BODY is sent in Format A, to maintain live
> + * migration compatibility with receivers running older Xen versions.
> + * At the last iteration, if Remus compression is enabled, the sender sends
> + * a trigger, XC_SAVE_ID_ENABLE_COMPRESSION, telling the receiver to parse
> + * the BODY in Format B from the next iteration onwards.
> + *
> + * An example sequence of chunks received in Format B:
> + *     +16                              +ve chunk
> + *     unsigned long[16]                PFN array
> + *     +100                             +ve chunk
> + *     unsigned long[100]               PFN array
> + *     +50                              +ve chunk
> + *     unsigned long[50]                PFN array
> + *
> + *     XC_SAVE_ID_COMPRESSED_DATA       TAG
> + *       N                              Length of compressed data
> + *       N bytes of DATA                Decompresses to 166 pages
> + *
> + *     XC_SAVE_ID_*                     other xc save chunks
> + *     0                                END BODY TAG
> + *
> + * Corner case with checkpoint compression:
> + *     On the sender side, after pausing the domain, dirty pages are usually
> + *   copied out to a temporary buffer. After the domain is resumed,
> + *   compression is done and the compressed chunk(s) are sent, followed by
> + *   other XC_SAVE_ID_* chunks.
> + *     If the temporary buffer gets full while scanning for dirty pages,
> + *   the sender stops buffering of dirty pages, compresses the temporary
> + *   buffer and sends the compressed data with XC_SAVE_ID_COMPRESSED_DATA.
> + *   The sender then resumes buffering dirty pages and continues scanning
> + *   for dirty pages.
> + *     For example, assume that the temporary buffer can hold 4096 pages and
> + *   there are 5000 dirty pages. The following is the sequence of chunks
> + *   that the receiver will see:
> + *
> + *     +1024                       +ve chunk
> + *     unsigned long[1024]         PFN array
> + *     +1024                       +ve chunk
> + *     unsigned long[1024]         PFN array
> + *     +1024                       +ve chunk
> + *     unsigned long[1024]         PFN array
> + *     +1024                       +ve chunk
> + *     unsigned long[1024]         PFN array
> + *
> + *     XC_SAVE_ID_COMPRESSED_DATA  TAG
> + *      N                          Length of compressed data
> + *      N bytes of DATA            Decompresses to 4096 pages
> + *
> + *     +4                          +ve chunk
> + *     unsigned long[4]            PFN array
> + *
> + *     XC_SAVE_ID_COMPRESSED_DATA  TAG
> + *      M                          Length of compressed data
> + *      M bytes of DATA            Decompresses to 4 pages
> + *
> + *     XC_SAVE_ID_*                other xc save chunks
> + *     0                           END BODY TAG
> + *
> + *     In other words, XC_SAVE_ID_COMPRESSED_DATA can be interleaved with
> + *   +ve chunks arbitrarily. But at the receiver end, the following condition
> + *   always holds true until the end of BODY PHASE:
> + *    num(PFN entries +ve chunks) >= num(pages received in compressed form)
> + *
>   * TAIL PHASE
>   * ----------
>   *



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel