On Thu, Jun 23, 2011 at 10:56 AM, Ian Campbell
<Ian.Campbell@xxxxxxxxxx> wrote:
On Thu, 2011-06-23 at 02:04 +0100, Shriram Rajagopalan wrote:
> # HG changeset patch
> # User Shriram Rajagopalan <rshriram@xxxxxxxxx>
> # Date 1308790913 25200
> # Node ID 8bef913a3c4d14d2246086ef30a8e80f45ad1beb
> # Parent 9eed27800ff6a2e6d73f138f20af072c1b41925e
> tools/libxc: Document checkpoint compression in xg_save_restore.h
>
> Add comments to xg_save_restore.h explaining changes in Remus
> wire protocol when checkpoint compression is enabled.
Thanks!
> Signed-off-by: Shriram Rajagopalan <rshriram@xxxxxxxxx>
>
> diff -r 9eed27800ff6 -r 8bef913a3c4d tools/libxc/xg_save_restore.h
> --- a/tools/libxc/xg_save_restore.h Wed Jun 22 06:34:55 2011 -0700
> +++ b/tools/libxc/xg_save_restore.h Wed Jun 22 18:01:53 2011 -0700
> @@ -82,9 +82,24 @@
> * page data : PAGE_SIZE bytes for each page marked present in PFN
> * array
> *
> + * In the case of Remus with checkpoint compression, since the compressed
> + * page data can be of variable size, only the pfn array is sent with a
> + * +ve chunk type.
> + *
Is this true on every round or only for the second and subsequent rounds?
How do we know on the receiving end whether or not we are using
checkpoint compression? Is there an XC_SAVE_ID which is sent to trigger
this mode?
Actually, after the last iteration, the primary sends an XC_SAVE_ID_ENABLE_COMPRESSION
marker, and for all further rounds the receiver stops expecting page data to follow
the pfn array. Instead it waits for XC_SAVE_ID_COMPRESSED_DATA.
Thanks for pointing it out. I'll document it.
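To illustrate (hypothetical helper names and placeholder ID values only; the
real XC_SAVE_ID_* definitions live in tools/libxc/xg_save_restore.h and this
is not the actual restore code), the receiver-side switch looks roughly like:

#include <stdbool.h>

/* Placeholder values for this sketch; use the real definitions from
 * xg_save_restore.h. */
#define XC_SAVE_ID_ENABLE_COMPRESSION (-13)
#define XC_SAVE_ID_COMPRESSED_DATA    (-12)

static bool compressing = false;

/* chunk_type is the signed word read off the migration stream. */
static void handle_chunk(int chunk_type)
{
    if (chunk_type == XC_SAVE_ID_ENABLE_COMPRESSION) {
        /* Sent by the primary after the last live iteration: from now
         * on a +ve chunk carries only the pfn array, with no inline
         * page data behind it. */
        compressing = true;
    } else if (chunk_type == XC_SAVE_ID_COMPRESSED_DATA) {
        /* Page data now arrives in separate compressed chunks. */
        /* ... read size + payload, decompress into guest frames ... */
    } else if (chunk_type > 0) {
        /* +ve chunk: pfn array, followed by PAGE_SIZE bytes per
         * present pfn only while compression is off. */
        if (!compressing) {
            /* ... read the inline page data ... */
        }
    }
}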
> * If the chunk type is -ve then chunk consists of one of a number of
> * metadata types. See definitions of XC_SAVE_ID_* below.
> *
> + * If the chunk type is -ve and equals XC_SAVE_ID_COMPRESSED_DATA, then the
> + * chunk consists of compressed page data, in the following format:
> + *
> + * unsigned long : Size of the compressed chunk to follow
> + * compressed page data : variable length data of size indicated above.
> + * This chunk consists of compressed page data. The
> + * number of pages in one chunk depends on the
> + * amount of space available in the sender's
> + * output buffer.
> + *
> + * There can be one or more chunks with type XC_SAVE_ID_COMPRESSED_DATA.
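(To illustrate the format above: a receiver might consume one such chunk
roughly as follows. read_exact() stands in for libxc's usual helper, which
returns 0 on success; the rest is a sketch, not the actual restore code.)

#include <stdlib.h>

extern int read_exact(int fd, void *data, size_t size);

static int read_compressed_chunk(int io_fd, char **buf, unsigned long *size)
{
    unsigned long compbuf_size;
    char *compbuf;

    /* First field of the chunk: size of the compressed data to follow. */
    if (read_exact(io_fd, &compbuf_size, sizeof(compbuf_size)))
        return -1;

    compbuf = malloc(compbuf_size);
    if (!compbuf)
        return -1;

    /* Variable-length payload: compbuf_size bytes of compressed pages. */
    if (read_exact(io_fd, compbuf, compbuf_size)) {
        free(compbuf);
        return -1;
    }

    *buf = compbuf;
    *size = compbuf_size;
    return 0;
}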
Having seen an XC_SAVE_ID_COMPRESSED_DATA, can you ever see a +ve chunk
type at that point? IOW, is XC_SAVE_ID_COMPRESSED_DATA also the trigger I
mentioned above?
Answered above. It's a separate trigger. That said, if you meant "can I expect a +ve
chunk 'after' an XC_SAVE_ID_COMPRESSED_DATA", yes, that is possible. It happens
when there are too many dirty pages to fit in the sender's compression buffer. The
sender basically blocks, sends out the compressed chunk and moves on to the next
batch of pages. This is a corner case.
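To sketch that corner case (all names here are hypothetical, not the actual
xc_domain_save code): when another page no longer fits in the output buffer,
the sender flushes what it has as one XC_SAVE_ID_COMPRESSED_DATA chunk and
carries on, which is how a +ve chunk can follow a compressed one:

#include <stddef.h>

struct compbuf {
    char  *data;   /* compression output buffer */
    size_t used;   /* bytes currently occupied */
    size_t cap;    /* total capacity */
};

/* Hypothetical helpers: compress_page() returns nonzero on success and
 * 0 when the buffer is full; send_compressed_chunk() emits one
 * XC_SAVE_ID_COMPRESSED_DATA chunk (marker, size, payload) and returns
 * 0 on success. */
extern int compress_page(struct compbuf *cb, const void *page);
extern int send_compressed_chunk(int io_fd, struct compbuf *cb);

static int compress_batch(int io_fd, struct compbuf *cb,
                          void *const *pages, int npages)
{
    for (int i = 0; i < npages; i++) {
        while (!compress_page(cb, pages[i])) {
            /* Buffer full: block, flush the chunk, then retry the
             * same page against the now-empty buffer. */
            if (send_compressed_chunk(io_fd, cb))
                return -1;
            cb->used = 0;
        }
    }
    return 0;
}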
Part of me thinks this bit belongs below with the #define, and the other
part thinks it's too big to fit in nicely there, so it is better here.
Ian.