
Re: [Xen-devel] [PATCH] tools/libxc: Document checkpoint compression in xg_save_restore.h

On Thu, 2011-06-23 at 16:16 +0100, Shriram Rajagopalan wrote:

> Actually, after the last iteration, the primary sends a
> XC_SAVE_ID_ENABLE_COMPRESSION and then for further rounds, the
> receiver stops expecting page data following the pfn array. Instead it
> waits for XC_SAVE_ID_COMPRESSED_DATA. Thanks for pointing it out. I'll
> document it.

> Answered above. It's a separate trigger. That said, if you meant "can I
> expect a +ve chunk 'after' a XC_SAVE_ID_COMPRESSED_DATA", yes that is
> possible. It happens when there are too many dirty pages to fit in the
> sender's compression buffer. The sender basically blocks, sends out the
> compressed chunk and moves on to the next batch of pages. This is a
> corner case.

I think I'm misunderstanding. When there are too many dirty pages you
send out a standard +ve chunk, including the page data? (presumably in
order to catch up). If so then how does this mesh with the statement
that once you've seen an XC_SAVE_ID_COMPRESSED_DATA you don't expect
page data in a +ve chunk anymore?

Back in the original patch:
> + *     compressed page data : variable length data of size indicated above.
> + *                            This chunk consists of compressed page data. The
> + *                            number of pages in one chunk varies with respect
> + *                            to amount of space available in the sender's
> + *                            output buffer.

What's the format of this compressed page data?

So is the sequence:
        +16 (e.g.)                      +ve chunk
        unsigned long[16]               PFN array
        NOT page-data (because we've seen XC_SAVE_ID_COMPRESSED_DATA)
        N                               Length of compressed data batch#1
        N bytes of DATA, batch #1       Decompresses to e.g. 7 pages
        M                               Length of compressed data batch#2
        M bytes of DATA, batch #2       Decompresses to e.g. 9 pages

So now we have the originally specified 16 pages. Do we guarantee that
we will always see enough instances of XC_SAVE_ID_COMPRESSED_DATA to
total the number of pages specified in the +ve chunk? Are they always

How does the sequence of events differ in the corner case of too many
dirty pages? Do we abort e.g. before the second
XC_SAVE_ID_COMPRESSED_DATA and go back to the +ve chunk stage?

