
Re: [Xen-devel] [PATCH] tools/libxc: Document checkpoint compression in xg_save_restore.h

On Fri, 2011-06-24 at 14:49 +0100, Shriram Rajagopalan wrote:
> On Fri, Jun 24, 2011 at 4:54 AM, Ian Campbell
> <Ian.Campbell@xxxxxxxxxxxxx> wrote:
>         On Thu, 2011-06-23 at 16:16 +0100, Shriram Rajagopalan wrote:
>         > Actually, after the last iteration, the primary sends a
>         > XC_SAVE_ID_ENABLE_COMPRESSION and then for further rounds, the
>         > receiver stops expecting page data following the pfn array.
>         > Instead it waits for XC_SAVE_ID_COMPRESSED_DATA. Thanks for
>         > pointing it out. I'll document it.
>         [...]
>         > Answered above. It's a separate trigger. That said, if you
>         > meant "can I expect a +ve chunk 'after' a
>         > XC_SAVE_ID_COMPRESSED_DATA", yes that is possible. It happens
>         > when there are too many dirty pages to fit in the sender's
>         > compression buffer. The sender basically blocks, sends out the
>         > compressed chunk and moves on to the next batch of pages.
>         > This is a corner case.
>         I think I'm misunderstanding. When there are too many dirty
>         pages you send out a standard +ve chunk, including the page
>         data? (presumably in order to catch up). If so then how does
>         this mesh with the statement that once you've seen an
>         XC_SAVE_ID_COMPRESSED_DATA you don't expect page data in a +ve
>         chunk anymore?
> That was a good catch. I thought I'd keep the corner cases out of
> xg_save_restore.h, but from the way you pointed it out, I think I'll
> document everything. Answers below.

Thanks, I think the weird corner cases are actually the most important
thing to document...

> The sequence goes like this.
>
> In case there are too many dirty pages, there would be an
> XC_SAVE_ID_COMPRESSED_DATA chunk in the midst of an otherwise
> contiguous series of +ve chunks. E.g. say there are 9000 dirty pages,
> all of which are valid.

Thanks, I think I get it.
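For anyone following along, the mid-batch flush described above could be sketched roughly like this. This is only an illustrative model, not the real xc_domain_save() internals: the buffer capacity, struct names, and helpers are all made up, and real pages are compressed rather than merely counted.

```c
/* Hedged sketch of the sender-side corner case: when the compression
 * buffer cannot hold the whole batch of dirty pages, flush a
 * compressed chunk mid-batch and carry on with the remaining pages.
 * All names and the buffer size are illustrative assumptions. */
#include <assert.h>

#define COMP_BUF_PAGES 4096   /* assumed compression-buffer capacity */

struct out_stream {
    int chunks_sent;          /* XC_SAVE_ID_COMPRESSED_DATA-style chunks */
    int pages_sent;           /* total pages carried by those chunks     */
};

static void flush_compressed(struct out_stream *out, int pages)
{
    /* In the real protocol this would emit an
     * XC_SAVE_ID_COMPRESSED_DATA chunk; here we only count. */
    out->chunks_sent++;
    out->pages_sent += pages;
}

/* Send one batch of dirty pages, flushing whenever the buffer fills. */
static void send_batch(struct out_stream *out, int dirty_pages)
{
    int buffered = 0;

    for (int i = 0; i < dirty_pages; i++) {
        if (buffered == COMP_BUF_PAGES) {
            /* Mid-batch flush: the corner case discussed above. */
            flush_compressed(out, buffered);
            buffered = 0;
        }
        buffered++;   /* stand-in for compressing one page */
    }
    if (buffered)
        flush_compressed(out, buffered);
}
```

With the 9000-page example from the quote and a 4096-page buffer, this sends two full compressed chunks mid-stream and a final partial one, which is why compressed chunks can appear interleaved with subsequent +ve chunks.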

It might be worth making it clear in the docs that +ve chunks and
XC_SAVE_ID_COMPRESSED_DATA can effectively be mixed/interleaved
arbitrarily, but that the receiver will always have seen at least as
many +ve page array entries as pages in compressed form. i.e. the
number of decompressed pages received can never exceed the point the
+ve chunks' page arrays have reached.
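That ordering invariant could be checked on the receive side with something like the following sketch. Again, the chunk tags and structures are simplified stand-ins for the real wire format, not the actual xc_domain_restore() code:

```c
/* Hedged sketch of a receiver-side check for the invariant: once
 * compression is enabled, decompressed pages must never outrun the
 * pfn entries announced by earlier +ve chunks.  Types and tags are
 * illustrative assumptions, not the real libxc stream format. */
#include <assert.h>

enum chunk_type {
    CHUNK_PFN_ARRAY,           /* a +ve chunk: carries a pfn array        */
    CHUNK_ENABLE_COMPRESSION,  /* stands in for XC_SAVE_ID_ENABLE_COMPRESSION */
    CHUNK_COMPRESSED_DATA,     /* stands in for XC_SAVE_ID_COMPRESSED_DATA    */
};

struct chunk {
    enum chunk_type type;
    int count;   /* pfns in a +ve chunk, or pages in a compressed chunk */
};

/* Process a stream of chunks; returns 0 on success, -1 on protocol error. */
static int process_stream(const struct chunk *stream, int n)
{
    int compressing = 0;
    long pfns_seen = 0, pages_decompressed = 0;

    for (int i = 0; i < n; i++) {
        switch (stream[i].type) {
        case CHUNK_PFN_ARRAY:
            /* Before ENABLE_COMPRESSION, raw page data follows the
             * pfn array; afterwards only the pfn array arrives. */
            pfns_seen += stream[i].count;
            break;
        case CHUNK_ENABLE_COMPRESSION:
            compressing = 1;
            break;
        case CHUNK_COMPRESSED_DATA:
            if (!compressing)
                return -1;
            pages_decompressed += stream[i].count;
            /* The invariant from the discussion above. */
            if (pages_decompressed > pfns_seen)
                return -1;
            break;
        }
    }
    return 0;
}
```

So a stream like PFN_ARRAY(4096), COMPRESSED_DATA(4096), PFN_ARRAY(4904), COMPRESSED_DATA(4904) is fine, while compressed data covering pages no +ve chunk has announced would be rejected.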

