Re: [Xen-devel] [PATCH 1 of 2] tools/libxc: Remus Checkpoint Compression

On Fri, Jun 17, 2011 at 5:35 AM, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
On Thu, Jun 16, 2011 at 4:50 PM, Shriram Rajagopalan <rshriram@xxxxxxxxx> wrote:
> @@ -846,6 +881,9 @@
>     if (!countpages)
>         return count;
>
> +    if (buf->compression)
> +        return pagebuf_get_one(xch, ctx, buf, fd, dom);

I'm having trouble understanding the logic of this one.  What's
supposed to be going on here?

The original xc_domain_restore code expects the list of pages immediately
after the list of pfns for a batch.

With compression, the compressed chunks are attached at the tail end of the
pagebuf:
[<pfn list>]+, [XC_SAVE_ID_* stuff like vcpu info, hvm_ident_pt, etc],
       [XC_SAVE_ID_COMPRESSED_DATA, compressed chunk]+

So that "if" block diverts pagebuf_get_one()'s control flow to keep
consuming the rest of the pagebuf in the usual recursive manner.
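Roughly, the sender side ends up emitting each compressed chunk at the tail
like this (just a sketch; the marker value, write_exact_sk() and the framing
below are made-up stand-ins, not the actual write_compressed() in the patch):

#include <stddef.h>
#include <stdint.h>

#define XC_SAVE_ID_COMPRESSED_DATA_SK  (-13)   /* assumed marker value */

/*
 * Emit one compressed-chunk record at the tail of the pagebuf:
 *   [marker][chunk length][chunk bytes]
 * write_exact_sk() stands in for libxc's write_exact().
 */
static int write_compressed_chunk(int io_fd, const void *chunk, uint32_t len,
                                  int (*write_exact_sk)(int, const void *,
                                                        size_t))
{
    int marker = XC_SAVE_ID_COMPRESSED_DATA_SK;

    if (write_exact_sk(io_fd, &marker, sizeof(marker)))
        return -1;
    if (write_exact_sk(io_fd, &len, sizeof(len)))
        return -1;
    return write_exact_sk(io_fd, chunk, len);
}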

original flow:

pagebuf_get_one()
 read integer
   if the integer is positive, it's the count of pfns in the pfn list; read count*sizeof(pfn) from io_fd
    1.1 parse the pfn list and determine the number of valid pages (countpages)
    1.2 read (countpages * PAGE_SIZE) from io_fd
       -- basically read a batch of pfns + pages
   else if it's negative
     check for one of the XC_SAVE_ID_* elements and read the appropriate size from io_fd
  return pagebuf_get_one()

**********
The modified flow:

pagebuf_get_one()
 read integer
   if the integer is positive, it's the count of pfns in the pfn list; read count*sizeof(pfn) from io_fd
    1.1 parse the pfn list and determine the number of valid pages (countpages)
>>1.2  if (compression) return pagebuf_get_one()
        -- basically accumulate all the pfns; the (compressed) pages come later
   else if it's negative
     check for one of the XC_SAVE_ID_* elements and read the appropriate size from io_fd,
     which now includes XC_SAVE_ID_COMPRESSED_DATA.
  return pagebuf_get_one()
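To make the modified flow concrete, here is a stripped-down, self-contained
sketch (not the actual xc_domain_restore code; MAX_BATCH_SK, the marker
value, the fixed pfn array and read_exact_sk() are stand-ins made up for
illustration):

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

#define MAX_BATCH_SK                    1024   /* assumed batch limit  */
#define XC_SAVE_ID_COMPRESSED_DATA_SK  (-13)   /* assumed marker value */

struct pagebuf_sk {
    int      compression;   /* set once the sender announces compression */
    unsigned nr_pfns;
    uint64_t pfns[8 * MAX_BATCH_SK];   /* toy fixed-size accumulator */
};

/* Read exactly len bytes from fd; returns 0 on success, -1 on error/EOF. */
static int read_exact_sk(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

static int pagebuf_get_one_sk(struct pagebuf_sk *buf, int fd)
{
    int count;

    if (read_exact_sk(fd, &count, sizeof(count)))
        return -1;

    if (count == 0)
        return 0;                       /* end marker for this batch stream */

    if (count > 0 && count <= MAX_BATCH_SK) {
        /* positive: a batch of pfns follows immediately */
        if (buf->nr_pfns + count > 8 * MAX_BATCH_SK)
            return -1;
        if (read_exact_sk(fd, &buf->pfns[buf->nr_pfns],
                          count * sizeof(uint64_t)))
            return -1;
        buf->nr_pfns += count;

        if (buf->compression)
            /* modified flow: keep accumulating pfns; the (compressed)
             * pages arrive later as XC_SAVE_ID_COMPRESSED_DATA records */
            return pagebuf_get_one_sk(buf, fd);

        /* original flow: countpages * PAGE_SIZE of raw pages would be
         * read right here, immediately after the pfn list */
        return count;
    }

    if (count == XC_SAVE_ID_COMPRESSED_DATA_SK) {
        uint32_t len;
        char chunk[4096];

        if (read_exact_sk(fd, &len, sizeof(len)))
            return -1;
        /* a real implementation would stash the chunk for decompression;
         * this sketch just consumes it from the stream */
        while (len) {
            size_t n = len < sizeof(chunk) ? len : sizeof(chunk);
            if (read_exact_sk(fd, chunk, n))
                return -1;
            len -= n;
        }
        return pagebuf_get_one_sk(buf, fd);
    }

    /* other negative XC_SAVE_ID_* records (vcpu info, hvm_ident_pt, ...)
     * would read their payload here and then recurse; elided in this sketch */
    return -1;
}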

> @@ -1257,6 +1310,11 @@
>         PERROR("read: p2m_size");
>         goto out;
>     }
> +    if ( RDEXACT(io_fd, &remus_flags, sizeof(uint32_t)) )
> +    {
> +        PERROR("read: remus_flags");
> +        goto out;
> +    }

Won't this break save/restore/migrate compatibility with older
versions of Xen (which won't be sending this word)?  This should
probably be sent as one of the XC_SAVE_ID_* numbers.

oh shux. you are right. thanks. I could probably send this as XC_SAVE_ID_COMPRESSION
after the last iteration and enable pagebuf->compression once it's received.
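i.e. something roughly like this (the chunk ID value and helper names below
are made up for illustration; the real patch would use whatever
XC_SAVE_ID_* number is actually free):

#include <stddef.h>

#define XC_SAVE_ID_COMPRESSION_SK  (-12)   /* assumed chunk ID */

struct pagebuf_sk2 { int compression; };

/* save side: emit the marker once, after the last iteration, and only
 * when the caller asked for compressed checkpoints.  write_exact_sk()
 * stands in for libxc's write_exact(). */
static int announce_compression(int io_fd,
                                int (*write_exact_sk)(int, const void *,
                                                      size_t))
{
    int marker = XC_SAVE_ID_COMPRESSION_SK;
    return write_exact_sk(io_fd, &marker, sizeof(marker));
}

/* restore side: handled like any other negative XC_SAVE_ID_* record with
 * no payload, so a stream from an older sender (which never emits it)
 * still parses exactly as before */
static int handle_compression_marker(struct pagebuf_sk2 *buf)
{
    buf->compression = 1;   /* subsequent pages arrive as compressed chunks */
    return 0;
}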

> @@ -1076,9 +1150,11 @@
>     }
>
>   copypages:
> -#define wrexact(fd, buf, len) write_buffer(xch, last_iter, &ob, (fd), (buf), (len))
> -#define wruncached(fd, live, buf, len) write_uncached(xch, last_iter, &ob, (fd), (buf), (len))
> +#define wrexact(fd, buf, len) write_buffer(xch, last_iter, ob, (fd), (buf), (len))
> +#define wruncached(fd, live, buf, len) write_uncached(xch, last_iter, ob, (fd), (buf), (len))
> +#define wrcompressed(fd) write_compressed(xch, remus_ctx, last_iter, ob, (fd))

Hmm -- this seems a bit weird to have "default" values, but you're
following the convention here, so I guess OK...

> diff -r 23c068b10923 -r 5dbafaf24c70 tools/libxc/xenguest.h
> --- a/tools/libxc/xenguest.h    Wed Jun 15 16:16:41 2011 +0100
> +++ b/tools/libxc/xenguest.h    Thu Jun 16 08:28:15 2011 -0700
> @@ -29,6 +29,7 @@
>  #define XCFLAGS_STDVGA    8
>  #define X86_64_B_SIZE   64
>  #define X86_32_B_SIZE   32
> +#define XCFLAGS_REMUS_COMPRESS    16

<nitpick>This should be with the other XCFLAGS, not after the byte
size...</nitpick>

okay. :)
 -George Dunlap

shriram