
Re: [Xen-devel] save/restore image format



On Tue, Sep 26, 2006 at 11:17:57AM +0800, Zhai, Edwin wrote:

> ian/keir,
> 
> this is a proposal for save/restore image format with more 
> information (version, host/guest info...).

The extra information in the header seems /much/ improved. I'm not sure
why cpu freq is there, though: it's a dynamic value!

cpu id data must be in a separate 'section' since it likely doesn't make
sense for other processor types, or at least, they'll have a different
format.

I still believe we should use more or less the same format for core
files too; in that respect we need a header field for the /type/ of
image. This could also identify HVM images, etc.

What I'd like to see is something very ELF-like: there's a simple
header, followed by a list of sections. For saved files and core files
they would have the section offset values filled in. For migration, we'd
have a special sentinel value (or a different image type) indicating
that the contents are streamed and the offsets are unknown.

So we'd have a section for guest config, a section for cpu-id and the
like, etc. You could represent, say, the number and size of entries in
the vcpu config section just like ELF does, in the section table, thus
the section would have just the vcpu context.

Your document doesn't have the p2m frame list section, or the
"extended-info" structure that's rather unceremoniously plonked in for
extended-cr3 guests. It also misses the unwritten pages array and the
shared info page. Unless you're suggesting that we have a completely
different format for HVM guests. I hope not!

Approaching this in an ELF-like manner naturally gets us a clear image
format that's easily extensible and understandable, and it'd be great if
we could do this now.

regards
john

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
