
RE: [Xen-devel] save/restore problems



'xm save' only saves the memory image of the domain. You should also
save a snapshot of the domain's file system using LVM (lvcreate -s) if
you intend to modify it. 

Restoring a domain whose file system has changed behind its back is 
sure to lead to disaster (unless you're using NFS).
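
A minimal sketch of that workflow, assuming the domU's disk sits on 
an LVM logical volume (the domain, volume group, and file names 
below are illustrative, not from this thread):

    # Save the domain's memory image; this suspends the domain.
    xm save mydom /var/xen/mydom.chkpt

    # Snapshot the backing volume so the on-disk state at save time
    # is preserved even if the original volume is written afterwards.
    lvcreate -s -L 1G -n mydom-snap /dev/vg0/mydom-disk

    # At restore time the domain must see the disk exactly as it was
    # when saved -- e.g. by restoring against the snapshot.
    xm restore /var/xen/mydom.chkpt

The point is that the restored kernel still holds cached ext3 
metadata from save time; any writes to the block device in between 
leave that cache stale, which is exactly the journal-abort storm in 
the log quoted below.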

Ian

> Mark Williamson replied earlier today that the second problem 
> could have been caused by known issues that have since been 
> fixed, and that the first issue was caused by something else 
> that may need further investigation.
> If that is the case, could you or Mark guide me on how to go 
> about debugging the problem? If you can give me some pointers, 
> I will take it from there.
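
A first place to look when 'xm save' misbehaves is xend's log in 
dom0; the paths and subcommands below are the usual 2.0-era 
defaults, so adjust them for your install:

    # xend's own log, written in dom0
    tail -n 50 /var/log/xend.log

    # Xen's console ring; blkif connect/disconnect messages land here
    xm dmesg | tail -n 20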
> 
> Thanks very much for your responses.
> 
> (I had earlier replied to Mark's message, but to the old list --
> xen-devl@xxxxxxxxxxxxxxxxxxxxx)
> 
> Thanks again
> -Hari
> 
> 
> ---------- Forwarded message ----------
> 
> This is a known issue, fixed in later releases. BTW, the list 
> has moved; you might want to re-register.
> 
> B.
> 
> 
> On Wed, 2005-03-30 at 00:51, Hari Kodungallur wrote:
> > Hi All,
> >
> > I am running into a bunch of problems when trying to do 
> > save/restore. (I am NOT running the latest Xen; I am running 2.0.)
> >
> > I am trying out the FC-1 and RH-ES-9 images. I can create 
> > domains for all of them. But when I save some configurations of 
> > these images (for example, install some RPMs on a clean image 
> > and save the image) and then restore them, I run into the 
> > following issues:
> >
> > (1) FC-1: This is where I have had the most success. I can 
> > save and restore. But as soon as I execute a command remotely 
> > that modifies the file system (e.g., ssh -n hostname 
> > "do-some-fs-updates.sh"), it starts complaining that the file 
> > system is read-only. The error on the console looks something 
> > like:
> >
> > xen_blk: Unexpected blkif status disconnected in state connected
> > blkfront: recovered 0 descriptors
> > nfs warning: mount version older than kernel
> > nfs warning: mount version older than kernel
> > EXT3-fs error (device sda1): ext3_free_blocks: bit already cleared for block 293633
> > Aborting journal on device sda1.
> > ext3_abort called.
> > EXT3-fs error (device sda1): ext3_journal_start: Detected aborted journal
> > Remounting filesystem read-only
> > EXT3-fs error (device sda1) in start_transaction: Journal has aborted
> > ext3_reserve_inode_write: aborting transaction: Journal has aborted in __ext3_journal_get_write_access
> > <2>EXT3-fs error (device sda1) in ext3_reserve_inode_write: Journal has aborted
> > ext3_reserve_inode_write: aborting transaction: Journal has aborted in __ext3_journal_get_write_access
> > <2>EXT3-fs error (device sda1) in ext3_reserve_inode_write: Journal has aborted
> > EXT3-fs error (device sda1) in ext3_orphan_del: Journal has aborted
> > EXT3-fs error (device sda1) in ext3_truncate: Journal has aborted
> > EXT3-fs error (device sda1) in start_transaction: Journal has aborted
> > EXT3-fs error (device sda1) in start_transaction: Journal has aborted
> >
> >
> > Doing this a couple of times (shutdown and then restore again) 
> > corrupts the file system, and then I need to run fsck at boot 
> > time to get back to a normal state.
> >
> >
> > (2) RH-ES-9: The save command ("xm save RHES9 myRHES9") just 
> > hangs; it does not do anything.
> >
> >
> > My question is whether this is something anyone has seen 
> > before, and/or whether anyone could point me to why it happens 
> > here, and/or whether installing 2.0.5 would solve it for me.
> >
> > Thanks
> > -Hari
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
