RE: [Xen-users] Re: Snapshotting LVM backed guests from dom0

To: "chris" <tknchris@xxxxxxxxx>, "Xen-Users List" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Re: Snapshotting LVM backed guests from dom0
From: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
Date: Fri, 23 Apr 2010 15:34:37 -0400
Cc:
Delivery-date: Fri, 23 Apr 2010 12:37:09 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <z2r31e44a111004231053q8b0a282fh2895c74c3099cdc0@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <o2n31e44a111004171153p4e7688dg5b340abfb0f7d63b@xxxxxxxxxxxxxx> <z2r31e44a111004231053q8b0a282fh2895c74c3099cdc0@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcrjDj9OnSHvzKHoRxqM6dL/91Nq5gACjaXQ
Thread-topic: [Xen-users] Re: Snapshotting LVM backed guests from dom0
Chris,

I saw your original post but hesitated to respond, since I'm not really an 
expert on either Linux block I/O or NFS.  Anyway...

On Sat, Apr 17, 2010 at 2:53 PM, chris <tknchris@xxxxxxxxx> wrote:
> Just looking for some feedback from other people who do this. I know
> it's not a good "backup" method, but "crash consistent" images have
> been very useful for me in disaster situations just to get the OS
> running quickly and then restore data from a data backup. My typical
> setup is to put the LV in snapshot mode while the guest is running,
> then dd the data to a backup file on an NFS mount point. What seems to
> be happening is that the VM's performance gets pretty poor while the
> copy is running.

We see this all the time on Linux hosts.  One process with heavy I/O can starve 
others.

I'm not quite sure why, but I suspect it has something to do with the unified 
buffer cache.  When reading a large volume with "normal" I/O, buffer pages can 
get quickly replaced by pages that are never going to be read again, and your 
buffer cache hit ratio suffers.  Every other process on the affected host that 
needs to do I/O may see longer latency as a result.  With Xen, that includes 
any domU.

A quick fix that worked for us: direct I/O.  Run your "dd" command with 
"iflag=direct" and/or "oflag=direct", if your version supports it (it 
definitely works on CentOS 5.x, definitely *not* on CentOS 4.x).  This 
bypasses the buffer cache completely and forces dd to read and write directly 
to the underlying disk device.  Make sure you use an ample block size 
("bs=64k" or larger) so the copy finishes in a reasonable time.
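
To make that concrete, a snapshot-and-copy along those lines might look 
something like the sketch below.  (The VG/LV names, snapshot size and backup 
path are just placeholders; adjust them for your own layout.)

    # take a snapshot of the guest's LV while the guest keeps running
    lvcreate -s -L 4G -n vm01-snap /dev/vg0/vm01-disk

    # copy the snapshot with direct I/O so the dom0 buffer cache is bypassed
    # (oflag=direct onto an NFS-backed file is the questionable part, see below)
    dd if=/dev/vg0/vm01-snap of=/backup/vm01.img iflag=direct oflag=direct bs=64k

    # drop the snapshot as soon as the copy is done, before it fills up
    lvremove -f /dev/vg0/vm01-snap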

I'm not sure that will work properly with NFS, however.  (Having been badly 
burned by NFS numerous times, I tend not to use it on production hosts.)  To 
copy disks from one host to another, we resort to tricks like piping over ssh 
(e.g. "dd if=<somefile> iflag=direct bs=256k | ssh <otherhost> 'dd 
of=<otherfile> oflag=direct bs=256k'").  These copies run slowly but steadily, 
and, importantly, with minimal impact on other processing going on at the time.

> 3.   Tried nice-ing the dd to lowest priority and qemu-dm to highest

"nice" applies only to CPU scheduling and probably isn't helpful for this.  You 
could try playing with ionice, which lets you override scheduling priorities on 
a per-process basis.
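
Something along these lines, for example.  (The PID and paths here are made 
up, and the I/O classes only take effect with a scheduler that honors them, 
such as CFQ.)

    # run the whole copy in the "idle" class so it only gets disk time
    # when nothing else is asking for it
    ionice -c3 dd if=/dev/vg0/vm01-snap of=/backup/vm01.img iflag=direct bs=64k

    # or demote an already-running dd by PID to best-effort, lowest priority
    ionice -c2 -n7 -p 12345

In the idle class the copy can take considerably longer, but for a background 
backup that's usually the trade-off you want.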

Jeff



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
