
[Xen-devel] Re: Loopback Performance (Was Re: Disk naming)

To: Anthony Liguori <aliguori@xxxxxxxxxx>
Subject: [Xen-devel] Re: Loopback Performance (Was Re: Disk naming)
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Sat, 16 Apr 2005 00:29:15 +0100
Cc: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Adam Heath <doogie@xxxxxxxxxxxxx>
Delivery-date: Fri, 15 Apr 2005 23:54:03 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <42603C24.8060303@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E3BCA@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <42603C24.8060303@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.8
On Friday 15 April 2005 23:11, Anthony Liguori wrote:
> Ian Pratt wrote:
> >I think I'd prefer not to complicate blkback, unless something's
> >fundamentally wrong with the design of the loopback device. Anyone know
> >about this? The trick with this kind of thing is avoiding deadlock under
> >low memory situations...
>
> I poked through the loopback code and it seems to be doing the
> reasonable thing.  I decided to investigate for myself what the
> performance issues with the loopback device were.  My theory was that
> the real cost was the double inode lookups (looking up the inodes in the
> filesystem on the loopback and then looking up the inodes on the host
> filesystem).

I'm sorry but I don't follow this.  The inodes for the filesystem inside the 
disk file only need to be looked up by the guest filesystem driver.  The inode 
for the disk file itself only needs to be looked up once in dom0, when the file 
is opened (the metadata will then be cached).  Am I missing something?
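
To sketch what I mean (the loop device number and file path below are made up 
for illustration): in dom0 the path to the disk file is resolved exactly once, 
when the backing file is attached:

  # One-time inode lookup in dom0, when the backing file is opened:
  losetup /dev/loop0 /var/xen/guest-disk.img
  # From then on, loop I/O goes through the already-open file; the
  # per-file inode lookups all happen inside the guest, against its
  # own filesystem.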

The data you've collected are interesting though.  I wonder if searching the 
LKML archives might yield any interesting discussion about the loop device's 
behaviour.

Cheers,
Mark

> To verify, I ran a series of primitive tests with dd.  First I baselined
> the performance of writing to a large file (by running dd if=/dev/zero
> conv=notrunc) on the host filesystem.  Then I created a loopback device
> with the same file and ran the same tests writing directly to the
> loopback device.
>
> I then created a filesystem on the loopback device, mounted it, then ran
> the same test on a file within the mount.
>
> The results are what I expected.  Writing directly to the loopback
> device was equivalent to writing directly to the file (usually faster
> actually--I attribute that to buffering).  Writing to the file within
> the filesystem on the loopback device was significantly slower (about a
> ~70% slowdown).
>
> If my hypothesis is right, that the slowdown is caused by the double
> inode lookups, then I don't think there's anything we could do in the
> blkback drivers to help that.  This is another good reason to use LVM.
>
> This was all pretty primitive so take it with a grain of salt.
>
> Regards,
> Anthony Liguori
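
P.S. For anyone wanting to repeat Anthony's measurements, something along 
these lines should do it -- the file name, sizes and mount point below are 
guesses on my part, not what he actually ran:

  # 1) Baseline: write into a large file on the host filesystem.
  dd if=/dev/zero of=/var/tmp/disk.img bs=1M count=1024 conv=notrunc

  # 2) Attach the same file to a loop device and write to it directly.
  losetup /dev/loop0 /var/tmp/disk.img
  dd if=/dev/zero of=/dev/loop0 bs=1M count=1024

  # 3) Make a filesystem on the loop device, mount it, and write to a
  #    file inside the mount.
  mke2fs /dev/loop0
  mkdir -p /mnt/looptest
  mount /dev/loop0 /mnt/looptest
  dd if=/dev/zero of=/mnt/looptest/bigfile bs=1M count=1024

And if you want to sidestep the loop device entirely, the LVM route Anthony 
mentions would look roughly like this (volume group and volume names invented):

  lvcreate -L 4G -n guest1 vg0
  # then in the domain config file:
  #   disk = [ 'phy:vg0/guest1,sda1,w' ]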

