xen-devel

Re: [Xen-devel] Harddrive Performance

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Harddrive Performance
From: Marcus Hardt <marcus.hardt@xxxxxxxxxx>
Date: Mon, 25 Jul 2005 16:27:58 +0200
Delivery-date: Mon, 25 Jul 2005 14:26:32 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D282769@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Forschungszentrum Karlsruhe
References: <A95E2296287EAD4EB592B5DEEFCE0E9D282769@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.7.2

On Monday 25 July 2005 13:15, you wrote:
> > I've run some simple 'dd' read performance tests on
> > xen-2.0.6-stable, finding quite different results between
> > image backed domains and partition backed domains. The image
> > backed domains are slightly faster than the partition backed ones.
>
> Are you sure they were using similar parts of the disk? There can easily
> be a 2x or even 3x performance difference between the outside and inside
> edges of the disk.

Sure, I know. I was using different parts of the disk for the partitions and 
for the images. But I've also measured the (parallel) read throughput of all 
available partitions, which didn't show great variance: the average read time 
for 1 GB with 8 parallel runs was 339 s, where the longest measurement took 
386 s and the shortest was 314 s.
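
A minimal sketch of such a parallel sweep (the partition list here is a 
placeholder for the ones actually measured):

        #!/bin/sh
        # Read 1 GB from each partition in parallel, printing per-run wall time.
        PARTS="/dev/hda8 /dev/hda9 /dev/hda10 /dev/hda11"  # placeholder list
        for p in $PARTS; do
            ( t0=$(date +%s)
              dd if="$p" of=/dev/null bs=32k count=32k 2>/dev/null
              echo "$p: $(( $(date +%s) - t0 )) s" ) &
        done
        wait  # block until every reader has finished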

> > Also I note that for more than two domains running dd in
> > parallel, the performance (especially of the image backed
> > domains) drops dramatically below the values I measured using
> > xen-2.0.5-stable.
> >
> > Are you aware of this? Will it be resolved?
>
> No, we're not aware of any difference between the two, though there were
> some changes to the way requests were batched. It would be good if a few
> people on the list could investigate and produce some detailed
> experimental data.

What I did was to run
        dd if=$DEV of=/dev/null bs=32k count=32k
for 1, 2, 3, 4 and 8 parallel instances on each of the following setups
(a driver sketch follows the list):
  a) my system with an SMP kernel;
     here $DEV had to point to a different partition for each run, since
     otherwise 8 runs took as long as 1, most probably due to caching.
  b) 2.6.11.10-xen0, image backed;
     here $DEV was /dev/hda1 for all domains. The image files were placed
     on /dev/hda12 (= 20 MB/s according to hdparm).
  c) 2.6.11.10-xen0, partition backed;
     $DEV was /dev/hda1, which was actually backed by /dev/hda[8-11].
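
A minimal sketch of such a driver (assuming GNU dd and a POSIX shell; for 
case a) each instance would need its own $DEV to defeat caching):

        #!/bin/sh
        # Start N parallel dd readers against $DEV and report total wall time.
        N=${1:-8}            # number of parallel instances
        DEV=${2:-/dev/hda1}  # block device to read (per case a-c above)
        start=$(date +%s)
        i=0
        while [ "$i" -lt "$N" ]; do
            dd if="$DEV" of=/dev/null bs=32k count=32k 2>/dev/null &
            i=$((i + 1))
        done
        wait
        echo "total wall time: $(( $(date +%s) - start )) s"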

I've just run hdparm -tT on all of my partitions, giving me values between 20 
and 25 MB/s. According to this, the images appear even slower than they 
should be, not faster.
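
For reference, that check is just a loop of the form (sketch; partition list 
abbreviated):

        # -T times cached reads, -t times buffered device reads.
        for p in /dev/hda1 /dev/hda8 /dev/hda9 /dev/hda12; do
            hdparm -tT "$p"
        done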

If I have enough time, I'll run some more benchmarks this week.

-- 
Marcus

