This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] Best Practices for PV Disk IO?

To: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
Subject: Re: [Xen-users] Best Practices for PV Disk IO?
From: Christopher Chen <muffaleta@xxxxxxxxx>
Date: Mon, 20 Jul 2009 20:18:04 -0700
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 20 Jul 2009 20:18:50 -0700
In-reply-to: <64D0546C5EBBD147B75DE133D798665F02FDC450@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <7bc80d500907201726y53ded167sf565da72c36908b1@xxxxxxxxxxxxxx> <64D0546C5EBBD147B75DE133D798665F02FDC450@xxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Mon, Jul 20, 2009 at 7:25 PM, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
>> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Christopher Chen
>> Sent: Monday, July 20, 2009 8:26 PM
>> To: xen-users@xxxxxxxxxxxxxxxxxxx
>> Subject: [Xen-users] Best Practices for PV Disk IO?
>> I was wondering if anyone's compiled a list of places to look to
>> reduce disk I/O latency for Xen PV domUs. I've gotten reasonably
>> acceptable performance from my setup (dom0 as an iSCSI initiator,
>> providing phy volumes to domUs), at about 45MB/sec writes and
>> 80MB/sec reads (this is to an IET target running in blockio mode).
> For domU hosts, xenblk over phy: is the best I've found.  I can get
> 166MB/s read performance from domU with O_DIRECT and 1024k blocks.
> Smaller block sizes yield progressively lower throughput, presumably due
> to read latency:
> 256k: 131MB/s
> 64k:    71MB/s
> 16k:    33MB/s
> 4k:     10MB/s
> Running the same tests on dom0 against the same block device yields only
> slightly faster throughput.
> If there's any additional magic to boost disk I/O under Xen, I'd like to
> hear it too.  I also pin my dom0 to an unused CPU so it is always
> available.  My shared block storage runs the AoE protocol over a pair of
> 1GbE links.
> The good news is that there doesn't seem to be much I/O penalty imposed
> by the hypervisor, so the domU hosts typically enjoy better disk I/O
> than an inexpensive server with a pair of SATA disks, at far less cost
> than the interconnects needed to couple a high-performance SAN to many
> individual hosts.  Overall, the performance seems like a win for Xen
> virtualization.
> Jeff


That sounds about right. The numbers I quoted were from an iozone
latency test with 64k block sizes; 80 is very close to your 71!
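For anyone who wants to reproduce this kind of sweep without iozone, a rough sketch with plain dd follows. The device name and count are placeholders (substitute your phy-backed domU device); iflag=direct bypasses the page cache, similar to iozone's O_DIRECT mode.

```shell
# Read-throughput sweep across block sizes against a phy-backed device.
# /dev/xvdb is a placeholder -- point it at your own domU block device.
# Requires root; reads only, so the device contents are untouched.
for bs in 4k 16k 64k 256k 1024k; do
    echo "block size: $bs"
    dd if=/dev/xvdb of=/dev/null bs=$bs count=256 iflag=direct 2>&1 | tail -n1
done
```

The dd summary line reports the effective MB/s for each block size, which should show the same small-block falloff Jeff describes.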

I found that increasing readahead (up to a point) really helps get me
to 80MB/sec reads, and that a low nr_requests in the Linux domU seems
to push the scheduler (cfq on the domU) to dispatch writes faster,
bringing write speed up to 50MB/sec.

Of course, on the Dom0, I see 110MB/sec writes and reads on the same
block device at 64k.

But yeah, I'd love to hear what other people are doing...
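On Jeff's point about pinning dom0 to a dedicated CPU, the usual recipe with the xm toolstack looks roughly like the sketch below. CPU numbers and paths are illustrative; adjust for your host.

```shell
# Pin dom0's vCPU 0 to physical CPU 0 so the backend drivers always
# have a CPU available, regardless of domU load.
xm vcpu-pin Domain-0 0 0

# Then keep guests off CPU 0 in each domU config file:
#   cpus = "1-3"

# Alternatively, reserve the CPU at boot with hypervisor options:
#   kernel /boot/xen.gz dom0_max_vcpus=1 dom0_vcpus_pin
```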



Chris Chen <muffaleta@xxxxxxxxx>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall

Xen-users mailing list
