WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

[Xen-devel] Optimizing NFS to Xen

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Optimizing NFS to Xen
From: Grabber <grabber@xxxxxxxxx>
Date: Sat, 3 Nov 2007 18:05:00 -0200
Delivery-date: Sat, 03 Nov 2007 13:05:43 -0700
I read a post on the kernel mailing list about swap files and NFS, but I can't understand it: doesn't swap work with NFS and Xen?

I read this how-to on the Debian Administrators' web site:

    "NFS does not need a fast processor or a lot of memory. I/O is the bottleneck, so fast disks and a
fast network help. If you use IDE disks, use hdparm to tune them for optimal transfer rates. If you
support multiple, simultaneous users, consider paying for SCSI disks; SCSI can schedule multiple,
interleaved requests much more intelligently than IDE can.
    On the software side, by far the most effective step you can take is to optimize the NFS block
size. NFS transfers data in chunks. If the chunks are too small, your computers spend more time
processing chunk headers than moving bits. If the chunks are too large, your computers move more
bits than they need to for a given set of data. To optimize the NFS block size, measure the transfer
time for various block size values. Here is a measurement of the transfer time for a 256 MB file full
of zeros.
    # mount files.first.com:/home /mnt -o rw,wsize=1024
    # time dd if=/dev/zero of=/mnt/test bs=16k count=16k
    16384+0 records in
    16384+0 records out
    real 0m32.207s  user 0m0.000s  sys 0m0.990s
    # umount /mnt
    This corresponds to a throughput of 63 Mb/s. Try writing with block sizes of 1024, 2048, 4096,
and 8192 bytes (if you use NFS v3, you can try 16384 and 32768, too) and measuring the time
required for each. In order to get an idea of the uncertainty in your measurements, repeat each
measurement several times. In order to defeat caching, be sure to unmount and remount between
measurements.
    # mount files.first.com:/home /mnt -o ro,rsize=1024
    # time dd if=/mnt/test of=/dev/null bs=16k
    16384+0 records in
    16384+0 records out
    real 0m26.772s  user 0m0.010s  sys 0m0.530s
    # umount /mnt
    Your optimal block sizes for both reading and writing will almost certainly exceed 1024 bytes. It
may occur that, like mine, your data do not indicate a clear optimum, but instead seem to approach
an asymptote as block size is increased. In this case, you should pick the lowest block size which gets
you close to the asymptote, rather than the highest available block size; anecdotal evidence indicates
that block sizes that are too large can cause problems.
    Once you have decided on an rsize and wsize, be sure to write them into your clients' /etc/fstab.
You might also consider specifying the noatime option."


Does this really improve performance when running with Xen?


--
Regards,
Luiz Vitor Martinez Cardoso aka Grabber.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel