Ok, well I have it working....
I used the following NFS mount options:
hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600
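For reference, here is a sketch of how those options could be applied; the server name (nfsserver), export path (/export), and mount point (/nfs) are placeholders:

```shell
# One-off mount with the tuned options (nfsserver:/export and /nfs are placeholders)
mount -t nfs -o hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600 nfsserver:/export /nfs

# Or the equivalent persistent /etc/fstab entry:
# nfsserver:/export  /nfs  nfs  hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600  0 0
```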
Here are the changes to /etc/sysctl.conf on the guests
(on the host, the last line, sunrpc.tcp_slot_table_entries, is not available, so remove it):
net.core.netdev_max_backlog = 3000
net.core.rmem_default = 65536
net.core.wmem_default = 65536
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_mem = 4096 4096 4096
sunrpc.tcp_slot_table_entries = 128
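Once the file is in place, the settings can be loaded and spot-checked like this (a sketch; run as root on the guest):

```shell
# Load all values from /etc/sysctl.conf into the running kernel
sysctl -p

# Spot-check that a couple of the values took effect
sysctl net.core.rmem_max
sysctl net.ipv4.tcp_rmem
sysctl sunrpc.tcp_slot_table_entries
```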
Also, add "/sbin/sysctl -p" as the first entry in /etc/init.d/netfs to
make sure that the settings get read before any NFS mounts take
place.
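One way to make that edit to the init script (a sketch, assuming the stock RHEL5 netfs script; back it up first):

```shell
# Keep a backup of the original init script
cp /etc/init.d/netfs /etc/init.d/netfs.bak

# Insert "/sbin/sysctl -p" immediately after the shebang line,
# so the tunables are applied before any NFS mounts run
sed -i '1a /sbin/sysctl -p' /etc/init.d/netfs
```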
For the record, I get 95-102MB/sec each with a simple dd.
--tmac
On Dec 30, 2007 7:11 AM, Riccardo Veraldi <Riccardo.Veraldi@xxxxxxxxxxxx> wrote:
>
> if you want to get Gigabit performance on your domU (using HVM
> virtualization),
> you MUST compile the Xen unmodified_drivers (in particular netfront) and
> load
> those drivers as kernel modules on your domU.
> Then you must change the guest machine's Xen config file to use netfront
> instead of ioemu
> for the network interface. I have written a page on how to do it, but it
> is written in Italian.
> Anyway, if you follow the instructions you should understand by looking
> at the bare commands.
>
> https://calcolo.infn.it/wiki/doku.php?id=network_overbust_compilare_e_installare_il_kernel_module_con_il_supporto_netfront
>
> of course, the Xen source code depends on the Xen version you are using
> on your dom0.
> I was not satisfied with the Xen 3.0.2 shipped with RHEL5, so we built
> RPMs for Xen 3.1.2
> and we are currently using those.
>
> Rick
>
>
> tmac wrote:
>
> > I have a beefy machine
> > (Intel dual-quad core, 16GB memory 2 x GigE)
> >
> > I have loaded RHEL5.1-xen on the hardware and have created two logical
> > systems:
> > 4 cpus, 7.5 GB memory 1 x Gige
> >
> > Following RHEL guidelines, I have it set up so that eth0->xenbr0 and
> > eth1->xenbr1
> > Each of the two RHEL5.1 guests uses one of the interfaces and this is
> > verified at the
> > switch by seeing the unique MAC addresses.
> >
> > If I do a crude test from one guest over NFS:
> > dd if=/dev/zero of=/nfs/test bs=32768 count=32768
> >
> > This almost always yields 95-100MB/sec.
> >
> > When I run two simultaneously, I cannot seem to get above 25MB/sec
> > from each.
> > It starts off with a large burst, as if each could do 100MB/sec, but
> > within a couple
> > of seconds it tapers off to 15-40MB/sec until the dd finishes.
> >
> > Things I have tried (installed on the host and the guests)
> >
> > net.core.rmem_max = 16777216
> > net.core.wmem_max = 16777216
> > net.ipv4.tcp_rmem = 4096 87380 16777216
> > net.ipv4.tcp_wmem = 4096 65536 16777216
> >
> > net.ipv4.tcp_no_metrics_save = 1
> > net.ipv4.tcp_moderate_rcvbuf = 1
> > # recommended to increase this for 1000 BT or higher
> > net.core.netdev_max_backlog = 2500
> > sysctl -w net.ipv4.tcp_congestion_control=cubic
> >
> > Any ideas?
> >
> >
> >
>
>
--
--tmac
RedHat Certified Engineer #804006984323821 (RHEL4)
RedHat Certified Engineer #805007643429572 (RHEL5)
Principal Consultant, RABA Technologies
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users