This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-users] pfSense HVM

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] pfSense HVM
From: Matej Zary <zary@xxxxxxxxx>
Date: Sun, 6 Jun 2010 02:23:43 +0200
Accept-language: en-US
Acceptlanguage: en-US
Delivery-date: Sat, 05 Jun 2010 17:25:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C0AD13F.101@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4BFE986C.5000704@xxxxxxxxxxx> <4C001295.4040909@xxxxxxxxxx> <4C0055BC.1050202@xxxxxxxxxxx> <4C006BF0.4040208@xxxxxxxxxx> <4C0A8F51.90503@xxxxxxxxxxx> <4C0AABA5.1080208@xxxxxxxxxx>,<4C0AD13F.101@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsE/8Z72CU3R+pYQM2RpCEy1Q1NgAADanxp
Thread-topic: [Xen-users] pfSense HVM

Looks like you have much better HW than I used - when I benchmarked a Linux HVM guest 
on an Athlon X2 1.9 GHz I got scary results (in a bad way, of course) :D. 90 Mbit/s 
is not bad at all for an emulated HVM guest - the question is how much CPU power 
it consumes in the dom0 (qemu-dm process) and how many available cores you 
have. PV drivers should solve this issue (or PCI passthrough for the WAN NIC and a PV 
driver for the LAN NIC). I got better results in some cases with the default 
emulated NIC (Realtek?), but in the majority the emulated e1000 was better. 
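For reference, the two setups mentioned above would look roughly like this in an xm/xend-era domU config - a sketch only; the bridge name and the PCI address (0000:03:00.0) are placeholders, not taken from the thread:

```
# Emulated e1000 for the LAN side, attached to the Xen bridge
# (model=rtl8139 would give the default Realtek instead)
vif = [ 'type=ioemu, model=e1000, bridge=xenbr0' ]

# PCI passthrough of the physical WAN NIC; find the real BDF with
# lspci, and hide the device from dom0 (e.g. via pciback) before
# assigning it to the guest
pci = [ '0000:03:00.0' ]
```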



From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
[xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jonathan Tripathy 
Sent: 06 June 2010 00:35
To: Nicolas Vilz; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] pfSense HVM

>> Hi Nic,
>> What kind of throughput are you getting with your pfsense guest? I've
>> got my Gigabit NIC passed through to the pfsense DomU (to act as the
>> "WAN"), then I've connected the "LAN" side of pfsense to a Xen
>> bridge, which the Dom0 is also connected to. I tried to do a file
>> transfer (via samba) from a machine on the "WAN" to the Dom0. The
>> speed was capping out at 90Mbps. In the pfsense config, I've made the
>> NIC "e1000" and pfsense does show it's connected at 1000.
>> Any ideas?
> Not really, I tried e1000 as well, but couldn't see any advantage for
> that (throughput was nearly the same or worse).  Either I don't see
> the difference between the emulated 8139cp and e1000, or there is no
> difference when using it for openvpn in a bridged setup. I will
> analyze that further.  The real performance boost would be the PV
> driver with FreeBSD and pfsense, but I haven't done that yet (patched
> pfsense kernel with Xen modules). Inside openvpn I get a max
> throughput of 800 kb/s, where there should be 100 Mbit or 1000 Mbit (if
> I emulate the right one). That's a bit confusing for me, but I keep
> observing and searching. pfSense shows connected at 1000 Mbit on
> my side, too.
> That doesn't really help you right now, but that is what I know and
> have experienced so far.
> Sincerely
> Nicolas
Hi Nicolas,

Thanks for sharing the above. Any further testing you do would be very
much appreciated.

If we can somehow manage to patch the pfsense kernel to get PV working,
then that would be great!

I guess 90Mbit for me isn't too bad though, since my colo's connection
will be limited to only 100Mbit anyway.

If anyone else has any experience of using the e1000 driver in HVM guests
(especially BSD), please let me know.

Thanks everyone

Xen-users mailing list

