WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
Re: [Xen-devel] Paravirtualised drivers for fully virtualised domains

To: Steven Smith <sos22-xen@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Paravirtualised drivers for fully virtualised domains
From: Steve Dobbelstein <steved@xxxxxxxxxx>
Date: Wed, 9 Aug 2006 13:05:34 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 09 Aug 2006 11:06:28 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20060808094215.GA4161@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Steven Smith <sos22-xen@xxxxxxxxxxxxx> wrote on 08/08/2006 04:42:15 AM:

> I just put a new version of the PV-on-HVM patches up at
> http://www.cl.cam.ac.uk/~sos22/pv-on-hvm/rev8 .  These are against
> 10968:51c227428166 and are otherwise largely unchanged from the
> previous versions.
>
> Steven.

I have been running some informal performance tests on the rev8 patches.
Thought I'd share my findings thus far.

I am finding that disk performance (sequential/random read/write) with the
PV xen-vbd driver in an HVM domain is pretty much equal to that of a PV
domain.  Cool.  Not surprising, but cool nonetheless.

At the moment I'm having trouble running a network test (netperf) of the PV
xen-vnif driver within our testing framework.  I'll post those findings
when I get some reliable numbers.  Testing with the rev2 version of the
patches showed pretty much equal network performance between a PV driver in
an HVM domain and a PV domain.

I am noticing two odd behaviors with the rev8 patches, though.

1. When I try to create a PV domain, the domain hangs on bootup displaying
repeated messages to the console:
netfront: Bad rx response id 1.
netfront: Bad rx response id 0.
netfront: Bad rx response id 1.
netfront: Bad rx response id 0.
...

I had to reboot from an unpatched changeset 10968 build to get the
performance numbers for a PV domain.  (Hence, I am not comparing numbers
from the exact same code base, which is one reason why the tests are
"informal".)

I haven't dug into the cause of this problem yet.

2. When I destroy the HVM domain, it stays in the zombie state.
dib:~ # xm list
Name                              ID Mem(MiB) VCPUs State  Time(s)
Domain-0                           0      768     1 r-----  2328.4
Zombie-hvm1                        1      768     1 -----d  1502.6

I'm not sure how to debug this one.  Any pointers would be helpful.

Steve D.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
