xen-devel

Re: [Xen-devel] domU using linux-2.6.37-xen-next pvops kernel with CONFIG_PARAVIRT_SPINLOCKS disabled results in 150% performance improvement (updated)

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] domU using linux-2.6.37-xen-next pvops kernel with CONFIG_PARAVIRT_SPINLOCKS disabled results in 150% performance improvement (updated)
From: Dante Cinco <dantecinco@xxxxxxxxx>
Date: Tue, 21 Dec 2010 10:01:59 -0800
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 21 Dec 2010 10:02:55 -0800
In-reply-to: <20101221162247.GB3101@xxxxxxxxxxxx>
References: <AANLkTi=kbSE3CRkvGU0hgL33i0QX+bQGLDJSpzbm7x2x@xxxxxxxxxxxxxx> <20101221162247.GB3101@xxxxxxxxxxxx>


On Tue, Dec 21, 2010 at 8:22 AM, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
On Mon, Dec 20, 2010 at 05:03:13PM -0800, Dante Cinco wrote:
> (Sorry, I accidentally sent the previous post before finishing the summary
> table)
>
> For a couple of months now, we've been trying to track down the slow I/O
> performance in pvops domU. Our system has 16 Fibre Channel devices, all
> passed through to domU via PCI passthrough. We were previously using a
> 2.6.32 (Ubuntu version) HVM kernel and were getting 511k IOPS. We switched
> to pvops with Konrad's xen-pcifront-0.8.2 kernel and were disappointed to
> see the performance degrade to 11k IOPS. After disabling some kernel debug
> options, including KMEMLEAK, the performance jumped to 186k IOPS, though
> that was still well below what we were getting with the HVM kernel. We then
> tried disabling spinlock debugging in the kernel, but that actually dropped
> performance to 70k IOPS.
>
> Last week we switched to linux-2.6.37-xen-next and with the same kernel
> debug options disabled, the I/O performance was slightly better at 211k
> IOPS. We tried disabling spinlock debugging again and saw a similar drop in
> performance to 58k IOPS. We searched around for any performance-related
> posts regarding pvops and found two references to CONFIG_PARAVIRT_SPINLOCKS
> (one from Jeremy and one from Konrad):
> http://lists.xensource.com/archives/html/xen-devel/2009-05/msg00660.html
> http://lists.xensource.com/archives/html/xen-devel/2010-11/msg01111.html
>
> Both posts recommended (Konrad strongly) enabling PARAVIRT_SPINLOCKS when
> running under Xen. Since it's enabled by default, we decided to see what
> would happen if we disabled CONFIG_PARAVIRT_SPINLOCKS. With spinlock
> debugging enabled, we were getting 205k IOPS, but with spinlock debugging
> disabled, the performance leaped to 522k IOPS!!!
>
> I'm assuming that this behavior is unexpected.
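
For anyone reproducing this comparison, a minimal sketch of toggling the options named above in a kernel source tree, using the stock scripts/config helper. The symbol names are the standard Kconfig ones; the thread doesn't spell out the full set of debug options that were disabled, so DEBUG_KMEMLEAK and DEBUG_SPINLOCK below are representative, not exhaustive:

    # Sketch only: flip the Kconfig symbols discussed in this thread.
    cd linux-2.6.37-xen-next
    ./scripts/config --disable DEBUG_KMEMLEAK       # kernel memory-leak detector
    ./scripts/config --disable DEBUG_SPINLOCK       # spinlock debugging
    ./scripts/config --disable PARAVIRT_SPINLOCKS   # the option under test
    make oldconfig && make -j8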

<scratches his head> You got me. I am really happy to find out that you guys
were able to solve this conundrum.

Are the guests contending for the CPUs (say you have 4 logical CPUs and
you launch two guests, each wanting 4 vCPUs)? How many CPUs do the guests have?
Are the guests pinned to the CPUs? What is the scheduler in the hypervisor? credit1?

We have only one guest, to which we assign 16 vCPUs, each pinned to its respective pCPU. The system has 24 pCPUs (dual Westmere). Each of the 16 Fibre Channel devices is affinitized to its own CPU.
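
A sketch of that 1:1 pinning from dom0 with the xm toolstack; the domain name "domU" and the IRQ number 48 are placeholders, not values from this thread:

    # Pin each of the guest's 16 vCPUs to the matching pCPU:
    for v in $(seq 0 15); do xm vcpu-pin domU $v $v; done

    # Inside the guest, steer one FC device's IRQ to its own CPU,
    # e.g. IRQ 48 -> CPU 0 (smp_affinity takes a hex CPU bitmask):
    echo 1 > /proc/irq/48/smp_affinity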

xen_scheduler          : credit
(XEN) Using scheduler: SMP Credit Scheduler (credit)
xm sched-credit: Weight=256, Cap=0
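
For reference, the three lines above come from the standard tools ("domU" again being a placeholder domain name):

    xm info | grep xen_scheduler    # toolstack's view of the scheduler
    xm dmesg | grep -i scheduler    # hypervisor boot message
    xm sched-credit -d domU         # per-domain weight and cap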
 
> Here's a summary of the kernels, config changes and performance (in IOPS):
>
>                              xen-pcifront-0.8.2    linux-2.6.37-xen-next
>                              (pvops)               (pvops)
>
> Spinlock debugging enabled,
> PARAVIRT_SPINLOCKS=y               186k                  205k
>
> Spinlock debugging disabled,
> PARAVIRT_SPINLOCKS=y                70k                   58k
>
> Spinlock debugging disabled,
> PARAVIRT_SPINLOCKS=n               247k                  522k
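
To double-check which variant a given kernel binary was built with, one can inspect its config; a sketch assuming the distro ships /boot/config-* (or that CONFIG_IKCONFIG_PROC is enabled):

    grep -E 'PARAVIRT_SPINLOCKS|DEBUG_SPINLOCK' /boot/config-$(uname -r)
    # or, with CONFIG_IKCONFIG_PROC=y:
    zcat /proc/config.gz | grep PARAVIRT_SPINLOCKS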

Whoa.... Thank you for the table. My first thought was "whoa, PV byte-locking
spinlocks sure suck," but then I realized that there are some improvements in
2.6.37-xen-next, like in the vmap flushing code...


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel