xen-devel

Re: [Xen-devel] Question Also regarding interrupt balancing

To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] Question Also regarding interrupt balancing
From: harish <mvharish@xxxxxxxxx>
Date: Mon, 12 Jun 2006 16:42:56 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 12 Jun 2006 16:43:12 -0700
In-reply-to: <9bae6f237b7ec1b9cd78ca745bee1e41@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <a33d0a9f0605220943q75af2d77m8fbf1508a2ad9a88@xxxxxxxxxxxxxx> <3289dc4a0f8d31e751f7fb5edbb0066a@xxxxxxxxxxxx> <a33d0a9f0606091139w5e5178f7x28d98db5721ceb76@xxxxxxxxxxxxxx> <260b1f3e22421d58dd152a6969f106fc@xxxxxxxxxxxx> <a33d0a9f0606100958i75dd812at1fa420966f92189b@xxxxxxxxxxxxxx> <9bae6f237b7ec1b9cd78ca745bee1e41@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi Keir,

I used the -unstable tree and noticed the following:

cat /proc/interrupts | grep eth
 18:       7580          0          0          0        Phys-irq  eth0
 19:          1          0          0          0        Phys-irq  eth1
 20:       1982         78        117          0        Phys-irq  peth2
 21:         18          0          0       1129        Phys-irq  eth3
 22:         67       1077          0          0        Phys-irq  eth4
 23:         12       1135          0          0        Phys-irq  eth5

cat /proc/irq/20/smp_affinity
00000001

echo 2 > /proc/irq/20/smp_affinity [...works..]
echo 4 > /proc/irq/20/smp_affinity [...works..]
echo 8 > /proc/irq/20/smp_affinity [...works..]

But a cumulative mask does not work; that is,
echo 3 > /proc/irq/20/smp_affinity
echo 5 > /proc/irq/20/smp_affinity
echo f > /proc/irq/20/smp_affinity
etc. all fail.
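
For reference, a small test loop (just a sketch, reusing the IRQ 20 path
and mask values from the session above) that writes each mask and reads
it back:

    # Write each candidate mask to irq 20's affinity file, then read
    # it back to see which values the kernel actually accepted.
    for mask in 1 2 4 8 3 5 f; do
        echo $mask > /proc/irq/20/smp_affinity
        echo "wrote $mask, read back: $(cat /proc/irq/20/smp_affinity)"
    done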

Is that a bug or is it by design?

thanks,
harish


On 6/11/06, Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> wrote:

On 10 Jun 2006, at 17:58, harish wrote:

>  echo "2" > /proc/irq/23/smp_affinity does not seem to change the
> value in smp_affinity
>  cat /proc/irq/23/smp_affinity still shows 0f
>
>  Could there be some bug or configuration problem that you can think of?

It was a bug, which I've just fixed in the -unstable and -testing
staging trees. When that fix reaches the public trees you should find
that writing to smp_affinity has the usual effect, but note:
  1. As when running on native, a request to change affinity is not
processed until the next interrupt occurs for that irq. If the
interrupt rate on that irq is very low, you may observe the old value
in smp_affinity before it is changed.
  2. You cannot change the affinity of CPU-local interrupts (timer,
resched, callfunc). Requests to do so are silently ignored.
  3. If you try to set a multi-cpu cpumask for affinity, it will be
changed to a single-cpu cpumask automatically. Linux-on-Xen does not
automatically balance irq load across cpus -- that has to be done by a
user-space daemon (e.g., irqbalance). See the sketch after this list.
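
A sketch of how points 1 and 3 look in practice (the IRQ number 18 for
eth0 is taken from the /proc/interrupts output earlier in this thread;
the ping target is a placeholder):

    # Request a multi-cpu mask; per point 3 the kernel reduces it to
    # a single-cpu mask on its own.
    echo f > /proc/irq/18/smp_affinity

    # Per point 1 the new mask is only applied on the next interrupt
    # for irq 18, so generate some traffic before re-reading:
    ping -c 5 <some-host-reachable-via-eth0> > /dev/null
    cat /proc/irq/18/smp_affinity    # expect a single-cpu mask, e.g. 00000002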

  -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel