
Re: [Xen-devel] Question Also regarding interrupt balancing


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
  • From: harish <mvharish@xxxxxxxxx>
  • Date: Fri, 9 Jun 2006 11:39:01 -0700
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 09 Jun 2006 11:39:24 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi Keir,

Apologies for the delayed response.

Is it possible to use smp_affinity to pin the interrupts to specific pcpus?

In my machine:

cat /proc/irq/23/smp_affinity
0f

I tried
echo "2" > /proc/irq/23/smp_affinity  [with the hope that the interrupts get routed to pcpu1]

But I do not see the new value taking effect. Am I missing something?
thanks in advance,
harish
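
[For reference, a minimal way to test such a pinning attempt, assuming IRQ 23
is the NIC's interrupt as in the output below, and assuming the kernel accepts
the write at all; on mainline Linux a new mask normally takes effect only when
the next interrupt for that IRQ arrives, so the counters need re-checking
after some traffic:

    # bitmask 0x2 (binary 0010) selects pcpu1
    echo 2 > /proc/irq/23/smp_affinity
    cat /proc/irq/23/smp_affinity        # should now read 02
    # generate traffic so IRQ 23 fires, then re-check the per-CPU counts
    grep '^ *23:' /proc/interrupts
]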

On 5/22/06, Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> wrote:

On 22 May 2006, at 17:43, harish wrote:

> Hi,
>  I was doing some netperf tests and noticed that all the interrupts
> (for network) were being serviced by pcpu0, although dom0 was
> configured to use all the pcpus [4 vcpus].
>  Questions:
>  1) Is there a way dom0 can be configured to process the interrupts
> using 4 pcpus instead of just one?

Running the irqbalance daemon in dom0 should do the trick. If there's no
other load in dom0, though, irqbalance may decide not to change IRQ
affinity. There's no way to do fine-grained interrupt balancing (e.g.,
round-robin interrupts).
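
[A sketch of this suggestion, with the caveat that the init-script path and
service layout vary by distribution (an assumption here); the daemon can also
be launched directly:

    /etc/init.d/irqbalance start      # or just run: irqbalance
    # then watch whether the per-CPU counts for the busy IRQ spread out
    watch -n1 "grep peth2 /proc/interrupts"
]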

>  2) What is the recommended scheduling policy for a network I/O
> intensive workload?

If delivering to a domU, you want dom0 and the domU running on different
CPUs, or that CPU will be the bottleneck.

  -- Keir
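
[One way to arrange that with the xm toolstack is to pin dom0's vcpus and
the guest's vcpus to disjoint pcpus. A sketch, in which the domain name
"mydomU" and the two-vcpu-per-domain layout are assumptions:

    # keep dom0 on pcpus 0-1 and the guest on pcpus 2-3
    xm vcpu-pin 0 0 0
    xm vcpu-pin 0 1 1
    xm vcpu-pin mydomU 0 2
    xm vcpu-pin mydomU 1 3
    xm vcpu-list          # verify the resulting placement
]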

>  Sample output:
>
>  cat /proc/xen/interrupts
>
>  cat /proc/interrupts
>             CPU0     CPU1     CPU2     CPU3
>    1:          8        0        0        0   Phys-irq     i8042
>    8:          1        0        0        0   Phys-irq     rtc
>    9:          0        0        0        0   Phys-irq     acpi
>   11:          0        0        0        0   Phys-irq     ohci_hcd:usb1
>   12:        105        0        0        0   Phys-irq     i8042
>   14:     302432        0        0        0   Phys-irq     ide0
>   16:        375        0        0        0   Phys-irq     aic7xxx
>   17:      34719        0        0        0   Phys-irq     cciss0
>   18:      53158        0        0        0   Phys-irq     eth0
>   19:          2        0        0        0   Phys-irq     peth1
>   20:    1062076        0        0        0   Phys-irq     peth2   <<<-------- was using this
>   21:      25189        0        0        0   Phys-irq     peth3
>   22:      18846        0        0        0   Phys-irq     peth4
>   23:      18682        0        0        0   Phys-irq     eth5
>  256:    1456444        0        0        0   Dynamic-irq  timer0
>  257:      52873        0        0        0   Dynamic-irq  resched0
>  258:        282        0        0        0   Dynamic-irq  callfunc0
>  259:          0     7935        0        0   Dynamic-irq  resched1
>  260:          0    33665        0        0   Dynamic-irq  callfunc1
>  261:          0   508258        0        0   Dynamic-irq  timer1
>  262:          0        0     3827        0   Dynamic-irq  resched2
>  263:          0        0    33835        0   Dynamic-irq  callfunc2
>  264:          0        0   390316        0   Dynamic-irq  timer2
>  265:          0        0        0    43953   Dynamic-irq  resched3
>  266:          0        0        0    33870   Dynamic-irq  callfunc3
>  267:          0        0        0   311447   Dynamic-irq  timer3
>  268:       5091        0        0        0   Dynamic-irq  xenbus
>  269:          0        0        0        0   Dynamic-irq  console
>  270:      31532        0        0        0   Dynamic-irq  blkif-backend
>  271:    1107498        0        0        0   Dynamic-irq  vif7.0
>  NMI:          0        0        0        0
>  LOC:          0        0        0        0
>  ERR:          0
>  MIS:          0
>
>  I ran multiple while loops to confirm that dom0 can use all 4 pcpus
> if required, so I believe it must be something to do with the way the
> interrupts are being handled.
>
>  Thanks in advance,
>  hmv
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
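
[The "while loops" check mentioned above can be reproduced with something
like the following; the exact form is an assumption, and any CPU-bound loop
per vcpu would do:

    # start four busy loops, one per vcpu, and watch the spread in top
    for i in 1 2 3 4; do ( while :; do :; done ) & done
    # clean up afterwards
    kill %1 %2 %3 %4
]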


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

