
RE: [Xen-devel] problem in setting cpumask for physical interrupt


  • To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Agarwal, Lomesh" <lomesh.agarwal@xxxxxxxxx>
  • Date: Fri, 26 Oct 2007 11:06:50 -0700
  • Delivery-date: Fri, 26 Oct 2007 11:07:35 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcgXaDI5jZ0n5YLuSS2nMjrC8ba3VwAMx79eABfEgXA=
  • Thread-topic: [Xen-devel] problem in setting cpumask for physical interrupt

Function pirq_guest_bind is called for a physical device IRQ, right?

Even if the event channel is bound to one VCPU, why do we need to bind the physical IRQ to a particular physical CPU? The VCPU is not guaranteed to run on the same physical processor anyway. So if Xen sets the interrupt affinity for the physical IRQ to all physical processors, the IOAPIC will send that IRQ to the physical processors in a round-robin manner. That should give better interrupt latency for physical IRQs.
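
A minimal userspace sketch of the two affinity policies being compared; the cpumask here is a plain bitmap, and NR_CPUS and the helpers are illustrative stand-ins, not Xen's cpumask_t API:

    #include <stdio.h>

    #define NR_CPUS 8

    typedef unsigned long cpumask_t;                 /* one bit per CPU */

    static cpumask_t mask_of_cpu(int cpu)            /* only the given CPU */
    {
        return 1UL << cpu;
    }

    static cpumask_t mask_all_online(int nr_online)  /* every online CPU */
    {
        return (nr_online >= NR_CPUS) ? ((1UL << NR_CPUS) - 1)
                                      : ((1UL << nr_online) - 1);
    }

    int main(void)
    {
        int binding_cpu = 3, nr_online = 4;

        /* Current behaviour: affinity mask contains only the binding CPU. */
        printf("single-CPU mask: 0x%lx\n", mask_of_cpu(binding_cpu));

        /* Proposed behaviour: affinity mask contains all online CPUs,
         * letting the IOAPIC distribute the interrupt among them. */
        printf("all-CPUs mask:   0x%lx\n", mask_all_online(nr_online));
        return 0;
    }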

 


From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx]
Sent: Thursday, October 25, 2007 11:42 PM
To: Agarwal, Lomesh; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] problem in setting cpumask for physical interrupt

 

An event channel can only be bound to one VCPU at a time. The IRQ should be bound to the CPU that that VCPU runs on.

 -- Keir
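
A toy model of the point above, with made-up structure and field names rather than the actual Xen event-channel code: the event channel notifies exactly one vCPU, so taking the physical IRQ on the pCPU that vCPU currently runs on keeps delivery local, while taking it elsewhere requires an extra cross-CPU notification.

    #include <stdio.h>
    #include <stdbool.h>

    struct vcpu   { int id; int processor; };      /* pCPU it currently runs on */
    struct evtchn { struct vcpu *notify_vcpu; };   /* single bound vCPU */

    static void deliver_pirq(struct evtchn *ch, int irq_cpu)
    {
        struct vcpu *v = ch->notify_vcpu;
        bool cross_cpu = (irq_cpu != v->processor);

        printf("IRQ taken on pCPU%d -> vCPU%d on pCPU%d%s\n",
               irq_cpu, v->id, v->processor,
               cross_cpu ? " (cross-CPU notification needed)" : " (local)");
    }

    int main(void)
    {
        struct vcpu v = { .id = 0, .processor = 2 };
        struct evtchn ch = { .notify_vcpu = &v };

        deliver_pirq(&ch, 2);   /* affinity follows the vCPU: local delivery */
        deliver_pirq(&ch, 5);   /* IRQ landed elsewhere: extra hop to pCPU2  */
        return 0;
    }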

On 26/10/07 01:36, "Agarwal, Lomesh" <lomesh.agarwal@xxxxxxxxx> wrote:

Why does the function pirq_guest_bind (in arch/x86/irq.c) call set_affinity with the cpumask of the current processor? If I understand correctly, pirq_guest_bind is called in response to a guest calling request_irq. So, if by chance all guests call request_irq on the same physical processor, Xen may end up setting the interrupt affinity to only one physical processor.
I think Xen should set the affinity to all available processors. The VCPU is not guaranteed to run on the same physical processor on which it called request_irq anyway.
I will send a patch if my understanding looks correct.
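
For reference, a rough userspace paraphrase of the step being questioned; the types and helpers below are simplified stand-ins, not the actual xen/arch/x86/irq.c definitions:

    #include <stdio.h>

    typedef unsigned long cpumask_t;           /* one bit per physical CPU */

    struct vcpu { int processor; };            /* pCPU the vCPU runs on now */

    struct irq_desc {
        void (*set_affinity)(int irq, cpumask_t mask);
    };

    static void ioapic_set_affinity(int irq, cpumask_t mask)
    {
        printf("IRQ %d affinity mask set to 0x%lx\n", irq, mask);
    }

    static void pirq_guest_bind_sketch(struct vcpu *v, int irq,
                                       struct irq_desc *desc)
    {
        cpumask_t mask = 0;

        /* Bind the interrupt target to the CPU the caller happens to be on. */
        mask |= 1UL << v->processor;
        if ( desc->set_affinity != NULL )
            desc->set_affinity(irq, mask);
    }

    int main(void)
    {
        struct vcpu v = { .processor = 1 };
        struct irq_desc desc = { .set_affinity = ioapic_set_affinity };

        pirq_guest_bind_sketch(&v, 10, &desc);   /* mask = 0x2: CPU1 only */
        return 0;
    }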



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

