
[Xen-devel] Interrupt Affinity Question


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Pradeep Vincent" <pradeep.vincent@xxxxxxxxx>
  • Date: Fri, 13 Apr 2007 20:33:45 -0700
  • Delivery-date: Fri, 13 Apr 2007 20:32:23 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

I was trying to figure out how hardware IRQ SMP affinity is set by the
hypervisor. It looks like, at the time of the bind request from dom0 for
a particular pirq, the physical CPU that the vcpu happens to be running
on is set to receive the hardware interrupts corresponding to that IRQ
channel.
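To make the question concrete, here is a toy model of the behaviour described above. This is not Xen code and all names in it are hypothetical; it only illustrates the "stale affinity" concern: if affinity is captured once at bind time, then unless something re-programs it on migration, interrupts keep landing on the original pcpu.

```python
# Toy model of the bind-time affinity scenario -- NOT actual Xen code.
# Pirq, Vcpu, bind_pirq and migrate_vcpu are hypothetical names used
# only to illustrate the question being asked.

class Pirq:
    def __init__(self, irq):
        self.irq = irq
        self.affinity_pcpu = None   # pcpu programmed to receive this IRQ

class Vcpu:
    def __init__(self, pcpu):
        self.pcpu = pcpu            # physical CPU currently running this vcpu

def bind_pirq(pirq, vcpu):
    """At bind time, affinity is set to wherever the vcpu happens to be."""
    pirq.affinity_pcpu = vcpu.pcpu

def migrate_vcpu(vcpu, new_pcpu, pirq=None, update_affinity=False):
    """Migrate the vcpu; optionally re-program IRQ affinity to follow it."""
    vcpu.pcpu = new_pcpu
    if update_affinity and pirq is not None:
        pirq.affinity_pcpu = new_pcpu

vcpu = Vcpu(pcpu=0)
pirq = Pirq(irq=16)
bind_pirq(pirq, vcpu)

# Without an affinity update, the hardware interrupt still targets pcpu 0
# while the dom0 vcpu now runs on pcpu 2 -- the stale-affinity case.
migrate_vcpu(vcpu, new_pcpu=2, pirq=pirq, update_affinity=False)
print(pirq.affinity_pcpu, vcpu.pcpu)   # 0 2
```

The open question, of course, is whether the hypervisor behaves like `update_affinity=True` (re-programming the IRQ to follow the vcpu) or like the `False` case above.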

If dom0 vcpu-to-pcpu affinity is not set (i.e. dom0_vcpus_pin is not
used), what happens when a dom0 vcpu migrates? Is the processor affinity
of the IRQ channels updated by some means to reflect the migration, or
do the hardware interrupts keep going to the old processor while the
pirq is serviced by the dom0 vcpu on a different processor?


Thanks,

- Pradeep Vincent

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

